Lumina: Swift Camera Library for CoreML-Integrated Imaging

What is Lumina?

Lumina is an open‑source Swift package that turns an iPhone into a fully‑featured camera system with a single line of code. The library is built on top of Apple’s AVFoundation, Vision, and CoreML frameworks, but hides the low‑level plumbing behind a clean, Swift‑friendly API. The result is an SDK that:

  • Captures still photos, live photos, video and depth data.
  • Streams every frame to a delegate – perfect for real‑time image processing.
  • Integrates any CoreML‑compatible model and streams predictions alongside the video frames.
  • Detects QR codes, barcodes, and faces out of the box.
  • Exposes an easy‑to‑tweak UI and lets you replace the default camera controls.

Why use Lumina?

  • Eliminate boilerplate AVFoundation code.
  • Quickly prototype ML‑powered camera features.
  • Focus on your app logic instead of camera internals.
  • Benefit from a well‑maintained, MIT‑licensed library that works on iOS 13+.


Quick Start

1. Install the package

Open Xcode → File → Add Packages… → enter the URL:

https://github.com/dokun1/Lumina.git

Choose the Latest Release and add the library to your target.

Tip: You can also pin a specific tag if you want deterministic builds.
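
If you manage dependencies through a Package.swift instead of the Xcode UI, the declaration looks roughly like the sketch below. The version tag and the app name are placeholders, not values from the Lumina docs; check the repository's Releases page for the tag you actually want:

// swift-tools-version:5.5
import PackageDescription

// Sketch only: "1.0.0" is a placeholder; pin the release tag you verified.
let package = Package(
    name: "MyCameraApp",
    platforms: [.iOS(.v13)],
    dependencies: [
        .package(url: "https://github.com/dokun1/Lumina.git", from: "1.0.0")
    ],
    targets: [
        .target(name: "MyCameraApp", dependencies: ["Lumina"])
    ]
)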

2. Add the required permissions

Add the following keys to your Info.plist:

<key>NSCameraUsageDescription</key>
<string>We need camera access to scan products.</string>
<key>NSMicrophoneUsageDescription</key>
<string>We need microphone access to record video.</string>
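
iOS shows these permission prompts automatically the first time the capture session starts, but you can pre-flight camera authorization with plain AVFoundation if you want to show your own rationale UI first. A minimal sketch, independent of Lumina:

import AVFoundation

// Calls `onAuthorized` on the main queue once camera access is granted.
func ensureCameraAccess(onAuthorized: @escaping () -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        onAuthorized()
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .video) { granted in
            if granted { DispatchQueue.main.async(execute: onAuthorized) }
        }
    default:
        // .denied or .restricted: direct the user to Settings instead.
        break
    }
}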

3. Drop the view controller

import UIKit
import CoreML
import Lumina

class CameraDemoViewController: UIViewController, LuminaDelegate {
    private var cameraVC: LuminaViewController!

    override func viewDidLoad() {
        super.viewDidLoad()
        cameraVC = LuminaViewController()
        cameraVC.delegate = self
        cameraVC.setShutterButton(visible: true)
        cameraVC.setTorchButton(visible: true)
        // Optional: plug in a CoreML model
        if let model = try? MobileNet(configuration: MLModelConfiguration()).model {
            cameraVC.streamingModels = [LuminaModel(model: model, type: "MobileNet")]
        }
    }

    private var hasPresentedCamera = false

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        // Present here rather than in viewDidLoad: the view must be in a
        // window hierarchy before it can present another view controller.
        guard !hasPresentedCamera else { return }
        hasPresentedCamera = true
        present(cameraVC, animated: true, completion: nil)
    }

    // MARK: - LuminaDelegate
    func captured(stillImage: UIImage, livePhotoAt: URL?, depthData: Any?, from controller: LuminaViewController) {
        print("Photo captured: \(stillImage.size)")
        controller.dismiss(animated: true, completion: nil)
    }

    func streamed(videoFrame: UIImage, with predictions: [LuminaRecognitionResult]?, from controller: LuminaViewController) {
        guard let predictions = predictions else { return }
        var overlayText = ""
        for pred in predictions {
            guard let best = pred.predictions?.first else { continue }
            overlayText += "\(pred.type): \(best.name) (\(String(format: "%.2f", best.probability * 100))%)\n"
        }
        controller.textPrompt = overlayText
    }

    func dismissed(controller: LuminaViewController) {
        controller.dismiss(animated: true, completion: nil)
    }
}

That’s it! Tapping the shutter button takes a photo, the delegate receives the image, and if you plugged in a model, you’ll see live predictions overlaid on every frame.


Feature Rundown

| Feature | How to enable | What it gives you |
| --- | --- | --- |
| Still Photo | cameraVC.recordsVideo = false | Quick snapshots |
| Live Photo | cameraVC.captureLivePhotos = true; cameraVC.resolution = .photo | Stills with motion and sound |
| Video Recording | cameraVC.recordsVideo = true | 1080p or 4K video capture |
| Depth Stream | cameraVC.captureDepthData = true; cameraVC.resolution = .photo | Depth maps for post-processing |
| QR/Barcode | cameraVC.trackMetadata = true | Automatic metadata callbacks |
| Face Detection | Same as QR/Barcode | Face bounds reported per frame |
| CoreML Streaming | cameraVC.streamingModels = [...] | Real-time object recognition |
| Custom UI | cameraVC.setShutterButton(visible: false) etc. | Show or hide the default controls |
| Delegate | Implement LuminaDelegate | Hooks into all capture events |
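
The QR/Barcode and Face Detection rows feed results through a delegate callback rather than a property. Here is a hedged sketch of handling them, assuming the detected(metadata:from:) callback; verify the exact signature against the Lumina release you installed:

import AVFoundation
import Lumina

// Sketch only: delegate signatures can vary between Lumina releases.
extension CameraDemoViewController {
    func detected(metadata: [Any], from controller: LuminaViewController) {
        for object in metadata {
            // QR codes and barcodes arrive as machine-readable code objects.
            if let code = object as? AVMetadataMachineReadableCodeObject,
               let value = code.stringValue {
                print("Scanned \(code.type.rawValue): \(value)")
            }
            // Faces arrive as AVMetadataFaceObject with a bounds rectangle.
            if let face = object as? AVMetadataFaceObject {
                print("Face detected at \(face.bounds)")
            }
        }
    }
}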

The sample app in the repository demonstrates every feature out of the box, so run it and experiment with the options before wiring Lumina into your own project.


Extending Lumina

  1. Add a new CoreML model – Copy your .mlmodel into the project, create a LuminaModel instance, and append it to streamingModels.
  2. Integrate with SwiftUI – Wrap LuminaViewController in a UIViewControllerRepresentable and expose a @Binding for the camera state (see the sketch after this list).
  3. Performance tuning – Adjust frameRate, resolution, or maxZoomScale to match your device’s capabilities.
  4. Open issues & contribute – The repo follows a standard‑readme specification; feel free to fork, add tests, or enhance the documentation.
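
For item 2, a minimal wrapper might look like the sketch below. It reuses the delegate callbacks from the earlier example and assumes LuminaDelegate supplies default implementations for the callbacks you leave out, as that example suggests; treat it as a starting point rather than a canonical integration:

import SwiftUI
import UIKit
import Lumina

// Sketch only: a minimal SwiftUI bridge for LuminaViewController.
struct LuminaCameraView: UIViewControllerRepresentable {
    // Surfaces the most recent still photo back to SwiftUI.
    @Binding var capturedImage: UIImage?

    func makeUIViewController(context: Context) -> LuminaViewController {
        let camera = LuminaViewController()
        camera.delegate = context.coordinator
        camera.setShutterButton(visible: true)
        return camera
    }

    func updateUIViewController(_ uiViewController: LuminaViewController, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator(self) }

    final class Coordinator: NSObject, LuminaDelegate {
        private let parent: LuminaCameraView
        init(_ parent: LuminaCameraView) { self.parent = parent }

        func captured(stillImage: UIImage, livePhotoAt: URL?, depthData: Any?, from controller: LuminaViewController) {
            // Hop to the main queue before touching SwiftUI state.
            DispatchQueue.main.async { self.parent.capturedImage = stillImage }
            controller.dismiss(animated: true)
        }

        func dismissed(controller: LuminaViewController) {
            controller.dismiss(animated: true)
        }
    }
}

Present it from SwiftUI with .fullScreenCover(isPresented:) and bind capturedImage to a @State property in the parent view.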

Why Lumina Stands Out

| Criterion | Lumina | Competitors |
| --- | --- | --- |
| License | MIT (open, permissive) | Often commercial; Lumina's license is among the most straightforward |
| Boilerplate | Minimal (a single view controller) | Others require several classes for session management |
| Feature set | CoreML, depth, live photo, QR/face in one library | Most focus on either camera or ML, not both |
| Documentation | Sample app + README with code snippets | Documentation quality varies across similar projects |

If you’re building an iOS app that needs camera‑and‑AI capabilities, Lumina lets you get a prototype up in minutes and keeps your codebase lean.


Get Started Today

  • Clone from GitHub: git clone https://github.com/dokun1/Lumina.git
  • Explore the Sample app for a hands‑on demo.
  • Read the contributing guide if you’d like to add new features or fix bugs.
  • Drop Lumina into your own project and start building smarter cameras.

Questions? Post one on the GitHub Discussions tab, or reach out on Twitter via @dokun1. Happy coding!
