

This tutorial picks up where Part 1 left off. You already have a working iOS app that runs a GBG Go identity journey with stub camera views. Now you will replace those stubs with the real Smart Capture SDKs — adding guided document scanning, face capture with liveness detection, and encrypted biometric blobs. The change is small. The bridge architecture you built in Part 1 was designed so that swapping camera views is a minimal edit. The handler setup, awaitCompletion()/complete() pattern, and all bridge wiring stay the same. Only the views inside fullScreenCover change.
Reference app: The complete source code is on the part-2-smart-capture branch of the reference repository. To see exactly what changed from Part 1, diff the branches:
git diff main..part-2-smart-capture
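As a refresher, the Part 1 pattern this tutorial builds on can be sketched like this (the handler property name and the `activeCapture` state are illustrative, not verbatim reference-app code):

```swift
// Illustrative sketch of the Part 1 bridge pattern: the capability handler
// presents the camera UI, then suspends until the presented view calls
// complete(_:) with a result. Only the view inside fullScreenCover changes
// in Part 2 — this handshake stays the same.
host.documentCapture.handler = { _ in
    activeCapture = .document                 // drives fullScreenCover presentation
    return await host.documentCapture.awaitCompletion()
}
```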

Prerequisites

Everything from Part 1, plus:
| Requirement | Notes |
| --- | --- |
| Smart Capture SDKs | Four XCFramework bundles, obtained from your GBG account representative. These are not included in the repository. |
| Physical iOS device | Smart Capture SDKs require camera hardware. The Simulator falls back to stubs automatically. |

Required frameworks

Contact your GBG account representative to obtain these frameworks:
| Framework | Purpose |
| --- | --- |
| Document.xcframework | Document scanning with guided capture, auto-crop, and quality scoring |
| FaceCamera.xcframework | Face capture with liveness detection and encrypted biometric blobs |
| IDLiveFaceCamera.xcframework | Runtime dependency of FaceCamera |
| IDLiveFaceIAD.xcframework | Runtime dependency of FaceCamera |
FaceCamera.xcframework links against IDLiveFaceCamera and IDLiveFaceIAD at runtime. If you add FaceCamera but forget the other two, the app crashes at launch with a “Library not loaded” error.

Start from Part 1

Check out the Part 2 branch:
git clone https://github.com/gbgplc/gbg-go-ios-reference.git
cd gbg-go-ios-reference
git checkout part-2-smart-capture
Or if you already have the repo:
git checkout part-2-smart-capture
The companion server is unchanged — start it the same way as Part 1:
cd server
npm install
node index.mjs

Add the Smart Capture SDKs

The Smart Capture SDKs ship as four .xcframework bundles. Drop them into the project, embed and sign them, then enable the compiler flag — the rest of the section walks through each step.

1. Place the frameworks

Copy all four .xcframework bundles into GBGGoReference/Frameworks/:
GBGGoReference/
└── Frameworks/
    ├── Document.xcframework
    ├── FaceCamera.xcframework
    ├── IDLiveFaceCamera.xcframework
    └── IDLiveFaceIAD.xcframework

2. Add to Xcode

  1. Open GBGGoReference/GBGGoReference.xcodeproj in Xcode.
  2. Select the GBGGoReference target.
  3. Go to General > Frameworks, Libraries, and Embedded Content.
  4. Click +, then Add Other > Add Files.
  5. Select all four .xcframework bundles from the Frameworks/ directory.
  6. Set each to Embed & Sign.
The project already includes $(PROJECT_DIR)/Frameworks in its framework search paths. You only need to add the frameworks to the target’s embedded content.

3. Enable the compiler flag

The Smart Capture integration is gated behind a compile-time flag called SMART_CAPTURE_ENABLED. This flag is off by default, so the app builds and runs with stubs even if you haven’t added the frameworks yet.
  1. In Xcode, select the GBGGoReference target.
  2. Go to Build Settings and make sure “All” is selected, not “Basic”.
  3. Search for Active Compilation Conditions (SWIFT_ACTIVE_COMPILATION_CONDITIONS).
  4. Add SMART_CAPTURE_ENABLED to both the Debug and Release configurations.
With the flag set, the app compiles the Smart Capture wrapper views and uses them on a physical device. On Simulator, it still falls back to stubs regardless of the flag.
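Any file can branch on the flag (plus the Simulator check) with standard conditional compilation. As an illustration, a hypothetical debug badge showing which path a build uses — this view is not part of the reference app:

```swift
import SwiftUI

struct CaptureModeBadge: View {
    // Hypothetical helper: surfaces the active capture mode at runtime so you
    // can confirm the flag took effect. When SMART_CAPTURE_ENABLED is off, the
    // first branch is never compiled.
    var body: some View {
        #if SMART_CAPTURE_ENABLED && !targetEnvironment(simulator)
        Text("Smart Capture enabled")
        #else
        Text("Stub capture (flag off or Simulator)")
        #endif
    }
}
```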

Document Capture with SmartCapture

Create SmartCaptureDocumentView.swift in the Sources/Capture/ group. This is a thin SwiftUI wrapper around the Document SDK:
#if SMART_CAPTURE_ENABLED

import Document
import SwiftUI

struct SmartCaptureDocumentView: View {
    let onCaptured: (Data, Int, Int) -> Void
    let onFailed: (String) -> Void

    @StateObject private var sdk: DocumentSDK

    init(
        documentSide: DocumentSide = .front,
        documentType: DocumentType = .unknown,
        onCaptured: @escaping (Data, Int, Int) -> Void,
        onFailed: @escaping (String) -> Void
    ) {
        let config = DocumentScannerConfig(
            autoCaptureToggleConfig: .showDelayed(durationMs: 10_000),
            documentSide: documentSide,
            documentType: documentType
        )
        _sdk = StateObject(wrappedValue: DocumentSDK(documentScannerConfig: config))
        self.onCaptured = onCaptured
        self.onFailed = onFailed
    }

    var body: some View {
        sdk.mainView
            .onReceive(sdk.$documentScannerResult) { newValue in
                guard let scannerResult = newValue else { return }
                switch scannerResult.result {
                case .success(let success):
                    onCaptured(
                        success.image.image,
                        success.image.width,
                        success.image.height
                    )
                case .failure(let failure):
                    onFailed(failure.message)
                }
            }
    }
}

#endif
The entire file is wrapped in #if SMART_CAPTURE_ENABLED. When the flag is off, this file is invisible to the compiler — no import Document, no dependency on the framework.

How it works

  • DocumentSDK is an ObservableObject (held here as a @StateObject) that manages the camera session and document detection.
  • DocumentScannerConfig controls capture behaviour:
    • autoCaptureToggleConfig: .showDelayed(durationMs: 10_000) — shows a manual capture button after 10 seconds if auto-capture hasn’t triggered.
    • documentSide — which side of the document to capture (.front or .back).
    • documentType — classification hint (.passport, .idcard, .unknown, etc.).
  • sdk.mainView renders the camera viewfinder with real-time document detection overlays.
  • sdk.$documentScannerResult publishes when the SDK completes — either a success with image data or a failure with a message.
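As a usage sketch, capturing the back of an ID card only changes the initialiser arguments to the wrapper defined above (the closure bodies here are placeholders):

```swift
SmartCaptureDocumentView(
    documentSide: .back,       // capture the reverse of the document
    documentType: .idcard,     // classification hint for the scanner
    onCaptured: { imageData, width, height in
        print("Captured \(width)x\(height), \(imageData.count) bytes")
    },
    onFailed: { message in
        print("Capture failed: \(message)")
    }
)
```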

What you get vs the stub

| Feature | StubDocumentCameraView | SmartCaptureDocumentView |
| --- | --- | --- |
| Document edge detection | No | Yes |
| Auto-crop and perspective correction | No | Yes |
| Blur / glare / quality scoring | No | Yes |
| Guided capture overlay | No | Yes |
| Auto-capture on quality threshold | No | Yes |

Face Capture with Liveness

Selfie capture follows the same pattern as document capture: a thin SwiftUI wrapper around the SDK’s view that forwards the raw output to the caller, which builds a SelfieCaptureResult for the bridge slot. The wrapper handles SDK initialisation, runs the liveness flow, and surfaces failures as recoverable bridge errors. Create SmartCaptureFaceView.swift in the Sources/Capture/ group:
#if SMART_CAPTURE_ENABLED

import FaceCamera
import SwiftUI
import UIKit

struct SmartCaptureFaceView: View {
    let onCaptured: (UIImage, Data, Data) -> Void
    let onFailed: (String) -> Void
    let onCancelled: () -> Void

    var body: some View {
        FaceCameraSDK.controllerSwiftUIWrapper(
            delegate: FaceCameraDelegateHandler(
                onCaptured: onCaptured,
                onFailed: onFailed,
                onCancelled: onCancelled
            )
        )
    }
}

class FaceCameraDelegateHandler: NSObject, FaceCameraListenable {
    let onCaptured: (UIImage, Data, Data) -> Void
    let onFailed: (String) -> Void
    let onCancelled: () -> Void

    init(
        onCaptured: @escaping (UIImage, Data, Data) -> Void,
        onFailed: @escaping (String) -> Void,
        onCancelled: @escaping () -> Void
    ) {
        self.onCaptured = onCaptured
        self.onFailed = onFailed
        self.onCancelled = onCancelled
    }

    func didCapture(_ result: FaceCameraResult) {
        onCaptured(result.previewPhoto, result.encryptedBlob, result.unencryptedBlob)
    }

    func didEncounterError(_ error: FaceCameraError) {
        onFailed(error.description)
    }

    func didCancel() {
        onCancelled()
    }

    func didTapBack() {
        onCancelled()
    }
}

#endif

How it works

The FaceCamera SDK uses a delegate pattern instead of Combine publishers:
  • FaceCameraSDK.controllerSwiftUIWrapper(delegate:) returns a SwiftUI view wrapping the face capture controller.
  • FaceCameraDelegateHandler implements FaceCameraListenable and forwards each callback to a closure.
  • didCapture delivers three values:
    • previewPhoto — a UIImage for display in the app.
    • encryptedBlob — encrypted biometric data for server-side liveness verification.
    • unencryptedBlob — unencrypted biometric data.
  • didEncounterError fires when capture fails (e.g. camera hardware issue).
  • didCancel and didTapBack both fire when the user dismisses.

What you get vs the stub

| Feature | StubSelfieCameraView | SmartCaptureFaceView |
| --- | --- | --- |
| Face detection and positioning | No | Yes |
| Liveness detection | No | Yes (passive) |
| Guided selfie overlay | No | Yes |
| Encrypted biometric blobs | Placeholder (raw JPEG) | Real encrypted data |
| Server-side liveness validation | Fails | Passes |
The stub views return the raw JPEG data in both encryptedBlob and unencryptedBlob as a placeholder. This is structurally valid so the bridge protocol works, but it will not pass server-side liveness verification. The real FaceCamera SDK produces properly encrypted biometric data.
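In other words, the stub’s placeholder amounts to reusing one JPEG for the preview and both blobs — a sketch of that behaviour, not the stub’s verbatim code:

```swift
// capturedImage is a UIImage from the stub camera. One JPEG stands in for
// the preview AND both biometric blobs, so the bridge protocol is satisfied
// structurally even though the data is not real encrypted biometrics.
let jpeg = capturedImage.jpegData(compressionQuality: 0.85)!
let placeholder = SelfieCaptureResult(
    previewImageData: jpeg,
    width: Int(capturedImage.size.width),
    height: Int(capturedImage.size.height),
    encryptedBlob: jpeg,       // placeholder — fails server-side liveness checks
    unencryptedBlob: jpeg
)
```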

The Swap Pattern

Open JourneyView.swift. The camera view computed properties now use conditional compilation to choose between stubs and real SDKs:
private var documentCameraView: some View {
    Group {
        #if SMART_CAPTURE_ENABLED && !targetEnvironment(simulator)
        SmartCaptureDocumentView(
            onCaptured: { imageData, width, height in
                host.documentCapture.complete(.document(
                    DocumentCaptureResult(imageData: imageData, width: width, height: height)
                ))
            },
            onFailed: { message in
                host.documentCapture.complete(
                    .failed(code: "CAPTURE_FAILED", message: message, recoverable: true)
                )
            }
        )
        #else
        StubDocumentCameraView(
            onCaptured: { result in
                host.documentCapture.complete(.document(result))
            },
            onCancelled: {
                host.documentCapture.cancelIfBusy(reason: "User dismissed camera")
            }
        )
        #endif
    }
}
The selfie camera follows the same pattern:
private var selfieCameraView: some View {
    Group {
        #if SMART_CAPTURE_ENABLED && !targetEnvironment(simulator)
        SmartCaptureFaceView(
            onCaptured: { previewImage, encryptedBlob, unencryptedBlob in
                guard let imageData = previewImage.jpegData(compressionQuality: 0.85) else {
                    host.selfieCapture.complete(
                        .failed(code: "ENCODE_FAILED", message: "Failed to encode face preview image", recoverable: true)
                    )
                    return
                }
                host.selfieCapture.complete(.selfie(SelfieCaptureResult(
                    previewImageData: imageData,
                    width: Int(previewImage.size.width),
                    height: Int(previewImage.size.height),
                    encryptedBlob: encryptedBlob,
                    unencryptedBlob: unencryptedBlob
                )))
            },
            onFailed: { message in
                host.selfieCapture.complete(
                    .failed(code: "CAPTURE_FAILED", message: message, recoverable: true)
                )
            },
            onCancelled: {
                host.selfieCapture.cancelIfBusy(reason: "User cancelled face capture")
            }
        )
        #else
        StubSelfieCameraView(
            onCaptured: { result in
                host.selfieCapture.complete(.selfie(result))
            },
            onCancelled: {
                host.selfieCapture.cancelIfBusy(reason: "User dismissed camera")
            }
        )
        #endif
    }
}

Why #if instead of runtime switching?

Conditional compilation (#if) has three advantages over a runtime toggle:
  1. Zero overhead. When the flag is off, the Smart Capture code does not exist in the binary. There are no unused framework imports and no dead code.
  2. No accidental dependency. Without the flag, the project compiles without the Smart Capture frameworks. A runtime check would still require the frameworks to be linked.
  3. Clear separation. The #if/#else blocks make it obvious which code path runs in each configuration. Reviewers see the stub and real implementations side by side.

What didn’t change

Look at what surrounds the #if blocks — nothing changed:
  • The BridgeHost initialization is identical.
  • The handler assignment in configureHandlers() is identical.
  • The onChange listeners driving fullScreenCover presentation are identical.
  • The bridge protocol, message format, and server code are all identical.
This is the whole point of the architecture. The bridge integration layer is stable. Only the capture views swap.

Camera Permissions

Part 2 also adds permission state detection in configureHandlers():
private func configureHandlers() {
    let camera = CameraDetector.check()
    host.documentCapture.permissionState = camera.permissionState
    host.selfieCapture.permissionState = camera.permissionState

    // ... handler assignment (unchanged from Part 1)
}
CameraDetector.check() queries the device’s camera hardware availability and permission state. Setting permissionState on the typed slots means the bridge’s built-in capability.query handler can report accurate permission information to the web journey — allowing the journey to adapt its flow if camera access is denied or restricted.
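CameraDetector’s implementation isn’t shown in this tutorial. A minimal version built on AVFoundation might look like the sketch below — the tuple shape and the CameraPermissionState cases are assumptions for illustration, not the reference app’s actual types:

```swift
import AVFoundation
import UIKit

// Hypothetical permission model; the reference app's real types may differ.
enum CameraPermissionState {
    case granted, denied, notDetermined, unavailable
}

struct CameraDetector {
    static func check() -> (permissionState: CameraPermissionState, hasCamera: Bool) {
        // No camera hardware (e.g. Simulator): report unavailable.
        guard UIImagePickerController.isSourceTypeAvailable(.camera) else {
            return (.unavailable, false)
        }
        // Map the system authorization status to the bridge's permission state.
        switch AVCaptureDevice.authorizationStatus(for: .video) {
        case .authorized:           return (.granted, true)
        case .denied, .restricted:  return (.denied, true)
        case .notDetermined:        return (.notDetermined, true)
        @unknown default:           return (.notDetermined, true)
        }
    }
}
```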

Test on a Physical Device

  1. Make sure the companion server is running.
  2. Find your Mac’s local IP:
    ipconfig getifaddr en0
    
  3. Connect a physical iOS device and select it in Xcode.
  4. Press Cmd+R to build and run.
  5. On the Setup screen, enter http://<your-mac-ip>:3000 as the server URL.
  6. Tap Start Journey.
  7. When the journey requests a document capture, the SmartCapture document scanner appears — with a guided overlay, real-time edge detection, and auto-capture.
  8. When the journey requests a selfie, the FaceCamera SDK appears — with face positioning guidance and liveness detection.
On Simulator, stubs are used automatically. This is by design.

Common Pitfalls

Missing runtime dependencies

If you add FaceCamera.xcframework but forget IDLiveFaceCamera.xcframework or IDLiveFaceIAD.xcframework, the app crashes at launch:
dyld: Library not loaded: @rpath/IDLiveFaceCamera.framework/IDLiveFaceCamera
Add all four frameworks to fix this.

Flag not set

If you add the frameworks but forget to set SMART_CAPTURE_ENABLED in Active Compilation Conditions, the app compiles and runs — but silently uses stubs. There is no error. Check Build Settings if the Smart Capture views are not appearing.

Simulator with flag enabled

Even with SMART_CAPTURE_ENABLED set, the Simulator always uses stubs. The #if !targetEnvironment(simulator) condition ensures this. Smart Capture SDKs require real camera hardware.

Framework signing

All four frameworks must be set to Embed & Sign in the target’s Frameworks, Libraries, and Embedded Content. “Embed Without Signing” or “Do Not Embed” causes runtime crashes.

What’s Next

  • API Reference — Full documentation for BridgeHost, CaptureCapability, and result types.
  • Stub Camera Views — Details on the stubs and the swap pattern.
  • Capability Handling — Deep dive into typed slots, custom capabilities, and permission states.
  • NFC Reading — Add passport chip reading as a custom capability.