This tutorial picks up where Part 1 left off. You already have a working iOS app that runs a GBG Go identity journey with stub camera views. Now you will replace those stubs with the real Smart Capture SDKs — adding guided document scanning, face capture with liveness detection, and encrypted biometric blobs. The change is small. The bridge architecture you built in Part 1 was designed so that swapping camera views is a minimal edit. The handler setup, `awaitCompletion()`/`complete()` pattern, and all bridge wiring stay the same. Only the views inside `fullScreenCover` change.
Reference app: The complete source code is on the `part-2-smart-capture` branch of the reference repository. To see exactly what changed from Part 1, diff the two branches.
Prerequisites
Everything from Part 1, plus:

| Requirement | Notes |
|---|---|
| Smart Capture SDKs | Four XCFramework bundles, obtained from your GBG account representative. These are not included in the repository. |
| Physical iOS device | Smart Capture SDKs require camera hardware. The Simulator falls back to stubs automatically. |
Required frameworks
Contact your GBG account representative to obtain these frameworks:

| Framework | Purpose |
|---|---|
| Document.xcframework | Document scanning with guided capture, auto-crop, and quality scoring |
| FaceCamera.xcframework | Face capture with liveness detection and encrypted biometric blobs |
| IDLiveFaceCamera.xcframework | Runtime dependency of FaceCamera |
| IDLiveFaceIAD.xcframework | Runtime dependency of FaceCamera |
Start from Part 1
Check out the `part-2-smart-capture` branch of the reference repository.
Add the Smart Capture SDKs
The Smart Capture SDKs ship as four `.xcframework` bundles. Drop them into the project, embed and sign them, then enable the compiler flag — the rest of this section walks through each step.
1. Place the frameworks
Copy all four `.xcframework` bundles into `GBGGoReference/Frameworks/`:
2. Add to Xcode
- Open `GBGGoReference/GBGGoReference.xcodeproj` in Xcode.
- Select the GBGGoReference target.
- Go to General > Frameworks, Libraries, and Embedded Content.
- Click +, then Add Other > Add Files.
- Select all four `.xcframework` bundles from the `Frameworks/` directory.
- Set each to Embed & Sign.
The project already includes `$(PROJECT_DIR)/Frameworks` in its framework search paths. You only need to add the frameworks to the target’s embedded content.
Enable the Compiler Flag
The Smart Capture integration is gated behind a compile-time flag called `SMART_CAPTURE_ENABLED`. This flag is off by default, so the app builds and runs with stubs even if you haven’t added the frameworks yet.
- In Xcode, select the GBGGoReference target.
- Go to Build Settings and make sure “All” is selected, not “Basic”.
- Search for Active Compilation Conditions (`SWIFT_ACTIVE_COMPILATION_CONDITIONS`).
- Add `SMART_CAPTURE_ENABLED` to both the Debug and Release configurations.
Document Capture with SmartCapture
Create `SmartCaptureDocumentView.swift` in the `Sources/Capture/` group. This is a thin SwiftUI wrapper around the Document SDK:
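A minimal sketch of the wrapper, assembled from the identifiers described in this section (`DocumentSDK`, `DocumentScannerConfig`, `mainView`, `$documentScannerResult`). The `start(config:)` call, the `capture.imageData` accessor, and `CaptureError` are illustrative assumptions; check the reference repository for the exact SDK signatures.

```swift
#if SMART_CAPTURE_ENABLED && !targetEnvironment(simulator)
import SwiftUI
import Document

/// Thin SwiftUI wrapper around the Document SDK. Sketch only:
/// start(config:), capture.imageData, and CaptureError are hypothetical.
struct SmartCaptureDocumentView: View {
    @StateObject private var sdk = DocumentSDK()
    let side: DocumentSide                      // .front or .back, from the journey request
    let onResult: (Result<Data, Error>) -> Void

    var body: some View {
        sdk.mainView                            // camera viewfinder with detection overlays
            .onAppear {
                sdk.start(config: DocumentScannerConfig(
                    // Show a manual capture button after 10 s if auto-capture hasn't fired.
                    autoCaptureToggleConfig: .showDelayed(durationMs: 10_000),
                    documentSide: side,
                    documentType: .unknown
                ))
            }
            .onReceive(sdk.$documentScannerResult.compactMap { $0 }) { result in
                switch result {
                case .success(let capture):
                    onResult(.success(capture.imageData))   // cropped, quality-scored image
                case .failure(let message):
                    onResult(.failure(CaptureError.scanFailed(message)))
                }
            }
    }
}
#endif
```

The result closure hands the image data to the same bridge slot the Part 1 stub used, so nothing downstream changes.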
The entire file is wrapped in `#if SMART_CAPTURE_ENABLED`. When the flag is off, this file is invisible to the compiler — no `import Document`, no dependency on the framework.
How it works
- `DocumentSDK` is a `@StateObject` that manages the camera session and document detection.
- `DocumentScannerConfig` controls capture behaviour:
  - `autoCaptureToggleConfig: .showDelayed(durationMs: 10_000)` — shows a manual capture button after 10 seconds if auto-capture hasn’t triggered.
  - `documentSide` — which side of the document to capture (`.front` or `.back`).
  - `documentType` — classification hint (`.passport`, `.idcard`, `.unknown`, etc.).
- `sdk.mainView` renders the camera viewfinder with real-time document detection overlays.
- `sdk.$documentScannerResult` publishes when the SDK completes — either a success with image data or a failure with a message.
What you get vs the stub
| Feature | StubDocumentCameraView | SmartCaptureDocumentView |
|---|---|---|
| Document edge detection | No | Yes |
| Auto-crop and perspective correction | No | Yes |
| Blur / glare / quality scoring | No | Yes |
| Guided capture overlay | No | Yes |
| Auto-capture on quality threshold | No | Yes |
Face Capture with Liveness
Selfie capture follows the same pattern as document capture: a SwiftUI wrapper around the SDK’s view that builds a `SelfieCaptureResult` from the raw output and forwards it to the bridge slot. The wrapper handles SDK initialisation, runs the liveness flow, and surfaces failures as recoverable bridge errors.
Create `SmartCaptureFaceView.swift` in the `Sources/Capture/` group:
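A sketch of the face-capture wrapper, using the delegate-based API named in this section (`FaceCameraSDK.controllerSwiftUIWrapper(delegate:)`, `FaceCameraDelegateHandler`, the `didCapture`/`didEncounterError`/`didCancel`/`didTapBack` callbacks). The closure-based initialiser and the `SelfieCaptureResult` fields shown here are assumptions; the real shapes are in the reference repository.

```swift
#if SMART_CAPTURE_ENABLED && !targetEnvironment(simulator)
import SwiftUI
import FaceCamera

/// Sketch of the selfie wrapper. The handler initialiser and
/// SelfieCaptureResult fields are illustrative, not the exact SDK API.
struct SmartCaptureFaceView: View {
    let onResult: (Result<SelfieCaptureResult, Error>) -> Void
    let onDismiss: () -> Void

    var body: some View {
        let handler = FaceCameraDelegateHandler(
            didCapture: { previewPhoto, encryptedBlob, unencryptedBlob in
                onResult(.success(SelfieCaptureResult(
                    preview: previewPhoto,            // UIImage for in-app display
                    encryptedBlob: encryptedBlob,     // for server-side liveness verification
                    unencryptedBlob: unencryptedBlob
                )))
            },
            didEncounterError: { error in
                onResult(.failure(error))             // surfaced as a recoverable bridge error
            },
            didCancel: { onDismiss() },               // user dismissed the flow
            didTapBack: { onDismiss() }               // back button behaves like cancel
        )
        FaceCameraSDK.controllerSwiftUIWrapper(delegate: handler)
    }
}
#endif
```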
How it works
The FaceCamera SDK uses a delegate pattern instead of Combine publishers:
- `FaceCameraSDK.controllerSwiftUIWrapper(delegate:)` returns a SwiftUI view wrapping the face capture controller.
- `FaceCameraDelegateHandler` implements `FaceCameraListenable` and forwards each callback to a closure.
- `didCapture` delivers three values:
  - `previewPhoto` — a `UIImage` for display in the app.
  - `encryptedBlob` — encrypted biometric data for server-side liveness verification.
  - `unencryptedBlob` — unencrypted biometric data.
- `didEncounterError` fires when capture fails (e.g. a camera hardware issue).
- `didCancel` and `didTapBack` both fire when the user dismisses.
What you get vs the stub
| Feature | StubSelfieCameraView | SmartCaptureFaceView |
|---|---|---|
| Face detection and positioning | No | Yes |
| Liveness detection | No | Yes (passive) |
| Guided selfie overlay | No | Yes |
| Encrypted biometric blobs | Placeholder (raw JPEG) | Real encrypted data |
| Server-side liveness validation | Fails | Passes |
The stub views return the raw JPEG data in both `encryptedBlob` and `unencryptedBlob` as a placeholder. This is structurally valid, so the bridge protocol works, but it will not pass server-side liveness verification. The real FaceCamera SDK produces properly encrypted biometric data.
The Swap Pattern
Open `JourneyView.swift`. The camera view computed properties now use conditional compilation to choose between stubs and real SDKs:
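The shape of the swap, sketched under the assumption that the typed slot exposes a `complete(with:)` method (consistent with the `awaitCompletion()`/`complete()` pattern from Part 1). The property and slot names here are illustrative:

```swift
// In JourneyView.swift: the computed property picks the camera view at
// compile time. Names are illustrative; only the #if structure is the point.
private var documentCameraView: some View {
    #if SMART_CAPTURE_ENABLED && !targetEnvironment(simulator)
    // Real SDK path: guided capture, auto-crop, quality scoring.
    SmartCaptureDocumentView(side: requestedSide) { result in
        bridge.documentSlot.complete(with: result)   // same bridge slot as Part 1
    }
    #else
    // Stub path: Simulator builds, or the flag is off.
    StubDocumentCameraView { result in
        bridge.documentSlot.complete(with: result)   // identical call path
    }
    #endif
}
```

The selfie property follows the same shape with `SmartCaptureFaceView` and `StubSelfieCameraView`.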
Why #if instead of runtime switching?
Conditional compilation (#if) has three advantages over a runtime toggle:
- Zero overhead. When the flag is off, the Smart Capture code does not exist in the binary. There are no unused framework imports and no dead code.
- No accidental dependency. Without the flag, the project compiles without the Smart Capture frameworks. A runtime check would still require the frameworks to be linked.
- Clear separation. The `#if`/`#else` blocks make it obvious which code path runs in each configuration. Reviewers see the stub and real implementations side by side.
What didn’t change
Look at what surrounds the `#if` blocks — nothing changed:
- The `BridgeHost` initialization is identical.
- The handler assignment in `configureHandlers()` is identical.
- The `onChange` listeners driving `fullScreenCover` presentation are identical.
- The bridge protocol, message format, and server code are all identical.
Camera Permissions
Part 2 also adds permission state detection in `configureHandlers()`:
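A sketch of that wiring. `CameraDetector.check()` and `permissionState` are named in this section; the slot names and the shape of `configureHandlers()` are assumptions:

```swift
// Inside configureHandlers() — slot names are illustrative.
func configureHandlers() {
    // Query camera hardware availability and permission state once.
    let cameraState = CameraDetector.check()

    // Both capture slots report the same camera permission state.
    bridge.documentSlot.permissionState = cameraState
    bridge.selfieSlot.permissionState = cameraState

    // The bridge's built-in capability.query handler now returns this
    // state to the web journey, which can adapt if access is denied.
}
```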
`CameraDetector.check()` queries the device’s camera hardware availability and permission state. Setting `permissionState` on the typed slots means the bridge’s built-in `capability.query` handler can report accurate permission information to the web journey — allowing the journey to adapt its flow if camera access is denied or restricted.
Test on a Physical Device
- Make sure the companion server is running.
- Find your Mac’s local IP:
- Connect a physical iOS device and select it in Xcode.
- Press Cmd+R to build and run.
- On the Setup screen, enter `http://<your-mac-ip>:3000` as the server URL.
- Tap Start Journey.
- When the journey requests a document capture, the SmartCapture document scanner appears — with a guided overlay, real-time edge detection, and auto-capture.
- When the journey requests a selfie, the FaceCamera SDK appears — with face positioning guidance and liveness detection.
Common Pitfalls
Missing runtime dependencies
If you add `FaceCamera.xcframework` but forget `IDLiveFaceCamera.xcframework` or `IDLiveFaceIAD.xcframework`, the app crashes at launch with a dyld “Library not loaded” error.
Flag not set
If you add the frameworks but forget to set `SMART_CAPTURE_ENABLED` in Active Compilation Conditions, the app compiles and runs — but silently uses stubs. There is no error. Check Build Settings if the Smart Capture views are not appearing.
Simulator with flag enabled
Even with `SMART_CAPTURE_ENABLED` set, the Simulator always uses stubs. The `#if !targetEnvironment(simulator)` condition ensures this. Smart Capture SDKs require real camera hardware.
Framework signing
All four frameworks must be set to Embed & Sign in the target’s Frameworks, Libraries, and Embedded Content. “Embed Without Signing” or “Do Not Embed” causes runtime crashes.
What’s Next
- API Reference — Full documentation for `BridgeHost`, `CaptureCapability`, and result types.
- Stub Camera Views — Details on the stubs and the swap pattern.
- Capability Handling — Deep dive into typed slots, custom capabilities, and permission states.
- NFC Reading — Add passport chip reading as a custom capability.