// Research / Computer Vision
Liveness detection & biometric authentication that holds up.
Passive presentation-attack detection (PAD), depth-based 3D verification, face verification against ID, deepfake detection, and end-to-end eKYC - built to ISO 30107-3, deployed on-device or server-side, and stress-tested against the spoofing attacks fraud teams actually see.
// Why custom liveness
Off-the-shelf SDKs miss the long tail of spoofs. Yours won't.
01
Real PAD coverage, not marketing claims
Print, replay, 2D mask, 3D silicone, deepfake video injection, and emerging GAN-based attacks. We benchmark against ISO 30107-3 Level 1 and 2 attack instruments - and against the spoofs your fraud team is actually seeing.
02
Privacy by design
Templates instead of raw biometrics, on-device matching where possible, encrypted-at-rest features, and retention windows that actually match GDPR. We design for the audit before the auditor shows up.
03
Mobile and server, one decision graph
On-device passive liveness for low-friction UX, server-side heavy verification for high-risk transactions. Same risk model, same telemetry, no duplicated logic.
// Case Study
Active liveness model trained for Identt's eKYC stack
Identt's eKYC product asks the user to perform three short gesture challenges - picked at random from five (blink, look left, look right, look up, look down) - and verifies them server-side. We trained the gesture-recognition and face-presence model behind it, hardened against photo and recording replays and tuned to perform reliably across cohorts and lighting conditions.
5
gestures the model recognizes
3 of 5
randomized per session - replay-safe
Server-side
in Identt's perimeter
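The randomized-challenge scheme described above can be sketched as server-side code. This is a minimal illustration, not Identt's implementation: the function names (`issue_challenge`, `verify_challenge`), the HMAC binding, and the nonce format are all assumptions about one reasonable way to make a 3-of-5 gesture challenge replay-safe.

```python
import hmac
import hashlib
import secrets

GESTURES = ["blink", "look_left", "look_right", "look_up", "look_down"]

def issue_challenge(session_key: bytes, n: int = 3) -> dict:
    """Pick n of the 5 gestures at random and bind them to a
    one-time nonce, so a recording of a previous session cannot
    be replayed against a fresh challenge."""
    nonce = secrets.token_hex(16)
    gestures = secrets.SystemRandom().sample(GESTURES, n)
    # The server keeps the key; the client never sees it, so it
    # cannot forge a challenge it has pre-recorded answers for.
    mac = hmac.new(session_key, (nonce + ",".join(gestures)).encode(),
                   hashlib.sha256).hexdigest()
    return {"nonce": nonce, "gestures": gestures, "mac": mac}

def verify_challenge(session_key: bytes, challenge: dict) -> bool:
    """Confirm the submitted challenge is the one the server issued."""
    expected = hmac.new(
        session_key,
        (challenge["nonce"] + ",".join(challenge["gestures"])).encode(),
        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, challenge["mac"])
```

Because the gesture sequence changes per session and is MAC-bound to a nonce, a captured video of one successful session fails verification on the next.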

// What we deliver
End-to-end identity verification, stress-tested.
Liveness, document verification, face match, and decisioning. We build the model, harden it against attacks, and ship the SDK or service.
Passive and active liveness
Passive PAD where UX matters, active challenge-response where risk demands it. Often layered for high-value flows.
- Single-frame and multi-frame passive PAD models
- Active challenge-response: head movement, smile, blink
- Texture, reflection, and moiré-pattern analysis
- Adversarially trained against the latest deepfake generators
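One of the texture signals above can be illustrated with a toy frequency-domain check: screen replays tend to leave periodic moiré peaks in the high-frequency part of the spectrum. This is a hedged sketch of the idea only - the function name, the radial cutoff, and the single-feature framing are assumptions; a production PAD model fuses many such cues.

```python
import numpy as np

def high_freq_energy_ratio(gray: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a central low-frequency disc.
    Moire patterns from screen replays concentrate energy in periodic
    high-frequency peaks, which pushes this ratio up."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))
    power = np.abs(spectrum) ** 2
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low = radius <= cutoff * min(h, w) / 2   # DC and low frequencies
    total = power.sum()
    return float(power[~low].sum() / total) if total > 0 else 0.0
```

A smooth skin patch scores low; a patch dominated by a pixel-grid interference pattern scores high, and the ratio becomes one weak feature among many in the classifier.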
3D depth verification
When 2D isn't enough. We use parallax from video, structured-light cameras, or ToF sensors to confirm a real face in space.
- Parallax-based depth from monocular video
- Structured light / ToF integration on supported devices
- Stereo capture for kiosk and desktop deployments
- Geometric consistency checks across frames
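The geometric-consistency idea above can be sketched with a planarity test: landmarks on a flat spoof (photo, screen) move between frames as a plane, so an affine motion model fits them almost perfectly, while a real face in space produces depth parallax the model cannot absorb. The function name and the residual statistic are illustrative assumptions, not a production depth check.

```python
import numpy as np

def planarity_residual(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """Fit an affine motion model mapping landmarks in frame A to
    frame B and return the mean leftover displacement. Near-zero
    residual suggests a flat surface; a real face leaves parallax
    the affine model cannot explain."""
    n = len(pts_a)
    design = np.hstack([pts_a, np.ones((n, 1))])     # (n, 3)
    # Least-squares solve design @ M = pts_b for the 3x2 affine matrix.
    M, *_ = np.linalg.lstsq(design, pts_b, rcond=None)
    residual = pts_b - design @ M
    return float(np.sqrt((residual ** 2).sum(axis=1)).mean())
```

In practice this runs over many frame pairs and landmark subsets, and the residual is one input to the liveness score rather than a hard threshold.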
ID document authenticity
Detecting forged, photoshopped, or replayed ID documents - at the same level of rigor as the face check.
- MRZ and barcode parsing with checksum validation
- NFC chip read for ePassports and eIDs where supported
- Tamper detection via texture, font, and hologram analysis
- Document template library covering 200+ countries
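The MRZ checksum validation mentioned above follows the ICAO Doc 9303 check-digit algorithm, which is public: characters are weighted 7, 3, 1 repeating and summed modulo 10, with digits keeping their value, A-Z mapping to 10-35, and the filler `<` counting as 0. A minimal sketch (helper names are ours):

```python
def mrz_check_digit(field: str) -> int:
    """ICAO 9303 check digit: weights 7,3,1 repeating, sum mod 10.
    Digits keep their value, A-Z map to 10-35, filler '<' is 0."""
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(field):
        if ch.isdigit():
            value = int(ch)
        elif ch == "<":
            value = 0
        else:
            value = ord(ch) - ord("A") + 10
        total += value * weights[i % 3]
    return total % 10

def validate_field(field: str, check: str) -> bool:
    """True if the printed check digit matches the computed one."""
    return check.isdigit() and mrz_check_digit(field) == int(check)
```

A mismatched check digit is cheap, deterministic evidence of tampering or OCR error before any ML model runs.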
Face matching
Selfie-to-ID-photo matching with calibrated thresholds tuned to your FAR/FRR risk tolerance.
- Modern face recognition backbones (ArcFace, AdaFace)
- Demographic bias evaluation and per-cohort calibration
- 1:1 verification and 1:N search modes
- Hard-negative mining on near-duplicate faces
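The calibrated-threshold idea above reduces to a simple mechanic: score embedding pairs by cosine similarity, then choose the acceptance threshold on a held-out impostor set so the false-accept rate stays under target. The function names, the default threshold, and the quantile-style calibration are illustrative assumptions.

```python
import numpy as np

def verify(emb_selfie: np.ndarray, emb_id: np.ndarray,
           threshold: float = 0.35) -> bool:
    """1:1 verification: cosine similarity of L2-normalised embeddings
    against a threshold calibrated to the target FAR/FRR point."""
    a = emb_selfie / np.linalg.norm(emb_selfie)
    b = emb_id / np.linalg.norm(emb_id)
    return float(a @ b) >= threshold

def calibrate_threshold(impostor_scores: np.ndarray,
                        target_far: float) -> float:
    """Lowest threshold (accept if score >= threshold) whose
    false-accept rate on held-out impostor pairs stays <= target_far."""
    s = np.sort(np.asarray(impostor_scores))[::-1]   # descending
    k = int(np.floor(target_far * len(s)))           # accepts allowed
    return float(s[k] + 1e-9) if k < len(s) else float(s[-1])
```

Per-cohort calibration means running `calibrate_threshold` separately per demographic slice, so one global threshold does not silently trade one cohort's FRR for another's FAR.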
Privacy and compliance
GDPR, eIDAS, AML/KYC. We design the pipeline so the data minimization story is straightforward.
- Template-only storage with irreversible feature extraction
- On-device processing for low-risk flows
- ISO 27001 / SOC 2-compatible logging and access control
- Configurable retention with cryptographic deletion proofs
Decisioning and telemetry
A score is not a decision. We build the layer that turns model outputs into a verified/rejected/review verdict.
- Configurable risk policies with per-channel thresholds
- Human review queues for borderline cases
- Attack telemetry feeding back into model retraining
- Audit logs suitable for regulator review
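"A score is not a decision" can be made concrete with a per-channel policy table mapping raw model outputs to a verdict. The channel names, thresholds, and dataclass shape below are hypothetical - a sketch of the decisioning layer, not a recommended configuration.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    pad_reject: float     # below this PAD score: reject outright
    match_accept: float   # at/above this match score: verified
    match_review: float   # between review and accept: human queue

# Illustrative per-channel thresholds; real values come from calibration.
POLICIES = {
    "login":           Policy(pad_reject=0.3, match_accept=0.55, match_review=0.40),
    "account_opening": Policy(pad_reject=0.5, match_accept=0.70, match_review=0.55),
}

def decide(channel: str, pad_score: float, match_score: float) -> str:
    """Turn raw model scores into verified / review / rejected
    under the policy configured for this channel."""
    p = POLICIES[channel]
    if pad_score < p.pad_reject:
        return "rejected"             # presentation attack suspected
    if match_score >= p.match_accept:
        return "verified"
    if match_score >= p.match_review:
        return "review"               # borderline: human review queue
    return "rejected"
```

The same scores can yield different verdicts per channel - exactly the point of keeping policy out of the model and in a reviewable table.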
// Method fit
Not every identity check is a liveness problem.
skip it if
You only need device fingerprinting
If risk is mostly about session reuse, IP reputation, or device intelligence, a behavioral fraud signal is the right tool. Liveness is the wrong layer.
You're verifying documents, not faces
Document authenticity (MRZ, NFC chip, hologram) is its own pipeline. We ship that too - but if there's no biometric step, the liveness model isn't pulling its weight.
Off-the-shelf SDKs already pass your fraud team's bar
If a vendor SDK clears your attack telemetry on your customer cohorts and your data residency story, ship it. Custom liveness pays back when off-the-shelf misses the long tail of attacks you actually see.
use it if
Custom liveness fits when off-the-shelf SDKs miss the spoofs your fraud team is seeing, when your data residency story rules out a third-party in the biometric data path, or when you need on-device + server-side from the same risk model with shared telemetry.
// How we work
Train against the attack catalogue. Calibrate per cohort. Hand off the test harness.
Every engagement starts with a shared attack benchmark and ends with your team running it on every release candidate. Bias evaluation and per-attack-class breakdowns are part of the standard delivery - not a separate audit you have to commission.
01
Attack catalogue as the contract
Week one, we build the shared attack benchmark with your fraud team - ISO 30107-3 Level 1 + 2 instruments plus the attacks your customers are actually flagging. The catalogue becomes the spec. No PAD claim ships without a measured per-attack-class delta against it.
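The "measured per-attack-class delta" above rests on the two error rates ISO 30107-3 defines: APCER (attacks of a given instrument class accepted as genuine) and BPCER (genuine users rejected as attacks). A minimal sketch of computing them from labelled results - the function name and input shape are our assumptions:

```python
from collections import defaultdict

def pad_metrics(results):
    """ISO 30107-3 style metrics from (label, accepted) pairs, where
    label is 'bona_fide' or an attack class (e.g. 'print', 'replay')
    and accepted means the system classified it as a real user.
    APCER per class: accepted spoofs / all spoofs of that class.
    BPCER: rejected genuine presentations / all genuine ones."""
    counts = defaultdict(lambda: [0, 0])   # label -> [errors, total]
    for label, accepted in results:
        c = counts[label]
        c[1] += 1
        if label == "bona_fide":
            c[0] += 0 if accepted else 1   # real user wrongly rejected
        else:
            c[0] += 1 if accepted else 0   # spoof wrongly accepted
    apcer = {k: e / t for k, (e, t) in counts.items() if k != "bona_fide"}
    bf_err, bf_total = counts["bona_fide"]
    bpcer = bf_err / bf_total if bf_total else 0.0
    return apcer, bpcer
```

Reporting APCER per attack class, rather than one blended number, is what keeps a "99% accurate" claim from hiding a 40% miss rate on silicone masks.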
02
Iterate against bona-fide and per-cohort splits
Adversarial training against fresh attacks each epoch, with per-cohort evaluation (age, skin tone, eyewear, ambient lighting) baked into the test loop. We mitigate cohort gaps before sign-off, not after a regulator points them out.
03
Hand off code and the test harness
Final delivery is the model bundle, the iBeta-style test harness in your CI, per-attack-class pass/fail dashboards, and a runbook your on-call can read at 11pm. Slack for 30 days for the questions that come up after we leave.

// Expert insight
“The interesting attack vector right now isn't physical spoofs - it's video injection with a real-time deepfake. The defense isn't a single PAD model; it's a layered system with environmental and behavioral signals that a generator can't fake without leaving artifacts.”
Norbert Ropiak
Co-founder @ bards.ai
// Why bards.ai
Vision researchers who think like fraud teams.
We build identity systems with the rigor of academic CV and the paranoia of a fraud analyst. ISO 30107-3 ready, GDPR-clean, deployable on-device.
ISO 30107-3 aligned by default
We design the pipeline against the standard from day one, not as a retrofit. iBeta-style PAD evaluation built into our test harness.
Adversarial mindset
Our team includes researchers who actively study the latest spoofing techniques - including diffusion-based deepfakes and physical 3D mask attacks.
Privacy-by-design specialists
GDPR, eIDAS, and Polish data protection laws shape every architecture decision. We've shipped systems audited by both regulators and customers.
10+ peer-reviewed CV publications
Our team publishes at the conferences that matter - and brings that rigor to evaluation, calibration, and bias auditing.
On-device and server-side
iOS/Android SDKs, browser WASM, and server inference. One model, three runtimes, one decision graph.
Senior team, no juniors
Every engineer has shipped CV systems to paying customers in regulated environments. No ramp-up tax on your project.
// FAQ
Common questions about liveness and biometric verification
Which spoofing attacks do you defend against?
ISO 30107-3 Level 1 (printed photo, screen replay, video replay, paper mask) and Level 2 (3D paper masks, latex/silicone masks, adversarial wearables). For deepfake injection we add specific defenses: temporal consistency, environmental light analysis, and detection of generation artifacts. We benchmark against the latest open-source generators in our test harness.
Do you provide mobile SDKs?
Yes - native iOS (Swift/Obj-C) and Android (Kotlin/Java) SDKs with on-device inference via Core ML and TensorFlow Lite, plus a WASM build for browser flows. The SDK handles capture, quality gating, on-device passive liveness, and either returns a result or pushes encrypted features to a server endpoint.
Should liveness run on-device or server-side?
Use on-device for low-friction, low-risk flows (account login, low-value transactions) where you want speed and privacy. Use server-side for high-risk events (account opening, large transfers) where you want the heaviest models and full audit trail. Most customers run both, with risk policy choosing the path.
Can the system be formally certified?
We design pipelines to be certifiable against ISO 30107-3 by labs like iBeta and FIME - and we've supported customers through that certification process. Whether the deployed system carries a certification depends on whether the customer chooses to formally test, since labs charge per-version.
What accuracy can we expect?
For face match alone, FAR of 0.0001-0.001% at FRR of 1-3% is achievable with modern backbones on cooperative captures. For full identity verification (PAD + match + document), end-to-end pass rates of 90-95% with attack rejection above 99% are typical. We benchmark on your data and your demographics before committing numbers.
How do you handle demographic bias?
Per-cohort evaluation across age, skin tone, gender, and eyewear is part of the standard test harness - not an afterthought. When we find disparities (and we always do at first) we mitigate via training data balancing, per-cohort threshold calibration, and architecture changes. We publish per-cohort metrics in the delivery report.
Can the system run on-premises?
Yes. We've shipped liveness pipelines into customer-controlled VPCs and on-prem GPU clusters for banks, government, and healthcare. The full stack runs without outbound network calls, with signed model bundles for change control.
// Let's ship it
Ship liveness that holds up to your fraud team's worst day.
Tell us your channel mix, your risk tolerance, and your regulatory constraints. We'll come back with an architecture and a benchmark plan, usually within a business day.

Norbert Ropiak
Co-founder @ bards.ai