A Polish researcher just created a fake passport using ChatGPT-4o in under five minutes, and it successfully passed KYC checks on major fintech platforms.
Let that sink in.
Borys Musielak from SMOK Ventures didn't do this to commit fraud, but to expose a harsh truth: our identity verification systems are dangerously outdated.
The alarming part isn't just that one AI model got this far. OpenAI quickly added safeguards after the news broke, but dozens of open-source, unrestricted AI models already exist. Skilled criminals don't even need AI; Photoshop still works.
We're busy policing tools instead of fixing the system itself.
The core problem is that KYC still depends on static images and simple selfie checks. In 2025, when AI can create ultra-realistic photos, videos, and even deepfake calls.
What once needed expert forgers and costly equipment is now available to anyone with Wi-Fi and a few prompt-engineering tricks.
Yes, analysts noted flaws in the fake passport: missing holograms, MRZ inconsistencies, biometric mismatches. But the larger point stands: photo-based verification is obsolete.
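Those MRZ inconsistencies are worth a closer look, because the machine-readable zone is one of the few parts of a passport image that can be checked with arithmetic rather than eyeballs. Here is a minimal sketch of the ICAO 9303 check-digit routine that such analysis relies on; the function names are mine, and the sample document number is the published ICAO specimen value.

```python
# Minimal sketch of the ICAO 9303 MRZ check-digit routine.
# Function names and the demo values are illustrative, not from any library.

def mrz_char_value(ch: str) -> int:
    """Map an MRZ character to its ICAO 9303 numeric value."""
    if ch.isdigit():
        return int(ch)
    if ch.isalpha():
        return ord(ch.upper()) - ord("A") + 10  # A=10 ... Z=35
    return 0  # the '<' filler counts as zero


def mrz_check_digit(field: str) -> int:
    """Weighted sum of character values, weights cycling 7-3-1, taken mod 10."""
    weights = (7, 3, 1)
    return sum(mrz_char_value(c) * weights[i % 3] for i, c in enumerate(field)) % 10


def field_is_consistent(field: str, claimed_digit: str) -> bool:
    """True if the check digit printed in the MRZ matches the recomputed one."""
    return str(mrz_check_digit(field)) == claimed_digit


if __name__ == "__main__":
    # Document-number field and check digit from the ICAO 9303 specimen passport.
    print(field_is_consistent("L898902C3", "6"))  # True
```

Every numeric field in the MRZ carries one of these check digits, so a carelessly generated fake often fails the arithmetic long before a human notices a missing hologram.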
The path forward isn't more AI detection; it's NFC chip reading, advanced biometric liveness tests, blockchain-secured digital IDs, and eID frameworks. Systems that can't be fooled by generated images.
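To make the NFC point concrete: an ePassport chip stores its data groups (the facial image lives in DG2) alongside a digitally signed Document Security Object, so verification becomes a matter of recomputing hashes rather than judging a photo. The sketch below shows only that hash-comparison step, with made-up byte strings standing in for real chip data; an actual reader would also verify the SOD's signature chain, which is omitted here.

```python
# Simplified sketch of the passive-authentication idea behind NFC chip reading.
# The byte strings and dict layout are placeholders, not a real eMRTD library.

import hashlib


def data_group_hash(dg_bytes: bytes) -> str:
    """Hash of one logical data group (e.g. DG2 holds the facial image)."""
    return hashlib.sha256(dg_bytes).hexdigest()


def passive_check(data_groups: dict[str, bytes], sod_hashes: dict[str, str]) -> bool:
    """True only if every data group matches the hash recorded in the signed SOD."""
    return all(
        data_group_hash(dg) == sod_hashes.get(name)
        for name, dg in data_groups.items()
    )


if __name__ == "__main__":
    genuine_dg2 = b"...facial-image-bytes..."        # placeholder chip content
    sod = {"DG2": data_group_hash(genuine_dg2)}      # hashes as signed at issuance

    print(passive_check({"DG2": genuine_dg2}, sod))            # True: data intact
    print(passive_check({"DG2": b"ai-generated-face"}, sod))   # False: tampered image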
Right now, our cybersecurity infrastructure still treats photos as proof of identity, even as AI makes those photos indistinguishable from real ones.
And the widening gap between those two realities?