An experiment tested whether AI can pass human identity verification systems

I found this experiment interesting because it doesn’t frame AI as “breaking” a system.

Instead, it treats AI as a new kind of participant interacting with infrastructure built around human assumptions: consistency, behavior, timing, and intent.

What stood out to me is that many identity systems aren’t verifying who someone is so much as how human they appear over time. That feels increasingly fragile when the actor on the other side isn’t human at all.

This doesn’t feel like a single vulnerability. It feels like a design mismatch.

Curious how people here think identity and verification should evolve in an AI-native world: better detection, new primitives, or abandoning certain assumptions entirely.

submitted by /u/fxstopo