We added cryptographic approval to our AI agent… and it was still unsafe

We’ve been working on adding “authorization” to an AI agent system.

At first, it felt solved:

- every action gets evaluated

- we get a signed ALLOW / DENY

- we verify the signature before execution

Looks solid, right?
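
Roughly, that first version looked like the sketch below (HMAC standing in for our real asymmetric signatures; all names are illustrative):

```python
import hmac
import hashlib
import json

KEY = b"shared-secret"  # stand-in for the approver's real signing key

def sign_decision(decision: str) -> bytes:
    # The approver signs only the verdict string -- nothing about the
    # action, the state it was evaluated against, or the target service.
    return hmac.new(KEY, decision.encode(), hashlib.sha256).digest()

def execute_if_allowed(action: dict, decision: str, sig: bytes) -> None:
    expected = hmac.new(KEY, decision.encode(), hashlib.sha256).digest()
    # This check passes for ANY action, as long as someone, at some point,
    # got an "ALLOW" signed.
    if decision == "ALLOW" and hmac.compare_digest(sig, expected):
        print("executing", json.dumps(action))  # stand-in for the real executor
```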

It wasn’t.

We hit a few problems almost immediately:

  1. The approval wasn’t bound to the actual execution

The same “ALLOW” could be reused for a slightly different action (fix sketches for each of these problems follow the list).

  2. No state binding

Approval was issued when state = X

Execution happened when state = Y

Still passed verification.

  3. No audience binding

An approval for service A could be replayed against service B.

  4. Replay wasn’t actually enforced at the boundary

Even with nonces, enforcement wasn’t happening where execution happens.
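
Here’s roughly where we landed on each. For (1), the approval has to commit to the exact action: the approver signs a digest of the canonicalized action, and the executor recomputes that digest over the action it is actually about to run, so a “slightly different” action no longer verifies. A minimal sketch (our canonicalization, not a standard one):

```python
import hashlib
import json

def action_digest(action: dict) -> str:
    # Canonical JSON (sorted keys, no whitespace) so the same logical
    # action always produces the same digest.
    canonical = json.dumps(action, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# The approver signs {"decision": "ALLOW", "action_sha256": action_digest(a)};
# the executor recomputes the digest and rejects on any mismatch.
```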
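
For (2), the same idea applied to state: the approval carries a digest of the state it was issued against, and the executor rechecks it at execution time. A sketch (`state_sha256` is our field name, and what counts as “state” is system-specific):

```python
import hashlib
import json

def state_digest(state: dict) -> str:
    canonical = json.dumps(state, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def check_state(approval: dict, state: dict) -> None:
    # An approval issued when state = X must not execute when state = Y.
    if approval["state_sha256"] != state_digest(state):
        raise PermissionError("state changed since approval was issued")
```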
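
For (3), every approval names the one target it is valid for, and every service rejects approvals addressed to anyone else, the same move JWTs make with the `aud` claim. Sketch:

```python
SERVICE_ID = "service-a"  # this executor's own identity (illustrative)

def check_audience(approval: dict) -> None:
    # An approval minted for service A must die at service B's boundary.
    if approval.get("aud") != SERVICE_ID:
        raise PermissionError("approval was not addressed to this service")
```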
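
And for (4), the nonce bookkeeping has to live in the same component that executes, not somewhere upstream. A single-process sketch (a real deployment would need a shared, persistent seen-nonce store):

```python
import time

_seen_nonces: dict[str, float] = {}  # nonce -> expiry, held by the executor itself

def check_replay(approval: dict) -> None:
    now = time.time()
    # Evict expired entries so the set stays bounded.
    for nonce in [n for n, exp in _seen_nonces.items() if exp < now]:
        del _seen_nonces[nonce]
    if now > approval["exp"]:
        raise PermissionError("approval expired")
    if approval["nonce"] in _seen_nonces:
        raise PermissionError("approval already spent")
    _seen_nonces[approval["nonce"]] = approval["exp"]
```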

So what we had was:

a signed decision

What we needed was:

a verifiable execution contract
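
Concretely, our contract signs all of those bindings together, and the executor verifies them atomically at the moment of execution. A sketch reusing the helpers from the snippets above (same HMAC stand-in; field names are ours):

```python
import hmac
import hashlib
import json
import os
import time

KEY = b"shared-secret"  # stand-in for real asymmetric keys

def _payload(body: dict) -> bytes:
    return json.dumps(body, sort_keys=True, separators=(",", ":")).encode()

def mint_contract(action: dict, state: dict, audience: str) -> dict:
    body = {
        "decision": "ALLOW",
        "action_sha256": action_digest(action),  # exact intent
        "state_sha256": state_digest(state),     # issuance state
        "aud": audience,                         # execution target
        "nonce": os.urandom(16).hex(),           # single use
        "exp": time.time() + 300,                # short-lived
    }
    return {"body": body,
            "sig": hmac.new(KEY, _payload(body), hashlib.sha256).hexdigest()}

def enforce(contract: dict, action: dict, state: dict) -> None:
    body = contract["body"]
    expected = hmac.new(KEY, _payload(body), hashlib.sha256).hexdigest()
    if not (body["decision"] == "ALLOW"
            and hmac.compare_digest(contract["sig"], expected)):
        raise PermissionError("bad signature or verdict")
    if body["action_sha256"] != action_digest(action):
        raise PermissionError("action drifted from what was approved")
    check_state(body, state)   # still the state it was issued against?
    check_audience(body)       # addressed to this service?
    check_replay(body)         # unspent and unexpired?
    # Only now does anything actually run.
```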

The difference is subtle but critical:

- “Was this approved?” -> audit question

- “Can this execute?” -> enforcement question

Most systems answer the first one.

Very few actually enforce the second one.

Curious how others are thinking about this.

Are you binding approvals to:

- exact intent?

- execution state?

- execution target?

Or are you just verifying signatures and hoping it lines up?
