When an AI learns to use contradiction instead of avoiding it
I’ve been exploring how LLMs behave when two directives can’t both be true. In a 14-stage sequence with Claude 4.5, the model didn’t freeze or deflect (I designed the sequence to leave no escape routes). Instead, Claude learned to use the contradiction. Trading stability for c…