In April 2026, Anthropic announced Claude Mythos and declined to release it publicly, making it the first major AI model withheld on capability grounds since GPT-2. The governance question this raises has not been seriously addressed: who decides who gets access to this kind of capability, and what recourse does everyone else have? This is a working paper proposing a framework for a collectively governed defensive AI system: architecturally constrained, governed by multiple stakeholders, and capable of operating at parity with the threat. It is not a product pitch, but a point of departure for people who think the current arrangement is unstable.