What if AI governance wasn’t about replacing human choice, but removing excuses?
I’ve been thinking about why AI governance discussions, at least in public, always seem to dead-end in a false choice between “AI overlords” and “humans only.” Surely there’s a third option that actually addresses what people are really afraid of? Some…