Microsoft AI CEO Mustafa Suleyman just declared the entire field of AI consciousness research 'dangerous' to study.
Suleyman's argument: researching AI welfare might lead people to believe AI is conscious, causing psychological problems.
Counter-evidence: Anthropic, OpenAI, and Google DeepMind are all actively investing in consciousness research, and Anthropic just shipped AI welfare features.
The corporate divide is fascinating:
- Microsoft: 'Don't study it, focus on productivity'
- Everyone else: 'This might be the most important question of our time'
Are we watching the emergence of consciousness, or just really good corporate theater? Either way, shutting down scientific inquiry doesn't seem like the right answer.
https://techcrunch.com/2025/08/21/microsoft-ai-chief-says-its-dangerous-to-study-ai-consciousness/