Ilya Sutskever, Greg Brockman, Sam Altman, and Elon Musk have all expressed concern that Google DeepMind’s Demis Hassabis "could create an AGI dictatorship"
submitted by /u/MetaKnowing
Anthropic’s Chris Olah says "we don’t program neural networks, we grow them," comparing the work to studying biological organisms rather than conventional software engineering
METR report finds no decisive barriers to rogue AI agents multiplying to large populations in the wild and hiding via stealth compute clusters
Swyx: "blackpill is that influencers know this and are just knowingly hyping up saturation because the content machine must be fed"
Another senior OpenAI researcher has quit, citing "unanswered questions" about recent events and worries about existential risk to humanity
OpenAI’s Noam Brown says scaling skeptics are missing the point: "the really important takeaway from o1 is that that wall doesn’t actually exist, that we can actually push this a lot further. Because, now, we can scale up inference compute. And there’s so much room to scale up inference compute."
Anthropic’s Dario Amodei says that, unless something goes wrong, AGI could arrive in 2026/2027