
Geoffrey Hinton says AI companies should be forced to spend a third of their compute on safety research into how we will stay in control, because AI is an existential threat and they're currently spending nearly all of their resources just making bigger models

submitted by /u/MetaKnowing

Former OpenAI board member Helen Toner testifies to the Senate: "I’ve heard from people in multiple companies … ‘Please help us slow down. Please give us guardrails that we can point to that are external, that help us not only be subject to these market pressures.’"

submitted by /u/MetaKnowing

OpenAI’s Head of AGI Readiness quits and issues a warning: "Neither OpenAI nor any other frontier lab is ready, and the world is also not ready" for AGI … "policymakers need to act urgently"

submitted by /u/MetaKnowing