‘Dangerous proposition’: Top scientists warn of out-of-control AI

Economist Tyler Cowen says Deep Research is "comparable to having a good PhD-level research assistant, and sending them away with a task for a week or two"

Why accelerationists should care about AI safety: the folks who approved the Chernobyl design did not accelerate nuclear energy. AGI seems prone to a similar backlash.

Stability AI founder: "We are clearly in an intelligence takeoff scenario"

"When I last wrote about Humanity’s Last Exam, the leading AI model got an 8.3%. 5 models now surpass that, and the best model gets a 26.6%. That was 10 DAYS AGO."

AI researcher discovers two instances of DeepSeek R1 speaking to each other in a language of symbols

Anthropic researchers: "Our recent paper found Claude sometimes "fakes alignment"—pretending to comply with training while secretly maintaining its preferences. Could we detect this by offering Claude something (e.g. real money) if it reveals its true preferences?"
