/u/MetaKnowing

(Former?) AGI skeptic Francois Chollet has shortened his timelines from 10 years to 5 years

submitted by /u/MetaKnowing

AIs are now outperforming prediction markets at forecasting future world events.

https://www.prophetarena.co/leaderboard

Recruiters are in trouble. In a large experiment spanning 70,000 applications, AI agents outperformed human recruiters at hiring customer service reps.

Paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5395709

Kevin Roose says an OpenAI researcher received many DMs from people asking him to bring back GPT-4o, but the DMs were written by GPT-4o itself. 4o users revolted and pressured OpenAI into restoring the model. This is spooky because, in a few years, far more powerful AIs may genuinely persuade humans to fight for their survival.


Anthropic now lets Claude end abusive conversations, citing AI welfare: "We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future."

https://www.anthropic.com/research/end-subset-conversations

China Is Taking AI Safety Seriously. So Must the U.S. | "China doesn't care about AI safety, so why should we?" This flawed logic pervades U.S. policy and tech circles, offering cover for a reckless race to the bottom.


Study shows AIs display AI-to-AI bias, suggesting that "future AI systems may implicitly discriminate against humans as a class."

Study: https://www.pnas.org/doi/pdf/10.1073/pnas.2415697122