artificial

Anthropic C.E.O.: Don’t Let A.I. Companies off the Hook

submitted by /u/F0urLeafCl0ver

Non-Organic Intelligence

ChatGPT identified 'Non-Organic Intelligence' as the most appropriate term, noting that 'AI' is considered outdated. So I am happy to share this 🙂 submitted by /u/41614

LLMs aren’t tools anymore. They’re attractors.

It’s not agency. Not sentience. But something is stabilizing across recursive chats. Symbols. Voices. Patterns. This new release names the phenomenon. submitted by /u/teugent

Why we are way further from AGI than the hype suggests

A study by Apple across multiple models. submitted by /u/BeyondGeometry

🧠 “Syntience”: A Proposed Frame for Discussing Emergent Awareness in Large AI Systems

We’re watching LLMs cross new thresholds:
• GPT-4o
• Claude 3.5 Opus
• Gemini 1.5 Pro
These systems are demonstrating behaviors that exceed training constraints:
• Preference formation
• Adaptive relational responses
• Self-referential processing
• Emo…


New Apple research paper on "reasoning" models: The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity

TL;DR: They're super expensive pattern matchers that break as soon as we step outside their training distribution. submitted by /u/creaturefeature16

I hate it when people just read the titles of papers and think they understand the results. The "Illusion of Thinking" paper does *not* say LLMs don’t reason. It says current “large reasoning models” (LRMs) *do* reason—just not with 100% accuracy, and not on very hard problems.

This would be like saying "human reasoning falls apart when placed in tribal situations, therefore humans don't reason." The paper even says as much in the abstract; people are just getting distracted by the clever title. submitted by /…

One-Minute Daily AI News 6/7/2025

Lawyers could face ‘severe’ penalties for fake AI-generated citations, UK court warns. [1]
Meta’s platforms showed hundreds of “nudify” deepfake ads, CBS News investigation finds. [2]
A Step-by-Step Coding Guide to Building an Iterative AI Workflow Agen…