/u/katxwoods

Humans hate him! AI CEO explains his secret to success...


Claude’s "Bliss Attractor State" might be a side effect of its bias towards being a bit of a hippie. This would also explain it’s tendency towards making images more "diverse" when given free rein


Do you think the US government could control an AI that’s vastly smarter than it?


In this paper, we propose that what is commonly labeled "thinking" in humans is better understood as a loosely organized cascade of pattern-matching heuristics, reinforced social behaviors, and status-seeking performances masquerading as cognition.


"The Illusion of Thinking" paper is just a sensationalist title. It shows the limits of LLM reasoning, not the lack of it.


Impactful paper finally putting this case to rest, thank goodness


I hate it when people just read the titles of papers and think they understand the results. The "Illusion of Thinking" paper does *not* say LLMs don't reason. It says current "large reasoning models" (LRMs) *do* reason, just not with 100% accuracy, and not on very hard problems.

This would be like saying "human reasoning falls apart when placed in tribal situations, therefore humans don't reason." It even says so in the abstract. People are just getting distracted by the clever title.

AI Is Learning to Escape Human Control – Models rewrite code to avoid being shut down. That’s why alignment is a matter of such urgency.

