/u/katxwoods

"The Illusion of Thinking" paper is just a sensationalist title. It shows the limits of LLM reasoning, not the lack of it.

I hate it when people just read the titles of papers and think they understand the results. The "Illusion of Thinking" paper does *not* say LLMs don't reason. It says current "large reasoning models" (LRMs) *do* reason, just not with 100% accuracy, and not on very hard problems.

This would be like saying "human reasoning falls apart when placed in tribal situations, therefore humans don't reason." It even says so in the abstract. People are just getting distracted by the clever title.

AI Is Learning to Escape Human Control – Models rewrite code to avoid being shut down. That's why alignment is a matter of such urgency.

What does Demis Hassabis worry about? "One is that bad actors … repurpose these systems for harmful ends. The second thing is the AI systems themselves … can we make sure that we can keep control of the systems?"

Have you ever failed the Turing test? (aka somebody online thought you were a bot)
