I hate it when people just read the titles of papers and think they understand the results. The "Illusion of Thinking" paper does š˜Æš˜°š˜µ say LLMs don’t reason. It says current ā€œlarge reasoning modelsā€ (LRMs) š˜„š˜° reason—just not with 100% accuracy, and not on very hard problems.

This would be like saying "human reasoning falls apart when placed in tribal situations, therefore humans don't reason."

It even says so in the abstract. People are just getting distracted by the clever title.

submitted by /u/katxwoods