Reasoning skills of large language models are often overestimated
New CSAIL research highlights how LLMs excel in familiar scenarios but struggle in novel ones, raising questions about whether the models truly reason or merely rely on memorization.
LLMs trained primarily on text can generate complex visual concepts through code with self-correction. Researchers used these illustrations to train an image-free computer vision system to recognize real photos.
The method uses language-based inputs instead of costly visual data to direct a robot through a multistep navigation task.
A new approach could streamline virtual training processes or aid clinicians in reviewing diagnostic videos.
A new “consensus game,” developed by MIT CSAIL researchers, elevates AI’s text comprehension and generation skills.
Associate Professor Jonathan Ragan-Kelley optimizes how computer graphics and images are processed for the hardware of today and tomorrow.
Three neurosymbolic methods help language models find better abstractions within natural language, then use those representations to execute complex tasks.
For the first time, researchers use a combination of MEG and fMRI to map the spatio-temporal human brain dynamics of a visual image being recognized.
Researchers have developed a security solution for power-hungry AI models that offers protection against two common attacks.
MIT Center for Transportation and Logistics Director Matthias Winkenbach uses AI to make vehicle routing more efficient and adaptable for unexpected events.