Natural language boosts LLM performance in coding, planning, and robotics
Three neurosymbolic methods help language models find better abstractions within natural language, then use those representations to execute complex tasks.