Computer Science and Artificial Intelligence Laboratory (CSAIL)

Improving AI models’ ability to explain their predictions

A new approach could help users know whether to trust a model’s predictions in safety-critical applications like health care and autonomous driving.

Mixing generative AI with physics to create personal items that work in the real world

To help generative AI models create durable, real-world accessories and decor, the PhysiOpt system runs physics simulations and makes subtle tweaks to its 3D blueprints.

Helping AI agents search to get the best results out of large language models

EnCompass executes AI agent programs by backtracking and making multiple attempts, then selecting the best set of outputs generated by an LLM. It could help coders work with AI agents more efficiently.

Antonio Torralba, three MIT alumni named 2025 ACM fellows

Torralba’s research focuses on computer vision, machine learning, and human visual perception.

The philosophical puzzle of rational artificial intelligence

As AI technology advances, a new interdisciplinary course seeks to equip students with foundational critical thinking skills in computing.

Why it’s critical to move beyond overly aggregated machine-learning metrics

New research detects hidden evidence of spurious correlations — and provides a method to improve accuracy.

Generative AI tool helps 3D print personal items that sustain daily use

“MechStyle” lets users personalize 3D models while ensuring they remain physically viable after fabrication, producing unique personal items and assistive technology.

MIT scientists investigate memorization risk in the age of clinical AI

New research demonstrates how AI models can be tested to ensure they don’t cause harm by revealing anonymized patient health data.

Guided learning lets “untrainable” neural networks realize their potential

CSAIL researchers find that even “untrainable” neural networks can learn effectively when guided by another network’s built-in biases.

A new way to increase the capabilities of large language models

MIT-IBM Watson AI Lab researchers developed an expressive architecture that provides better state tracking and sequential reasoning in LLMs over long texts.