National Science Foundation (NSF)

MIT researchers introduce Boltz-1, a fully open-source model for predicting biomolecular structures

With models like AlphaFold3 limited to academic research, the team built an equivalent alternative to encourage innovation more broadly.

Researchers reduce bias in AI models while preserving or improving accuracy

A new technique identifies and removes the training examples that contribute most to a machine-learning model’s failures.

A new way to create realistic 3D shapes using generative AI

Researchers propose a simple fix to an existing technique that could help artists, designers, and engineers create better 3D models.

Photonic processor could enable ultrafast AI computations with extreme energy efficiency

This new device uses light to perform the key operations of a deep neural network on a chip, opening the door to high-speed processors that can learn in real time.

MIT researchers develop an efficient way to train more reliable AI agents

The technique could make AI systems better at complex tasks that involve variability.

Despite its impressive output, generative AI doesn’t have a coherent understanding of the world

Researchers show that even the best-performing large language models don’t form a true model of the world and its rules, and can thus fail unexpectedly on similar tasks.

Enhancing LLM collaboration for smarter, more efficient solutions

“Co-LLM” algorithm helps a general-purpose AI model collaborate with an expert large language model by combining the best parts of both answers, leading to more factual responses.

Helping robots practice skills independently to adapt to unfamiliar environments

New algorithm helps robots practice skills like sweeping and placing objects, potentially helping them improve at important tasks in houses, hospitals, and factories.

MIT researchers advance automated interpretability in AI models

MAIA is a multimodal agent that can iteratively design experiments to better understand various components of AI systems.

Reasoning skills of large language models are often overestimated

New CSAIL research highlights how LLMs excel in familiar scenarios but struggle in novel ones, raising questions about whether they truly reason or rely on memorization.