Research

Enhancing LLM collaboration for smarter, more efficient solutions

The “Co-LLM” algorithm helps a general-purpose AI model collaborate with an expert large language model by combining the best parts of both answers, leading to more factual responses.
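As a rough sketch of the idea rather than the paper's implementation, the snippet below interleaves two text generators token by token, deferring to the "expert" whenever a gating score suggests the base model is unsure. Both model functions and the deferral threshold are hypothetical stand-ins.

```python
import random

# Toy stand-ins for a general-purpose model and a domain-expert model.
# In Co-LLM-style collaboration, a gate decides per token whether to keep
# the base model's token or defer to the expert. Everything below is a
# simplified, hypothetical sketch of that control flow.

def base_next_token(prefix):
    """Pretend base model: returns (token, confidence in [0, 1]).
    A real model would condition on the prefix."""
    vocab = ["the", "answer", "is", "probably", "42", "."]
    return random.choice(vocab), random.uniform(0.2, 1.0)

def expert_next_token(prefix):
    """Pretend expert model: returns a token for the same prefix."""
    vocab = ["the", "precise", "value", "is", "42", "."]
    return random.choice(vocab)

def generate(prompt, max_tokens=12, defer_threshold=0.6):
    """Per-token routing: defer to the expert when the gate score is low."""
    tokens = []
    for _ in range(max_tokens):
        prefix = prompt + " " + " ".join(tokens)
        token, gate_score = base_next_token(prefix)
        if gate_score < defer_threshold:        # base model looks unsure
            token = expert_next_token(prefix)   # ask the expert instead
        tokens.append(token)
        if token == ".":
            break
    return " ".join(tokens)

print(generate("What is the answer?"))
```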

A fast and flexible approach to help doctors annotate medical scans

“ScribblePrompt” is an interactive AI framework that can efficiently highlight anatomical structures across different medical scans, helping medical workers delineate regions of interest and abnormalities.

Study: Transparency is often lacking in datasets used to train large language models

Researchers developed an easy-to-use tool that enables an AI practitioner to find data that suits the purpose of their model, which could improve accuracy and reduce bias.

A framework for solving parabolic partial differential equations

A new algorithm solves complicated partial differential equations by breaking them down into simpler problems, potentially guiding computer graphics and geometry processing.
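The splitting idea can be illustrated with a classic textbook trick that is not the paper's algorithm: dimensional splitting for the 2-D heat equation, where each time step is handled as two simpler 1-D diffusion sub-steps.

```python
import numpy as np

# Generic dimensional-splitting illustration for the 2-D heat equation
# u_t = alpha * (u_xx + u_yy): each time step is split into two simpler
# 1-D diffusion sub-steps. This is a textbook example of "breaking a
# parabolic PDE into simpler problems," not the authors' method.

def diffuse_1d(u, alpha, dt, dx, axis):
    """One explicit 1-D diffusion sub-step along the given axis (periodic)."""
    lap = (np.roll(u, 1, axis) - 2 * u + np.roll(u, -1, axis)) / dx**2
    return u + alpha * dt * lap

def heat_step_split(u, alpha, dt, dx):
    """Full 2-D step = 1-D step in x, then 1-D step in y."""
    u = diffuse_1d(u, alpha, dt, dx, axis=0)
    u = diffuse_1d(u, alpha, dt, dx, axis=1)
    return u

# Hot spot in the middle of a periodic grid, diffusing over time.
n, dx, alpha = 64, 1.0, 1.0
dt = 0.2 * dx**2 / alpha                 # safely below the stability limit
u = np.zeros((n, n))
u[n // 2, n // 2] = 100.0
for _ in range(200):
    u = heat_step_split(u, alpha, dt, dx)
print(f"peak temperature after diffusion: {u.max():.3f}")
```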


New open-source tool helps to detangle the brain

The software tool NeuroTrALE is designed to quickly and efficiently process large amounts of brain imaging data semi-automatically.

LLMs develop their own understanding of reality as their language abilities improve

In controlled experiments, MIT CSAIL researchers discover simulations of reality developing deep within LLMs, indicating an understanding of language beyond simple mimicry.

MIT researchers use large language models to flag problems in complex systems

The approach can detect anomalies in data recorded over time, without the need for any training.
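One common zero-shot pattern behind this kind of detector is to forecast each point from its recent history and flag large deviations. In the sketch below a trivial moving-average forecaster stands in for a pretrained language model, and the window size and threshold are illustrative assumptions.

```python
import numpy as np

# "Forecast, then flag deviations" pattern for training-free anomaly
# detection in time series. A real pipeline would query a pretrained
# model for the forecast; the moving average here is only a stand-in
# so the example runs on its own.

def forecast_next(window):
    """Placeholder forecaster for the next value given a recent window."""
    return float(np.mean(window))

def flag_anomalies(series, window=10, threshold=3.0):
    """Flag points whose deviation from the forecast exceeds the threshold."""
    residuals, flags = [], []
    for t in range(window, len(series)):
        pred = forecast_next(series[t - window:t])
        residuals.append(abs(series[t] - pred))
    scale = np.median(residuals) + 1e-9      # robust typical deviation
    for t, r in zip(range(window, len(series)), residuals):
        if r / scale > threshold:
            flags.append(t)
    return flags

# Smooth signal with one injected spike.
ts = np.sin(np.linspace(0, 20, 200))
ts[120] += 5.0
print("anomalous indices:", flag_anomalies(ts))
```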

Helping robots practice skills independently to adapt to unfamiliar environments

A new algorithm helps robots practice skills like sweeping and placing objects on their own, potentially improving their performance at important tasks in homes, hospitals, and factories.

Method prevents an AI model from being overconfident about wrong answers

More efficient than other approaches, the “Thermometer” technique could help someone know when they should trust a large language model.
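Thermometer builds on temperature scaling, the standard calibration step shown below: dividing a model's logits by a temperature T greater than 1 softens overconfident probabilities without changing which answer is predicted. The logits and temperature are made-up values, and the method's real contribution, predicting a suitable T for a new task without labeled data, is not shown here.

```python
import numpy as np

# Plain temperature scaling: the classic calibration step that
# "Thermometer" builds on. Dividing logits by T > 1 softens overconfident
# probabilities while leaving the predicted class unchanged. The logits
# and temperature below are made-up values for illustration.

def softmax(z):
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def calibrate(logits, temperature):
    """Scale logits by 1/T before the softmax."""
    return softmax(np.asarray(logits, dtype=float) / temperature)

logits = [4.0, 1.0, 0.5]                  # model is very sure of class 0
print("uncalibrated:", np.round(calibrate(logits, 1.0), 3))
print("T = 2.5     :", np.round(calibrate(logits, 2.5), 3))
```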

Study: When allocating scarce resources with AI, randomization can improve fairness

Introducing structured randomization into decisions based on machine-learning model predictions can address inherent uncertainties while maintaining efficiency.
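One concrete form of structured randomization is a weighted lottery, sketched below with made-up scores: instead of always allocating to the top-ranked candidates, selection probability is proportional to the model's predicted benefit, so noisy rankings do not deterministically exclude the same people.

```python
import numpy as np

# Weighted lottery as one concrete form of structured randomization:
# rather than deterministically picking the top-k scored candidates,
# each candidate is selected with probability proportional to their
# predicted score. Names and scores are made-up illustration data.

rng = np.random.default_rng(0)
candidates = ["A", "B", "C", "D", "E"]
scores = np.array([0.90, 0.85, 0.80, 0.40, 0.20])   # model-predicted benefit

def weighted_lottery(candidates, scores, k):
    """Draw k winners without replacement, proportional to score."""
    probs = scores / scores.sum()
    return list(rng.choice(candidates, size=k, replace=False, p=probs))

print("deterministic top-2:", candidates[:2])
print("randomized draw    :", weighted_lottery(candidates, scores, k=2))
```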