MIT researchers develop an efficient way to train more reliable AI agents
The technique could make AI systems better at complex tasks that involve variability.
Researchers show that even the best-performing large language models don’t form a true model of the world and its rules, and can therefore fail unexpectedly on similar tasks.
“Co-LLM” algorithm helps a general-purpose AI model collaborate with an expert large language model by combining the best parts of both answers, leading to more factual responses.
New algorithm helps robots practice skills like sweeping and placing objects, potentially helping them improve at important tasks in houses, hospitals, and factories.
MAIA is a multimodal agent that can iteratively design experiments to better understand various components of AI systems.
New CSAIL research shows that LLMs excel in familiar scenarios but struggle in novel ones, raising questions about whether they truly reason or merely rely on memorization.
The SPARROW algorithm automatically identifies the best molecules to test as potential new medicines, given the vast number of factors affecting each choice.
Using generative AI models, researchers combined robotics data from different sources to help robots learn more effectively.
Three neurosymbolic methods help language models find better abstractions within natural language, then use those representations to execute complex tasks.
Researchers have developed a security solution for power-hungry AI models that offers protection against two common attacks.