AI agents help explain other AI systems
MIT researchers introduce a method that uses artificial intelligence to automate the explanation of complex neural networks.
A new study finds that language regions in the left hemisphere light up when reading uncommon sentences, while straightforward sentences elicit little response.
Master’s students Irene Terpstra ’23 and Rujul Gandhi ’22 use language to design new integrated circuits and to make speech understandable to robots.
This new method draws on 200-year-old geometric foundations to give artists control over the appearance of animated characters.
“Minimum viewing time” benchmark gauges image recognition complexity for AI systems by measuring the time needed for accurate human identification.
Human Guided Exploration (HuGE) enables AI agents to learn quickly with some help from humans, even if the humans make mistakes.
Twelve teams of students and postdocs across the MIT community presented innovative startup ideas with potential for real-world impact.
With the PockEngine training method, machine-learning models can efficiently and continuously learn from user data on edge devices like smartphones.
Researchers coaxed a family of generative AI models to work together to solve multistep robot manipulation problems.
By focusing on causal relationships in genome regulation, a new AI method could help scientists identify new immunotherapy techniques or regenerative therapies.