A faster, better way to train general-purpose robots
Inspired by large language models, researchers develop a training technique that pools diverse data to teach robots new skills.