Enabling AI to explain its predictions in plain language
Using LLMs to convert machine-learning explanations into readable narratives could help users make better decisions about when to trust a model.