User-friendly system can help developers build more efficient simulations and AI models
By automatically generating code that leverages two types of data redundancy, the system saves bandwidth, memory, and computation.
Providing electricity to power-hungry data centers is stressing grids, raising prices for consumers, and slowing the transition to clean energy.
Rapid development and deployment of powerful generative AI models come with environmental consequences, including increased electricity demand and water consumption.
Assistant Professor Manish Raghavan wants computational techniques to help solve societal problems.
Biodiversity researchers tested vision systems on how well they could retrieve relevant nature images. More advanced models performed well on simple queries but struggled with research-specific prompts.
A new technique identifies and removes the training examples that contribute most to a machine-learning model’s failures.
Research from the MIT Center for Constructive Communication finds this effect occurs even when reward models are trained on factual data.
Using LLMs to convert machine-learning explanations into readable narratives could help users make better decisions about when to trust a model.
Researchers develop “ContextCite,” an innovative method to track AI’s source attribution and detect potential misinformation.