Improving AI models’ ability to explain their predictions
A new approach could help users know whether to trust a model’s predictions in safety-critical applications like health care and autonomous driving.
The approach could help engineers tackle extremely complex design problems, from power grid optimization to vehicle design.
By leveraging idle computing time, researchers can double the speed of model training while preserving accuracy.
To help generative AI models create durable, real-world accessories and decor, the PhysiOpt system runs physics simulations and makes subtle tweaks to the models' 3D blueprints.
By providing holistic information on a cell, an AI-driven method could help scientists better understand disease mechanisms and plan experiments.
Strahinja Janjusevic brings an international perspective and US Naval Academy education to his graduate research in the MIT Technology and Policy Program.
By minimizing the need to drive around looking for a parking spot, this technique can save drivers up to 35 minutes — and give them a realistic estimate of total travel time.
The context of long-term conversations can cause an LLM to begin mirroring the user's viewpoints, possibly reducing accuracy or creating a virtual echo chamber.
Associate Professor Rafael Gómez-Bombarelli has spent his career applying AI to improve scientific discovery. Now he believes we are at an inflection point.
Driven by overuse and misuse of antibiotics, drug-resistant infections are on the rise, while development of new antibacterial tools has slowed.