New technique makes AI models leaner and faster while they’re still learning
Researchers use control theory to shed unnecessary complexity from AI models during training, cutting compute costs without sacrificing performance.
Researchers developed a system that intelligently balances workloads to improve the efficiency of flash storage hardware in a data center.
MIT Sea Grant works with the Woodwell Climate Research Center and other collaborators to demonstrate a deep learning-based system for fish monitoring.
An MIT-led team is designing artificial intelligence systems for medical diagnosis that are more collaborative and forthcoming about uncertainty.
Operations research expert Dimitris Bertsimas delivered the annual Killian Lecture, providing a look at the past and future of his work.
With this new technique, a robot could more accurately detect hidden objects or understand an indoor scene using reflected Wi-Fi signals.
Researchers at MIT, Mass General Brigham, and Harvard Medical School developed a deep-learning model that forecasts heart failure outcomes for patients up to a year in advance.
Professor Jesse Thaler describes a vision for a two-way bridge between artificial intelligence and the mathematical and physical sciences — one that promises to advance both.
A new approach could help users know whether to trust a model’s predictions in safety-critical applications like health care and autonomous driving.
By leveraging idle computing time, researchers can double the speed of model training while preserving accuracy.