AI learns how vision and sound are connected, without human intervention
This new machine-learning model can match corresponding audio and visual data, which could someday help robots interact in the real world.
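The article does not publish the model's training code, but systems that match audio and visual data without human labels are typically trained with a contrastive objective: embeddings of clips that occur together are pulled close, and mismatched pairs are pushed apart. Below is a minimal Python sketch of that general recipe; the encoder architecture, embedding size, and temperature are illustrative assumptions, not details from this work.

```python
import torch
import torch.nn.functional as F

class Encoder(torch.nn.Module):
    """Placeholder encoder; stands in for whatever audio or video backbone is used."""
    def __init__(self, in_dim, embed_dim=256):
        super().__init__()
        self.net = torch.nn.Linear(in_dim, embed_dim)

    def forward(self, x):
        # L2-normalize so the dot product below is a cosine similarity.
        return F.normalize(self.net(x), dim=-1)

def contrastive_loss(audio_emb, video_emb, temperature=0.07):
    # Entry (i, j) scores audio clip i against video clip j.
    logits = audio_emb @ video_emb.t() / temperature
    targets = torch.arange(logits.size(0))
    # True pairs sit on the diagonal; train matching in both directions.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

# Usage: embed a batch of paired clips and minimize the loss.
audio_enc, video_enc = Encoder(in_dim=128), Encoder(in_dim=512)
a = audio_enc(torch.randn(8, 128))
v = video_enc(torch.randn(8, 512))
loss = contrastive_loss(a, v)
```

Because the only supervision signal is the natural pairing of an audio track with its own video, no human annotation is needed, which is the "without human intervention" part of the headline.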
Researchers are developing algorithms to predict failures that arise when automation meets the real world, in areas such as air traffic scheduling and autonomous vehicles.
Sendhil Mullainathan brings a lifetime of unique perspectives to research in behavioral economics and machine learning.
Trained with a joint understanding of protein and cell behavior, the model could help with diagnosing disease and developing new drugs.
Words like “no” and “not” can cause this popular class of AI models to fail unexpectedly in high-stakes settings, such as medical diagnosis.
The CausVid generative AI tool uses a diffusion model to teach an autoregressive (frame-by-frame) system to rapidly produce stable, high-resolution videos.
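Read as a distillation loop, the setup described there has a frozen diffusion model supplying target frames while the autoregressive student learns to reproduce them from the preceding frames alone. The Python sketch below is a simplified rendering under that assumption; the single noising step stands in for a full diffusion sampler, and none of the names come from CausVid itself.

```python
import torch
import torch.nn.functional as F

def distill_step(student, teacher, frames, optimizer, noise_scale=0.1):
    """One illustrative distillation step (not CausVid's actual procedure).

    frames: tensor of shape (batch, time, channels, height, width).
    """
    context, target = frames[:, :-1], frames[:, -1]
    with torch.no_grad():
        # The frozen diffusion teacher denoises a corrupted target frame,
        # a stand-in here for running its full multi-step sampler.
        teacher_target = teacher(target + noise_scale * torch.randn_like(target))
    # The student generates the next frame from past frames only,
    # which is what makes inference fast and frame-by-frame.
    prediction = student(context)
    loss = F.mse_loss(prediction, teacher_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```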
“IntersectionZoo,” a benchmarking tool, uses a real-world traffic problem to test progress in deep reinforcement learning algorithms.
New type of “state-space model” leverages principles of harmonic oscillators.
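One concrete reading of "leverages principles of harmonic oscillators" is that each hidden state evolves like a damped spring driven by the input signal. The sketch below steps such a second-order system with a semi-implicit Euler update; the discretization and parameter choices are illustrative assumptions, not the published model.

```python
import numpy as np

def oscillator_ssm(inputs, omega, damping, dt=0.1):
    """Illustrative state-space recurrence built from damped harmonic
    oscillators. Each hidden unit follows x'' = -omega^2 x - damping x' + u."""
    pos = np.zeros_like(omega)
    vel = np.zeros_like(omega)
    outputs = []
    for u in inputs:               # one input vector per time step
        acc = -omega**2 * pos - damping * vel + u
        vel = vel + dt * acc       # update velocity first ...
        pos = pos + dt * vel       # ... then position (semi-implicit Euler)
        outputs.append(pos.copy())
    return np.stack(outputs)

# Usage: 16 oscillators with spread frequencies, driven by a 100-step input.
omega = np.linspace(0.5, 4.0, 16)
states = oscillator_ssm(np.random.randn(100, 16), omega, damping=0.1)
```

Giving each unit its own frequency lets the bank of oscillators retain information at many timescales, which is the intuition behind using oscillatory dynamics in a state-space model.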
A new method helps convey uncertainty more precisely, which could give researchers and medical clinicians better information to make decisions.
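The teaser does not name the method, so as background only: a standard way to convey predictive uncertainty with a precise guarantee is split conformal prediction, which wraps any point predictor in an interval with a target coverage rate. The sketch below shows that standard technique, not necessarily the researchers' new approach.

```python
import numpy as np

def conformal_interval(cal_true, cal_pred, new_pred, alpha=0.1):
    """Split conformal prediction (a standard recipe, used here only to
    illustrate calibrated uncertainty). Returns an interval around new_pred
    with roughly (1 - alpha) coverage, assuming the calibration and test
    data are exchangeable."""
    residuals = np.abs(cal_true - cal_pred)
    n = len(residuals)
    # Finite-sample-corrected quantile of the calibration residuals.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(residuals, level)
    return new_pred - q, new_pred + q

# Usage: calibrate on 500 held-out points, then report an interval.
rng = np.random.default_rng(0)
y_true = rng.normal(size=500)
y_pred = y_true + rng.normal(scale=0.3, size=500)
low, high = conformal_interval(y_true, y_pred, new_pred=1.2)
```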
MAD Fellow Alexander Htet Kyaw connects humans, machines, and the physical world using AI and augmented reality.