For healthy hearing, timing matters
Machine-learning models let neuroscientists study the impact of auditory processing on real-world hearing.
Inspired by the mechanics of the human vocal tract, a new AI model can produce and understand vocal imitations of everyday sounds. The method could help build new sonic interfaces for entertainment and education.
Using this model, researchers may be able to identify antibody drugs that can target a variety of infectious diseases.
The neuroscientist turned entrepreneur will be hosted by the MIT Schwarzman College of Computing and focus on advancing the intersection of behavioral science and AI across MIT.
Researchers have developed a web plug-in to help those looking to protect their mental health make more informed decisions.
The method could help communities visualize and prepare for approaching storms.
In a talk at MIT, White House science advisor Arati Prabhakar outlined challenges in medicine, climate, and AI, while expressing resolve to tackle hard problems.
Yiming Chen ’24, Wilhem Hector, Anushka Nair, and David Oluigbo will start postgraduate studies at Oxford next fall.
A new design tool uses UV and RGB lights to change the colors and textures of everyday objects. The system could enable surfaces to display dynamic patterns, such as health data and fashion designs.
The new Tayebati Postdoctoral Fellowship Program will support leading postdocs to bring cutting-edge AI to bear on research in scientific discovery or music.