For healthy hearing, timing matters
Machine-learning models let neuroscientists study the impact of auditory processing on real-world hearing.
Inspired by the mechanics of the human vocal tract, a new AI model can produce and understand vocal imitations of everyday sounds. The method could help build new sonic interfaces for entertainment and education.
The neuroscientist turned entrepreneur will be hosted by the MIT Schwarzman College of Computing and focus on advancing the intersection of behavioral science and AI across MIT.
Researchers have developed a web plug-in to help those looking to protect their mental health make more informed decisions.
Yiming Chen ’24, Wilhem Hector, Anushka Nair, and David Oluigbo will start postgraduate studies at Oxford next fall.
The new Tayebati Postdoctoral Fellowship Program will support leading postdocs to bring cutting-edge AI to bear on research in scientific discovery or music.
New dataset of “illusory” faces reveals differences between human and algorithmic face detection, links to animal face recognition, and a formula predicting where people most often perceive faces.
This new tool offers an easier way for people to analyze complex tabular data.
Co-hosted by the McGovern Institute, MIT Open Learning, and others, the symposium highlighted the role of emerging technologies in advancing understanding of mental health and neurological conditions.
Using generative AI models, researchers combined robotics data from different sources to help robots learn better.