Researchers enhance peripheral vision in AI models
By enabling models to see the world more like humans do, the work could help improve driver safety and shed light on human behavior.
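Approaches like this typically transform a model's input images to throw away the detail that human peripheral vision throws away. As a rough, hedged illustration of that idea only: the sketch below uses a simple eccentricity-dependent Gaussian blur as a stand-in transform. The function name `foveate` and its parameters are invented for this example and are not the study's actual method.

```python
# A minimal sketch of one way to mimic peripheral information loss in model
# inputs: blur each pixel more strongly the farther it sits from a fixation
# point. This eccentricity-dependent blur is a simplified stand-in for
# illustration only, not the transform used in the study.
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(image: np.ndarray, fixation: tuple,
            max_sigma: float = 8.0, n_levels: int = 6) -> np.ndarray:
    """Blend progressively blurred copies of a (H, W) image by eccentricity."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Normalised distance of every pixel from the fixation point.
    ecc = np.hypot(ys - fixation[0], xs - fixation[1])
    ecc /= ecc.max()
    # Precompute blurred versions of the image at a few sigma levels.
    sigmas = np.linspace(0.0, max_sigma, n_levels)
    blurred = [image if s == 0 else gaussian_filter(image, sigma=s) for s in sigmas]
    # Pick the blur level that matches each pixel's eccentricity.
    level = np.clip((ecc * (n_levels - 1)).round().astype(int), 0, n_levels - 1)
    out = np.zeros_like(image, dtype=float)
    for i, b in enumerate(blurred):
        out[level == i] = b[level == i]
    return out

# Example: fixate at the centre of a random 128x128 "image".
img = np.random.rand(128, 128)
periph = foveate(img, fixation=(64, 64))
```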
A new study finds that language regions in the brain's left hemisphere light up when a person reads uncommon sentences, while straightforward sentences elicit little response.
A study shows that computational models trained to perform auditory tasks display an internal organization similar to that of the human auditory cortex.
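Correspondences like this are usually quantified by comparing how a model and a brain region represent the same set of stimuli. The sketch below shows one standard tool, representational similarity analysis, applied to synthetic activation matrices; the study's own analysis pipeline is not reproduced here, and the sizes and data are placeholders.

```python
# A minimal sketch of representational similarity analysis (RSA), one common
# way to ask whether a model's internal organization resembles a brain
# region's. Both activation matrices here are synthetic placeholders; the
# study compares model activations with measured auditory-cortex responses
# using its own pipeline.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 100

# (stimuli x units) activations from a model layer, and
# (stimuli x voxels) responses from a brain region, both fake here.
model_acts = rng.standard_normal((n_stimuli, 512))
brain_resp = rng.standard_normal((n_stimuli, 200))

# Representational dissimilarity matrices: pairwise distances between the
# responses to every pair of stimuli, flattened to vectors.
model_rdm = pdist(model_acts, metric="correlation")
brain_rdm = pdist(brain_resp, metric="correlation")

# The Spearman correlation between the two RDMs is the similarity score:
# higher means the model and the brain region group stimuli alike.
rho, _ = spearmanr(model_rdm, brain_rdm)
print(f"model-brain RSA score: {rho:.3f}")
```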
By analyzing bacterial data, researchers have discovered thousands of rare new CRISPR systems that have a range of functions and could enable gene editing, diagnostics, and more.
MIT CSAIL researchers combine AI and electron microscopy to expedite detailed brain network mapping, aiming to enhance connectomics research and clinical pathology.
Two studies find “self-supervised” models, which learn about their environment from unlabeled data, can show activity patterns similar to those of the mammalian brain.
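For a rough sense of what "self-supervised" means here, the sketch below shows a contrastive objective of the kind such models are commonly trained with: two augmented views of the same unlabeled input are pulled together in embedding space while views of different inputs are pushed apart. The tiny encoder, noise-based augmentations, and random data are placeholders, not the studies' models or training setup.

```python
# A minimal sketch of a self-supervised contrastive (SimCLR-style) loss:
# no labels are used anywhere; the training signal comes from matching two
# augmented views of the same input. Everything below is a toy placeholder.
import torch
import torch.nn.functional as F

def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1):
    """z1, z2: (batch, dim) embeddings of two views of the same batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                      # (2B, dim)
    sim = z @ z.t() / temperature                       # cosine similarities
    mask = torch.eye(sim.shape[0], dtype=torch.bool)
    sim = sim.masked_fill(mask, float("-inf"))          # ignore self-similarity
    batch = z1.shape[0]
    # The positive for sample i is its other view, at index i + B (mod 2B).
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)])
    return F.cross_entropy(sim, targets)

# Toy usage: a tiny encoder applied to two "augmented views" of random data.
encoder = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, 16))
x = torch.randn(8, 32)
view1 = x + 0.1 * torch.randn_like(x)
view2 = x + 0.1 * torch.randn_like(x)
loss = contrastive_loss(encoder(view1), encoder(view2))
loss.backward()   # no labels anywhere: the signal comes from the data itself
```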
Researchers coaxed a family of generative AI models to work together to solve multistep robot manipulation problems.
A new study bridging neuroscience and machine learning offers insights into the potential role of astrocytes in the human brain.
Training artificial neural networks with data from real brains can make computer vision more robust.
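One common form of this idea, sketched below with invented placeholder data, is to add a term to the training objective that encourages a network layer to predict recorded neural responses to the same images, alongside the usual classification loss. The network, the "recorded" data, and the trade-off weight here are illustrative assumptions, not the study's actual setup.

```python
# A minimal sketch of neural-data regularization: the usual task loss plus a
# penalty that pushes a hidden layer toward recorded brain responses for the
# same images. All data, sizes, and weights below are placeholders.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy "images", class labels, and recorded "neural responses" to those images.
images = torch.randn(64, 128)
labels = torch.randint(0, 10, (64,))
neural = torch.randn(64, 32)          # e.g. responses from 32 recorded cells

backbone = torch.nn.Linear(128, 32)   # hidden layer whose activity we regularize
classifier = torch.nn.Linear(32, 10)
readout = torch.nn.Linear(32, 32)     # maps hidden units onto recorded cells

params = (list(backbone.parameters()) + list(classifier.parameters())
          + list(readout.parameters()))
opt = torch.optim.SGD(params, lr=0.1)

for step in range(100):
    hidden = torch.relu(backbone(images))
    task_loss = F.cross_entropy(classifier(hidden), labels)
    # Neural regularizer: hidden activity should predict the recorded responses.
    neural_loss = F.mse_loss(readout(hidden), neural)
    loss = task_loss + 0.5 * neural_loss   # 0.5 is an arbitrary trade-off weight
    opt.zero_grad()
    loss.backward()
    opt.step()
```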
MIT students share ideas, aspirations, and vision for how advances in computing stand to transform society in a competition hosted by the Social and Ethical Responsibilities of Computing.