Human-computer interaction

Introducing the MIT Generative AI Impact Consortium

The consortium will bring researchers and industry together to focus on impact.

MIT students’ work redefines human-AI collaboration

Projects from MIT course 4.043/4.044 (Interaction Intelligence) were presented at NeurIPS, showing how AI transforms creativity, education, and interaction in unexpected ways.

Explained: Generative AI’s environmental impact

Rapid development and deployment of powerful generative AI models come with environmental consequences, including increased electricity demand and water consumption.

Teaching AI to communicate sounds like humans do

Inspired by the mechanics of the human vocal tract, a new AI model can produce and understand vocal imitations of everyday sounds. The method could help build new sonic interfaces for entertainment and education.

MIT welcomes Frida Polli as its next visiting innovation scholar

The neuroscientist turned entrepreneur will be hosted by the MIT Schwarzman College of Computing and will focus on advancing the intersection of behavioral science and AI across MIT.

Study reveals AI chatbots can detect race, but racial bias reduces response empathy

Researchers at MIT, NYU, and UCLA develop an approach to help evaluate whether large language models like GPT-4 are equitable enough to be clinically viable for mental health support.

Researchers reduce bias in AI models while preserving or improving accuracy

A new technique identifies and removes the training examples that contribute most to a machine-learning model’s failures.

Enabling AI to explain its predictions in plain language

Using LLMs to convert machine-learning explanations into readable narratives could help users make better decisions about when to trust a model.

Daniela Rus wins John Scott Award

MIT CSAIL director and EECS professor named a co-recipient of the honor for her robotics research, which has expanded our understanding of what a robot can be.

Citation tool offers a new approach to trustworthy AI-generated content

Researchers develop “ContextCite,” an innovative method to track AI’s source attribution and detect potential misinformation.