Computer Science and Artificial Intelligence Laboratory (CSAIL)

AI learns how vision and sound are connected, without human intervention

This new machine-learning model can match corresponding audio and visual data, which could someday help robots interact in the real world.

Study shows vision-language models can’t handle queries with negation words

Words like “no” and “not” can cause this popular class of AI models to fail unexpectedly in high-stakes settings, such as medical diagnosis.

Hybrid AI model crafts smooth, high-quality videos in seconds

The CausVid generative AI tool uses a diffusion model to teach an autoregressive (frame-by-frame) system to rapidly produce stable, high-resolution videos.

Novel AI model inspired by neural dynamics from the brain

New type of “state-space model” leverages principles of harmonic oscillators.

Making AI models more trustworthy for high-stakes settings

A new method helps convey uncertainty more precisely, which could give researchers and medical clinicians better information to make decisions.

The MIT-Portugal Program enters Phase 4

New phase will support continued exploration of ideas and solutions in fields ranging from AI to nanotech to climate — with emphasis on educational exchanges and entrepreneurship.

“Periodic table of machine learning” could fuel AI discovery

Researchers have created a unifying framework that can help scientists combine existing ideas to improve AI models or create new ones.

3D modeling you can feel

TactStyle, a system developed by CSAIL researchers, uses image prompts to replicate both the visual appearance and tactile properties of 3D models.

Making AI-generated code more accurate in any language

A new technique automatically guides an LLM toward outputs that adhere to the rules of whatever programming language or other format is being used.

Training LLMs to self-detoxify their language

A new method from the MIT-IBM Watson AI Lab helps large language models steer their own responses toward safer, more ethical, value-aligned outputs.