Electrical engineering and computer science (EECS)

Like human brains, large language models reason about diverse data in a general way

A new study shows LLMs represent different data types based on their underlying meaning and reason about data in their dominant language.

AI model deciphers the code in proteins that tells them where to go

Whitehead Institute and CSAIL researchers created a machine-learning model to predict and generate protein localization, with implications for understanding and remedying disease.

Bridging philosophy and AI to explore computing ethics

In a new MIT course co-taught by EECS and philosophy professors, students tackle moral dilemmas of the digital age.

Creating a common language

New faculty member Kaiming He discusses AI’s role in lowering barriers between scientific fields and fostering collaboration across disciplines.

Validation technique could help scientists make more accurate forecasts

MIT researchers developed a new approach for assessing predictions with a spatial dimension, like forecasting weather or mapping air pollution.

Streamlining data collection for improved salmon population management

Assistant Professor Sara Beery is using automation to improve monitoring of migrating salmon in the Pacific Northwest.

Aligning AI with human values

“We need to both ensure humans reap AI’s benefits and that we don’t lose control of the technology,” says senior Audrey Lorvo.

Introducing the MIT Generative AI Impact Consortium

The consortium will bring researchers and industry together to focus on the impact of generative AI.

User-friendly system can help developers build more efficient simulations and AI models

By automatically generating code that leverages two types of data redundancy, the system saves bandwidth, memory, and computation.

3 Questions: Modeling adversarial intelligence to exploit AI’s security vulnerabilities

MIT CSAIL Principal Research Scientist Una-May O’Reilly discusses how she develops agents that reveal AI models’ security weaknesses before hackers do.