Computer science and technology

Explained: Generative AI’s environmental impact

Rapid development and deployment of powerful generative AI models come with environmental consequences, including increased electricity demand and water consumption.

Algorithms and AI for a better world

Assistant Professor Manish Raghavan wants computational techniques to help solve societal problems.

New computational chemistry techniques accelerate the prediction of molecules and materials

With their recently developed neural network architecture, MIT researchers can wring more information out of electronic structure calculations.

Q&A: The climate impact of generative AI

As the use of generative AI continues to grow, Lincoln Laboratory’s Vijay Gadepally describes what researchers and consumers can do to help mitigate its environmental impact.

Teaching AI to communicate sounds like humans do

Inspired by the mechanics of the human vocal tract, a new AI model can produce and understand vocal imitations of everyday sounds. The method could help build new sonic interfaces for entertainment and education.

Ecologists find computer vision models’ blind spots in retrieving wildlife images

Biodiversity researchers tested vision systems on how well they could retrieve relevant nature images. More advanced models performed well on simple queries but struggled with more research-specific prompts.

Need a research hypothesis? Ask AI.

MIT engineers developed AI frameworks to identify evidence-driven hypotheses that could advance biologically inspired materials.

MIT researchers introduce Boltz-1, a fully open-source model for predicting biomolecular structures

With models like AlphaFold3 limited to academic research, the team built an equivalent alternative to encourage broader innovation.

Study reveals AI chatbots can detect race, but racial bias reduces response empathy

Researchers at MIT, NYU, and UCLA develop an approach to help evaluate whether large language models like GPT-4 are equitable enough to be clinically viable for mental health support.

Teaching a robot its limits, to complete open-ended tasks safely

The “PRoC3S” method helps an LLM create a viable action plan by testing each step in a simulation. This strategy could eventually help in-home robots complete more ambiguous chore requests.