Explained: Generative AI’s environmental impact
Rapid development and deployment of powerful generative AI models come with environmental consequences, including increased electricity demand and water consumption.
Assistant Professor Manish Raghavan wants computational techniques to help solve societal problems.
With their recently developed neural network architecture, MIT researchers can wring more information out of electronic structure calculations.
As the use of generative AI continues to grow, Lincoln Laboratory’s Vijay Gadepally describes what researchers and consumers can do to help mitigate its environmental impact.
Inspired by the mechanics of the human vocal tract, a new AI model can produce and understand vocal imitations of everyday sounds. The method could help build new sonic interfaces for entertainment and education.
Biodiversity researchers tested vision systems on how well they could retrieve relevant nature images. More advanced models performed well on simple queries but struggled with more research-specific prompts.
MIT engineers developed AI frameworks to identify evidence-driven hypotheses that could advance biologically inspired materials.
With models like AlphaFold3 limited to academic research, the team built an equivalent alternative to encourage innovation more broadly.
Researchers at MIT, NYU, and UCLA develop an approach to help evaluate whether large language models like GPT-4 are equitable enough to be clinically viable for mental health support.
The “PRoC3S” method helps an LLM create a viable action plan by testing each step in a simulation. This strategy could eventually help in-home robots complete more ambiguous chore requests.