Explained: Generative AI’s environmental impact
The rapid development and deployment of powerful generative AI models come with environmental consequences, including increased electricity demand and water consumption.
With models like AlphaFold3 limited to academic research, the team built an equivalent alternative to encourage innovation more broadly.
A new technique identifies and removes the training examples that contribute most to a machine-learning model’s failures.
Using LLMs to convert machine-learning explanations into readable narratives could help users make better decisions about when to trust a model.
Researchers propose a simple fix to an existing technique that could help artists, designers, and engineers create better 3D models.
This new device uses light to perform the key operations of a deep neural network on a chip, opening the door to high-speed processors that can learn in real time.
The technique could make AI systems better at complex tasks that involve variability.
By sidestepping the need for costly interventions, a new method could potentially reveal gene regulatory programs, paving the way for targeted treatments.
Researchers show that even the best-performing large language models don’t form a true model of the world and its rules, and can thus fail unexpectedly on similar tasks.
Researchers are leveraging quantum mechanical properties to overcome the limits of silicon semiconductor technology.