Training LLMs to self-detoxify their language
A new method from the MIT-IBM Watson AI Lab helps large language models steer their own responses toward safer, more ethical, value-aligned outputs.