Study: Some language reward models exhibit political bias
Research from the MIT Center for Constructive Communication finds this effect occurs even when reward models are trained on factual data.
Using LLMs to convert machine-learning explanations into readable narratives could help users make better decisions about when to trust a model.
Researchers develop “ContextCite,” an innovative method to track AI’s source attribution and detect potential misinformation.
MIT engineers developed the largest open-source dataset of car designs, including their aerodynamics, which could speed the design of eco-friendly cars and electric vehicles.
This new device uses light to perform the key operations of a deep neural network on a chip, opening the door to high-speed processors that can learn in real time.
Associate Professor Catherine D’Ignazio thinks carefully about how we acquire and display data — and why we lack it for many things.
The MIT Advanced Vehicle Technology Consortium provides data-driven insights into driver behavior and trust in AI and advanced vehicle technology.
MIT CSAIL researchers used AI-generated images to train a robot dog in parkour, without real-world data. Their LucidSim system demonstrates generative AI’s potential for creating robotics training data.
MIT and IBM researchers are creating linkage mechanisms to advance human-AI collaboration in kinematic engineering.
By sidestepping the need for costly interventions, a new method could reveal gene regulatory programs, paving the way for targeted treatments.