Researchers reduce bias in AI models while preserving or improving accuracy
A new technique identifies and removes the training examples that contribute most to a machine-learning model’s failures.
Using LLMs to convert machine-learning explanations into readable narratives could help users make better decisions about when to trust a model.
Marzyeh Ghassemi works to ensure health-care models are trained to be robust and fair.
The technique could make AI systems better at complex tasks that involve variability.
By sidestepping the need for costly interventions, a new method could potentially reveal gene regulatory programs, paving the way for targeted treatments.
Researchers show that even the best-performing large language models don’t form a true model of the world and its rules, and can thus fail unexpectedly on similar tasks.
A new method called Clio enables robots to quickly map a scene and identify the items they need to complete a given set of tasks.
Researchers argue that in health care settings, “responsible use” labels could ensure AI systems are deployed appropriately.
Researchers find large language models make inconsistent decisions about whether to call the police when analyzing surveillance videos.
The approach can detect anomalies in data recorded over time, without the need for any training.