Solving the “Whac-a-mole dilemma”: A smarter way to debias AI vision models
A new debiasing technique called WRING avoids creating or amplifying biases that can occur with existing debiasing approaches.
Researchers at MIT, Mass General Brigham, and Harvard Medical School developed a deep-learning model to forecast a patient’s heart failure prognosis up to a year in advance.
New research demonstrates how AI models can be tested to ensure they don’t cause harm by revealing anonymized patient health data.
BoltzGen generates protein binders for any biological target from scratch, expanding AI’s reach from understanding biology toward engineering it.
A deep neural network called CHAIS may soon replace invasive procedures like catheterization as the new gold standard for monitoring heart health.
Researchers at MIT, NYU, and UCLA develop an approach to help evaluate whether large language models like GPT-4 are equitable enough to be clinically viable for mental health support.
In a recent commentary, a team from MIT, Equality AI, and Boston University highlights the gaps in regulation for AI models and non-AI algorithms in health care.
Most antibiotics target metabolically active bacteria, but with artificial intelligence, researchers can efficiently screen compounds that are lethal to dormant microbes.
Although artificial intelligence in health has shown great promise, pressure is mounting on regulators around the world to act as AI tools produce potentially harmful outcomes.
An interdisciplinary team of researchers thinks health AI could benefit from the aviation industry's long history of hard-won safety lessons, which have made flying one of the safest activities today.