Changing the conversation in health care
The Language/AI Incubator, an MIT Human Insight Collaborative project, is investigating how AI can improve communication between patients and practitioners.
The MIT-MGB Seed Program, launched with support from Analog Devices Inc., will fund joint research projects that advance technology and clinical research.
Researchers find that nonclinical information in patient messages — like typos, extra white space, and colorful language — reduces the accuracy of an AI model.
Courses on developing AI models for health care need to focus more on identifying and addressing bias, says Leo Anthony Celi.
Words like “no” and “not” can cause this popular class of AI models to fail unexpectedly in high-stakes settings, such as medical diagnosis.
A deep neural network called CHAIS may soon replace invasive procedures like catheterization as the new gold standard for monitoring heart health.
Researchers at MIT, NYU, and UCLA develop an approach to help evaluate whether large language models like GPT-4 are equitable enough to be clinically viable for mental health support.
Five MIT faculty members and two additional alumni are honored with fellowships to advance research on beneficial AI.
In a recent commentary, a team from MIT, Equality AI, and Boston University highlights gaps in the regulation of AI models and non-AI algorithms in health care.
Marzyeh Ghassemi works to ensure health care models are trained to be robust and fair.