LLMs factor in unrelated information when recommending medical treatments
Researchers find nonclinical information in patient messages — like typos, extra white space, and colorful language — reduces the accuracy of an AI model.
Presentations targeted high-impact intersections of AI and other areas, such as health care, business, and education.
Caitlin Morris, a PhD student and 2024 MAD Fellow affiliated with the MIT Media Lab, designs digital learning platforms that make room for the “social magic” that influences curiosity and motivation.
In a new study, researchers discover the root cause of a type of bias in LLMs, paving the way for more accurate and reliable AI systems.
The MIT Ethics of Computing Research Symposium showcases projects at the intersection of technology, ethics, and social responsibility.
The winning essay of the Envisioning the Future of Computing Prize puts health care disparities at the forefront.
SketchAgent, a drawing system developed by MIT CSAIL researchers, sketches up concepts stroke-by-stroke, teaching language models to visually express concepts on their own and collaborate with humans.
PhD student Sarah Alnegheimish wants to make machine learning systems accessible.
Words like “no” and “not” can cause this popular class of AI models to fail unexpectedly in high-stakes settings, such as medical diagnosis.
MAD Fellow Alexander Htet Kyaw connects humans, machines, and the physical world using AI and augmented reality.