LLMs factor in unrelated information when recommending medical treatments
Researchers find that nonclinical information in patient messages, such as typos, extra white space, and colorful language, reduces the accuracy of an AI model.
Trained with a joint understanding of protein and cell behavior, the model could help with diagnosing disease and developing new drugs.
Words like “no” and “not” can cause this popular class of AI models to fail unexpectedly in high-stakes settings, such as medical diagnosis.
The model could help clinicians assess breast cancer stage and ultimately reduce overtreatment.
MIT CSAIL researchers develop advanced machine-learning models that outperform current methods in detecting pancreatic ductal adenocarcinoma.