Computer Science and Artificial Intelligence Laboratory (CSAIL)

What to do about AI in health?

Although artificial intelligence in health has shown great promise, pressure is mounting on regulators around the world to act as AI tools begin to produce potentially harmful outcomes.

New hope for early pancreatic cancer intervention via AI-based risk prediction

MIT CSAIL researchers develop advanced machine-learning models that outperform current methods in detecting pancreatic ductal adenocarcinoma.

Reasoning and reliability in AI

PhD students interning with the MIT-IBM Watson AI Lab look to improve natural language usage.

Stratospheric safety standards: How aviation could steer regulation of AI in health

An interdisciplinary team of researchers thinks health AI could benefit from the aviation industry’s long history of hard-won safety lessons, which have made flying one of the safest activities today.

AI agents help explain other AI systems

MIT researchers introduce a method that uses artificial intelligence to automate the explanation of complex neural networks.

A flexible solution to help artists improve animation

This new method draws on 200-year-old geometric foundations to give artists control over the appearance of animated characters.

Image recognition accuracy: An unseen challenge confounding today’s AI

“Minimum viewing time” benchmark gauges image recognition complexity for AI systems by measuring the time needed for accurate human identification.

A computer scientist pushes the boundaries of geometry

Justin Solomon applies modern geometric techniques to solve problems in computer vision, machine learning, statistics, and beyond.

MIT Generative AI Week fosters dialogue across disciplines

During the last week of November, MIT hosted symposia and events aimed at examining the implications and possibilities of generative AI.

Automated system teaches users when to collaborate with an AI assistant

MIT researchers develop a customized onboarding process that helps a human learn when a model’s advice is trustworthy.