Unpacking the bias of large language models
In a new study, researchers discover the root cause of a type of bias in LLMs, paving the way for more accurate and reliable AI systems.