Study: AI models fail to reproduce human judgements about rule violations
Models trained using common data-collection techniques judge rule violations more harshly than humans would, researchers report.
Researchers identify a property that helps computer vision models learn to represent the visual world in a more stable, predictable way.
The system they developed eliminates a source of bias in simulations, leading to improved algorithms that can boost the performance of applications.
A collaborative research team from the MIT-Takeda Program combined physics and machine learning to characterize rough particle surfaces in pharmaceutical pills and powders.
These tunable proteins could be used to create new materials with specific mechanical properties, like toughness or flexibility.
MIT researchers demonstrate a new advance in autonomous drone navigation, using brain-inspired liquid neural networks that excel in out-of-distribution scenarios.
Experts convene to peek under the hood of AI-generated code, language, and images, as well as their capabilities, limitations, and future impact.
Martin Luther King Jr. Scholar Brian Nord trains machines to explore the cosmos and fights for equity in research.
“DribbleBot” can maneuver a soccer ball on landscapes such as sand, gravel, mud, and snow, using reinforcement learning to adapt to varying ball dynamics.
MIT researchers built DiffDock, a model that may one day be able to find new drugs faster than traditional methods and reduce the potential for adverse side effects.