Study: AI models fail to reproduce human judgements about rule violations
Models trained using common data-collection techniques judge rule violations more harshly than humans would, researchers report.
Researchers identify a property that helps computer vision models learn to represent the visual world in a more stable, predictable way.
The system they developed eliminates a source of bias in simulations, leading to improved algorithms that can boost application performance.
A collaborative research team from the MIT-Takeda Program combined physics and machine learning to characterize rough particle surfaces in pharmaceutical pills and powders.
MIT researchers demonstrate an advance in autonomous drone navigation, using brain-inspired liquid neural networks that excel in out-of-distribution scenarios.
Experts convene to peek under the hood of AI-generated code, language, and images, as well as the technology's capabilities, limitations, and future impact.
“DribbleBot” can maneuver a soccer ball on landscapes such as sand, gravel, mud, and snow, using reinforcement learning to adapt to varying ball dynamics.
MIT researchers built DiffDock, a model that may one day be able to find new drugs faster than traditional methods and reduce the potential for adverse side effects.
With the right building blocks, machine-learning models can more accurately perform tasks like fraud detection or spam filtering.
New LiGO technique accelerates training of large machine-learning models, reducing the monetary and environmental cost of developing AI applications.