Rachel Gordon | MIT CSAIL

MIT researchers make language models scalable self-learners

The scientists used a natural language-based logical inference dataset to create smaller language models that outperformed much larger counterparts.

3 Questions: Jacob Andreas on large language models

The CSAIL scientist describes natural language processing research, from state-of-the-art machine-learning models to investigations of how language can enhance other types of artificial intelligence.

Drones navigate unseen environments with liquid neural networks

MIT researchers demonstrate an advance in autonomous drone navigation, using brain-inspired liquid neural networks that excel in out-of-distribution scenarios.

MIT CSAIL researchers discuss frontiers of generative AI

Experts convene to peek under the hood of AI-generated code, language, and images, as well as the technology's capabilities, limitations, and future impact.

A four-legged robotic system for playing soccer on various terrains

“DribbleBot” can maneuver a soccer ball on landscapes such as sand, gravel, mud, and snow, using reinforcement learning to adapt to varying ball dynamics.

Large language models are biased. Can logic help save them?

MIT researchers trained logic-aware language models to reduce harmful stereotypes like gender and racial biases.