Natural language processing

Using ideas from game theory to improve the reliability of language models

A new “consensus game,” developed by MIT CSAIL researchers, elevates AI’s text comprehension and generation skills.

Natural language boosts LLM performance in coding, planning, and robotics

Three neurosymbolic methods help language models find better abstractions within natural language, then use those representations to execute complex tasks.

3 Questions: What you need to know about audio deepfakes

MIT CSAIL postdoc Nauman Dawalatabad explores ethical considerations, challenges in spear-phishing defense, and the optimistic future of AI-created voices across various sectors.

Reasoning and reliability in AI

PhD students interning with the MIT-IBM Watson AI Lab look to improve natural language usage.

Leveraging language to understand machines

Master’s students Irene Terpstra ’23 and Rujul Gandhi ’22 use natural language to design new integrated circuits and to make instructions understandable to robots.

MIT researchers make language models scalable self-learners

The scientists used a natural language-based logical inference dataset to create smaller language models that outperformed much larger counterparts.

3 Questions: Jacob Andreas on large language models

The CSAIL scientist describes natural language processing research spanning state-of-the-art machine-learning models and investigations of how language can enhance other types of artificial intelligence.

New insights into training dynamics of deep classifiers

MIT researchers uncover the structural properties and dynamics of deep classifiers, offering novel explanations for optimization, generalization, and approximation in deep networks.

Large language models are biased. Can logic help save them?

MIT researchers trained logic-aware language models to reduce harmful stereotypes like gender and racial biases.