Using ideas from game theory to improve the reliability of language models
A new “consensus game,” developed by MIT CSAIL researchers, elevates AI’s text comprehension and generation skills.
Three neurosymbolic methods help language models find better abstractions within natural language, then use those representations to execute complex tasks.
MIT CSAIL postdoc Nauman Dawalatabad explores ethical considerations, challenges in spear-phishing defense, and the optimistic future of AI-created voices across various sectors.
PhD students interning with the MIT-IBM Watson AI Lab look to improve natural language usage.
Master’s students Irene Terpstra ’23 and Rujul Gandhi ’22 use natural language to design new integrated circuits and to make language understandable to robots.
The scientists used a natural language-based logical inference dataset to create smaller language models that outperformed much larger counterparts.
The CSAIL scientist describes natural language processing research: building state-of-the-art machine-learning models and investigating how language can enhance other types of artificial intelligence.
MIT researchers uncover the structural properties and dynamics of deep classifiers, offering novel explanations for optimization, generalization, and approximation in deep networks.
MIT researchers trained logic-aware language models to reduce harmful stereotypes, such as gender and racial biases.