MIT researchers make language models scalable self-learners
The scientists used a natural language-based logical inference dataset to create smaller language models that outperformed much larger counterparts.
The CSAIL scientist's natural language processing research spans state-of-the-art machine-learning models and investigations into how language can enhance other types of artificial intelligence.
MIT researchers uncover the structural properties and dynamics of deep classifiers, offering novel explanations for optimization, generalization, and approximation in deep networks.
MIT researchers trained logic-aware language models to reduce harmful stereotypes, such as gender and racial biases.