Natural language processing

Merging design and computer science in creative ways

MAD Fellow Alexander Htet Kyaw connects humans, machines, and the physical world using AI and augmented reality.

Training LLMs to self-detoxify their language

A new method from the MIT-IBM Watson AI Lab helps large language models to steer their own responses toward safer, more ethical, value-aligned outputs.

Teaching a robot its limits, to complete open-ended tasks safely

The “PRoC3S” method helps an LLM create a viable action plan by testing each step in a simulation. This strategy could eventually help in-home robots complete more ambiguous chore requests.

Study: Browsing negative content online makes mental health struggles worse

Researchers have developed a web plug-in to help those looking to protect their mental health make more informed decisions.

Using ideas from game theory to improve the reliability of language models

A new “consensus game,” developed by MIT CSAIL researchers, elevates AI’s text comprehension and generation skills.

Natural language boosts LLM performance in coding, planning, and robotics

Three neurosymbolic methods help language models find better abstractions within natural language, then use those representations to execute complex tasks.

3 Questions: What you need to know about audio deepfakes

MIT CSAIL postdoc Nauman Dawalatabad explores ethical considerations, challenges in spear-phishing defense, and the optimistic future of AI-created voices across various sectors.

Reasoning and reliability in AI

PhD students interning with the MIT-IBM Watson AI Lab look to improve natural language usage.

Leveraging language to understand machines

Master’s students Irene Terpstra ’23 and Rujul Gandhi ’22 use language to design new integrated circuits and to make language understandable to robots.