Human-computer interaction

Q&A: MIT SHASS and the future of education in the age of AI

As the School of Humanities, Arts, and Social Sciences marks 75 years, Dean Agustín Rayo reflects on how AI is reshaping higher education and why SHASS disciplines continue to be central to MIT’s mission.

Human-machine teaming dives underwater

Researchers are developing hardware and algorithms to improve collaboration between divers and autonomous underwater vehicles engaged in maritime missions.

Evaluating the ethics of autonomous systems

MIT researchers developed a testing framework that pinpoints situations where AI decision-support systems are not treating people and communities fairly.

Wristband enables wearers to control a robotic hand with their own movements

By moving their hands and fingers, users can direct a robot to play piano or shoot a basketball, or they can manipulate objects in a virtual environment.

A better method for identifying overconfident large language models

This new metric for measuring uncertainty could flag hallucinations and help users know whether to trust an AI model.

New MIT class uses anthropology to improve chatbots

MIT computer science students design AI chatbots to help young users become more sociable and socially confident.

Improving AI models’ ability to explain their predictions

A new approach could help users know whether to trust a model’s predictions in safety-critical applications like health care and autonomous driving.

Mixing generative AI with physics to create personal items that work in the real world

To help generative AI models create durable, real-world accessories and decor, the PhysiOpt system runs physics simulations and makes subtle tweaks to its 3D blueprints.

Personalization features can make LLMs more agreeable

The context of long-term conversations can cause an LLM to begin mirroring the user’s viewpoints, potentially reducing accuracy or creating a virtual echo chamber.

Helping AI agents search to get the best results out of large language models

EnCompass executes AI agent programs by backtracking and making multiple attempts to find the best set of outputs generated by an LLM. It could help coders work with AI agents more efficiently.