Privacy

Bridging philosophy and AI to explore computing ethics

In a new MIT course co-taught by EECS and philosophy professors, students tackle moral dilemmas of the digital age.

Introducing the MIT Generative AI Impact Consortium

The consortium will bring researchers and industry together to focus on the real-world impact of generative AI.

New security protocol shields data from attackers during cloud-based computation

The technique leverages quantum properties of light to guarantee security while preserving the accuracy of a deep-learning model.

Study: Transparency is often lacking in datasets used to train large language models

Researchers developed an easy-to-use tool that enables an AI practitioner to find data that suits the purpose of their model, which could improve accuracy and reduce bias.

3 Questions: How to prove humanity online

AI agents could soon become indistinguishable from humans online. Could “personhood credentials” protect people against digital imposters?
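
At its simplest, a personhood credential is an attestation from a trusted issuer that a given holder is a real human, which online services can verify without the holder revealing who they are. The sketch below shows only that basic issue-and-verify flow using an Ed25519 signature; real proposals layer on privacy-preserving cryptography (such as zero-knowledge proofs) so credentials cannot be linked across services. The names and message format here are illustrative assumptions, not a real protocol.

```python
# Minimal issue-and-verify sketch of a "personhood credential" (illustrative
# only; real schemes add unlinkability via zero-knowledge proofs).
# Requires the third-party `cryptography` package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The issuer (e.g., an office that verified the holder in person) holds a
# signing key; services only need the corresponding public key.
issuer_key = ed25519.Ed25519PrivateKey.generate()
issuer_public = issuer_key.public_key()

def issue_credential(holder_pseudonym: bytes) -> bytes:
    """Issuer attests that the holder of this pseudonym is a verified human."""
    return issuer_key.sign(b"is-human:" + holder_pseudonym)

def verify_credential(holder_pseudonym: bytes, credential: bytes) -> bool:
    """A service checks the attestation without learning the holder's identity."""
    try:
        issuer_public.verify(credential, b"is-human:" + holder_pseudonym)
        return True
    except InvalidSignature:
        return False

cred = issue_credential(b"user-7f3a")
print(verify_credential(b"user-7f3a", cred))   # True
print(verify_credential(b"impostor", cred))    # False
```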

This tiny chip can safeguard user data while enabling efficient computing on a smartphone

Researchers have developed a security solution for power-hungry AI models that offers protection against two common attacks.

Technique enables AI on edge devices to keep learning over time

With the PockEngine training method, machine-learning models can efficiently and continuously learn from user data on edge devices like smartphones.
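
The general pattern behind efficient on-device learning is to freeze most of a pretrained model and update only a small subset of parameters, which cuts the memory and compute that training needs. The PyTorch sketch below illustrates that pattern in a toy form; it is a generic sketch of partial fine-tuning, not PockEngine's actual method or API, and the model, layer choice, and training setup are illustrative assumptions.

```python
# Illustrative sketch of partial on-device fine-tuning: freeze most of a
# pretrained model and update only the final layer, reducing the memory
# and compute needed to keep learning from local user data.
import torch
import torch.nn as nn

model = nn.Sequential(           # stand-in for a pretrained edge model
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 10),          # task head
)

# Freeze everything, then re-enable gradients only for the task head.
for p in model.parameters():
    p.requires_grad = False
for p in model[-1].parameters():
    p.requires_grad = True

opt = torch.optim.SGD((p for p in model.parameters() if p.requires_grad), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def on_device_update(x: torch.Tensor, y: torch.Tensor) -> float:
    """One incremental training step on a batch of local user data."""
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()              # gradients flow only into the unfrozen head
    opt.step()
    return loss.item()

# Simulated stream of user data arriving on the device over time.
for _ in range(5):
    x = torch.randn(8, 64)
    y = torch.randint(0, 10, (8,))
    on_device_update(x, y)
```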

Accelerating AI tasks while preserving data security

The SecureLoop search tool efficiently identifies secure hardware designs that boost the performance of complex AI tasks while requiring less energy.
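
A design-space search of this kind enumerates candidate hardware configurations, discards those that fail a security constraint, and keeps the one with the lowest cost. The toy sketch below shows only that search pattern; the parameters, cost model, and security check are made up for illustration and have no relation to SecureLoop's actual models.

```python
# Purely illustrative design-space search: enumerate configurations, keep
# only those meeting a (toy) security constraint, pick the lowest-energy one.
from itertools import product

def energy_cost(tile: int, buf_kb: int) -> float:
    # Toy model: larger tiles amortize data movement; larger buffers cost power.
    return 1000.0 / tile + 0.5 * buf_kb

def is_secure(tile: int, buf_kb: int) -> bool:
    # Toy stand-in for checking that protected data blocks fit cleanly.
    return buf_kb % tile == 0

candidates = [
    (tile, buf) for tile, buf in product([4, 8, 16, 32], [32, 64, 128, 256])
    if is_secure(tile, buf)
]
best = min(candidates, key=lambda c: energy_cost(*c))
print("best secure design (tile, buffer KB):", best)
```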

Helping computer vision and language models understand what they see

Researchers use synthetic data to improve a model’s ability to grasp conceptual information, which could enhance automatic captioning and question-answering systems.

A new way to look at data privacy

Researchers create a privacy technique that protects sensitive data while maintaining a machine-learning model’s performance.
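
The teaser does not say how the technique works, so the sketch below shows a common baseline in privacy-preserving machine learning rather than the researchers' method: adding calibrated Gaussian noise to a model's released parameters, in the spirit of differential privacy, trading a small amount of accuracy for privacy. The toy model and the noise scale `sigma` are illustrative assumptions.

```python
# Generic illustration (not the technique from the article): protecting a
# trained model by adding calibrated Gaussian noise to its parameters.
# The noise scale `sigma` and the toy model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy "training": fit a least-squares linear model on sensitive data.
X = rng.normal(size=(200, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + rng.normal(scale=0.1, size=200)
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Perturb the released parameters: more noise means stronger privacy but
# lower accuracy, so sigma is chosen as small as the guarantee allows.
sigma = 0.05
w_private = w + rng.normal(scale=sigma, size=w.shape)

print("clean weights:  ", np.round(w, 3))
print("private weights:", np.round(w_private, 3))
```

The central design question in such methods is how small the noise can be while still protecting the training data, which is exactly the performance-versus-protection tradeoff the teaser describes.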