Enabling privacy-preserving AI training on everyday devices
A new method could bring more accurate and efficient AI models to high-stakes applications like health care and finance, even in under-resourced settings.
New research demonstrates how AI models can be tested to ensure they don’t cause harm by revealing sensitive patient health data.
The approach maintains an AI model’s accuracy while ensuring attackers can’t extract secret information.
In a new MIT course co-taught by EECS and philosophy professors, students tackle moral dilemmas of the digital age.
The consortium will bring researchers and industry together to focus on impact.
The technique leverages quantum properties of light to guarantee security while preserving the accuracy of a deep-learning model.
Researchers developed an easy-to-use tool that enables an AI practitioner to find data that suits the purpose of their model, which could improve accuracy and reduce bias.
AI agents could soon become indistinguishable from humans online. Could “personhood credentials” protect people against digital imposters?
Researchers have developed a security solution for power-hungry AI models that offers protection against two common attacks.
With the PockEngine training method, machine-learning models can efficiently and continuously learn from user data on edge devices like smartphones.