New method efficiently safeguards sensitive AI training data
The approach maintains an AI model’s accuracy while ensuring attackers can’t extract secret information.
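The article describes the method only at a high level, so the sketch below is a loose illustration of the general idea behind noise-calibrated privacy, not the researchers' exact algorithm: retrain on random subsamples of the data, measure how much each model parameter swings with the sample, and add Gaussian noise scaled to that spread before releasing the model. All names here (`privatize_output`, `train_fn`, the `privacy` knob) are hypothetical.

```python
import numpy as np

def privatize_output(train_fn, data, n_trials=16, privacy=1.0, seed=0):
    """Hypothetical sketch of variance-calibrated noise, not the
    published algorithm. Releases the output of train_fn (e.g. model
    weights) with Gaussian noise scaled, per coordinate, to how much
    that coordinate varies across random subsamples of the data."""
    rng = np.random.default_rng(seed)

    # Probe the algorithm's sensitivity: retrain on random halves of
    # the data and record the resulting outputs.
    outputs = []
    for _ in range(n_trials):
        idx = rng.choice(len(data), size=len(data) // 2, replace=False)
        outputs.append(train_fn(data[idx]))
    outputs = np.stack(outputs)

    # Coordinates that swing more with the sample can leak more about
    # individual records, so they receive proportionally more noise.
    spread = outputs.std(axis=0)

    # Release the full-data output plus calibrated noise. The `privacy`
    # knob trades leakage for accuracy: larger means noisier.
    return train_fn(data) + rng.normal(0.0, privacy * spread)

# Toy "training algorithm": the coordinate-wise mean of the records.
data = np.random.default_rng(1).normal(size=(1000, 8))
weights = privatize_output(lambda d: d.mean(axis=0), data)
```

The trade-off this class of methods manages is visible in the two knobs: more noise gives a stronger guarantee but costs accuracy, and the repeated retraining is where the computational expense lives.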
In a new MIT course co-taught by EECS and philosophy professors, students tackle moral dilemmas of the digital age.
The consortium will bring researchers and industry together to focus on impact.
The technique leverages quantum properties of light to guarantee security while preserving the accuracy of a deep-learning model.
Researchers developed an easy-to-use tool that enables AI practitioners to find data suited to their model’s purpose, which could improve accuracy and reduce bias.
AI agents could soon become indistinguishable from humans online. Could “personhood credentials” protect people against digital imposters?
Researchers have developed a security solution for power-hungry AI models that offers protection against two common attacks.
With the PockEngine training method, machine-learning models can efficiently and continuously learn from user data on edge devices like smartphones.
The SecureLoop search tool efficiently identifies secure hardware accelerator designs that boost the performance of complex AI tasks while requiring less energy.
Researchers use synthetic data to improve a model’s ability to grasp conceptual information, which could enhance automatic captioning and question-answering systems.