Bridging philosophy and AI to explore computing ethics
In a new MIT course co-taught by EECS and philosophy professors, students tackle moral dilemmas of the digital age.
The consortium will bring researchers and industry together to focus on impact.
The technique leverages quantum properties of light to guarantee security while preserving the accuracy of a deep-learning model.
Researchers developed an easy-to-use tool that enables an AI practitioner to find data that suits the purpose of their model, which could improve accuracy and reduce bias.
AI agents could soon become indistinguishable from humans online. Could "personhood credentials" protect people against digital impostors?
Researchers have developed a security solution for power-hungry AI models that offers protection against two common attacks.
With the PockEngine training method, machine-learning models can efficiently and continuously learn from user data on edge devices like smartphones.
The SecureLoop search tool efficiently identifies secure designs for hardware that can boost the performance of complex AI tasks, while requiring less energy.
Researchers use synthetic data to improve a model’s ability to grasp conceptual information, which could enhance automatic captioning and question-answering systems.
Researchers create a privacy technique that protects sensitive data while maintaining a machine-learning model’s performance.