Enabling privacy-preserving AI training on everyday devices
A new method could bring more accurate and efficient AI models to high-stakes applications like health care and finance, even in under-resourced settings.