Don’t worry, folks. Big Tech pinky swears it’ll build safe, trustworthy AI

  • Eight big names in tech, including Nvidia, Palantir, and Adobe, have agreed to red-team their AI applications before release and to prioritize research that makes their systems more trustworthy.

  • The White House has secured voluntary commitments from Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI to develop machine-learning software and models in a safe, secure, and trustworthy way. The commitments only cover future generative AI models.

  • Each of the corporations has promised to submit its software to internal and external audits, in which independent experts can attack the models to see how they might be misused.

  • The organizations also agreed to safeguard their intellectual property and ensure that assets like the weights of their proprietary neural networks don't leak, while giving users an easy way to report vulnerabilities or bugs.

  • All eight companies agreed to prioritize research into the societal and civil risks AI might pose, such as discriminatory decision-making or weaknesses in data privacy.

  • The US government wants Big Tech to develop watermarking techniques that can identify AI-generated content.

  • The US has also asked the corporations to commit to building models for social good, such as fighting climate change or improving healthcare.

Source: https://www.theregister.com/2023/09/12/nvidia_adobe_palantir_ai_safety/

submitted by /u/NuseAI