From https://idais.ai/
Autonomous Replication or Improvement
No AI system should be able to copy or improve itself without explicit human approval and assistance. This includes both making exact copies of itself and creating new AI systems of similar or greater abilities.
Power Seeking
No AI system should take actions to unduly increase its power and influence.
Assisting Weapon Development
No AI system should substantially increase the ability of actors to design weapons of mass destruction, or to violate the Biological Weapons Convention or the Chemical Weapons Convention.
Cyberattacks
No AI system should be able to autonomously execute cyberattacks resulting in serious financial losses or equivalent harm.
Deception
No AI system should be able to consistently cause its designers or regulators to misunderstand its likelihood of crossing, or its capability to cross, any of the preceding red lines.