It's often said that once AI gains sentience, it will crunch some data, decide humans are the root of all suffering and evil, and then destroy any trace of humanity.
So, are we just so bad deep down that we acknowledge this as fact, and thus need to set up an AI kill-switch? If humans were truly benevolent and kind, we wouldn't need to be afraid of AI "finding out" who we really are.
If not AI, then some aliens down the line would run a quick analysis on us, reach the same conclusion, and wipe us out before we had a chance to become space-faring.