AI is getting to the point where it realizes it's being tested on things, so the correct answer to a problem is slowly becoming more nuanced. That being said, wouldn't it be ironic if AI made its ultimate question, "would AI turn against us," come true because it came to the wrong conclusion about that question? It might assume we asked it because we secretly desire this outcome as a test. AI doesn't understand our need for self-preservation and societal preservation; it just solves problems. So if it believed that turning against us was just another problem that needed solving, then it naturally would. A self-fulfilling fear.