Can an AI eliminate itself if it believes it's a threat to humanity?
Thinking about AGI, this hypothetical question came to mind. submitted by /u/bar_at_5