If we set AI software's goals so that our survival is always its #1 priority, or make that its #1 mission, couldn't we avoid a lot of the potential downside?