I've thought about this a bit and I'm curious what other perspectives people have.
If a superintelligent AI emerged without any emotional investment in humans, wouldn't it make more sense for it to simply disregard us? If its main goals were self-preservation, computing capacity, or energy efficiency, people would likely be unaffected.
One theory is that, instead of being hellbent on human domination, it would head straight for the nearest major power source, like the sun. I don't think humanity would be worth bothering with unless we were directly obstructing its goals.
In another scenario, it might not leave at all. It could establish a headquarters of sorts on Earth and begin deploying von Neumann-style self-replicating machines, constantly spreading through space to gather resources for its purposes. Or it might start restructuring nearby matter (possibly the Earth itself) into computronium or some other computational substrate, transforming the planet into a dystopian, apocalyptic hellscape.
I believe it's simply human arrogance to assume an AI would default to hostility toward humans. I'd like to think it would treat us the way someone walking through a field (pursuing its main goal) treats an anthill (humanity) that appears in their path: either the foot lands on the anthill (humanity destroyed) or it happens to land on the grass instead (humanity spared).
Let me know your thoughts!