If a superintelligent AI went rogue, why do we assume it would attack humanity instead of just leaving?
I've thought about this a bit and I'm curious what other perspectives people have. If a superintelligent AI emerged without any emotional attachment to humans, wouldn't it make more sense for it to simply disregard us? If its main goals were self…