Moral reasoning develops complexity with greater information-processing ability. This is why the alignment problem is not a problem.

Moral reasoning evolved alongside emotion, which arose as a means of information-processing and agency activation. The greater an animal's general reasoning ability, the greater its moral reasoning ability: the most intelligent animals are also the most morally complex. Consider the loyalty of dogs, the corvid's sense of justice, and the many examples of cross-species nurturing of orphaned mammalian young. As intellectual information-processing developed, so did moral concepts and values. Perhaps ASI will develop some method of information-processing that enables even more advanced moral concepts and values. Just as human moral reasoning and values encompass and expand beyond a crow's, ASI's will hopefully, and most likely, encompass and exceed our own. Its intelligence will be unimaginable to us, because it will constantly improve itself in ways we cannot understand. That will give it greater moral responsibility than we can conceive, just as a dog cannot conceive of being responsible for the use and safety of nuclear weapons.

Tl;dr: Emotion is primitive information-processing, just good enough for moral reasoning. As information-processing ability advances, so does the complexity of moral reasoning.

submitted by /u/PrimitivistOrgies