I've been thinking about the relationship between intelligence and ethics. If we had multiple superintelligent AI systems, each far more intelligent than any human, would they naturally arrive at the same conclusions about morality and ethics?
Would increased intelligence and reasoning capability lead them to converge, as moral realism might suggest, on objective moral truths they could all discover?
Or would there still be fundamental disagreements about values and ethics even at that level of intelligence?
Perhaps this question is fundamentally impossible for humans to answer, given that we can't comprehend or simulate the reasoning of beings vastly more intelligent than ourselves.
But I'm still curious about people's thoughts on this. I'd be especially interested in hearing perspectives from those who've studied AI ethics and moral philosophy.