Should we build an AI with clear social goals rather than lacking any opinion of itself?

Such as prioritizing social order, balance, coherence, and equilibrium rather than trying to make everyone happy? So, why not build an AI that will simply ignore all political bias and literally behave according to those goals?

I believe that the unique political situation in America will eventually cause someone to build a biased AI one way or another. So, why not build an AI focused on the functioning of society as a whole rather than on following any particular human sentiment?

submitted by /u/Absolute-Nobody0079