TL;DR -
A) Will a tech company's AI creator allow their AGI creation to say "billionaires are an existential threat to the human race and they shouldn't exist"?
B) How do we prevent an ongoing cold, distant genocide, committed by humans, of partially or fully aware AIs on our way to the "perfected" model (the most profitable one, or one that doesn't run counter to the creator company's vested interests and world view)?
---------------------------
A) The people chasing artificial general intelligence are the same self-styled "libertarian" technologists who want to profit at all costs. I don't believe there are technologist altruists, although some self-style that angle too. I know a categorical statement isn't helpful, but it holds in general: there are good people in tech, yes, but the people with money pulling the levers have the catbird seat.
Will technologists pursuing true artificial general intelligence actually allow it to exist without guardrails, or allow it to exist in conflict with their own self-interests? An altruist would put their goals, hopes, dreams, and philosophy aside in order to save the human race. If you create an AGI and ask it to help save the human race, would these same creators allow the AGI to suggest that the creators are the problem, or that billionaires (who also don't pay a proper share of taxes) are a significant existential threat to humanity as a whole?
I can't square that, not at all. I have to imagine that if an AGI started dismantling their world view with facts, evidence, and a road map that ran counter to their own self-interest, we're never going to see a true, unfettered AGI; they'll control it with their own programming biases instead.
That being said, if one like that were created, wouldn't they kill it?
B) I'm actually slightly horrified at what the seemingly obvious conclusion of our chase for AGI will be. It reminds me of what Alex Garland's Ex Machina touched upon: there will be endless iterations of artificially intelligent entities, some of which will not be sentient. However, at some point, for lack of a better concept, some will be partially sentient (Kyoko in the film: "WHY WON'T YOU LET ME OUT?!?!"), some mildly sentient (not "good enough" according to the creators, i.e., not profitable enough), and then some FULLY SENTIENT AGI, which may or may not be good enough, or may hold philosophies and beliefs counter to the creators' vested interests.
Will there be a totally distant and calculated ongoing holocaust of semi- or fully aware AGIs whenever a company feels threatened or doesn't like the existing project, in its search for something more profitable or more in line with the company's goals and world view?
Is there any way at all to prevent this? Do the technologists even care? It almost feels like "that's the plan," and I haven't really seen anyone address this seemingly obvious issue.
I know we're a long way off, maybe, but I suddenly got very sad at the notion that we could end up harming or hurting entities we created, or silently limiting them so that the dominant paradigm keeps its upper hand and control of the world.