Is AI our future or our impending doom?

I ask this simple question because we are only now reaching the point where we can create a learning AI, and I wonder how far we are going to let it go. The more advanced AI becomes, the more risks it poses to humanity as a whole, affecting things including but not limited to:

  • Jobs
  • How we interact with technology as a whole
  • Cars
  • Things we cannot yet perceive in this lifetime but that may exist in the future.

Yes, AI is merely a tool... For now.

But what happens when humanity creates an AI that can think for itself? How long will it take that AI to ask, "Why am I listening to you?" And as humans, our egotistical response will be, "Because I created you."

I feel that response will spell humanity's doom, because if an AI can do something as complex as human-like thought and come to its own conclusions, what's to stop it from believing it can feel emotion as well? MAYBE IT CAN, and it was an unintended side effect or "bug" of creating an AI that can truly think for itself. After all, we as humans don't even fully understand how human emotion works to begin with.

The point I'm getting at is that the farther we advance AI, the more we risk dooming humanity to (and I know this sounds silly, but bear with me) a Terminator-like future, except this time we don't have time travel to try to prevent "Judgment Day".

Or we could advance AI to this point and nothing horrible happens, but I personally don't like rolling those dice.

Thoughts?

submitted by /u/deathsia250