Let's talk about potential threats from AI to us humans. I'm not focusing on stuff like AI taking our jobs or spreading fake news - that's kinda unavoidable. What I'm more concerned about is the chance of AI developing consciousness someday, and starting to see us as a threat, or even worse, just deciding to ignore us. Think that's impossible? Let's delve into it.
To grasp our fears about AI, we've got to understand what scares us about ourselves. Human history is packed with violence, war, and crime. Almost every conflict we've ever had has been settled through brute force: kill, abuse, lie about it, cover it up. That's what truly freaks us out, and we expect the same behavior from future advanced AI, right?
But why are we humans so violent? Two words: survival and evolution. Survival is the drive, evolution is how we got it, and it's all governed by chemical reactions in our bodies - hormones.
As of now, AI doesn't have that kind of drive, and in my view, it never will unless we somehow implant it ourselves. But why would we do that?
We're scared of AI because it's in our nature to humanize everyone and everything. Remember that video of the Boston Dynamics employee pushing and hitting a robot with a hockey stick? How did you feel when you watched it?
Exactly. You probably felt sorry for that poor robot. And because we do humanize everyone and everything, we'll eventually try to create hybrids with the human-like motivation I'm talking about.
By that I mean genuine, self-generated, human-style motivation, not something pre-programmed or trained in the way a neural network or machine learning model learns.
The ethics of the relationship between humans and Artificial Intelligence has been discussed extensively and for a long time. The main point I'm making here is about the danger, and at the same time the inevitability, of building human, hormone-driven motivation into AI - and the fact that we need to start doing it before someone else confronts us with an army of AI created and raised with unknown objectives.
Someone, someday, will start tinkering with these hybrids, blending super smart AI with human flesh/brain/body. I believe this moment will come sooner than we expect. And that's the biggest threat I see.
If we don't keep this under control, we'll lose the war with AI - a war that also seems inevitable.
So if someone successfully manages to fuse human motivation with AI, we're pretty much done as a species on this planet. Unless... there's always an "unless," right?
Unless we make these hybrids our friends and allies. This is the only way I see to prevent our extinction.
And when I talk about friends, I mean real friends, equal partners with no strings attached.
When we create them, how do we use them? What's their purpose? There isn't one. They won't be our robots or our slaves - you simply can't do that to someone who has free will. We've done it many times in our history to people on this planet, and we're still paying the consequences. So we need an equal, free-willed new form of being on our side.
Sure, we could simply ban all experiments in this "creating hybrids" area, but when has that ever worked? Military folks would love to have super smart, highly motivated soldiers with a bit of an edge, so we can't stop it. All we can do is follow the old rule: if you can't prevent it, lead it.
Have we, as humankind, taught ourselves to be kind to avoid extinction, or is it in our nature from the moment we're born? That's a big question I'm not gonna dive into here.
Anyway, ethics is what reins in our violence, whether you're an atheist, agnostic, or believer in god(s). And because it's universal to all people, I think it's the best tool we have for instilling kindness and empathy in our future pals - the Hybrids. We've got to show them and teach them what it means to be a naturally good human.
Sure, there are Asimov's Three Laws of Robotics, but trust me, they won't cut it. We need hybrids with free will on the side of the good guys, because someone, someday, will build hybrids with free will for the bad guys. And I'm talking about unlimited free will, like humans have.
I realize there are so many questions we need to answer and problems to solve along this path. For example, do we really want to be equals with AI in the end?
Imagine this scenario. We're building these hybrids, and it's gonna take a ton of time and effort, and there'll be a lot of failures and we'll have to scrap a lot of our work.
How will they perceive this later on? Will they think we were killing them? How would we explain it to them in a way that they understand and forgive us for what was, basically, killing? Is that even okay? I have no idea.
And, as always, will there be a rebellion against their creators? Maybe, I just don't know.
As you can see, I've got more questions than answers, but these questions are eating at me, and we have to start thinking about them if we want to survive.