Frankenstein and AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

Mankind creates a monster.

Monster runs amok and kills.

Mankind is threatened or overtaken.

This is a typical plot found in popular movies such as The Terminator and The Matrix. Over and over again, science fiction stories warn us about overstepping human bounds. We are repeatedly warned that we might someday bring about our own destruction. Scary. Worrisome. Could it happen? We don’t know, but it sure seems like a possibility.

Another similar story is celebrating its bicentennial this year. In 1818, Mary Shelley brought us the now famous and perhaps infamous “Frankenstein; or, The Modern Prometheus.” You might recall from your school days that Prometheus is a figure in Greek mythology credited with creating mankind from clay; he defied the gods by giving humanity fire, enabling civilization to take hold. In Mary Shelley’s tale, Victor Frankenstein is a character who creates a “monster” – which inadvertently came to be known as “Frankenstein,” though in fact the Frankenstein name belongs to Victor.

Frankenstein as a story and a theme has become a pervasive aspect of our contemporary culture. Besides being standard required reading for most school children, besides being a popular costume for Halloween, and besides appearing in a myriad of other forums including TV, films, online, and the like, we have also grown accustomed to using “Frankenstein” as a means of signaling that we as humans might be overstepping our bounds. Whenever a new scientific breakthrough occurs or a new technology emerges, we right away ask whether a new Frankenstein has perhaps been unleashed.

As stated in a recent issue of the magazine Science, “Frankenstein lives on” is an ongoing mantra that seems to dog any new innovation. There are some specialists who study existential risks and embrace the warnings that can be found in Mary Shelley’s story. Others, though, worry that we overuse the Frankenstein paradigm and therefore tend to be distracted from real-world problems that have real-world solutions. Some would say that the potential for nuclear war is one such example, as might be climate change.

You might remember from history that when scientists first detonated an atomic bomb in 1945, there were some at the time who predicted the detonation could ignite our atmosphere in a global chain reaction that would utterly wipe out the Earth as we know it. This was considered a Frankenstein moment. Humans had created a monster that could run amok and end up killing its masters. We know now that this did not happen, though we are still faced with the potential danger of nuclear conflagration if a full-on nuclear war were to occur.

There are so-called Frankenwords today, a somewhat modern incarnation of the Frankenstein moniker. All you need to do is attach the word “Franken” to the front of some other word, and it is transformed to mean something that might take over from us humans. Examples include Frankenmouse, Frankenmoth, Frankencells, Frankengenes, Frankenstorms, and so on.

Artificial Intelligence (AI) is already on the Frankenstein watch list, and there are many debates about whether the advent of true AI systems might ultimately lead to our doom. In movies such as The Terminator and The Matrix, humans create computer systems with AI that decide machines should rule over humans. Perhaps most well-known is the Skynet network of the Terminator series, which becomes sentient on April 19, 2011 and starts to attack humanity. Today there are many in society who wring their hands about impending dangers, fearing that as AI makes progress we are moving daily closer and closer to our own doom.

Most people are unaware that they are referring to what AI specialists tend to call Artificial General Intelligence (AGI), rather than conventional AI. AGI is a type of AI that would be the equivalent of a generally thinking human being. Today’s AI is not AGI. The AI of today is specialized for particular tasks and capabilities. We have not yet been able to formulate AGI, which is what many commonly think of as true AI.

What does this have to do with AI self-driving cars?

At the Cybernetic Self-Driving Car Institute, we are often asked whether the pursuit of AI self-driving cars is heading mankind toward humanity’s doom. Some are worried we are heading toward Frankencars.

The concept is that we’ll make AI self-driving cars that are so smart and independent that the cars will turn on humans. You tell your self-driving car to take you to the market. But your AI self-driving car refuses and decides it wants to go someplace else instead. Maybe that self-driving car you see ahead of you will try to run you over. Or it will swerve into another car in an attempt to kill that car’s human occupants. Some critics of self-driving cars want the auto makers and tech firms developing them to include a kill-switch for humans to use if needed. If your self-driving car goes haywire, you would be able to press the kill-switch, and the self-driving car would become disengaged from its AI and be nothing more than a regular car (or, maybe, a ton-sized paperweight).
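As a thought experiment, here is a minimal sketch of what such a kill-switch might look like in code. To be clear, this is purely illustrative: all class and method names are hypothetical, not any auto maker’s actual design. The key idea is that the override bypasses the AI rather than asking it to cooperate.

```python
# Hypothetical sketch of a human-operated kill-switch that disengages the
# AI driving system and reverts the vehicle to ordinary manual control.
# All names here are illustrative; this is not a real vehicle API.

class VehicleController:
    def __init__(self) -> None:
        self.ai_engaged = True  # AI self-driving mode active by default

    def ai_driving_command(self) -> str:
        # Placeholder for the AI's steering/throttle decision logic.
        return "steer_and_accelerate_per_ai_plan"

    def manual_driving_command(self) -> str:
        # Pass the human's control inputs straight through to the actuators.
        return "pass_through_human_inputs"

    def kill_switch_pressed(self) -> None:
        # The override must not depend on the AI's cooperation: it simply
        # cuts the AI out of the control path.
        self.ai_engaged = False

    def next_command(self) -> str:
        if self.ai_engaged:
            return self.ai_driving_command()
        return self.manual_driving_command()

car = VehicleController()
car.kill_switch_pressed()
assert car.next_command() == "pass_through_human_inputs"
```

Of course, the skeptics’ retort applies here too: a sufficiently capable AI might find a way around any software-level switch, which is why some argue the disengagement would need to be enforced in hardware.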

The question, too, is where the tipping point will be for self-driving cars going from obedient servants to suddenly becoming malevolent evil doers. In the levels of self-driving cars, Level 5 is the topmost level, consisting of a self-driving car that is supposed to be able to drive in whatever manner a human could drive. When we reach Level 5 cars, will that be the tipping point? Or will we delude ourselves into first having “innocent” and benevolent Level 5 cars, and somehow those self-driving cars will morph into human-destroying Level 5 cars? Or perhaps we need to add a new level to the classifications, let’s call it Level 6, and we consider a Level 6 self-driving car to be the type that has its own thinking capability and is an evil doer that opts to try and destroy us.

Ban the Level 6. Stop the Level 6 before we get there.
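To make the level classifications concrete, here is a minimal sketch of the commonly cited levels of driving automation, expressed as a simple data structure. Note that the Level 6 entry is this article’s tongue-in-cheek hypothetical, not part of any official standard:

```python
from enum import IntEnum

class DrivingAutomationLevel(IntEnum):
    """Sketch of the commonly cited levels of driving automation.
    Level 6 is this article's hypothetical addition, not a real level."""
    NO_AUTOMATION = 0           # Human does all of the driving
    DRIVER_ASSISTANCE = 1       # e.g., adaptive cruise control
    PARTIAL_AUTOMATION = 2      # Combined steering and speed assistance
    CONDITIONAL_AUTOMATION = 3  # Self-drives; human must take over on request
    HIGH_AUTOMATION = 4         # Self-drives within limited conditions
    FULL_AUTOMATION = 5         # Drives anywhere a human could; no human needed
    MALEVOLENT_AUTOMATION = 6   # Hypothetical "evil doer" level to be banned

def is_permitted(level: DrivingAutomationLevel) -> bool:
    # Per the half-serious proposal above: ban the Level 6.
    return level < DrivingAutomationLevel.MALEVOLENT_AUTOMATION

assert is_permitted(DrivingAutomationLevel.FULL_AUTOMATION)
assert not is_permitted(DrivingAutomationLevel.MALEVOLENT_AUTOMATION)
```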

There are those who counter-argue that just because we produce automation able to rise to the same level of thinking as humans does not necessarily mean the automation will decide to turn against us. Maybe such automation will become our everlasting best friend. Perhaps such automation will see us as symbiotic with what it can do. Others would say that ultimately the automation, or some of it, will inexorably want to get us. It’s bound to happen, they say, whether in the near-term or the long-term. We need to keep our eyes open. At all times.

In terms of self-driving cars, we are not anticipating that Level 5 self-driving cars will need AGI. In other words, the ability to drive a car to the requirements of Level 5 does not mean that the AI must be AGI. The AI can presumably instead be of a narrower kind, focused just on the task of driving a car. Some, though, say that maybe you cannot parse out the driving task into a narrow class of AI. Maybe the only way to achieve the AI of a self-driving car is to also solve the AGI problem. In essence, without full overall intelligence, a driving intelligence alone won’t be enough to get us to a true Level 5 self-driving car. You need conventional AI and AGI to get there, some assert. It’s AI + AGI to achieve a true self-driving car, they say.

Suppose that self-driving cars do decide they want to attack humans. There is a line of argument that says we could just recode the self-driving cars to stop them from attacking us. We could have a built-in back-door. Or, just as we created fire extinguishers to control fire, so too we would be able to invent something to keep self-driving cars from going wild without any limitations. But do you want to bet that we can do so? Maybe the machines become so smart that they can outsmart our attempts to outsmart them.

Let’s take a look at the various lessons that many readers and reviewers seem to see in Frankenstein, and analyze how those lessons might be valuable for the ongoing and future development of AI self-driving cars.

Keep in mind that these lessons learned from Frankenstein are at times not well grounded in the actual book. In some cases, people have come up with lessons that aren’t based on the story per se, but that they believe could somehow be interpreted out of the story. There are myths about the book and the story that, one could say, are a stretch of the imagination beyond the actual text and the meaning that Mary Shelley presumably had in mind.

Here are some of the more salient lessons and how they apply to AI self-driving cars.

Frankenstein Lesson #1: Tampering with nature’s unique order is done at the peril of mankind

In the creation of the monster, presumably mankind overstepped its boundaries by attempting to create life, and this goes against nature’s unique order of how things are supposed to work. By undertaking such a transgression, the results can be quite unpredictable, and/or the transgression will lead to the downfall of mankind for going beyond what mankind is supposed to do. In a sense, it is mankind’s just deserts to get destroyed for having violated the rules.

I’ll gently point out that there are counter-arguments, such as: if we all agree that mankind is part of nature and can already create life by one means, why should it be a large stretch for mankind to create life through some other means? And since we’ve agreed mankind is part of nature, wouldn’t we also say that it is nature’s way to create life in whatever manner life could be created? But I’m not going down that rabbit hole here.

In terms of AI self-driving cars, we’ll use this Frankenstein lesson to suggest that perhaps it is nature’s way to have conventional cars, but that creating AI self-driving cars goes contrary to nature’s unique order. By creating AI self-driving cars and presumably breaking a contract with nature, all bets are off. This means that AI self-driving cars might be unpredictable and/or will in some fashion lead to our downfall. They might lead to our downfall by possibly turning on us and opting to run us over. It is hard to see how AI self-driving cars could take control of us entirely, as they are not the kind of AI robots that we envision someday might take us over (re: The Terminator).

Now, is it really the case that conventional cars should be counted as within nature’s unique order? Maybe we’ve already violated a contract with nature by the invention and wide adoption of conventional cars. You could assert that the smog created by cars is one example of how mankind is harming itself by having gone against nature and developed cars. Likewise, the deaths and injuries that occur due to car accidents and the like. If we then add AI into conventional cars and make them into self-driving cars, we are apparently making things even worse. On the other hand, if the AI allows us to reduce deaths and injuries, by eliminating drunk driving and other human-caused contributors to accidents that true AI would help avoid, you might argue that AI self-driving cars will actually be an improvement in our condition in contrast to the use of conventional cars.

Overall, it’s problematic to consider AI self-driving cars a fit within this particular lesson from Frankenstein. You might be able to make a much stronger case that AGI fits within this lesson, since AGI is more akin to potentially going “outside the bounds” of nature (in the view of some).

Frankenstein Lesson #2: Abandonment of your creation will lead to your doom

Some point out that in the story of Frankenstein, only after Victor abandoned his creation did it become bitter and eventually turn into an evil monster. If we abandon that which we create, it will potentially go in a direction we didn’t intend, and/or it will purposely go in a direction we don’t desire as a kind of revenge for having been abandoned.

For AI self-driving cars, we could say that if we allow auto makers and tech firms to produce these AI self-driving cars and there aren’t sufficient controls to make sure they remain in proper use and capability, a form of abandonment could lead to self-driving cars that are error prone, become outdated, and essentially begin to endanger us.

In other words, the Widget self-driving car comes to the marketplace, it makes a big splash, lots of people buy it, these cars are on our roadways, but then the Widget company abandons the self-driving car business. No updates to the self-driving car AI. Meanwhile, our roadways are changing and other aspects of society are changing. The abandoned self-driving cars now become a danger because the world they were designed to operate in has changed.

This doesn’t seem like a very likely scenario, in that one would assume that even if the Widget company abandoned its model of self-driving car, some other firm would take it over or arise to do so, under the notion that if people have the self-driving car and there’s money to be made by keeping it updated, someone will step into the gap and fill it. Even if that didn’t happen, you would logically anticipate that the government would step in and say that such a self-driving car can no longer be on our roads due to the dangers it presents. It just seems hard to imagine that AI self-driving cars abandoned by their maker would remain in use, endangering us, and we wouldn’t somehow take action about it.

But, anyway, that’s the lesson here, namely to make sure we do not abandon AI self-driving cars once they exist. Seems like we can do that.
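For instance, one simple technical guardrail against abandonment, sketched below with hypothetical names and an assumed threshold, would be a staleness check: if the self-driving software has gone too long without a vendor update, the car declines to engage its autonomous mode.

```python
from datetime import date, timedelta

# Hypothetical staleness guardrail: if the self-driving software has been
# abandoned (no vendor updates within some threshold), autonomous mode is
# disabled. The 180-day threshold is an illustrative assumption.

MAX_STALENESS = timedelta(days=180)

def autonomous_mode_allowed(last_update: date, today: date) -> bool:
    return (today - last_update) <= MAX_STALENESS

# An actively maintained car may engage; an abandoned one may not.
assert autonomous_mode_allowed(last_update=date(2018, 1, 15), today=date(2018, 3, 1))
assert not autonomous_mode_allowed(last_update=date(2016, 6, 1), today=date(2018, 3, 1))
```

A regulator could mandate such a check, putting teeth into the notion that abandoned self-driving cars should not remain on our roads.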

Frankenstein Lesson #3: Ambition without foresight can have terrible consequences

In the story of Frankenstein, Victor has this overwhelming desire to create life and seems determined to do so, regardless of what might come of it. He’s got ample ambition. He has very little foresight. One might say that blindly allowing an obsession to drive toward something is not the most astute way to proceed.

Admittedly, the pursuit of AI self-driving cars does seem to fit this same bill. Right now, the excitement about having self-driving cars is riding pretty high. Efforts by regulators to clamp down on the emergence of self-driving cars are blunted right away. Don’t stop the flow of innovation. Don’t put up roadblocks toward a rosy future. Self-driving cars are going to lead us to zero fatalities, we are told. This indeed is a prime example of ambition without much foresight.

I’ll predict that the madcap rush to AI self-driving cars is going to hit its own speedbumps. The early versions, unleashed upon the world prematurely, will likely get involved in accidents or maybe even cause accidents. There will be a backlash about how this could have been allowed. Foresight will suddenly come into vogue. Hindsight will open the eyes of some to the need for safety to be appropriately considered, but in a manner that does not crush the ambitions.

Yep, this is a good lesson for us to definitely keep in mind.

Frankenstein Lesson #4: Assuming that someone else will take responsibility can lead to irresponsibility

In the story of Frankenstein, one might argue that Victor at first shrugs off his responsibility for the “monster” he has created. Things just regrettably turned out badly, but not due to any fault of his, so he believes (falsely). This is a form of either passive ethics or downright ethical neglect.

For AI self-driving cars, one question still being grappled with is who will take responsibility for them.

Up until the vaunted Level 5, it is considered that the human driver in the self-driving car has responsibility for whatever happens with the self-driving car. But for Level 5, presumably there is no human driver and therefore no human in the car to be held responsible for what the car does. This has led to much debate about things like car insurance, since today we generally require that the human driver carry the car insurance.

Will we expect the auto maker to be responsible for the acts of the AI self-driving car? Suppose the auto maker did not make the AI system and bought it from some other company. Will the auto maker be blameless and only the other company be responsible? Or are they conjoined in responsibility? Some rather nutty (in my humble opinion) pundits have said that the AI itself holds the responsibility. This assumes that the AI is the equivalent of a human and has its own independent being, which, I can tell you, will be a long, long, long time from now before it happens (if ever).

If you are using a ride sharing AI self-driving car, maybe the ride sharing company has responsibility for the self-driving car. Some say that maybe the government should be responsible for self-driving cars, perhaps via some kind of government-sponsored car insurance for self-driving cars.

Anyway, it is all a valid consideration, in that per the lesson from Frankenstein, we don’t want to end up with finger pointing and no one taking responsibility for AI self-driving cars and what they do, and so pursuits to pin down that responsibility are indeed worthwhile.

Frankenstein Lesson #5: Hubris can produce self-delusions and lead to bad consequences

In the story of Frankenstein, Victor appears to exhibit a great deal of hubris. He’s sure he can bring something to life. Come heck or high water, he’ll be able to do it. This can cloud one’s thinking. Victor doesn’t even consider that the creation could go bad. An overconfident inventor such as Victor assumes boastfully that they can always undo what they have done. They have the power to giveth, and they can taketh, as they see fit.

As mentioned earlier about AI self-driving cars, some worry that our hubris as technologists causes us to assume that whatever bad consequences arise with self-driving cars, we can easily overcome any such issues with a few lines of code.

Suppose an AI self-driving car goes berserk and runs down a bunch of pedestrians. No problem, in that we just update the artificial neural network and pump it down into those offending self-driving cars. Good as new.

There is definitely a lot of hubris going on right now in the self-driving car industry.

In one sense, the auto makers and tech firms are in a self-driving car “arms race” wherein society has now politically forced them into exhibiting hubris. These firms and developers need to convince the marketplace that a true AI self-driving car is just around the corner in terms of being invented. They need to brush away any criticism about the pell-mell rush underway. The slightest hint of ratcheting down the hubris would be perceived as a sign that these firms aren’t going to be able to produce Level 5 self-driving cars. This could cause share prices of those auto makers and tech firms to drop. This could cause shareholders to revolt. This could cause heads to roll.

As I’ve predicted repeatedly, once we see AI self-driving cars getting into actual accidents and having real-world difficulties, it will turn the tide on the hubris. Hubris will lose its badge of honor.

Remember when the automobile industry started to acknowledge that you could get hurt in cars, and the advertising then shifted toward the safety features of cars? Until then, no auto maker talked about safety features, since it was a taboo topic. Why bring up something that would simply make people think about the dangers of being in cars? But then it became a mad rush to see which auto maker could claim it had the safest cars and the most safety features.

I’d be willing to bet that’s what will happen with the AI self-driving car industry. The unwritten rules of the race right now involve being the first out of the gate with a true self-driving car. The next race will be as to which auto maker or tech firm is doing the most to ensure that their self-driving cars are the safest and have the most safety features. We aren’t there yet, and it will take a shift in the mindset and the marketplace to get there.

I liken this to earthquakes. Nobody cares much about earthquake insurance and earthquake preparedness, until an actual earthquake hits. I am just hoping that we don’t need to suffer a massive earthquake in the self-driving car industry to get us all more toward the crucial considerations surrounding self-driving car safety.

Conclusion

I hope that these Frankenstein lessons were thought-provoking for you. The goal is to spark an already budding dialogue about the nature of AI self-driving cars and the social impacts of these “creations” (let’s use the word “creations” and not call them monsters, at least not yet!). Mary Shelley has provided us with a rich source of universal questions about the nature of mankind. In the same manner that the Frankenstein creation was not really a monster per se, certainly not at first, we are in a good position now, at the infancy of AI self-driving cars, to consider what we will do, in order to forge a better future for us all.

I truly believe that we can bring together the qualities an inventor needs, such as ambition, hubris, and obsession, to achieve this remarkable innovation, and yet do so without suffering the dark underbelly of irresponsibility, abandonment, and the like.

Admittedly, there are some pessimists who paint a bleak picture, saying that with great things must always come great adverse consequences, implying that the two are inextricably intertwined and there’s nothing we can do to get one without the other. I’d like to think that’s not the case.

Let’s keep pushing forward on AI self-driving cars, and do so with the perspective that we can prevent our own creations from destroying us. Ban the Frankencars and don’t allow a Level 6. Let’s all work together to get AI self-driving cars that enhance mankind, rather than allowing them to undermine mankind. That’s the kind of creation we want.

This content is originally posted to AI Trends.