Idealism and AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

You might have seen in the news that Mark Zuckerberg, founder and CEO of Facebook, recently testified before Congress and made these key points:

“Facebook is an idealistic and optimistic company. For most of our existence, we focused on all the good that connecting people can bring.”

“It’s clear now that we didn’t do enough to prevent these tools from being used for harm as well.”

“We didn’t take a broad enough view of our responsibility, and that was a big mistake.”

Some reacted to these points with a sympathetic response. To them, this was simply an example of engineers who were trying to do the right thing when, inadvertently, some not-so-right things happened along the way. This is the general image of Silicon Valley, namely tons of engineers who are all doing the right thing and at times find themselves puzzled to realize that maybe it wasn’t entirely right. They are so focused on building a better mousetrap, and on advancing exciting new technology, that we can hardly blame them for their intensity of focus on the matters at hand.

I recall one senior software engineer from a major tech firm in the Bay Area who told me that only after she left the Valley did she have the revelation that there weren’t any actual professional “ethicists” on any of her engineering teams. It was all engineering, all of the time. She told me that she had never weighed any aspect of her work other than the pure engineering of what was being built.

If the above seems somewhat “do gooder” to you, you are not alone. Cynics say that this attempt to create a wholesome image is no more than a protective shield used to keep others at bay. These critics assert that the invocation of idealism and optimism is an angelic cloak, used in the cleverest of Machiavellian ways.

Some fall in between these viewpoints, saying that maybe there is genuine idealism, mixed with some amount of cunning, but that these engineers are like naïve children who need an adult in the room. Indeed, you might recall that when John Sculley came into Apple, it was thought that finally a responsible parent was at the helm of the firm. The hope was that the fatherly figure would get the company refocused; he’d breathe a business and industry perspective into those isolated engineers who didn’t seem to know that anything existed beyond the walls of the company. In this view, engineers are akin to naïve children in their own engineering playground who have not yet faced the real world, and they will continue that way until an adult somehow enters the picture.

AI Self-Driving Cars Led by Idealism

Returning to Zuckerberg’s remarks, allow me a moment to offer somewhat adjusted quotes that we might someday hear from some other top executive.

By the CEO of an unnamed company (fictional):

“We are an idealistic and optimistic company. For most of our existence, we focused on all the good that our AI self-driving car can bring.”

“It’s clear now that we didn’t do enough to prevent our AI self-driving car from being harmful.”

“We didn’t take a broad enough view of our responsibility, and that was a big mistake.”

Yes, these could someday be the remarks that an executive of an AI self-driving car maker utters while testifying to Congress about what went wrong.

Right now, many in the self-driving car industry are imbued with idealism. They want to make AI self-driving cars for the betterment of society. They believe that the advent of AI self-driving cars will democratize mobility. People who aren’t mobile today will become mobile. They also believe that the AI self-driving car will save lives. You continually hear about how many lives are lost to drunk drivers who run into people; the AI won’t ever become drunk, so we won’t lose lives in that manner anymore.

One “small” difference with this idealism is that it involves life and death. If you lose some of your privacy due to a social media system, it can be bad, but rarely will it cost you your life. If your AI self-driving car malfunctions due to something in the AI, it could mean that the self-driving car hits a wall, or plows into a pedestrian, or brings about some other dreadful result.

At the Cybernetic Self-Driving Car Institute, we are developing AI self-driving car systems and realize the enormous responsibility involved, and we call upon our colleagues and partners to likewise be aware of that responsibility and treat it with great import.

It’s AI Overall Too

We can also enlarge the scope beyond just AI self-driving cars and consider the entire space of AI efforts.

There is an ongoing dialogue, and some heated debate, about whether AI developers are constructing systems without sufficient regard for the impacts that such systems might have upon people and society. Robots that can walk and talk as humans do: a good thing, or are there adverse consequences too? AI systems that might control our power plants: good or bad? Everyday jobs performed by humans that might be replaced by AI: good or bad? These are aspects that could utterly change the fabric of how we work, play, and live our lives. Shouldn’t we be considering both the good and the bad?

Many in the AI field would claim that they are simply trying to break boundaries and see how far AI can go. They are as focused on achieving true AI as a climber might be on scaling a high mountain that all thought impossible to scale. They want AI that can act as humans can, because it can be done (or to prove that it can), and they consider AI a worthy challenge. It’s a puzzle to be solved. Some are worried that this is setting us up for a Frankenstein-like situation (see my column on Frankenstein and AI Self-Driving Cars).

One common thread throughout this idealism is the seeming lack of attention by such developers to the consequences of their efforts. Is it a valid excuse to say, after the fact, that it just never occurred to you that someday your creations and inventions could have adverse consequences? The idea that you can later say that you, darn it, didn’t take a large enough view, shucks, and that your heart was in the right place, oh my, but you just didn’t step back and see the bigger picture – will that continue to cut the mustard in today’s world? That’s the big question.

There are some who don’t buy into this after-the-fact retrospective that seemingly turns what might be considered a perpetrator into a victim. Someone who created something with adverse consequences tries to get off the hook by shifting the spotlight from being a perpetrator to being a victim. They were merely trying to do good. They weren’t trying to do bad. In the process of trying to do good, unfortunately and unpredictably, things at times went bad. Oops, sorry about that. We were victims of our own naiveté, they contend. Therefore, instead of getting angry at them, the response is supposed to be one of sympathy. How sad for them that their righteous efforts had some unfortunate results, but they are just as much victims as the rest of us of those “unexpected” results.

Is it really the case that these adverse results are unpredictable and cannot be anticipated?

Is Unpredictable Predictable?

For many, the claim that adverse results are entirely unpredictable is infuriating. It strains believability, they say. In a world of worries about climate change, overpopulation, and nuclear war, and with the recent expansion of societal-impact awareness via movements like MeToo, are there really still people living so deep in a cave that they aren’t thinking about what bad things can happen? In terms of AI self-driving cars, I’ve continued to exhort that there are real-world, real-life consequences and ethical issues that need to be considered in the midst of building and fielding AI self-driving cars, and that waiting until afterwards is both foolhardy and a path of likely failure (see my column on Ethically Ambiguous Self-Driving Cars).

But is anyone paying attention to these exhortations, one must ask. The cynics say some of these developers are listening but choose to ignore it, wanting to get the pot of gold and figuring that they’ll see how things pan out later on. Or they believe that listening to and acting on these considerations are distractions from solving deep problems. They’ll say that they don’t have the time to worry about worrying. They only have enough time to get the job done, so to speak. The computing profession is trying to foster a code of ethics that will help spur developers into considering the consequences of their efforts (see my column on the topic of Algorithmic Transparency).

Firms that get themselves into these kinds of pickles often offer a familiar retort. Confront them with the somewhat hollow use of the purest of idealism as a frame for what they’ve done, and point out that they perhaps made a lot of money along the way too, so it wasn’t just the sacredness of idealism that motivated them, and they’ll respond that of course they had to make money, but it was done solely to provide the fuel to keep their idealism going. They tie the money back to the idealism. This cleverly brings the money aspects under the cloak. Maybe they are right, or maybe it’s another ruse. Who’s to know?

It is easy to let the cynic shoot down the seemingly crass aspect of making money, but there’s also the other side of that coin: the money did lead to some good, though it should also have been used to avoid or mitigate the bad (see my related column on the topic of AI Greed is Good).

The Bigger Picture

There’s another angle to this. Some say that if we look at the truly macroscopic picture, we might realize that the “bad” that happens is part of the larger journey that ultimately gets us to the bigger good. Perhaps the Facebook privacy troubles are handy reminders of the importance of personal privacy, enlightening society and awakening all the other social media sites. You have to crawl before you can walk, and during the journey you need to fall down some of the time. That’s ok, and it happens for the greater good of being able to walk. In that light, what Facebook did is actually helpful and welcomed; whether it happened by happenstance or by design, it was an important and fortuitous wake-up call, since otherwise we’d all still be unaware (or so that logic goes).

In the AI self-driving car field, there is a concern among some of the automakers and tech firms that if we let a “few deaths” involving self-driving cars today stop us, we are not going to get to the greater good of ultimately reducing future deaths. It’s a net-present-value type of equation. Suppose we have 1,000 deaths per year from the first round of AI self-driving cars, and this happens for 5 years in a row (due to failures in the AI self-driving cars), but it allows improvements that then perfect AI self-driving cars overall. And suppose that without AI self-driving cars we were going to have 30,000 deaths per year anyway. At some point, the net effect will be that the total number of deaths over a long enough period will be lessened (the long-term reduction in conventional car deaths eventually overtakes the short-term “additional” AI self-driving car deaths). Therefore, they say, you need to tolerate some of those near-term deaths now, as a trade-off against having those longer-term deaths later on.
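To make the arithmetic of that trade-off concrete, here is a minimal sketch in Python. The 30,000 baseline deaths per year, the 1,000 additional deaths per year, and the 5-year rollout are the hypothetical figures above; the assumption that a perfected fleet of AI self-driving cars cuts deaths to 5,000 per year is my own illustrative stand-in, not a projection.

# A minimal sketch of the "net lives" trade-off argument described above.
# All figures are hypothetical illustrations, not real projections.

BASELINE_DEATHS_PER_YEAR = 30_000   # conventional-driving deaths per year, no AI cars
EXTRA_AI_DEATHS_PER_YEAR = 1_000    # additional deaths during the rollout years
ROLLOUT_YEARS = 5                   # years of imperfect AI self-driving cars
PERFECTED_DEATHS_PER_YEAR = 5_000   # assumed deaths per year once AI cars are perfected

def cumulative_deaths(years, with_ai):
    """Total deaths over the given number of years, with or without deploying AI cars."""
    total = 0
    for year in range(1, years + 1):
        if not with_ai:
            total += BASELINE_DEATHS_PER_YEAR
        elif year <= ROLLOUT_YEARS:
            total += BASELINE_DEATHS_PER_YEAR + EXTRA_AI_DEATHS_PER_YEAR
        else:
            total += PERFECTED_DEATHS_PER_YEAR
    return total

# Find the first year at which deploying AI cars has cost fewer total lives.
year = 1
while cumulative_deaths(year, True) >= cumulative_deaths(year, False):
    year += 1
print("Break-even at year", year, ":",
      cumulative_deaths(year, True), "deaths (with AI) vs",
      cumulative_deaths(year, False), "deaths (without)")

Under these made-up numbers the break-even arrives in year 6, right after the rollout ends. Note that the arithmetic does nothing to answer the moral objection that follows: the people lost in years 1 through 5 are not the ones who enjoy the later savings.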

If you or a loved one were involved in one of those early deaths due to AI self-driving cars, it is hard to imagine that you would feel much solace in knowing the death served a greater good. It’s quite a mental and emotional leap to make. You would certainly be asking whether it was absolutely the case that the only way to achieve that greater good was to sacrifice someone now. That’s a hard case to make, and likely only viable in hindsight some decades after the whole matter has evolved, if at all.

Applying to AI Self-Driving Cars

It’s interesting to note how the words idealism and optimism are being intertwined in this. Let’s consider that wording. If you say that you are idealistic, and thus you made mistakes, it might not be a compelling enough assertion. Add in the spice of optimism, and now you’ve got some super power going. You were not only idealistic, you focused that idealism on being optimistic (rather than, presumably, being pessimistic). Were you an idealistic optimist, or an optimistic idealist? Some would say that we certainly want either of those, more so than an idealistic pessimist or a pessimistic idealist.

Let’s focus then on what all of this means for AI self-driving cars. Here are some points to consider:

  •  AI self-driving car makers need to consider now the potential adverse consequences of AI self-driving cars, not wait until later, and they need to be doing what they can now to prevent or mitigate those predictable future consequences.
  •  The AI developers and engineers involved in making AI self-driving cars need to consider these adverse consequences too, and be as mindful as ethicists would be (get outside the box).
  •  Use idealism to give you the strength and spirit to persevere in what is a very arduous pursuit (I call it a moonshot), but don’t let the idealism narrow your thinking such that you neglect the bad sides of things.
  •  Be optimistic that we can achieve safe AI self-driving cars that will provide the desired benefits to society, but temper that with the “pessimistic” recognition that with great good is likely to come some amount of bad (and use your energy to curtail it).
  •  Let’s not find ourselves later on looking back and saying that if we only knew; instead, let’s look forward, predict what might happen, both good and bad, and do something about it sooner rather than later.

Most of the AI self-driving car developers that I know are very much the idealistic optimists. For that, we can be grateful, since otherwise I am guessing they might not be doing what they are doing. Today’s world, though, is not like it was fifty years ago, in the sense that being naïve about the world no longer seems possible. AI self-driving car makers and AI self-driving car developers need to own up to the heavy responsibility they have and, with that weighty burden, be responsibly developing and fielding safe-and-sound AI self-driving cars. Thanks for caring!

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.