Groupthink Dilemmas for Developing AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

When I was first out of college with my bachelor’s degree in computer science and electrical engineering, I took a job at a small electronics company that made customized electronic systems using real-time microprocessors for all sorts of interesting and unusual uses. As a new buck in the real world, I figured it was best for me to do my professional best and let the wiser, wisdom-laden engineers in their olden thirties provide guidance as to what we should do in our efforts.

I learned numerous important lessons about work and life over the few years that I was there. We put together new systems that nobody else was doing at the time. We stretched the envelope by mixing together disparate technologies, and wrote code that ranged from overarching system control to stuff that had to poke deep down into the operating system to make things work. One minute we were writing in a high-level programming language, and the next we were delving into machine language and assembly language. Our code had to work right, since many of these systems involved interaction between humans and robotic arms or automated gates that could harm a human, plus it all had to meet various real-time timing requirements, so it had to be fast and responsive, besides being safe.

One day, the developers and the engineers gathered together to discuss a new project for a client that wanted to use a combination of sonar devices, laser detection devices, and a myriad of other sensors. The head of our engineering group was a strong-willed, take-no-prisoners kind of manager, known for telling others to do what he said, with no backtalk needed nor entertained. He acted like he was the all-knowing guru. Admittedly, he’d been there for several years and had shepherded into production some of the most advanced and complex systems that the company had ever made. Let’s just say that his reputation preceded him, and his mightier-than-thou attitude had become legendary there.

So, we were all assembled in a cramped conference room, and he laid out the requirements for this new system. I noticed something that I found somewhat curious, indeed maybe disturbing. The layout indicated that the devices would default to being classified as in a working state. Only if a device reported that it was in a failing state or an error mode would we know that it wasn’t working right. In my college engineering and design classes, we had always been exhorted to assume by default that something doesn’t work, and to rely upon it only once it proves that it is working. Thus, the approach that the heralded guru was advocating seemed to run counter to what I had learned, and seemed a risky approach to a real-time system design.

Should I speak up? According to the guru, this meeting was not a two-way street; it was a one-way street. He was telling us how it was going to be. We were then to dutifully shuffle away and get to work. I looked around at my fellow workers and wondered if they saw the same potential hole that I thought was obvious. I subtly pointed at the questionable requirement and mouthed words to my fellow “prisoners” that maybe it wasn’t the wisest way to do things. They looked at me with one of those “not now” kinds of expressions and convinced me, without any verbal utterances, that I ought to keep my mouth shut (and maybe my eyes too).

To make matters worse, the strongman head engineer decided that he would go around the room and have each person attending signify whether they understood and seemingly agreed with what was to be done. One by one, he pointed at each attendee. One by one, they each said yes. It was getting closer to me, and I was completely baffled about what I should say. If I said yes, wasn’t I tacitly and maybe even explicitly agreeing to an engineering no-no? If I said no, would I be punished severely, and besides a tongue lashing, maybe be tossed out the front door and told never to return?

What would you do?

It wasn’t something you could outright call an error per se. I justified in my own mind that saying yes was not really agreeing to something erroneous; it could just have been designed a better way. Also, I figured that getting successfully out of the meeting should become my immediate goal. I could always ask the others after the meeting about what we should consider doing. The new buck ought not to make waves at the wrong time, and maybe there would be a right time when I could do so.

I said yes.

After the meeting, I went and saw various individuals who had attended. Many agreed with me that the design was essentially flawed, but they felt it wasn’t something to raise a red flag about. Let it go, they urged me. Just do your job. It will all work out, and no one will ever realize that we proceeded on something that probably could have been better engineered. It was like making hamburgers: maybe we let a few get overly burnt or not perfectly shaped, but hey, no one would get food poisoning from it.

We developed the system. The client wanted to do a trial run of it. The client was very excited about the new system, and so we sent one of our engineers out to make sure it was all properly connected and working right. After seeing it work, the client said that they were going to invite various dignitaries to come and see it in action. The client was proud of what it did and wanted to show it off.

The next morning, sure enough, according to the engineer who was there, a bunch of dignitaries showed up. The client started to run the system. It ran fine. Then the client said to watch how great the sensors were: the system would detect when something got in the way (such as a human or some other object). He placed a pole in close proximity and started the system again. Guess what? The system knocked the pole to pieces. Everyone was shocked. Why, this thing was dangerous! People were confused. Our head of marketing (there to tout how great the system was) became furious. How could this have happened?

Upon doing a so-called post-mortem (this just means an after-the-fact analysis of what went wrong; please note that no one was harmed by the system), we discovered that during the night a janitor had come into the area where the system was installed. The janitor knew that a big showcase was taking place the next day, so he opted to really clean well around the system. At one point, he couldn’t sweep in one spot, so he disconnected some of the cables, and then forgot to reconnect them.

As you can imagine, we realized that because the system was engineered to assume the devices were working and connected, the rest of the system never detected that the janitor had disconnected them. If we had instead designed it to assume by default that the sensors weren’t working, the system would have been unable to verify that they were in a working state, would have produced an audible alert that it was not ready for use, and wouldn’t have allowed itself to start.
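To make the fail-safe principle concrete, here is a minimal sketch in Python. The names (SensorMonitor, the heartbeat mechanism, the timeout value) are hypothetical, invented for illustration rather than taken from any actual system; the point is simply that every device defaults to an unverified state and must affirmatively prove it is working before the system will start:

```python
from enum import Enum
import time

class DeviceState(Enum):
    UNVERIFIED = 0  # fail-safe default: not trusted until it proves itself
    WORKING = 1
    FAILED = 2

class SensorMonitor:
    """Tracks one device; the device must affirmatively prove it works."""

    def __init__(self, name, heartbeat_timeout_s=0.5):
        self.name = name
        self.heartbeat_timeout_s = heartbeat_timeout_s
        self.state = DeviceState.UNVERIFIED  # assume broken by default
        self.last_heartbeat = None

    def on_heartbeat(self):
        # Only a recent, positive report moves the device to WORKING.
        self.last_heartbeat = time.monotonic()
        self.state = DeviceState.WORKING

    def is_working(self):
        if self.last_heartbeat is None:
            # Never heard from it; a disconnected cable looks exactly like this.
            return False
        if time.monotonic() - self.last_heartbeat > self.heartbeat_timeout_s:
            self.state = DeviceState.FAILED  # went silent; stop trusting it
        return self.state == DeviceState.WORKING

def system_ready(sensors):
    """Refuse to start unless every device has proven it is working."""
    unverified = [s.name for s in sensors if not s.is_working()]
    if unverified:
        print(f"ALERT: system not ready; unverified devices: {unverified}")
        return False
    return True
```

Under the guru’s design, the default state would have been WORKING, and the silently disconnected cables would have gone unnoticed; under this fail-safe default, the missing heartbeats alone would have kept the system from starting.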

We got bitten by an engineering decision made months earlier, one that everyone had assumed would never arise as an issue.

Why do I tell this story?

Because it brings up a very important aspect of developing new systems, namely the dangers of groupthink.

You’ve likely heard about groupthink. It’s the circumstance in which a group of people fall into the mental trap of all thinking one way. There have been some famous cases of the dangers of groupthink; perhaps the most studied and cited example is the Bay of Pigs invasion (often contrasted with the later Cuban Missile Crisis, during which groupthink was largely avoided). Studies indicate that the Bay of Pigs fiasco can partially be blamed on President Kennedy holding high-level meetings in which no one was willing to say that what they were about to do was perhaps mistaken.

Why do people in a group fall into the groupthink trap? In my case, as I mentioned earlier, I was a junior engineer and software developer and felt that I was supposed to abide by whatever the seasoned head of engineering told us to do. I caved to peer pressure. I caved to the worry that I would get fired. I caved to the belief that my own expertise was insufficient to override what seemingly everyone else said was okay. I caved to the fact that the manager didn’t want any input. And so on.

This became an important lesson for me, and I subsequently became quite aware of the dangers of groupthink and found ways to combat it throughout the rest of my career. Indeed, part of the reason I went back to school and got my MBA, and later my PhD, was the realization that as a technologically proficient software engineer I ought also to know how to work in organizations, how to perform in teams, and how to lead and manage teams.

What does this have to do with AI self-driving cars?

At the Cybernetic Self-Driving Car Institute, we are making sure that our efforts don’t fall into the groupthink trap, and we also are advising auto makers and tech firms that are making AI self-driving cars to be aware of and ensure they don’t let groupthink take them down the wrong paths.

There are eight aspects that often underlie the groupthink phenomenon. Well, some researchers say there are more than eight, some say fewer, but I figure let’s go over the eight commonly cited ones here so that you know what to look for.

  1. Invincibility Illusion

You might find yourself in a room of other AI specialists as you are working on the AI self-driving car software, and they’ll say something like “this neural network works because we’ve used it before and it always works.” It could be that these are seasoned researchers with hefty PhDs and other amazing credentials, who have been using neural networks for years and have brought to the auto maker or tech firm their favorite time-tested neural network.

They are so confident that they seem invincible. That’s when I make sure to start asking questions. I know you’ll need to be careful, and you’ll likely face the same kinds of pressures that I did. But keep in mind that you are in the midst of developing a system that involves life-and-death consequences. Don’t fall for the illusion of invincibility.

  2. Must Prove the Contrary

You are in a meeting and the group says that the use of LIDAR (Light Detection and Ranging, a laser-based cousin of radar) won’t provide any added value. The tacit expectation is that everyone agrees, and if you don’t agree then you have to provide fifty good reasons why they are “wrong” in their approach. Notice that they didn’t have to provide fifty reasons why they are “right”; it’s you who needs to disprove them.

The burden to disprove can be daunting. If you feel strongly about whatever the matter is, you’ll need to do your homework and come up with a strong case for your side of things.

  3. Collective Rationalization

The AI self-driving car team meets and decides that even if the camera goes wacky it doesn’t matter, since the sonar will compensate for it. The group rationalizes that having the sonar as a back-up is sufficient. No one tries to think it through.

You could gently point out that the camera and the sonar are two quite different kinds of sensors, capturing different kinds of information. It’s an apples-and-oranges comparison. Also, suppose the sonar goes wacky too; what then?
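To illustrate the apples-and-oranges point, here is a rough sketch in Python. The sensor names and capability labels are invented for illustration, not taken from any actual self-driving car architecture; the idea is that each modality contributes distinct capabilities, so one modality cannot simply “compensate” for another:

```python
# Each sensor modality contributes distinct capabilities (labels illustrative).
CAPABILITIES = {
    "camera": {"object_classification", "lane_detection", "signal_state"},
    "sonar": {"close_range_distance"},
    "radar": {"long_range_distance", "relative_velocity"},
}

def remaining_capabilities(healthy_sensors):
    """Union of what the still-healthy sensors can actually provide."""
    caps = set()
    for sensor in healthy_sensors:
        caps |= CAPABILITIES.get(sensor, set())
    return caps

def safe_to_continue(healthy_sensors, required=("object_classification",
                                                "close_range_distance")):
    # If the camera fails, no amount of sonar restores object classification.
    missing = set(required) - remaining_capabilities(healthy_sensors)
    if missing:
        print(f"Degraded: missing capabilities {missing}; initiate fallback.")
        return False
    return True

# With the camera gone, the sonar "back-up" does not cover classification:
safe_to_continue({"sonar", "radar"})  # prints a warning and returns False
```

The design point is that a back-up sensor only counts as a back-up for the capabilities it genuinely replicates; otherwise the failure has to trigger a fallback behavior rather than being rationalized away.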

  4. Stereotyping Out-Groups

Suppose the top management of the auto maker doesn’t really know much about AI self-driving cars. The developers know that top management is out of touch and almost acts like a Dilbert cartoon.

During a meeting among just the developers, someone announces that top management wants the team to add some new safety features to the software, to make sure that the system won’t mistakenly tell the car to run over pedestrians. The team discards the proclamation because it came from top management; anything coming from top management, they believe, has no merit.

Only in this case, maybe what top management is asking for does have merit. Even if they don’t know why it is needed, you ought to set aside the “source” question and instead examine the idea on its own merits.

  5. Self-censorship

Suppose the AI of the self-driving car should bring the car to a halt if the system is not getting valid data from the vehicle sensors, such as the speed of the car and the status of the engine. But during the group meeting to discuss this aspect, the indication is that only if a vehicle sensor actively complains will the system start toward halting the car (this is somewhat akin to my earlier example about the engineering design of the sensors).

The group doesn’t want to entertain counter-arguments. They are maybe tired and just want to get on with things. You decide to censor yourself. You hold back your thoughts. The question will be: can you live with this? Are you willing to potentially be a contributor to something that could one day harm others?
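As a rough sketch of the difference between the two designs, consider the following Python fragment. The channel names, freshness budget, and validation logic are hypothetical, chosen only to illustrate the idea: the fail-safe approach treats the absence of fresh, valid data, and not just an explicit error report, as the trigger for a controlled stop:

```python
import time

STALE_AFTER_S = 0.2  # hypothetical freshness budget for critical channels

class VehicleDataFeed:
    """Records the last time each channel delivered a validated reading."""

    def __init__(self):
        self.last_valid = {}  # channel name -> timestamp of last valid value

    def update(self, channel, value):
        if self._is_valid(value):
            self.last_valid[channel] = time.monotonic()

    @staticmethod
    def _is_valid(value):
        # Real validation (range checks, checksums, etc.) omitted in this sketch.
        return value is not None

def control_step(feed, channels=("vehicle_speed", "engine_status")):
    """Fail-safe rule: silence or stale data triggers a controlled stop."""
    now = time.monotonic()
    for ch in channels:
        ts = feed.last_valid.get(ch)
        # The flawed design would react only if a sensor actively reported an
        # error; here the absence of fresh, valid data is itself the alarm.
        if ts is None or now - ts > STALE_AFTER_S:
            return "BEGIN_CONTROLLED_STOP"
    return "CONTINUE_DRIVING"
```

A sensor that is unplugged, crashed, or wedged never gets the chance to “complain,” which is exactly why the complain-only design fails and the staleness check does not.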

  6. Unanimity Delusion

Let’s revisit the same case as described under self-censorship. Suppose that no one is asked to offer a second opinion. The meeting might carry an implied condition that if no one objects, then everyone agrees.

You might find that if you are willing to break the ice and offer a second opinion, a torrent of others will suddenly jump onto your bandwagon. There’s a famous play that was made into an equally famous movie, Twelve Angry Men, which you ought to watch as an example of how one person speaking up can make a tremendous difference.

  7. Dissenters as Perceived Whiners

In some meetings, anyone who speaks in a dissenting fashion gets crushed, either by someone advocating the other side, or simply because the other attendees don’t want to debate the topic. Also, a dissenter is sometimes seen as a naysayer who won’t be happy with anything, just a complainer.

For this, you’ll need to position yourself not as a complainer or whiner. Instead, try to focus on the facts of the situation. What are the factual reasons that LIDAR can add substantive value to the AI self-driving car that your firm is making? It becomes more bona fide when others realize that it’s a systematic discussion rather than just a complaint session.

  8. Mindguards

This last one is quite interesting. Sometimes a meeting will take place and yet a key expert is left out of it, not by accident but by design. Whoever has set up the meeting has explicitly decided they don’t want the person to attend. It could be because they believe the person is too argumentative, or maybe they just don’t like the person.

The problem with this kind of mindguarding is that the meeting will potentially be making decisions that are ill-informed. The expertise needed to make a sound decision isn’t in the room.

Here, you can sometimes try to get the group to agree to “check with George or Samantha” about whatever has been tentatively decided, giving the group later access to the expert to help confirm or disconfirm the decision made.

Conclusion

Groupthink could lead an AI self-driving car project into a bad place. The act of meeting together can inadvertently produce adverse results, with decisions made about the hardware and software design that ultimately undermine the safety or capabilities of the self-driving car. As a developer, you need to be wary of groupthink and try to overcome it. If you are a manager, it is your duty to be aware of groupthink and likewise try to overcome it. We need self-driving cars that are well designed, well built, safe, and usable in the real world. Don’t let groupthink undermine that!

This content is originally posted on AI Trends.