Shiggy Challenge and Dangers of an In-Motion AI Self-Driving Car

By Lance Eliot, the AI Trends Insider

I’m hoping that you have not tried to do the so-called Shiggy Challenge. If you haven’t done it, I further hope that my telling you about it does not somehow spark you to try it. For those of you who have not a clue about what it is, be ready to be “amazed” at what is emerging as a social media generated fad. It’s a dangerous one.

Here’s the deal.

You are supposed to get out of a moving car, leaving the driver’s seat vacant, do a dance alongside the still-moving car while video recording yourself (you are moving forward at the same pace as the car), and then jump back into the car to continue driving it.

If you ponder this for a moment, I trust that you instantly recognize the danger of this and (if I might say) the stupidity of it (or does that make me appear to be old-fashioned?).

As you might guess, there have already been people who hurt themselves while trying to jump out of the moving car, spraining an ankle, hurting a knee, banging their legs on the door, etc. Likewise, they have gotten hurt while trying to jump back into the moving car (colliding with the steering wheel or the seat arm, etc.).

Some people, while dancing outside the moving car, became preoccupied and didn’t notice that their moving car was heading toward someone or something. Or they weren’t moving forward fast enough to keep pace with the car. And so on. There have been reported cases of the driverless, still-rolling car blindly hitting others, and in some cases hitting a parked car or other objects near or in the roadway.

Some of the videos show the person having gotten out of their car and then the car door closing unexpectedly, and, guess what, all of the car doors turn out to be locked. Thus, the person could not readily get back into the car to stop it from rolling forward and potentially hitting someone or something.

This is one of those seemingly bizarre social media fads that began somewhat innocently and then the ante got upped with each person seeking fame by adding more danger to it. As you know, people will do anything to try and get views. The bolder your video, the greater the chance it will go viral.

This challenge began in a somewhat simple way. The song “In My Feelings” by Drake was released, and at about the same time an online personality named Shiggy posted a video on his Instagram of himself dancing to the tune. Other personalities and celebrities then opted to do the same dance, video recording themselves dancing to the Drake song, and posted their versions. This spawned a mild viral sensation.

But, as with most things on social media, there became a desire to do something more outlandish. At first, this involved being a passenger in a slowly moving car, getting out, doing the Shiggy inspired dance, and then jumping back in. This is obviously not recommended, though at least there was still a human driver at the wheel. This then morphed into the driver being the one to jump out, and either having a passenger to film it, or setting up the video to do a selfie recording of themselves performing the stunt.

Some of the early versions had the cars moving at a really low speed. It seems now that some people attempt the stunt at much faster speeds. It further seems that some people don’t think about the dangers of this activity and just “go for it,” figuring that it will all work out fine and dandy. It often doesn’t. Not surprising to most of us, I’d dare say.

The craze is referred to as either the Shiggy Challenge or the In My Feelings challenge (#InMyFeelings), and some more explicitly call it the moving car dance challenge. This craze has even gotten the feds involved. The National Transportation Safety Board (NTSB) issued a tweet that said this: “#OntheBlog we’re sharing concerns about the #InMyFeelings challenge while driving. #DistractedDriving is dangerous and can be deadly. No call, no text, no update, and certainly no dance challenge is worth a human life.”

Be forewarned that this antic can get you busted, including a distracted driving ticket, or worse still a reckless driving charge.

Now that I’ve told you about this wondrous and trending challenge, I want to emphasize that I only refer to it as an indicator of something otherwise worthy of discussion herein, namely the act of getting out of or into a moving car. It should go without saying that getting into a moving car is highly dangerous and discouraged. The equally valid corollary is that getting out of a moving car is highly dangerous and discouraged.

I’m sure someone will instantly retort that hey, Lance, there are times that it is necessary to get out of or into a moving car. Yes, I’ve seen the same spy movies as you, and I realize that when James Bond is in a moving car and being held at gunpoint, maybe the right spy action is to leap out of the car. Got it. Seriously, I’ll happily concede that there are rare situations whereby getting into or out of a moving car might be needed (say, the car is on fire and in motion, or you are being kidnapped). By and large, I would hope we all agree that those are rarities.

Sadly, a number of incidents are reported annually of people getting run over by their own car. Somewhat recently, a person left their car engine running, got out of the car to do something such as drop a piece of mail into a nearby mailbox, and the car inadvertently shifted into gear and ran them over. These oddities do happen from time to time. Again, extremely rare, but they further illustrate the dangers of getting out of even a non-moving car whose engine is running.

Prior to the advent of seat belts, and their gradual mandatory use and acceptance in cars, there were a surprisingly sizable number of reported incidents of people “falling” out of their cars. Now, it could be that some of them jumped out while the car was moving, and so it wasn’t particularly the lack of a seat belt involved. On the other hand, there are documented cases of people sitting in a moving car, not wearing a seat belt, whose car door opened unexpectedly, with them then proceeding to accidentally hang outside of the car (often clinging to the door), or falling entirely out of the car onto the street.

This is why you should always wear your seat belt. Tip for the day.

For the daredevils among you, it might not be apparent why it is so bad to leave a moving car. If you are a passenger, you have a substantial chance of falling to the street and getting injured. Or, maybe you fall to the street and get killed by hitting the street with your head. Or, maybe you hit an object like a fire hydrant and get injured or killed. Or, maybe another car runs you over. Or, maybe the car you exited manages to drive over you. I think that paints the picture pretty well.

I’d guess that the human driver of the car might be shocked to have you suddenly leave the moving car. This could cause the human driver to make some kind of panic or erratic maneuver with the car. Thus, your “innocent” act of leaving the moving car could cause the human driver to swerve into another car, maybe injuring or killing other people. Or, maybe you roll onto the ground and seem OK, but then the human driver turns the car to try and somehow catch you and actually hits you, injuring you or killing you. There are numerous acrobatic variations to this.

Suppose that it’s the human driver that opts to leave the moving car? In that case, the car is now a torpedo ready to strike someone or something. It’s an unguided missile. Sure, the car will likely start to slow down because the human driver is no longer pushing on the accelerator pedal, but depending upon the speed when the driver ejected, the multi-ton car still has a lot of momentum and a real chance of hitting, injuring, or killing someone or something. If there are any human occupants left inside the car, they too are now at the mercy of a car moving without anyone directing it.

Risks of Exiting a Moving Car

Let’s recap, you can exit from a moving car and these things could happen:

  •         You directly get injured (by say hitting the street)
  •         You directly get killed (by hitting the street with your head, let’s say)
  •         You indirectly get injured (another car comes along and hits you)
  •         You indirectly get killed (the other car runs you over)
  •         Your action gets someone else injured (another car crashes trying to avoid you)
  •         Your action gets someone else killed (the other car rams a car and everyone gets killed)

I’m going to carve out a bit of an exception to this aspect of leaving a moving car. Whether you choose to leave the moving car or do so by happenstance, let’s call that a “normal” exiting of a moving car. On the other hand, suppose the car gets into a car accident, unrelated for the moment to your exiting, and during the accident you are involuntarily thrown out of the car due to the car crash. That’s kind of different than choosing to exit the moving car per se. Of course, this happens often when people who aren’t wearing seat belts get into severe car crashes.

Anyway, let’s consider that there’s the bad news of exiting a moving car, and we also want to keep in mind that trying to get into a moving car has its own dangers too. I remember a friend of mine in college who opted to try jumping into the back passenger seat of a moving car (I believe some drinking had been taking place). His pal opened the back door and urged him to jump in. He was lucky to have landed in the seat. He could easily have been struck by the moving car. He could have fallen to the street and gotten run over by the car. Again, injuries and potential death, for him, for other occupants of the car, and for other nearby cars too.

I’d like to enlarge the list of moving car aspects to these:

  •         Exiting a moving car
  •         Entering a moving car
  •         Riding on a moving car
  •         Hanging onto a moving car
  •         Facing off with a moving car
  •         Chasing after a moving car
  •         Other

I’ve covered already the first two items, so let’s consider the others on the list.

There are reports from time to time of people who opted to ride on the hood of a car, usually for fun, and unfortunately fell off and got hurt or killed once the car got into motion.

Hanging onto a moving car was somewhat popularized by the “Back To The Future” movie series when Marty McFly (Michael J. Fox) opts to grab onto the back of a car while he’s riding his skateboard. I’m not blaming the movie for this and realize it is something people already had done, but the movie did momentarily increase the popularity of trying this dangerous act.

Facing off with a moving car has sometimes been done by people that perhaps watch too many bull fights. They seem to think that they can hold a red cape and challenge the bull (the car). In my experience, the car is likely to win over the human standing in the street and facing off with the car. It’s a weight thing.

Chasing after a moving car happens somewhat commonly in places like New York City. You see a cab, it fails to stop, you are in a hurry, so you run after the cab, yelling at the top of your lungs. With the advent of Uber and other ridesharing services, this doesn’t happen as much as it used to. Instead, we let our mobile apps do our cab or rideshare hailing for us.

What does all of this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars, and one aspect that many auto makers and tech firms are not yet considering deals with the aforementioned things that people do regarding moving cars.

Some of the auto makers and tech firms would say that these various actions by humans, such as exiting a moving car or trying to get into a moving car, are considered an “edge” problem. An edge problem is one that is not at the core of the overarching problem being solved. If you are in the midst of trying to get AI to drive a car, you likely consider these cases of people exiting and entering a moving car to be such a remote possibility that you don’t put much attention to it right now. You figure it’s something to ultimately deal with, but getting the car to drive is foremost in your mind right now.

I’ve had some AI developers tell me that if a human is stupid enough to exit from a moving car, they get what they deserve. Same for all of the other possibilities, such as trying to enter a moving car, chasing after a moving car, etc. The perspective is that the AI has enough to do already, and dealing with stupid human tricks (aka David Letterman!) is just not a high priority. Humans do stupid things, and these AI developers shrug their shoulders and say that an AI self-driving car is never going to be able to stop people from being stupid.

This narrow view by those AI developers is unfortunate.

I can already predict that there will be an AI self-driving car that, while driving on the public roadways, will have an occupant who opts to jump out of the moving self-driving car. Let’s say that indeed this is a stupid act and the person had no particularly justifiable cause to do so. If the AI self-driving car proceeds along and does not realize that the person jumped out, and the AI blindly continues to drive ahead, I’ll bet there will be backlash about this. Backlash against the particular self-driving car maker. Backlash against possibly the entire AI self-driving car industry. It could get ugly.

For my explanation of the egocentric designs of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

For lawsuits about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/first-salvo-class-action-lawsuits-defective-self-driving-cars/

For why AI self-driving cars need to be able to do defensive driving, see my article: https://aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/

Let’s take a moment and clarify what is meant by an AI self-driving car. There are various levels of capabilities of AI self-driving cars. The topmost level is considered Level 5. A Level 5 AI self-driving car is one in which the AI is fully able to drive the car, and there is no requirement for a human driver to be present. Indeed, a Level 5 self-driving car often has no provision for human driving, meaning there are no pedals and no steering wheel available for a human to use. For self-driving cars less than a Level 5, it is expected that a human driver will be present and that the AI and the human driver will co-share the driving task. I’ve mentioned many times that this co-sharing arrangement allows for dangerous situations and adverse consequences.

For more about the co-sharing of the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

For human factors aspects of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/not-fast-enough-human-factors-ai-self-driving-cars-control-transitions/

The level of an AI self-driving car is a crucial consideration in this discussion about people leaping out of a moving self-driving car or taking other such actions.

Consider first the self-driving cars less than a Level 5. If the human driver that’s supposed to be in the self-driving car is the one that jumps out, this leaves the AI alone to continue driving the car (assuming that no other human driver is an occupant and able to step into the human driving role of the co-sharing task). We likely don’t want the AI to now be alone as the driver, since for levels less than 5 it is considered a precondition that there be a human driver present. As such, the AI needs to ascertain that the human driver is no longer present, and as a minimum proceed to take some concerted effort to safely bring the self-driving car to a proper and appropriate halt.

Would we want the AI in the less-than Level 5 self-driving car to take any special steps about the exited human? This is somewhat of an open question, because at less than Level 5 the AI is not yet expected to be fully sophisticated. It could be that we might agree that at less than Level 5, the most we can expect is that the AI will try to safely bring the self-driving car to a halt. It won’t try to somehow go around and pick up the person or take other actions that we would expect a human driver to possibly undertake.
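
To give a flavor of what that minimal safe-halt behavior might look like, here’s a brief sketch in Python. Keep in mind this is purely illustrative; all of the class and method names (seat_sensor, plan_pull_over, and so on) are hypothetical placeholders of my own invention, not any auto maker’s actual API.

    import time
    from enum import Enum, auto

    class DrivingState(Enum):
        CO_SHARED_DRIVING = auto()
        SAFE_STOP_IN_PROGRESS = auto()
        HALTED = auto()

    def monitor_driver_presence(cabin_camera, seat_sensor, vehicle):
        """Poll cabin sensors; if the driver's seat is empty while the
        car is in motion, begin a controlled pull-over rather than
        continuing to drive without the required human driver."""
        state = DrivingState.CO_SHARED_DRIVING
        while state != DrivingState.HALTED:
            present = seat_sensor.is_occupied() and cabin_camera.sees_driver()
            if (state == DrivingState.CO_SHARED_DRIVING
                    and not present and vehicle.speed_mph() > 0):
                state = DrivingState.SAFE_STOP_IN_PROGRESS
                vehicle.activate_hazard_lights()
                vehicle.plan_pull_over()         # aim for a safe shoulder or curb
                vehicle.apply_gradual_braking()  # avoid a panic stop in traffic
            elif (state == DrivingState.SAFE_STOP_IN_PROGRESS
                    and vehicle.speed_mph() == 0):
                state = DrivingState.HALTED
                vehicle.notify_authorities("Driver exited while car was in motion")
            time.sleep(0.1)  # poll at roughly 10 Hz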

This brings us to the Level 5 self-driving car. It too should be established to detect that someone has left the moving self-driving car. In this case, it doesn’t matter whether the person that left is a driver or not, because no human driver is needed anyway. In that sense, in theory, the driving can continue. It’s now a question of what to do about the human that left the moving car.

In essence, with the Level 5 self-driving car, we have more options of what to have the AI do in this circumstance. It could just ignore that a human abruptly left the car and continue along, acting as though nothing happened at all. Or, it could have some kind of provision of action to take in such situations, and invoke that action. Or, it could act similarly to the less-than Level 5 self-driving cars and merely seek to safely and appropriately bring the self-driving car to a halt.

One would question the approach of doing nothing while being aware that a human left the self-driving car in motion; this seems counterintuitive to what we would expect or hope the AI would do. If the AI is acting like a human driver, we would certainly expect that the human driver would do something overtly about the occupant that has left the moving car. Call 911. Slow down. Turn around. Do something. Unless the human driver and the occupants are somehow in agreement about leaving the self-driving car, and maybe they made some pact to do so, it would seem prudent and expected that a human driver would do something to come to the aid of the other person. Thus, so should the AI.

You might wonder how the AI would even realize that a human has left the car.

Consider that there are these key aspects of the driving task by the AI:

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action planning
  •         Car controls commands issuance

See my article about the framework of AI self-driving cars: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/
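
To make those five stages a bit more concrete, here’s a simplified sketch of a single processing cycle. It is purely illustrative; the objects and method names are assumptions of mine, not the actual internals of any self-driving car system.

    def driving_cycle(sensors, fusion, world_model, planner, controls):
        # 1. Sensor data collection and interpretation
        raw_readings = {s.name: s.read() for s in sensors}

        # 2. Sensor fusion: reconcile overlapping detections into tracked objects
        tracked_objects = fusion.combine(raw_readings)

        # 3. Virtual world model updating
        world_model.update(tracked_objects)

        # 4. AI action planning: decide the next maneuver given the modeled world
        plan = planner.next_action(world_model)

        # 5. Car controls commands issuance
        controls.issue(plan.steering, plan.throttle, plan.braking)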

The AI self-driving car will likely have sensors pointing outward of the car, such as the use of radar, cameras, LIDAR, sonar, and the like. These provide an indication of what is occurring outside of the self-driving car in the surrounding environment.

It is likely that there will also be sensors pointing inward into the car compartment. For example, it is anticipated that there will be cameras and an audio microphone in the car compartment. The microphone allows the human occupants to verbally interact with the AI system, similar to interacting with a Siri or Alexa. The cameras would allow those within the self-driving car to be seen, such as when the self-driving car is being used to drive your children to school, so that you could readily see that they are doing OK inside the AI self-driving car.

For more about the natural language interaction with human occupants in a self-driving car, see my article: https://aitrends.com/features/socio-behavioral-computing-for-ai-self-driving-cars/

I’ll walk you through a scenario of an AI self-driving car at a Level 5 and the case of someone that opts to exit from the self-driving car while it is in motion.

Joe and Samantha have opted to use the family AI self-driving car to go to the beach. They both gather up their beach towels and sunscreen, and get into the AI self-driving car. Joe tells the AI to take them to the beach. Dutifully, the AI system repeats back that it will head to the beach and indicates an estimated arrival time. Samantha and Joe settle into their seats and opt to watch a live video stream of a volleyball tournament taking place at the beach, which they hope to arrive at before it ends.

At this juncture, the AI system would have used the inward facing camera to detect that two people are in the self-driving car. In fact, it would recognize them since it is the family car and they have been in it many times before. The AI sets the internal environment to their normal preferences, such as the temperature, the lighting, and the rest. It proceeds to drive the car to the beach.

Once the self-driving car gets close to the beach, it turns out there’s lots of traffic, as many other people opted to drive to the beach that day. Joe starts to get worried that he’s going to miss seeing the end of the volleyball game in-person. So, while the self-driving car is crawling along at about five to eight miles per hour in solid traffic, Joe suddenly decides to open the car door and leap out. He then runs over to the volleyball game to see the last few moments of the match.

Level 5 Self-Driving Car Thinks About Passenger Who Jumped Out

The AI system would have detected that the car door had opened and closed. The inward facing cameras would have detected that Joe had moved toward the door and exited. The outward facing cameras, the sonar, the radar, and the LIDAR would all have detected him once he got out of the self-driving car. The sensor fusion would have put together the data from those outward facing sensors and been able to ascertain that a human was near to the self-driving car, and moving away from it at a relatively fast pace.

The virtual world model would have contained an indicator of a human near to the self-driving car, once Joe had gotten out of the self-driving car. And, it would also have indicators of the other nearby cars. It is plausible then that the AI would via the sensors be aware that Joe had been in the self-driving car, had gotten out of it, and was then moving away from it.
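
Here’s a small sketch, again with hypothetical names of my own, of how these separate pieces of evidence might be corroborated to conclude that an occupant exited the moving car:

    def occupant_exit_detected(door_events, cabin_camera, world_model):
        """Return True when independent evidence lines up: a door opened
        and closed, the cabin headcount dropped, and a new pedestrian
        track appeared next to the car around the time of the door event."""
        door_cycled = door_events.recent_open_and_close(within_seconds=5)
        headcount_dropped = (cabin_camera.occupant_count()
                             < cabin_camera.previous_occupant_count())
        new_pedestrian = world_model.new_pedestrian_track(
            near_vehicle=True, within_seconds=5)
        return door_cycled and headcount_dropped and new_pedestrian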

The big question then is what should the AI action planning do? If Joe’s exit does not pose a threat to the AI self-driving car, in the sense that Joe moved rapidly away from it and is not in the path of the self-driving car as it moves forward, presumably there’s not much that needs to be done. The AI doesn’t need to slow down or stop the car. But, this is unclear, since it could be that Joe somehow fell out of the car, and so maybe the self-driving car should safely come to a halt.

Here’s where the interaction part comes to play. The AI could potentially ask the remaining human occupant, Samantha, about what has happened and what to do. It could have even called out to Joe, when he first opened the door to exit, and asked what he’s doing. Joe, had he been thoughtful, could have even beforehand told the AI that he was planning on jumping out of the car while it is in motion, and thus a kind of “pact” would have been established.

These aspects are not so easily decided upon. Suppose the human occupant is unable to interact with the AI, or refuses to do so? This is a contingency that the AI needs to contend with. Suppose the human is purposely doing something highly dangerous? Suppose in this case that when Joe jumped out, there was another car coming up that the AI could detect and knew might hit Joe; what should the AI have done?
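
One plausible shape for that action planning is sketched below, under my own assumed names and thresholds (the 2-meter cutoff and the 10-second timeout are illustrative assumptions, not established figures):

    def respond_to_occupant_exit(person_track, vehicle, cabin_assistant):
        # A person who is down, or still right next to the wheels, is
        # treated as a possible fall: stop immediately and summon help.
        if person_track.is_prone() or person_track.distance_meters() < 2.0:
            vehicle.emergency_safe_stop()
            vehicle.call_emergency_services()
            return
        # Otherwise the exit is ambiguous: ask any remaining occupant
        # before deciding whether to continue the trip.
        answer = cabin_assistant.ask(
            "A passenger just left the moving car. Stop, or continue?",
            timeout_seconds=10)
        if answer is None or answer == "stop":
            vehicle.plan_pull_over()  # default to the cautious choice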

Some say that maybe the best way to deal with this aspect of leaping out of the car is to make the car doors unopenable by the human occupants while inside the AI self-driving car. This might seem appealing, as an easy answer, but it fails to recognize the complexity of the real world. Will people accept the idea that they are locked inside an AI self-driving car and cannot get out on their own? Doubtful. If you say that the humans can just tell the AI to unlock the door when they want to get out, and the AI can refuse when the car is in motion, this again will likely be met with skepticism by humans as a viable means of human control over the automation.

A similar question though does exist about self-driving cars and children.

If AI self-driving cars are going to be used to send your children to school or play, do you want those children to be able to get out of the self-driving car whenever they wish? Probably not. You would want the children to be forced to stay inside. But, there’s no adult present to help determine when unlocking the doors is appropriate. Some say that by having inward facing cameras and a Skype-like feature, the parents could be the ones that instruct the AI via live streaming to go ahead and unlock the doors when appropriate. This of course has downsides, since it assumes that there will be a responsible adult available for this purpose and that they’ll have a real-time connection to the self-driving car, etc.
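
For illustration, here’s how such a guardian-in-the-loop unlock flow might be structured; every name here (guardian_link, request_approval, and so on) is a hypothetical placeholder, and the 60-second timeout is merely an assumption:

    def handle_unlock_request(vehicle, cabin_camera, guardian_link):
        if vehicle.speed_mph() > 0:
            return "denied: car is in motion"
        if cabin_camera.all_occupants_are_minors():
            # Stream cabin video to the registered guardian and wait for
            # a real-time decision; keep the doors locked by default.
            approved = guardian_link.request_approval(
                video=cabin_camera.live_stream(), timeout_seconds=60)
            return "unlocked" if approved else "denied: no guardian approval"
        return "unlocked"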

Each of the other actions by humans, such as entering the car while in motion, chasing after a self-driving car, hanging onto a self-driving car, riding on top of a self-driving car, and so on, has its own particulars as to what the AI should and maybe should not do.

Being able to detect any of these human actions is the “easier” part, since it involves finding objects and tracking those objects (when I say easy, I am not saying that the sensors will work flawlessly, nor that they can necessarily make such detections reliably; I am simply saying that the programming for this is clearer than the AI action planning is).

Using machine learning or similar kinds of automation to figure out what to do is unlikely to get the AI out of this pickle. There are generally few instances of this kind, and each instance tends to have its own unique circumstances. It would be hard to assemble a large enough training set. There would also be the concern that the learning would overfit to the limited data and thus not be viable in the generalizable situations that are likely to arise.

Our view of this is that it is something requiring templates and programmatic solutions, rather than an artificial neural network or similar. Nonetheless, allow me to emphasize that we still see these as circumstances that once encountered should go up to the cloud of the AI system for purposes of sharing with the rest of the system and for enhancing the abilities of the on-board AI systems that otherwise have not yet encountered such instances.
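
As a rough illustration of that template-and-rules approach, with cloud sharing layered on top (the case names, responses, and upload call below are all my own hypothetical stand-ins):

    EDGE_CASE_TEMPLATES = {
        "occupant_exited_moving_car": "safe_stop_and_assess",
        "person_entering_moving_car": "safe_stop_immediately",
        "person_riding_on_exterior":  "safe_stop_immediately",
        "person_chasing_vehicle":     "continue_and_monitor",
    }

    def handle_edge_case(case_name, vehicle, cloud):
        # Fall back to the most cautious template for unrecognized cases.
        response = EDGE_CASE_TEMPLATES.get(case_name, "safe_stop_and_assess")
        vehicle.execute(response)
        # Share the encounter so the rest of the fleet's on-board AI can
        # benefit, even if those cars have never seen such a situation.
        cloud.upload_incident(case_name, vehicle.sensor_snapshot())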

For understanding the OTA capabilities of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/air-ota-updating-ai-self-driving-cars/

The odds are high that human occupants will be tempted to jump out of a moving AI self-driving car more so than a human driven car, or similarly try to get into one that is moving. I say this because at first, humans will likely be timid with the AI and be hesitant to do anything untoward, but after a while the AI will become more accepted and humans will become bolder. If your friend or parent is driving the car, you are likely more socially bound to not do strange tricks; you would worry that they might get in trouble. With the AI driving the car, you have no such social binding per se. I’m sure that many maverick teenagers will delight in “tricking” the AI self-driving car into doing all sorts of Instagram-worthy untoward things.

Of course, it’s not always just maverick kinds of actions that would occur. I’ve had situations wherein I was driving in an area that was unfamiliar, and a friend walked ahead of my car, guiding the way. If you owned an AI self-driving car of Level 5, you might want it to do the same — you get out of the self-driving car and have it follow you. In theory, the self-driving car should come to a stop before you get out, and likewise be stopped when you want to get in, but is this always going to be true? Do we want to have such unmalleable rules for our AI self-driving cars?

Should your AI self-driving car enable you to undertake the Shiggy Challenge?

In theory, a Level 5 AI self-driving car could do so and even help you do so. It could do the video recording of your dancing. It could respond to your verbal commands to slow down or speed up the car. It could make sure to avoid any upcoming cars and thus avert the possibility of ramming into someone else while you are dancing wildly to “In My Feelings.” This is relatively straightforward.

But, as a society, do we want this to be happening? Will it encourage behavior that ultimately is likely to lead to human injury and possibly death? We can add this to a long list of the ethics aspects of AI self-driving cars. Meanwhile, it’s something that cannot be neglected, else we’ll for sure have AI that’s unaware and those “stupid” humans will get themselves into trouble and the AI might get axed because of it.

As the song says: “Gotta be real with it, yup.”

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.