By Lance Eliot, the AI Trends Insider
Have you ever heard of the phrase “citizen scientists”?
The phrase first entered our lexicon in the mid-1990s and generally refers to the notion that ordinary, everyday people can potentially contribute to the work of science, in spite of the fact that they aren’t professional scientists. We usually have some disdain for amateurs or non-professionals who try to enter a professional realm, and we tend to denigrate whatever kind of contribution they might try to make. What do they know about real science, some ask. They are prone to fake science, some accuse.
The word “citizen” in this context is meant to suggest the lay public. In more recent times, the word “crowd” has perhaps overtaken the now quainter use of the word citizen. We have crowdsourcing, and many refer nowadays to the “wisdom of the crowd” whenever we see lots of people band together on social media such as Facebook or Twitter. The crowd has become the plural version of the citizen: with the crowd we have large numbers of contributors, while the word “citizen” can refer to just one person or to many such citizens contributing either individually or banding together as a collective.
Using that same idea of the public contributing toward something outside their expertise, we will soon be entering an era of Citizen AI. In other words, we should begin to anticipate that everyday folks will want to contribute to the AI field. There will be those within AI who will certainly be skeptical about this notion. Only AI developers can develop AI, they will say. AI researchers will be horrified to see laypeople unversed in AI profess to offer new innovations in AI. Similar to the skeptics of citizen science, we’re likely to have skeptics of Citizen AI.
Now, let’s be clear that there’s a basis for being skeptical of both citizen science and Citizen AI. For true science, we expect that scientists will be careful in their work, will abide by proven scientific methods, will document their work carefully, will refrain from making unsupported conclusions, and so on. They are trained in these scientific approaches and can be held accountable within the community of scientists. In contrast, citizen scientists can presumably do whatever they want and make whatever outlandish claims they wish. As such, one certainly should be skeptical and cautious when considering work or outcomes reported by citizen scientists.
For AI, we can say the same. AI developers are supposed to be versed in the techniques and approaches of AI. They should be careful about how they develop AI systems. They should be doing proper testing of their AI systems. They should be mindful of making outlandish claims. Unfortunately, there’s not quite the same overall code of conduct for AI as there is for scientists. This means that there are a number of AI developers and AI researchers who aren’t held as closely accountable for their claims. This makes it easier for Citizen AI participants to enter the fray. They can point to professional AI developers and researchers, call out potential gaffes and unsupported claims, and therefore argue that they should have similar latitude.
Please be aware that the phrase “Citizen AI” is not yet standardized, and there are other meanings associated with the phrase. Some, for example, assert that it means that AI needs to stand up for the citizenry. In this view, AI developers and researchers are supposed to consider the societal implications of the AI that they are bringing forth into the world, and to indicate how society and its citizens will benefit from the AI and not be harmed by it. I somewhat doubt that this meaning is going to take hold per se, since it doesn’t seem aligned with the Citizen Science meaning and so ends up being more confusing than clarifying. Time will tell.
Back, then, to the meaning of Citizen AI used here: we are gradually going to have members of the general public aiding the advancement of AI.
Ridiculous, you might say. Not so much, some retort. With recent advances in AI tools, the arcane and highly complex aspects of AI are moving further and further outside the inner sanctum of obscure research labs. Conventional software developers now routinely make use of AI by connecting with online AI systems and using Application Programming Interfaces (APIs) to have their traditional non-AI code leverage AI capabilities such as natural language processing, image analysis, artificial neural networks, and the like. It won’t be much longer before these AI tools are so easy to use that just about anyone can use them, ergo, the emergence of Citizen AI.
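To make this concrete, here’s a minimal sketch of what such API-based use of AI can look like from a conventional developer’s side. The endpoint URL, API key, and response format are hypothetical stand-ins for illustration, not any particular vendor’s actual service:

```python
# A minimal sketch of conventional code leaning on a cloud AI service.
# The endpoint URL, API key, and JSON fields are hypothetical assumptions,
# not any specific vendor's real API.
import requests

API_URL = "https://api.example-ai-vendor.com/v1/vision/classify"  # hypothetical
API_KEY = "your-api-key-here"

def classify_image(image_path: str) -> str:
    """Send an image to a (hypothetical) vision API and return the top label."""
    with open(image_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=10,
        )
    response.raise_for_status()
    # Assume the service returns JSON like:
    # {"labels": [{"name": "car", "score": 0.97}]}
    return response.json()["labels"][0]["name"]

if __name__ == "__main__":
    print(classify_image("street_scene.jpg"))
```

The point is that none of this requires knowing how the underlying neural network works; the heavy lifting is behind the API, which is precisely what lowers the barrier to entry.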
Anticipating Desire to Alter the AI of Self-Driving Cars
What does this have to do with AI self-driving cars?
At the Cybernetic Self-Driving Car Institute, we are anticipating that everyday people will ultimately want to alter, add to, or in some manner impact the AI that is driving their self-driving cars.
This comes as a shock to many of the auto makers and tech firms that are creating the AI for self-driving cars. Most of them assume that the AI on the self-driving cars will be a completely closed and locked system. Nobody, but nobody, gets into those systems other than the auto maker or tech firm that made them. This certainly makes sense at first glance, since we are talking about AI that controls a car and therefore involves life-and-death circumstances.
Just imagine if you let a Citizen AI participant make a change to the sensors of a self-driving car and then, oops, the sensor interprets images of cars as images of flowers. Some crazy-minded goofball action like this would make the self-driving car do bad things. In some cases, the Citizen AI participant might be doing something of an innocent nature and inadvertently mucking up the AI of the self-driving car, while in other cases it might be someone with dastardly intent purposely trying to make a self-driving car do terrible acts.
So, let’s for the moment say that allowing any kind of Citizen AI for self-driving cars is nonsensical, and we’ll go along with the prevailing wisdom that the AI for the self-driving car is closed and locked. Well, there will always be those car hobbyists who will try to find a means around the closed and locked system. They will tinker and try. They will look for any small crack to pry open. You’ve likely heard of jailbreaking your smartphone, and you can anticipate that some Citizen AI car hobbyists will be seeking to do the same to the AI of self-driving cars.
One way to curtail those activities would be to make them a crime. The government could put in place laws that make it illegal to reverse engineer or otherwise crack open the AI of self-driving cars. This would certainly reduce the number of Citizen AI contributors regarding self-driving cars, but probably not get it to zero, since there will still be those lawbreakers who are willing to go against the law for what they believe is right.
Indeed, I would anticipate that some Citizen AI contributors will say that by making it illegal to pry into the AI of self-driving cars, the government is putting the people at risk of faulty AI made by the auto makers or tech firms. And, if you buy into conspiracy theories, these Citizen AIers might argue that we could end up with the AI of the self-driving cars taking over our self-driving cars, and without us citizens being able to get inside to stop it, we’d be at the mercy of this AI gone mad.
Though this last doomsday scenario is probably a better movie script than reality, the notion that we are going to have only and always closed and locked AI for our AI self-driving cars seems rather suspect. We might be able to find some acceptable middle ground. Suppose that instead of the impenetrable-barrier goal, we instead provide ways in which the AI of the self-driving car can be adjusted, though in relatively controllable ways.
You can already bet that there are going to be third-party developers that will want to tie into the AI of the self-driving car. Just as there are add-ons for our computers and our smartphones, there are bound to be a plethora of add-ons that will emerge for self-driving cars. Currently, there are about 200 million conventional cars in the United States alone, and so if we are someday going to have that same number or more of AI self-driving cars, it’s a pretty tempting market for third parties that want to make big bucks by supplementing whatever the auto maker has provided for the AI of your self-driving car.
Better Snow Detection for Colorado Springs?
What kinds of add-ons would make sense?
Suppose that the version of AI provided by the auto maker for the sensors of your self-driving car is adequate for detecting snow generally, but then a third-party developer enhances that capability for snow found in Colorado Springs in particular (note that Colorado Springs gets about 70 inches of snow per year). The add-on takes into account the specific geography of Colorado Springs and aids the conventional snow-analysis routines that come with your standard AI self-driving car. Would you be okay with this add-on?
Now, you presumably wouldn’t proceed with the add-on unless you knew that it was well tested and able to work properly. Including this add-on could confuse the standard AI snow-analysis routines if it was improperly coded, and so there is a downside to such an add-on. Presumably, the auto maker of the AI could have a certification program, whereby the third-party add-on needs to demonstrate that it works as intended and does not misbehave. Just in case some people opted to get uncertified add-ons, the auto maker might even make the AI of the self-driving car closed and locked to anything but properly certified add-ons.
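As one hedged illustration of how such certification enforcement might work, here’s a minimal sketch in which the vehicle refuses to load any add-on package whose digest isn’t listed on the automaker’s certification manifest. The manifest format and file paths are assumptions for illustration only; a production scheme would more likely use cryptographic signatures rather than a bare hash list:

```python
# A minimal sketch of hash-based certification: the vehicle loads only
# add-ons whose SHA-256 digest appears in the automaker's manifest.
# The manifest format and paths are illustrative assumptions.
import hashlib
import json

def sha256_of(path: str) -> str:
    """Compute the SHA-256 hex digest of an add-on package on disk."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def load_certified_addons(manifest_path: str, addon_paths: list[str]) -> list[str]:
    """Return only the add-ons whose digests appear in the manifest."""
    with open(manifest_path) as f:
        # Assume a manifest like {"certified_digests": ["ab12...", "cd34..."]}
        certified = set(json.load(f)["certified_digests"])
    approved = []
    for path in addon_paths:
        if sha256_of(path) in certified:
            approved.append(path)
        else:
            print(f"REJECTED (uncertified): {path}")
    return approved
```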
If you have now become somewhat convinced that maybe the AI of the self-driving car should be semi-open, in this case for third-party certified and approved add-ons, you might then be ready to accept the idea of Citizen AI for the AI of the self-driving car. Now, I realize that the third-party add-on would likely have been developed by professional AI developers, and so it is not really a Citizen AI effort. But it takes us one step closer to allowing for Citizen AI efforts for the AI of self-driving cars.
There are parts of the AI of the AI self-driving car that we likely would consider sacrosanct, not allowing any kind of add-ons or modifications by anyone other than the auto maker or tech firm. The core aspects of sensor analysis for the radar, cameras, LIDAR, and ultrasonic sensors are areas we’d most likely want to keep pure. The same could be said of the sensor fusion, the virtual world model of the AI for the self-driving car, the action plans, and the controls activation.
Where we might see allowance for Citizen AI would be at the outer edges of these core elements. Though this might be allowed, in the end the rest of the AI of the self-driving car would still be the overall controlling element of the driving of the car. In other words, no added element would be able to escape the control of the bona fide AI. If the bona fide AI opted to nullify or momentarily turn off the add-on, it could do so as needed. This would help prevent some accidental rogue add-on from causing chaos.
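To sketch what keeping add-ons subordinate to the bona fide AI could look like in code, here’s a minimal, hypothetical arrangement in which add-ons may only annotate sensor data with advisory hints, and the core AI nullifies any add-on that misbehaves. The Addon interface and the plausibility check are illustrative assumptions, not any automaker’s actual design:

```python
# A minimal sketch of add-ons kept subordinate to the core driving AI.
# The Addon interface and veto logic are illustrative assumptions.
class SnowAddon:
    """Hypothetical add-on: annotates sensor data but never issues controls."""
    name = "colorado_springs_snow"

    def annotate(self, sensor_frame: dict) -> dict:
        # e.g., contribute a refined snow-confidence estimate for this locale
        return {"snow_confidence": 0.85}

class CoreDrivingAI:
    def __init__(self, addons):
        self.addons = addons    # certified add-ons only
        self.disabled = set()   # add-ons the core AI has nullified

    def process_frame(self, sensor_frame: dict) -> dict:
        hints = {}
        for addon in self.addons:
            if addon.name in self.disabled:
                continue  # the core AI has turned this add-on off
            try:
                annotation = addon.annotate(sensor_frame)
                if self._plausible(annotation):
                    hints[addon.name] = annotation
                else:
                    self.disabled.add(addon.name)  # nullify a rogue add-on
            except Exception:
                self.disabled.add(addon.name)      # misbehaving code is cut off
        # The core AI alone fuses sensors, plans, and actuates controls;
        # add-on hints are advisory and can be ignored entirely.
        return self._drive(sensor_frame, hints)

    def _plausible(self, annotation: dict) -> bool:
        # Sanity-check that confidence values stay in [0, 1]
        return all(0.0 <= v <= 1.0 for v in annotation.values()
                   if isinstance(v, float))

    def _drive(self, sensor_frame: dict, hints: dict) -> dict:
        return {"steering": 0.0, "throttle": 0.1}  # placeholder decision

core = CoreDrivingAI([SnowAddon()])
print(core.process_frame({"camera": "frame-0001"}))
```

The design choice here is that the add-on sits outside the control loop: it can inform the core AI but can never bypass it, matching the semi-open middle ground described above.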
For those of you with an eye toward computer security, you might be wondering whether it is sensible to allow any kind of openings into the AI of the AI self-driving car. We are likely to already have hackers persistently trying to find ways into the AI, hoping that they can take over the control of the self-driving car or maybe have an entire fleet of cars do their bidding. Admittedly, providing an opening for add-ons does up the ante on the computer security aspects. But an argument could be made that it actually forces the auto makers and tech firms to stay on their toes about the computer security of the self-driving car.
There’s another twist to this topic that at first might not seem apparent. We have car hobbyists today who will take apart cars and remake them in their own image, so to speak, by adding new components or changing up components. In theory, such cars are supposed to be street-legal if they are intended to be used on our public roads. Suppose that a devoted car hobbyist opts to take apart a purchased AI self-driving car and remake it. Maybe they even discard the AI software and write their own.
If you saw an AI self-driving car driving on the roadways, how would you know whether it is a legitimate one that came from a bona fide auto maker or tech firm, versus a hot-rod version that some car hobbyist put together? Of course, federal and state regulations are intended to hamper those that would want to do such a thing, by legally forcing them to make sure the self-driving car is street legal, but the answer to the question is that you would have no particular means of knowing that the self-driving car next to you is fully legal.
Let’s take another angle on the same notion. Currently, it is anticipated that the early days of self-driving cars will consist of cars that were purpose-built to be self-driving cars. In essence, the car is likely not going to be a purely conventional car that just so happens to have some added sensory equipment bolted onto it. Eventually, though, many believe that we will have “converter kits” that allow you to turn a somewhat conventional car into an AI self-driving car. Once we get there, you can pretty much bet that we’ll have Citizen AI participants who opt to tinker with those converter kits.
This discussion of Citizen AI for AI self-driving cars is a somewhat futuristic look at where things are going. We don’t yet have sufficient numbers of AI self-driving cars on the roadway to see how this is going to play out. We’ve not yet seen Citizen AI come to bear on, for example, Teslas, which aren’t yet true AI self-driving cars (they are below Level 5). Nonetheless, it does seem like a strong possibility that once we get enough AI self-driving cars on our roadways we are going to have Citizen AI for AI self-driving cars. Depending upon your perspective on the matter, you are either eager to see that day arrive, or you are dreading that day and pledge that should it occur you will never ride in an AI self-driving car again. Or would that make you a Luddite?
Copyright 2018 Dr. Lance Eliot
This content is originally posted to AI Trends.