Pursuit of Autonomous Cars May Pose Risk of AI Tapping Forbidden Knowledge

By Lance Eliot, the AI Trends Insider    

Are there things that we must not know?   

This is an age-old question. Some assert that there is the potential for knowledge that ought not to be known. In other words, there are ideas, concepts, or mental formulations that, were we to become aware of them, could be our downfall. The discovery or invention of some new innovation or way of thinking could be unduly dangerous. It would be best not to go there, as it were, and to avoid ever landing on such knowledge: forbidden knowledge.

The typical basis for wanting to forbid the discovery or emergence of forbidden knowledge is that the adverse consequences are overwhelming. The end result is so devastating and undercutting that the bad side outweighs the good that could be derived from the knowledge.   

It is conceivable that there might be knowledge that is so bad that it has no good possibilities at all. Thus, rather than trying to balance or weigh the good versus the bad, the knowledge has no counterbalancing effects. It is just plain bad.    

We are usually faced with knowledge that can be utilized or employed for both good and bad. This then leads to a dogged debate about whether the bad is so bad that it outweighs the good. On top of this, there is the unrealized bad and the unrealized good, which can be differentiated from the realized bad and the realized good (in essence, the knowledge might be judged good or bad purely in the abstract, before it has been put into real-world conditions and its effects become realized as such).

The most familiar reference to forbidden knowledge is likely evoked via the Garden of Eden and the essence of forbidden fruit. 

A contemporary down-to-earth example often discussed regarding forbidden knowledge is the atomic bomb. Some suggest that the knowledge devised or invented to ultimately produce a nuclear bomb provides a quite visible and overt exemplar of the problems associated with knowledge. Had the knowledge about being able to attain an atomic bomb never been achieved, there presumably would not be any such device. In debates about the topic, some take a resolute position favoring the attainment of the atomic bomb, while others offer equally counterbalancing contentions sternly disfavoring that attainment.

One perplexing problem about forbidden knowledge encompasses knowing beforehand the kind of knowledge that might end up in the forbidden category. This is a bit of a Catch-22 or circular type of puzzle. You might discover knowledge and then ascertain that it ought to be forbidden, but the cat is out of the bag since the knowledge has already been uncovered. Oopsie, you should have decided in advance not to go there and thereby avoided falling into the forbidden knowledge zone.

On a related twist, suppose that we could beforehand declare what type of knowledge is to be averted because it is predetermined to be forbidden. Some people might accidentally discover the knowledge, doing so by happenstance, and now they’ve again potentially opened Pandora’s box. Meanwhile, there might be others who, regardless of being instructed not to derive any such stated forbidden knowledge, do so anyway.

This then takes us to a frequently used retort about forbidden knowledge, namely, if you don’t seek the forbidden knowledge there is a chance that someone else will, and you’ll be left in the dust because they got there first. In that preemptive viewpoint, the claim is that it is better to go ahead and forage for the forbidden knowledge and not get caught behind the eight-ball when someone else beats you to the punch.   

Round and round we can go.   

The main thing that most would agree to is that knowledge is power. 

The alluded to power could be devastating and destroy others, possibly even leading to the self-destruction of the wielder of the knowledge. Yet there is also the potential for knowledge to be advantageous and save humanity from other ills.   

Maybe we ought to say that knowledge is powerful. Despite that perhaps obvious proclamation, we might also add that knowledge can decay and gradually become outdated or less potent. Furthermore, since we are immersing ourselves herein into the cauldron of the love-it or hate-it knowledge conundrum, knowledge can be known and yet undervalued, perhaps only becoming valuable at a later time and in a different light.   

There is a case to be made that humankind has a seemingly irresistible allure toward more and more knowledge. Some philosophers suggest you are unlikely to be able to bottle up or stop this quest for knowledge. If that’s the manner of how humanity will be, this implies that you must find ways to control or contain knowledge and give up on the belief that we can altogether avoid landing into forbidden knowledge.   

There is a relatively new venue prompting a lot of anxious hand wringing pertaining to forbidden knowledge, namely the advent of Artificial Intelligence (AI).   

Here’s the rub.   

Suppose that we are able to craft AI systems that make use of knowledge about how humans can think. There are two major potential gotchas.   

First, the AI systems themselves might end up doing good things, and they also might end up doing bad things. If the bad outweighs the good, maybe we are shooting ourselves in the foot by allowing AI to be put into use.

Secondly, perhaps this could be averted entirely by deciding that there is forbidden knowledge about how humans think, and that we ought not to discover or reveal those mental mechanisms. It is the classic stepwise logic that step A axiomatically leads to step B. We won’t need to worry about AI systems (step B) if we never allow the achievement of step A (figuring out how humans think and then imparting that into computers), since the attainment of such AI would presumably never arise.

In any case, there is inarguably a growing concern about AI.   

Plenty of efforts are underway to promulgate a semblance of AI Ethics, meaning that developers and indeed all stakeholders who are conceiving of, building, and putting into use an AI system need to consider the ethical aspects of their efforts. AI systems have been unveiled and placed into use replete with all sorts of notable concerns, including incorporating unsavory biases and other problems.

All told, one bold and somewhat stark argument is that the pursuit of AI is being underpinned or stoked by the discovery and then exploitation of forbidden knowledge.   

Be aware that many would scoff at this allegation.   

There are those deeply immersed in the field of AI who would laugh at the notion that anything in the entirety of AI to date constitutes potential forbidden knowledge. The technology and technological elements are relatively ho-hum, they would argue. You would be hard-pressed to pinpoint any AI-related knowledge already known that comes anywhere near the ballpark of forbidden knowledge.

To those who concur with that posture, there is the reply that it might be future knowledge, not yet attained, that turns out to be the forbidden kind, and toward which we are heading pell-mell. The worriers would concede that we haven’t arrived at forbidden knowledge at this juncture, but they would say that this is an insidious distractor, since it masks or belies the qualm that such knowledge lies in wait at the next turn.

One area where AI is being actively used is to create Autonomous Vehicles (AVs). 

We are gradually seeing the emergence of self-driving cars and can expect self-driving trucks, self-driving motorcycles, self-driving drones, self-driving planes, self-driving ships, self-driving submersibles, etc.   

Today’s conventional cars are eventually going to give way to the advent of AI-based, true self-driving cars. Self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle.

Here’s an intriguing question that has arisen: Might the crafting of AI-based true self-driving cars take us into the realm of discovering forbidden knowledge, and if so, what should be done about this?   

Before jumping into the details, I’d like to clarify what is meant when referring to true self-driving cars.   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/   

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/   

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/   

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/   

Understanding The Levels Of Self-Driving Cars   

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.   

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
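
As a rough, hypothetical illustration of that taxonomy, here is a minimal sketch in Python; the enum and helper names are my own for illustration and are not drawn from any standard API or automaker's codebase.

```python
from enum import IntEnum

class SaeLevel(IntEnum):
    """SAE J3016 driving automation levels, 0 through 5 (illustrative sketch)."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2      # semi-autonomous; human co-shares the driving task (ADAS)
    CONDITIONAL_AUTOMATION = 3  # human must remain ready to take over
    HIGH_AUTOMATION = 4         # AI drives, but only within a limited operational domain
    FULL_AUTOMATION = 5         # AI drives anywhere a human driver could

def requires_human_driver(level: SaeLevel) -> bool:
    """Levels 0 through 3 keep a licensed, attentive human responsible for driving."""
    return level <= SaeLevel.CONDITIONAL_AUTOMATION

# Example: a Level 2 car still needs a human at the wheel; a Level 4 car does not.
assert requires_human_driver(SaeLevel.PARTIAL_AUTOMATION)
assert not requires_human_driver(SaeLevel.HIGH_AUTOMATION)
```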

There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).   

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).  

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that has been arising lately, namely that despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 car.

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/   

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/ 

The ethical implications of AI driving systems are significant, see my indication here: http://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/   

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/   

Self-Driving Cars And Forbidden Knowledge   

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.   

All occupants will be passengers.   

The AI is doing the driving.   

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.   

Why this added emphasis about the AI not being sentient?   

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet. 

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.   
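
To make that point a bit more tangible, below is a deliberately toy sketch of the sense-plan-act style of loop often used to describe driving stacks. It is purely illustrative; none of these function or field names come from any real system, and a production stack would rely on trained perception models and far richer planning. The point is simply that driving competence has to be explicitly engineered or trained in, rather than being something the AI natively "knows."

```python
from dataclasses import dataclass

@dataclass
class WorldModel:
    """A drastically simplified model of the scene around the vehicle."""
    obstacle_ahead: bool
    speed_limit_mps: float

@dataclass
class Command:
    """A drastically simplified actuation command."""
    target_speed_mps: float
    brake: bool

def sense(raw_sensor_frame: dict) -> WorldModel:
    # Perception: turn raw sensor readings into a world model (stubbed here).
    return WorldModel(
        obstacle_ahead=raw_sensor_frame.get("lidar_hit_close", False),
        speed_limit_mps=raw_sensor_frame.get("posted_limit_mps", 13.4),
    )

def plan(world: WorldModel) -> Command:
    # Planning: every rule of the road must be put here by developers or learned from data.
    if world.obstacle_ahead:
        return Command(target_speed_mps=0.0, brake=True)
    return Command(target_speed_mps=world.speed_limit_mps, brake=False)

def act(command: Command) -> None:
    # Control: hand the chosen command to the drive-by-wire layer (stubbed as a print).
    print(f"target speed {command.target_speed_mps:.1f} m/s, brake={command.brake}")

# One tick of the loop with a fabricated sensor frame.
act(plan(sense({"lidar_hit_close": True})))
```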

Let’s dive into the myriad of aspects that come into play on this topic.

The crux here is whether there is forbidden knowledge lurking within the existing and ongoing efforts to achieve AI-based true self-driving cars. We’ll begin by considering the status of the existent efforts and then shift into speculation about the future of such efforts.   

Per the earlier discussion about whether forbidden knowledge has already perchance been revealed or discovered via the efforts toward today’s AI systems all told, the odds seem stacked against such a notion at this time, and the same could be said about the pursuit of self-driving cars. Essentially, there doesn’t seem to be any forbidden knowledge per se that has been discovered or revealed during the self-driving car development journey so far, at least with respect to the conventional wisdom about what forbidden knowledge might entail.

One could try to argue that it is premature to reach such a conclusion and that we might, later on, realize that forbidden knowledge was indeed uncovered or invented, and we just didn’t realize it. That is a rabbit hole that we’ll not go down for now, though you are welcome to keep that presumption at hand if so desired.   

That covers the present, and ergo we can turn our attention to the future.   

Generally, the efforts underway today have been primarily aimed at achieving Level 4, and the hope is that someday we will go beyond Level 4 and attain Level 5. To get to a robust Level 4, most would likely say that we can continue the existing approaches.   

Not everyone would agree with that assumption. Some believe that we will get stymied within Level 4. Furthermore, the inability to produce a robust Level 4 will ostensibly preclude us from being able to attain Level 5. There is a contingent that suggests we need to start over and set aside the existing AI approaches, which otherwise are taking us down a dead-end or blind alley. An entirely new way of devising AI for autonomous vehicles is needed, they would vehemently argue.   

There is also a contingent that asserts that Level 4 itself is a type of dead-end. In brief, those proponents would say that we will achieve a robust Level 4, though this will do little good toward attaining Level 5. Once again, their view is similar to the preceding remark that we will need to come up with some radically new understandings about AI and the nature of cognitive acumen in order to get self-driving cars into the Level 5 realm.

Aha, it is within that scope of having to dramatically revisit and revamp what AI is, and how we can advance significantly in the pursuit of AI, that the forbidden knowledge question can reside. In theory, perhaps the only means of attaining Level 5 will be to strike upon some knowledge that we do not yet possess and that bodes for falling within the realm of forbidden knowledge.

To some, this seems farfetched. 

They would emphatically ask: just what kind of knowledge are you even talking about?

Here’s their logic. Humans are able to drive cars. Humans do not seem to need or possess forbidden knowledge as it relates to the act of driving a car. Therefore, it seems ridiculous on the face of things to claim or contend that the only means to get AI-based true self-driving cars, driven on an equal basis with human drivers, would require the discovery or invention of whatever might be construed as forbidden knowledge.

Seems like pretty ironclad logic.   

The retort is that humans have common-sense reasoning. With common-sense reasoning, we seem to know all sorts of things about the world around us. When we drive a car, we intrinsically make use of our common-sense reasoning. We take for granted that we do have a common-sense reasoning capacity, and similarly, we take for granted that it integrally comes to the fore when driving a car.   

Attempts to create AI that can exhibit the equivalent of human common-sense reasoning have made ostensibly modest or some would say minimal progress (to clarify, those pursuing this line of inquiry are to be lauded, it’s just that no earth-shattering breakthroughs seem to have been reached and none seem on the immediate horizon). Yes, there are some quite fascinating and exciting efforts underway, but when you measure those against the everyday common-sense reasoning of humans, there is no comparison. They are night and day. If this were a contest, the humans win hands down, no doubt about it, and the AI experimental efforts encompassing common-sense reasoning are mere playthings in contrast.   

You might have gleaned where this line of thought is headed.   

The belief by some is that until we crack open the enigma of common-sense reasoning, there is little chance of achieving Level 5, and perhaps this will hold back Level 4 too. It could be that a secret ingredient of sorts for autonomous vehicles is the need to figure out common-sense reasoning and incorporate it into AI-based driving and piloting systems.

If you buy into that logic, the added assertion is that maybe within the confines of how common-sense reasoning takes place is a semblance of forbidden knowledge. On the surface, you would certainly assume that if we knew entirely how common-sense reasoning works, there would not appear to be any cause for alarm or concern. The act of employing common-sense reasoning does not seem to necessarily embody forbidden knowledge.   

The twist is that perhaps the underlying cognitive means that gives rise to the advent of common-sense reasoning is where the forbidden knowledge resides. Some deep-rooted elements in the nature of human thought, and in how we form common sense and undertake common-sense reasoning, could turn out to be a type of knowledge that is both crucial and a forbidden knowledge formulation.

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/ 

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: http://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/   

Conclusion   

Wow, that’s quite a bit of pondering, contemplation, and (some would say) wild thinking.   

Maybe so, but it is a consideration that some would wish we gave at least some credence to and devoted attention to. There is the angst that we might find ourselves by happenstance stumbling into forbidden knowledge on this voracious quest for self-driving cars.

However much you might emphasize that having AI-based true self-driving cars will be a potential blessing, proffering mobility-for-all and reducing the number of car crash-related fatalities, there is a sneaking suspicion that it will not be all good. The catch or trap could be that some kind of forbidden knowledge will get brought to light, and we will inevitably kick ourselves that we didn’t see it coming.

The next time you are munching on a delicious apple, give some thought to whether self-driving cars might be forbidden fruit.   

We are on the path to taking a big bite, and we’ll have to see where that takes us. 

Copyright 2021 Dr. Lance Eliot  

http://ai-selfdriving-cars.libsyn.com/website