Hello World

I am writing this below because I'd like to give my take on true Artificial Super Intelligence (ASI), or what I'd call artificial human-like intelligence (AHI). Seemingly, the definitions have changed, but the goal should be something profound yet wildly simple. To me, that goal should be "hello world".

What is "hello world", and what's the TLDR of everything I am about to write below? BTW, I wrote this in response to a question about what I mean by deterministic systems. I hope it becomes clear below what I am referring to when I use the terms deterministic agency or deterministic cognition.

Back to the TLDR.

Agency is born from cognitive determination and learning.

Thus, learning to communicate through a goal/reward system may lead to "Hello World": an initial, primordial AI communication in which the system uses language to ground its worldview in the simple act of saying something. We can make it "mom" if you'd like. Not a prediction, but a real-world communication from the inside out.

Let's think about what we have in today's AI technology and use it, along with other processes that may be entirely new ways of thinking, to innovate toward what could become AHI. I don't like the terms ASI and AGI because I feel that (a) the definitions have been bastardized into meaningless commercial buzzwords, and (b) they aren't related to true human cognition, so in my opinion they aren't viable concepts. YES, I am saying LLMs alone will get us nowhere toward AHI.

Also, I deeply appreciate Yann LeCun's candor about where we really are in terms of AGI/ASI. We are, in fact, nowhere close. This is obvious to any industry insider. But again, let's begin thinking differently and discovering other forms of innovation that could add up to a real gain of function.

What I am proposing is that instead of using LLMs to compress the world's textual data and then retrieve it, we think about the human system from the ground up and build a system that grows from there. A compute system built in this way could in fact lead to an artificial superintelligence. But it doesn't have to start as a singularity; it can start as an infant who has just left its mother's womb, learning and adjusting to the world around it.

What I am looking for is an all-hands effort from experts in particular fields: computer scientists, software engineers, data scientists, biologists, neurologists, psychiatrists, psychologists, and yes, philosophers.

Let's begin.

First, let me lay out what I feel are the sentinel components of achieving AHI. I need to see these two pegs fall before we can have a system that does anything close to what we are all hoping for and imagining from an AHI system.

--------------------------------------------------------

Here are my official peg 1 and peg 2.

  1. An active RL learning system based on language, meaning the system can primarily function in a communicative way. Think of a human learning to speak. This would be something completely untethered from an LLM or static inference model (what I call the lazy NLP layer). Inference models are what we have now: they require input to get something out. That is effectively an infinite wall of protection as of today; nothing can possibly come out other than what the model was trained on. In my theory, you could still have a system use this layer for longer-term memory and world-view context. Google's DeepMind references exactly this.
  2. A DQN, or some abstraction like a DQN, that is in control of the world view (or its view): a sort of reward system for basic thought, problem solving, and learning. You need peg #1 to fall before you can begin working on this peg. What this says is that if you can build the above active RL system, then you can posit an active model that perhaps "thinks" in a way: I can speak, so I tell you to learn basic math, and you do; I may then seek to learn something else, and so on. The desire to learn is the primary trait of an intelligent species, and this layer would need to act the same way. Keep in mind that AlphaGo is not this. It is pure math; its steps are mathematical only, with a deterministic outcome bound to the worldview of the AlphaGo game. Because there is no communicative layer of understanding in the AlphaGo model, there is no way to posit any true nature of thought; getting statistically better at moves is bound to the fact that it is just the math of AlphaGo. That is why the first peg is so profound and important. (A toy sketch of both pegs follows this list.)
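
To make the two pegs concrete, here is a minimal toy sketch, under assumptions of my own: a tiny hand-picked vocabulary, a hard-coded reward for uttering the target phrase, and plain tabular Q-learning standing in for the DQN-style value learner. It is not the real system, just an illustration of a reward-driven learner arriving at "hello world" from the inside out.

```python
# Toy sketch of peg 1 (a communicative action space of words) and peg 2
# (a value/reward learner choosing among them). All details are illustrative.
import random
from collections import defaultdict

VOCAB = ["hello", "world", "milk", "mom"]   # peg 1: the words the system can say
TARGET = ("hello", "world")                 # the primordial goal utterance

alpha, gamma, epsilon = 0.5, 0.9, 0.2
Q = defaultdict(float)                      # peg 2: value estimates over (state, word)

def reward(utterance):
    # Reward is the only teaching signal: +1 only for the full target phrase.
    return 1.0 if tuple(utterance) == TARGET else 0.0

for episode in range(2000):
    state, utterance = (), []
    for _ in range(len(TARGET)):
        # epsilon-greedy choice of the next word to say
        if random.random() < epsilon:
            word = random.choice(VOCAB)
        else:
            word = max(VOCAB, key=lambda w: Q[(state, w)])
        next_state = state + (word,)
        utterance.append(word)
        r = reward(utterance) if len(utterance) == len(TARGET) else 0.0
        best_next = max(Q[(next_state, w)] for w in VOCAB)
        Q[(state, word)] += alpha * (r + gamma * best_next - Q[(state, word)])
        state = next_state

# Greedy rollout after training: the learned "utterance"
state, greedy = (), []
for _ in range(len(TARGET)):
    word = max(VOCAB, key=lambda w: Q[(state, w)])
    greedy.append(word)
    state += (word,)
print(" ".join(greedy))   # typically prints "hello world"
```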

Below is my response, and my thinking on a two- or three-part approach: if we were to achieve an AHI, this is one way to get there. I hope to garner discussion of the feasibility of this approach and the AI community's thoughts on why it could or could not be achievable. I go into why LLMs are not a sole path forward toward what an AHI would ultimately be. Simply put, our thinking needs to adjust radically to accomplish such a goal.

---------------------------------------------- My reply

This is not a design decision but rather the reality of the deterministic system, of which an LLM is not a part. The context you speak of acts on a static layer of the system (what I call the lazy layer). The model is ready, set, go, done. There is zero opportunity for adjustment from your perspective or mine. We use the API and it responds. This is also why these are referred to as zero-shot or few-shot models.

Be careful to strip away the illusion of the human aspects GPT may mimic. Context is a great example of this. GPT does not keep or hold any context. Literally, the way it provides this illusion is by concatenating your text inputs and reinserting them up to a certain limit. This is why token size is so important.

If you're having a conversation with GPT, you can see this going awry all the time: it loses context. Why? Because the oldest messages get dropped in a FIFO fashion once the limit is reached. This is clear when programming directly against GPT.
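
A minimal sketch of that "context illusion", assuming a crude 4-characters-per-token estimate and an arbitrary token budget: the history is just concatenated text, trimmed oldest-first once it no longer fits.

```python
# The chat "memory" is nothing but concatenated messages trimmed FIFO-style.
from collections import deque

TOKEN_BUDGET = 50          # stand-in for the model's context limit (assumption)

def rough_token_count(text):
    # crude approximation; real tokenizers differ
    return max(1, len(text) // 4)

history = deque()

def add_message(role, text):
    history.append((role, text))
    # drop the oldest turns until the concatenated prompt fits the budget
    while sum(rough_token_count(t) for _, t in history) > TOKEN_BUDGET:
        history.popleft()

def build_prompt():
    # this concatenation is all the "context" the model ever sees
    return "\n".join(f"{role}: {text}" for role, text in history)

add_message("user", "My name is Ada and I live in Berlin.")
add_message("assistant", "Nice to meet you, Ada.")
for i in range(20):
    add_message("user", f"Filler question number {i} about something unrelated.")
print(build_prompt())   # the early turns (and the name) have been dropped
```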

This is also where CoT (chain of thought) comes from, and why it seems so obvious in hindsight. I posted a good paper on that. When I design a system (a pipeline), this is very common practice.
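
A minimal sketch of what I mean by that kind of prompt-chained pipeline, with `call_model` as a hypothetical stand-in for whatever completion endpoint is actually used: the task is decomposed into explicit steps and each call is fed the previous step's output.

```python
# Hypothetical CoT-style pipeline: explicit reasoning steps chained together.
def call_model(prompt: str) -> str:
    # placeholder: in a real pipeline this would hit an inference endpoint
    return f"<model answer to: {prompt[:40]}...>"

def cot_pipeline(question: str) -> str:
    steps = [
        "Restate the problem in your own words: {q}",
        "List the facts and constraints relevant to: {prev}",
        "Reason step by step toward an answer, given: {prev}",
        "State only the final answer implied by: {prev}",
    ]
    prev = question
    for template in steps:
        prompt = template.format(q=question, prev=prev)
        prev = call_model(prompt)   # each step's output feeds the next step
    return prev

print(cot_pipeline("If a train leaves at 3pm and travels 2 hours, when does it arrive?"))
```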

Let me explain deterministic behavior and how it could relate to agentic behavior, especially in a new system such as a human being.

Why is deterministic behavior related to human behavior in a cognitive sense? Well, you could call it cognitive determinism, or deterministic agency. Deterministic behavior is easier to follow on its own because there is always a perceived end result. AlphaGo is a great example of this: the deterministic end is simply winning the game.

However, what I am trying to argue is that it may be possible to build a rudimentary system that demonstrates deterministic agency via the cognitive layer.

Think of a child born into the world. They don't come out talking and speaking all at once; their brain has to grow and adjust to the new world around them. It wouldn't surprise me at all if the human brain could adapt to other worlds and physicalities elsewhere in the universe, because of how well designed DNA is. This is easily observable in hemoglobin, the oxygen-carrying protein of red blood cells, and its affinity for oxygen in the womb versus after birth. Our bodies literally undergo a monumental physical and biological adaptation to the world around us. There is no reason to believe the brain doesn't hold a similar plasticity.

This could come down to the very light we perceive from our star (the sun) versus another star system, or to the UV filtering done by a planet's atmosphere.

When a child comes into the world, they most likely don't process and retain sounds the way they do once they reach a certain developmental age, around 1 to 2 years. The ability to hear with clear auditory precision is most likely fine-tuned over a period of time.

The result is that once the child can hear properly, they can begin the agentic process of wanting to speak. But that agency is grounded, to me, in the deterministic will of a primordial desire: to communicate with another being.

Again, to me, it's not free-will agency alone in our conscious layer; it is our desire and will, our needs and wants, that drive our very thought processes. Determinism always comes down to a single-threaded point. Quite simply, humans could be the culmination of all of those deterministic desires.

Let me try to illustrate the point biologically, using urination as the example. We have a biological valve that holds urine inside our bodies. When our bladder gets full, our body creates a sensation that we need to release it. The agency here is clear, but the bind to determinism is clear too: I need to urinate, so I need to decide when I will allow my body to do that. The deterministic point laid upon us is the urge to urinate, which can become increasingly stressful and even painful if we refuse to "let go." This gives us time to plan exactly when and where we act, i.e., the bathroom.

That planning is done continuously, with increasing intensity, until our brain has resolved the issue.

To me, it is clear that there is a very deterministic attribute to our cognitive layer.

Every one of our thoughts has determinism built into it, just on a more nuanced and intricate scale. As I devise my argument in this writing, I constantly have one goal in mind: to argue that our agency is not without, or at the very least is greatly assisted by, deterministic features.

Determinism therefore, to me, is the driving force of self-contained agentic behavior.

Language, therefore, is simply a byproduct of a layer that allows us to carry our behaviors and desires out into the world.

This is where the magic happens. The desire, the goal, the point, is led by the thought. Meaning, I use language to define how I will reach my desire, my goal, my thought process. The words have meanings and the sentences carry meaningful thought. With this, I am conscious and I am aware.

My thoughts simply move through the day, literally place to place, while I am awake. My will and my desire create/determine a goal (do this for the day..., have a conversation...), a reward (eating, sleeping, bathing, sex (goal/reward)), a feeling (I am sad, I am happy, I am depressed).

This will and desire is the third arm, but we don't have to build that into AI systems initially. The first thing we should do is the deterministic agency of language: communication. It doesn't have to know everything or be some singularity of profound intelligence. Just a little system that can use words and sentences to accomplish a goal.

Just as a child doesn't know what words mean or what time is (my 2-year-old says "an hour ago" and I laugh because I know he doesn't know what that means; it's hilarious, I look at him like, what? lol). I digress. The child has to learn the meaning of words, and then sentences, to fulfill their desires. They cry for milk as a primordial instinct, but they then LEARN to communicate to get the same result.

The child saying "mom" is simply parroting a parent drilling in a word they have learned to hear with clarity and feel the desire to mimic aloud. The later developmental phrase "I want," or simply "milk," is a much more targeted goal/desire to obtain a necessity, which is to alleviate hunger. I say "milk," I get milk, and I like milk. It's not Einstein that comes from the womb, but rather a system that is learning to communicate.

LLMs don't have any of this, but what they DO HAVE are the words and the phrases. I say bootstrap that onto a deterministic system that can reinforce learning with goals and rewards (desires and wants, if you will).
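
A minimal sketch of that bootstrapping idea, assuming a small borrowed word list and two hard-coded desires of my own choosing: the words are given up front (as an LLM would give them), and a simple reward loop learns which word satisfies which want, much like the "milk" example above.

```python
# Borrowed vocabulary + reward-driven learning of which word meets which want.
# All names, desires, and rewards here are illustrative assumptions.
import random
from collections import defaultdict

VOCAB = ["milk", "mom", "hello", "sleep"]          # the "lazy layer": words given for free
DESIRES = {"hungry": "milk", "lonely": "mom"}      # which word actually satisfies which want

alpha, epsilon = 0.5, 0.2
Q = defaultdict(float)                             # value of (desire, word) pairs

for step in range(5000):
    desire = random.choice(list(DESIRES))
    if random.random() < epsilon:
        word = random.choice(VOCAB)
    else:
        word = max(VOCAB, key=lambda w: Q[(desire, w)])
    r = 1.0 if DESIRES[desire] == word else 0.0    # the want is met only by the right word
    Q[(desire, word)] += alpha * (r - Q[(desire, word)])

for desire in DESIRES:
    best = max(VOCAB, key=lambda w: Q[(desire, w)])
    print(f"when {desire}, say: {best!r}")          # learns to say "milk" when hungry, etc.
```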

The point is, as a possible path to AI/ASI, the system learns to use communication in general; that would be step 1. I have these words, so I can use them to communicate. Then you can put other goal-setting abstractions on top of that layer to get true ASI-type intelligence with an AI system that is truly agentic. It may never be conscious, but it would freakily appear to be.

The final piece would be the agentic layer. Think of this as the priorities of thought: where should the system of thought move from place to place? I thought this, I completed this, I did this, I communicated this; okay, what next? This is sort of a parameter system of wills, wants, and desires feeding the RL/deterministic layer of the cognitive system as a whole.
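
A minimal sketch of that agentic layer, assuming hand-assigned urgencies and a hypothetical `act_on` hook into the communication layer below: desires sit in a priority queue, and the most pressing one decides where the system's "thought" goes next.

```python
# Priorities of thought as a priority queue of desires driving the layer below.
import heapq

class AgenticLayer:
    def __init__(self):
        self._queue = []   # (negated urgency, goal) so the most urgent pops first

    def feel(self, goal, urgency):
        heapq.heappush(self._queue, (-urgency, goal))

    def next_goal(self):
        if not self._queue:
            return None
        _, goal = heapq.heappop(self._queue)
        return goal

def act_on(goal):
    # hypothetical stand-in for the communication/RL layer acting on the chosen goal
    print(f"working on: {goal}")

mind = AgenticLayer()
mind.feel("have a conversation", urgency=0.4)
mind.feel("eat", urgency=0.9)
mind.feel("learn basic math", urgency=0.6)

goal = mind.next_goal()
while goal is not None:
    act_on(goal)            # eat, then learn basic math, then have a conversation
    goal = mind.next_goal()
```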

Anyways, I hope this made sense and these are just my thoughts.

I believe we could build such a system, and it would be interesting to see someone, or even me, work on it.
