o1 Hello – This is simply amazing – Here's my initial review

So it has begun!

Ok, so, yeah! You don't get a lot of usage out of this thing, so you have to prompt it very sparingly. The rate limiting is measured in days, not hours. :(

Let's start off with the media. Just one little dig at them, because on CNBC they said, "the model is a smaller model". I think the notion was that this model is a smaller model derived from a larger one, so they just repeated that. I don't think this is a smaller model. Now, it could be that the heart of the model is smaller, but what is going on behind the scenes with the thinking is a lot of throughput to the model(s).

I think the implication here is important to understand because, on one hand, there is an insanely low rate limit. When I say low, I mean 30-messages-per-week low. On the other hand, the thinking is clearly firing a lot of tokens to get through the process of coming to a conclusion.

The reason I say it's a concert of models firing at each other is that something has to be doing the thinking, and another call (which could be the same model) has to be checking the steps and doing other "things". In my mind, you would have a collection of experts, each doing its own thing. Ingenious, really.
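To make that concrete, here is a minimal sketch of the thinker/checker loop I'm imagining. Everything in it is speculation: `generate_step` and `check_step` are hypothetical stand-ins for model calls, not anything OpenAI has confirmed.

```python
import random

def generate_step(problem: str, steps: list[str]) -> str:
    # Hypothetical "thinker" call: propose the next reasoning step.
    return f"step {len(steps) + 1} toward solving: {problem}"

def check_step(problem: str, steps: list[str], candidate: str) -> bool:
    # Hypothetical "checker" call (could be the same model): vet the step.
    # A coin flip stands in for a real verdict here.
    return random.random() > 0.3

def solve(problem: str, max_steps: int = 8) -> list[str]:
    steps: list[str] = []
    for _ in range(max_steps):
        candidate = generate_step(problem, steps)
        if check_step(problem, steps, candidate):
            steps.append(candidate)  # only vetted steps enter the trace
    return steps

print(solve("Why does ice float?"))
```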

Plausibility Model

Think of the plausibility model as the prime cerebral model. When humans think, the smartest among them understand when they are headed down the right path and when they are not. You see this in Einstein's determination to prove the theory of relativity. His claim to fame was sealed the day an eclipse expedition caught images of starlight bending around our star, proving that the fabric of space was indeed curved.

Einstein's intuition here cannot be overstated. From Newton's intuition about gravity and mass to Einstein challenging that basic notion and pushing further toward a new understanding of the how and why, it all starts with the plausibility of where one is headed in the quest for knowledge. With my thoughts, am I headed down the right path? Does the intuition behind my thoughts make sense, should I change course, or should I abandon the thought altogether? This is truly what happens in the mind of an intelligent and sentient being at the level of genius: not only the quest for knowledge but the ability to understand and recognize correctness wherever the path has led.

In this, LLMs were at a distinct disadvantage because they are static capsules of knowledge frozen in time (inside a neural network). In many ways they still are. However, OpenAI has done something truly ingenious to begin dealing with this limitation. First, you have to understand why being static rather than dynamic is such a bad thing. If I ask you a question and tell you that the only way you can answer is to spit out the first thing that comes to mind, without thinking, you would on some occasions give the wrong answer. And the more difficult the question, the more likely the answer would be wrong.

But human beings don't operate under such a constraint. They think things through in proportion to the difficulty of the question being asked. One initial criticism is that this model overthinks all of the time. Case in point: it took 6 seconds to process "hello".

https://preview.redd.it/aih5umfz4iod1.png?width=1459&format=png&auto=webp&s=65bef59c6f7cdb52e9bef56c6d65e1a64b32f0d3

Eventually, I am sure OpenAI will figure this out. Perhaps a gate orchestrator model?! Some things don't require much thought; just saying.
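For what it's worth, here is a toy sketch of what such a gate might look like. The heuristic (prompt length plus trigger words) is purely illustrative; a real gate would presumably be a small classifier model, and the model names are made up.

```python
REASONING_TRIGGERS = ("prove", "solve", "derive", "debug", "plan", "why")

def needs_deliberation(prompt: str) -> bool:
    # Toy heuristic: long prompts or reasoning-flavored words get the slow path.
    lowered = prompt.lower()
    return len(prompt.split()) > 30 or any(t in lowered for t in REASONING_TRIGGERS)

def route(prompt: str) -> str:
    return "reasoning-model" if needs_deliberation(prompt) else "fast-model"

print(route("hello"))                             # fast-model, no 6-second wait
print(route("Prove that sqrt(2) is irrational"))  # reasoning-model
```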

But back to the plausibility model concept. I don't know for certain that this is what is really going on, but I surmise. What I imagine is that smaller models (or the model itself) are quickly bringing information to a plausibility model. The mystery is: how on earth does the plausibility model "know" when it has achieved a qualitative output? Sam said something in an interview that leads me to believe that what's interesting about models since GPT-4 is that if you run something 10,000 times, somewhere in there is correctness. Getting the model to give you that answer consistently and reliably is the issue. Hence, hallucinations.

But what if you could generate responses and have a model check each response for viability? It's the classic chicken-and-egg problem: does the correct answer come first, or the wrong one? Going even further, what if I present the model with many different answers? Choosing the one that makes the most sense makes the problem solving a little easier. It all becomes recursively probabilistic at this point: of all these incoming results, keep checking whether the path we're heading down is logical.
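One published technique in this family is self-consistency voting: sample many answers and treat agreement among them as a crude plausibility proxy. Here is a minimal sketch under that assumption; `sample_answer` is a hypothetical stand-in for a stochastic model call, and I am not claiming this is what o1 actually does.

```python
import random
from collections import Counter

def sample_answer(question: str) -> str:
    # Hypothetical stochastic model call; hard-coded outcomes for illustration.
    return random.choice(["42", "41", "42", "forty-two", "42"])

def most_plausible(question: str, n: int = 10) -> str:
    candidates = [sample_answer(question) for _ in range(n)]
    # Agreement among samples stands in for a learned plausibility score;
    # a real system might use a trained verifier/reward model instead.
    return Counter(candidates).most_common(1)[0][0]

print(most_plausible("What is six times seven?"))  # usually "42"
```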

Memory

In another methodology, a person keeps track of where they are in the problem-solving process. It is ok to get to a certain point and pause for a moment to plan where to go next. Hmmm. Memory here is vital. You must keep the proper context of where you are in your train of thought, or it is easy to lose track or get confused. Apparently OpenAI has figured out decent ways to do this.

Memory, frankly, is horrible in all LLMs, including GPT-4. Building up a context window is still a major issue for me, and the way the model refers back to it is terrible. In GPT-o1-preview you can tell there have been major strides in how memory is used. Not necessarily in the browser, but perhaps on their side, via backend services we humans would never see. Again, this would stem from the coordinating models firing thoughts back and forth. Memory on the backend is probably keeping track of all of that, which is probably the main reason the COT won't be spilling out to your browser, among many other reasons, such as entities stealing it. I digress.
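Here is a minimal sketch of what that backend memory might look like: the full trace lives server-side and only a summary ever reaches the browser. This is entirely my speculation, not OpenAI's design, and the class and its fields are made up.

```python
from dataclasses import dataclass, field

@dataclass
class ReasoningTrace:
    problem: str
    steps: list[str] = field(default_factory=list)

    def record(self, step: str) -> None:
        self.steps.append(step)  # full chain of thought stays on the backend

    def summary(self) -> str:
        # Only this ever reaches the user's browser.
        return f"Thought for {len(self.steps)} step(s)."

trace = ReasoningTrace("hello")
trace.record("Classify intent: simple greeting, minimal deliberation needed.")
print(trace.summary())  # -> Thought for 1 step(s).
```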

In the case of GPT-o1, memory seems to play a much bigger role and is actually used very well for the purpose of thinking.

Clarity

I am blown away by the totality of this. The promise of what this is, is so clear. Something new is here. The model feels and acts different: more confident and clear. In fact, the model will ask you for clarification when you are conversing with it. Amazingly, it feels the need to fully grasp the input you are giving it.

https://preview.redd.it/dr8zsc235iod1.png?width=1201&format=png&auto=webp&s=9f76caa2efe0251c414162faabc389132f4310e8

Whoa. That's just wild! It's refreshing too. It "knows" it's about to head into a situation and says, wait a minute, let me get a better understanding here before we begin.

Results and Reasoning

The results are spectacular. It's not perfect, and for the sake of not posting too many images I had to clean up my prompt so that it wouldn't be confused by something it had asked me to actually clarify in the first place. So while it isn't perfect, it sure as hell is a major advancement in artificial intelligence.

Here is a one-shot prompt that GPT-4 and 4o continually fail at. The reason I like this prompt is that it was something I saw in a movie, and as soon as I saw the person write down the date the way the other guy asked him to, I knew right away what was about to happen. Living in the US and traveling abroad, you notice oddities that are just the way things are outside of one's bubble. The metric system, for example. Italy is notorious for giving Americans speeding tickets, and to me the reason is that they have no clue how fast they are going with that damn speedometer in KPH. I digress. The point is, you have to "know" certain things about culture and likelihood to get the answer immediately. You have to reason through the information quickly to arrive at the correct answer. There is a degree of obviousness, but it comes not just from being smart; it comes from having experienced things in the world.

Here is GPT-o1-preview one-shotting the hell out of this story puzzle.

https://preview.redd.it/z6vdhal55iod1.png?width=1057&format=png&auto=webp&s=17d6499286d671449ca9a62fe44eba2ed37f9112

https://preview.redd.it/grphx9q65iod1.png?width=616&format=png&auto=webp&s=52457b4bd11c230590c2583aac6660b3d6b65e92

https://preview.redd.it/j0g5wm575iod1.png?width=796&format=png&auto=webp&s=cb258066c771c35ef5826ce7b37287dfc8ac712a

As I said, GPT-4 and 4o could not do this in one shot, no way, no how. I am truly amazed.

The Bad

Not everything is perfect here. The fact that this model can't not think before certain responses is a fault that OAI needs to address. There is no way we won't want to be using this model all of the damn time instead of 4o, so it not knowing when to think and when to just come out with an answer will be a peculiar thing. With that said, perhaps they are imagining a time when there are acres and acres of Nvidia Blackwell GPUs that will run this in near real time no matter the thought process.

Also, the amount of safety that is embedded into this is remarkable. I would have written a section on a safety model, which is probably coordinating here too, but I think you get the point: checks upon checks.

The model seems a little stiff on personality, and I am unclear about the verbosity of its answers. You wouldn't believe it from my long posts, but when I am learning something or interacting, I am looking for the shortest, clearest answer you can give. I can't really tell if that has been achieved here, and conversing while waiting multiple seconds per reply is not something I am going to do just to find out.

Which brings me to the main complaint as of right now: the rate limit is absurd. lol. I mean, 30 messages per week; how can you even imagine working with that? People will be screaming about this for months, and rightly so. Jensen can't get those GPUs to OpenAI fast enough, I tell you. Here again, two years later, and we are going to be capability-starved by latency and throughput. I am just being greedy.

Final Thoughts

In the words of Wes Roth, "I am stunned". When the limitations are removed, throughput and latency are achieved, and this beast is let loose, I have a feeling this will be the dawn of a new era of intelligence. In this way, humanity has truly arrived at the dawn of a man-made and plausibly sentient intelligence. There are many engineering feats left to overcome, but we are at a point where, on this date, 9/12/2024, the world is forever changed. The thing is, though, this is only showcasing knowledge retrieval and reasoning. It will be interesting to see what can be done with vision, hearing, long-term memory, and true learning.

The things that will be built with this may be truly amazing. The enterprise implications are going to be profound.

Great job OpenAI!

submitted by /u/Xtianus21