My prediction for AI – how it could solve the "strawberry" problem and almost anything you throw at it

AI is already much better at coding than it used to be. Infamously, if you ask an LLM how many Rs are in "strawberry," it may very well give you the wrong answer (this is true even of OpenAI's new o1 model, according to YouTuber Fireship). The AI's ability to code could fix this. If you asked a chatbot a question that an LLM is not well suited to answer, then under the hood, it could prompt itself with something like:

Devise an algorithm to answer the question:

<your prompt or some variation of it—e.g., How many Rs are in "strawberry"?>

Then, the company that developed the chatbot could have it append whatever additional boilerplate it wants to the prompt it feeds itself (for example, "Determine which programming language is best suited for this task"). The chain-of-thought approach used by OpenAI's o1 model could increase the likelihood that the resulting algorithm works.
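As a rough sketch of what that under-the-hood loop might look like (everything here is hypothetical: `call_llm` stands in for whatever model API the developer actually uses, and `run_generated_code` is sketched further down):

```python
# Hypothetical self-prompting pipeline. "call_llm" is a placeholder for a
# real model API; "run_generated_code" is defined in a later sketch.

def call_llm(prompt: str) -> str:
    """Placeholder for an actual LLM API call."""
    raise NotImplementedError("wire this up to a real model")

def answer_via_code(question: str) -> str:
    # Step 1: the chatbot prompts itself to devise an algorithm.
    code = call_llm(
        "Devise an algorithm to answer the question:\n\n"
        f"{question}\n\n"
        "Determine which programming language is best suited for this task, "
        "and return only runnable code."
    )
    # Step 2: execute the generated code (ideally in a sandbox).
    output = run_generated_code(code)
    # Step 3: package the program's output into a user-facing answer.
    return call_llm(f"Using the result {output!r}, answer: {question}")
```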

After that, it would have an algorithm. This algorithm might, for example, declare a variable called count and initialize it to 0, store each character of "strawberry" in an array, iterate through the array, check whether the character at each index is equal to "r" or "R," and if it is, increment the count by 1. Then, it might print the value of count.
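In Python, for instance, that algorithm might come out looking something like this (a minimal sketch of exactly the steps just described; the model could of course pick a different language):

```python
# Count how many Rs appear in "strawberry", upper- or lowercase.
count = 0
chars = list("strawberry")   # store each character in an array
for char in chars:           # iterate through the array
    if char in ("r", "R"):   # check for "r" or "R" at each index
        count += 1           # increment the count by 1
print(count)                 # prints 3
```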

Once the algorithm is written, the chatbot would execute it (which would output 3), take the algorithm's output, package it into an answer, and provide you with that answer.
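The execution step is where the safeguards mentioned below would have to live. As one minimal sketch (assuming Python and a plain subprocess; a real system would need a far stronger sandbox), the `run_generated_code` helper from the earlier sketch might look like:

```python
import os
import subprocess
import sys
import tempfile

def run_generated_code(code: str, timeout: float = 5.0) -> str:
    """Run model-written Python in a separate process with a timeout.

    Minimal sketch only: a production system would also need resource
    limits, no network access, and a restricted filesystem.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout.strip()
    finally:
        os.unlink(path)
```

Feeding the counting sketch above into `run_generated_code` would return "3", which the chatbot could then package into its reply.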

With this approach, LLMs would be able to do many things they currently can't. It would be extremely powerful and potentially dangerous (especially giving AI free rein to write and execute code), but it seems like a logical step down the road, assuming developers put up many safeguards and do extensive testing.

It would essentially be able to code the functionality necessary to satisfy your prompt. It could give itself the tools to do what it needs to do.

That's my prediction, anyway. I just wanted to state it in case it ever comes true. I also wanted to discuss the feasibility of this approach. Is there any chance of something like this coming along at some point, or are there good reasons why it never will?

TL;DR: I predict AI chatbots will someday be able to prompt themselves to write and execute code that allows them to satisfy your prompt with better accuracy. What's your take?

submitted by /u/DugFreely