chatGPT might be more useful than AGI

If AGI means "human level" intelligence, the v1.0 might be slow, prohibitively expensive, and stupid. (Picture an AGI tier list.)

chatGPT costs something like 1-2c per second of compute, call it $60/hr. If you are paying for an artificial intelligence to slowly type, look things up, slowly read, forget things, sleep(?!) (and so on), it might seem a huge step backwards.
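A minimal sketch of that back-of-envelope, in Python. The per-second rate is my assumption for illustration, not a published price:

```python
# Back-of-envelope: hourly cost of an AI "worker" at an assumed
# per-second compute rate (illustrative numbers, not published prices).
cents_per_second = 1.7                      # assumed rate while "thinking"
cost_per_hour = cents_per_second / 100 * 3600
print(f"~${cost_per_hour:.0f}/hr")          # ~$61/hr, roughly a human wage
```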

:::: you can stop reading ::::

TLDR: AGI v1 dumb & v expensive / chatGPT great! / chatGPT+more+more = meh / intelligence, hmm. Humany? hmm. / AGI ... crap at first.

It's true that a real "general" intelligence would be profoundly amazing. Maybe you don't even need agency.

:: This post came out of a joke, where an early-adopter AI enthusiast gets first access to the first AGI and it slowly replies with "bro, wut" or "i dunno google it". It then goes on to delete things, misspell things, and in the end not send anything anyway. I'm fairly sure that would count as an AGI, if it were truly general (and you were talking crap).

:: I wanted to acknowledge chatGPT's talents: huge speed, and the ability to give wet "all-encompassing" answers from all directions at once.

If it's slow and expensive, and not all that smart, AGI might be of limited use. It's like one person. If you have a team of them working together you might get places, but they have to organise themselves. If they work faster, that'll get more interesting.

Agency is not a given. Agency seems really dangerous to me. You'd need to be able to clearly monitor its evolving belief system / moral compass, especially in v1. You might not need agency at all.

It feels like something capable of learning new skills and creating new things would have/need the intrinsic ability to teach itself. And it might then have to teach itself, as we do. This involves being wrong, taking guesses, taking time, learning and rejecting bad input, working things out by eliminating bad guesses. Being stupid and slow. I've heard more creative brains are that way because information moves more slowly through them (exposing more connections along the way).

That feels like a different offering from the "chatGPT 2+" that openAI are most likely working on. I'm not so sure that just bolting new capabilities onto an LLM is the way to do it.

I didn't expect to say this, but maybe they "got lucky" with LLMs. Threw text at GPUs and got a language-based mind. Maybe an actual AGI needs to be a completely different design, probably including a language model along with others. Maybe this is the plateau some say is coming.

The point of this post is to say AGI might be far less useful than chatGPT when it first arrives. Humans are generalists, and it shows. Jack of all trades. Yes, maybe AGI 2027, but you might be using chatGPT till 2030. For example.

r/singularity seems fairly obsessed with the arrival of AGI, and its soon-ness. Which is fine, and I too have a short timeline. But AGI might be hugely disappointing, and possibly not all that useful for getting to superintelligence. Also enormously demanding, in terms of electricity and hardware: chatGPT and GPT4 were a real struggle for openAI. The flip side of Moore's Law is that it genuinely takes time to ramp up compute capacity, and you might want to think in terms of cost-per-time. So the workers are probably not going to be replaced overnight, because AIs will be more expensive for 3-10 years regardless of ability. I saw somebody who pasted a massive tax document in and was charged $13 or something.
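For a sense of where a bill like that $13 comes from, here's a rough token-cost sketch. The per-1K-token price and the word count are assumptions for illustration (GPT-4-class API pricing was in this ballpark); check current pricing before relying on it:

```python
# Rough token-cost estimate for pasting a huge document into an LLM API.
# Price per 1K tokens is an assumption for illustration, not a quote.
def api_cost(n_words: int, usd_per_1k_tokens: float = 0.06,
             tokens_per_word: float = 1.33) -> float:
    """Approximate cost of sending n_words of text through the API."""
    tokens = n_words * tokens_per_word
    return tokens / 1000 * usd_per_1k_tokens

# A "massive tax document": say 150,000 words in total.
print(f"${api_cost(150_000):.2f}")   # ~ $11.97, same ballpark as that $13
```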

There's a question of "personality" or "perspective" in intelligence, I think. When you talk to an expert in X, you are choosing them. They are playing a role, with a perspective. A teacher in a field will answer differently from a business owner, or from an early-career person. They all might have the expertise to answer the question, but with different perspectives: looking at different goals, with different value systems/beliefs. Is this relevant to intelligence? Yes, I think it is, because it starts to knock on the door of "there is no answer: only stuff".

"give me 5 ways to make money with web design" "why web design?" (etc) "why money" (etc) 

Before you know it you've been spun 360. This is what a super-intelligent human (who gave a ____) would do for you, but things start to lose meaning a bit when the rails come off. Maybe.

I just feel like people are expecting "chatGPT but less wet": less confusable, longer code, better characters, able to do maths, able to drive robots. I'm not sure that's it. Sam Altman is looking more like a say-anything dream-weaver as time goes on.

It might be that the desperate drive to do lastthing++ is not the right path. And this might be why Google is looking uninterested.

I'm still terrified of Gemini.

I just thought it was funny that AGI might turn out to look real dumb, but still be 100% legit, and an enormous human achievement.

This post has taken an hour to write, is the third attempt (the first was removed) and is still rambly. That's an AGI-level post. $60 please.

TLDR2: AGI might be disappointing at first, replace nobody (expensive & slow) and be useless.
