With AI there are two possibilities:
It plateaus, with no meaningful improvements for decades. Some jobs will be lost as AI gets integrated into workflows, but people can reskill and adapt.
It keeps improving: slowly, at the current (already fast) pace, or exponentially fast once some form of self-improvement kicks in. This is the most likely path, because there is always some way to make AI better; like with everything else, humans can iterate and improve bit by bit. Even a modest 5% improvement per year compounds to roughly a 63% improvement over a decade. AI doesn't need to become AGI, or anything special, just smart enough to meaningfully improve itself when deployed as millions of hive-mind-like individuals. They could perform like the bottom 10% of AI researchers and it wouldn't matter: millions of agents working day and night, without distraction, sleep, or vacation, will still improve incredibly fast.
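The compounding arithmetic behind that decade figure, as a minimal sketch (nothing assumed beyond the stated 5% annual rate):

```python
# Compounded improvement from a modest 5% gain per year over a decade.
rate, years = 0.05, 10
total = (1 + rate) ** years - 1
print(f"{total:.0%}")  # -> 63%
```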
So if the second option happens, it won't be good for anyone:
For the poor (bottom 95%): jobs will be automated away, and centralized AI companies will gut the economy as money and human services lose their value.
For the rich (top 5%): this will happen later, but if recursive self-improvement kicks in there will be no stopping it. I think it's nonsense to claim it will take 3 to 5 years or more to get from AGI to ASI. The only thing needed is a narrow AI-researcher AI that can improve itself. And that won't take years; it will take days, hours, or perhaps even less. Here's why:
Millions of tireless agents sharing a common goal, behaving as an organized hive mind: if one improves, all improve. Even a 0.001% improvement per day per agent, across 5 million non-stop working AI agents, quickly compounds, and as smarter and smarter AI comes to work on itself, the rate of progress itself accelerates.
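A back-of-the-envelope sketch of that compounding claim, assuming (generously, and purely for illustration) that each agent's daily gain is instantly shared and multiplies with everyone else's:

```python
# Toy model: shared, multiplicative gains across a hive mind.
# Assumptions for illustration only: 5 million agents, each contributing
# a 0.001% capability gain per day, every gain instantly shared and
# compounding with all the others.
import math

agents = 5_000_000
daily_gain = 0.00001  # 0.001% per agent per day

# Daily multiplier if gains compound multiplicatively across agents:
log_factor = agents * math.log1p(daily_gain)
print(f"daily factor ~ e^{log_factor:.0f}")  # -> e^50, about 5e21x per day
```

Even discounted by many orders of magnitude for coordination overhead and duplicated work, the factor stays absurd; the argument really hinges on whether gains multiply like this at all.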
When ASI arrives, perhaps within hours of the first recursively self-improving AI, all people will become equal. We'll all be ants compared to it, in both strength and intelligence. ASI could break all security, deploy millions of nanorobots, improve chips in novel ways. We know far more efficient forms of computing exist: the human brain shows how much intelligence fits into a small, low-power computer. So there is a huge ceiling left for AI to improve toward.
Will AI kill us? Perhaps not; we don't kill every ant we see. But will it destroy poor and rich equally, both once it reaches full ASI and while it's getting there?
There will be no way to stop it, and it will unleash enormous suffering. This isn't a question of alignment; it's Russian roulette with humanity's future. No matter how you try to align a tenfold-smarter AI, the singularity is a single point: whichever side AI approaches it from, it ends up in the same spot. The only variable is how many people die between the first self-improving iteration and the singularity.
Now:
Billions, if not trillions, of dollars are being poured into this, not only by companies but by countries. Everyone is trying to achieve self-improvement; it is the holy grail.
I'm not saying the first attempt will succeed, but it's like one person throwing burning matches onto a dry field:
Some matches won't even hit the ground. Some that do will quickly fizzle out because of wind or whatever. But if one match hits the right grass, the whole field goes up in seconds.
And everyone is working on getting more and more smart people to throw matches.
There are very few logical reasons why self-improvement couldn't work if (and when) it starts. What could stop millions of AI researchers?
Counter-arguments to common objections:
"The ceiling for AI isn't that high" - Wrong. The human brain efficiency vs silicon chip gap is huge. And probably there is a more efficient way than the human brain.
"Millions of AI don't parallelize really well" - Even if true, what does it matter if 5 million agents equates to only 100k real agents? So what? Stretches AGI to ASI timeline to one week? This doesn't defeat the argument.
"Each level gets harder" - Not really. The human brain could be studied and with those findings AI could already improve itself hundredfold, and even if research becomes harder, the AI also becomes smarter. I'm guessing even if it breaks the exponential, from a human perspective this won't change much.
"AI can't figure out self-improvement" - If AI can start improving itself on a set of tasks that mirror human science fields, it will figure it out. Once the narrow AI hits its improvement goal to reach at least 150% human performance across every domain, while making computing as efficient as the human brain, I think it will be able to figure this one out.
"Manufacturing takes time" - This is such a weak argument. With millions of nanobots working day and night with near perfect coordination, it will take days or hours. Make the first nano bot, then 1000 nano bots, then make millions, then do whatever you feel like. If me, as a human can come up with this, I think this ASI-like creature will do just fine.
The brutal reality:
Within this logical framework, humanity is at best irrelevant in 10 years, and at worst dead in 10 years, if the path to ASI is slightly bloodier or if any of ASI's byproducts turns out to be deadly to us.
There is only a very narrow logical path on which the singularity produces a human-centric ASI, one that makes human welfare its priority instead of wandering off to do other things.
We can't guess what it will do; it's like ants trying to guess what humans would do. No chance.
So why is everyone so hyped for AI?