The "AI experts doomsaying about AI" feels like a COVID situation (antivax "doctors" fearmongering on assumptions) for me, but my Machine Learning knowledge is few years old. What am I missing? What experts should I follow, that provide and explain realistic warnings?
The "AI experts doomsaying about AI" feels like a COVID situation (antivax "doctors" fearmongering on assumptions) for me, but my Machine Learning knowledge is few years old. What am I missing? What experts should I follow, that provide and explain realistic warnings?

The "AI experts doomsaying about AI" feels like a COVID situation (antivax "doctors" fearmongering on assumptions) for me, but my Machine Learning knowledge is few years old. What am I missing? What experts should I follow, that provide and explain realistic warnings?

Hello!

So, I've recently watched some podcasts with AI experts talking about how AI will be the end of us all, or become smarter than we can ever imagine - but I just don't see how that could happen, with my (limited) knowledge of ML models. Every argument is based on speculation along the lines of "The ML model already has an IQ of XXX, it will have 10x more eventually" or "the ML model can improve itself, so it may learn how to hack us and break free". But I don't see how anything like that could happen with the way current ML algorithms and models work.

However, my background with ML and AI is that I had a few classes during my Master's in IT a few years ago, but it was not my field of study. I have an idea of how ML models such as neural networks or genetic algorithms work in theory, and almost none of the most commonly presented "doom scenarios" match up with what I know about them.

I can't come up with a realistic scenario of what would have to happen for an ML model to do something unexpected and evil, e.g. learning how to hack out of its sandbox. An ML model is just "dumb" math and statistics that lives on a server in someone's basement - you feed it a bunch of numbers, it gives you a bunch of numbers back. If the numbers are wrong, it semi-randomly changes some constants and then next time it gives you different numbers.

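To make that picture concrete, here's a toy sketch of what I mean - my own illustration, not any real system, and real networks are tuned with gradients rather than random nudges - but the "numbers in, numbers out, adjust constants based on feedback" loop has the same shape:

```python
import random

# Toy "model": nothing but a list of constants (weights) applied to input numbers.
weights = [random.uniform(-1, 1) for _ in range(3)]

def predict(weights, inputs):
    # Numbers in, numbers out - just arithmetic.
    return sum(w * x for w, x in zip(weights, inputs))

def error(weights, dataset):
    # How wrong the outputs are on the data *we* chose to feed it.
    return sum((predict(weights, x) - y) ** 2 for x, y in dataset)

# Training data: inputs plus the answers we decided are correct.
dataset = [([1, 2, 3], 14), ([0, 1, 0], 2), ([2, 0, 1], 5)]

for step in range(10_000):
    # Semi-randomly nudge one constant...
    candidate = list(weights)
    candidate[random.randrange(len(candidate))] += random.uniform(-0.1, 0.1)
    # ...and keep the change only if the feedback says the output got less wrong.
    if error(candidate, dataset) < error(weights, dataset):
        weights = candidate

print(weights, error(weights, dataset))
```
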
The whole of ML is built on a feedback loop. If a model wanted to do something unexpected and evil, it would have to fail at it several times first - in order to learn that it should try a different option. That's a lot of opportunities to just stop it. ML has no concept of the data it's working with, and more importantly, its quality depends directly on its fitness function - on how its solution is rated.

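The same point in genetic-algorithm form (again just a toy of my own, guessing at the simplest possible setup): candidates only survive if the fitness function we wrote rewards them, so behaviour that the fitness function never scores simply gets selected away.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "hello world"

def fitness(candidate: str) -> int:
    # Our rating of a solution: how many characters match the target we picked.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    # Semi-random change at one position.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

# Random starting population.
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]

for generation in range(1000):
    # The feedback loop: rate every candidate, keep the best, mutate copies of them.
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

print(generation, population[0])
```
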
Which also brings me to my second point - a lot of experts say that AI is getting smarter and will soon be X times smarter than a human. But I also don't see that as possible, since its results are directly tied to the data we train it on and the fitness function we use to give it feedback. I recently heard someone mention an analogy with Einstein: imagine the dumbest commoner talking to Einstein - except AIs are getting so smart that we will soon reach a point where Einstein is the commoner in that conversation, and the ML model is the Einstein.

But that would mean we would have to have a way to give the AI feedback - a fitness function capable of saying whether the AI is correct or not. It's as if the commoner were sitting behind Einstein the whole time he's coming up with his ideas, correcting every little piece of his theory. Of course Einstein wouldn't be able to come up with anything new; his results would eventually get reduced to what the commoner can understand.

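Here's the bottleneck I'm imagining, as a deliberately silly toy (the names and questions are made up for illustration): if the only feedback signal is agreement with the judge, then the highest-scoring model is the one that mirrors the judge's judgments, mistakes included - there is no reward for being smarter than the judge.

```python
QUESTIONS = ["is the earth round", "does time dilate at high speed"]

def commoner_judgment(question: str) -> str:
    # The judge's limited view of the correct answer (hypothetical labels).
    return {"is the earth round": "yes",
            "does time dilate at high speed": "no"}[question]

def score(answers: dict) -> int:
    # Fitness = agreement with the judge, the only feedback we can hand out.
    return sum(answers[q] == commoner_judgment(q) for q in QUESTIONS)

# An "Einstein" model that answers correctly, and a model that just mimics the judge.
einstein = {"is the earth round": "yes", "does time dilate at high speed": "yes"}
mimic = {q: commoner_judgment(q) for q in QUESTIONS}

print("Einstein's score:", score(einstein))  # 1 - penalised for knowing better
print("Mimic's score:", score(mimic))        # 2 - perfect, by copying the judge
```
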
So when people say that AIs will get super smart and then unpredictable, I just don't see that happening. I agree they will get really good at tasks that we are already good at - because ML is really good at perfecting the one thing we already know about - but it can't come up with anything new.

I also don't see how you would have to set up the learning environment for an AI to even have a chance of escaping or causing some unexpected havoc - that is, unless you design and train the model to do that in the first place.

Is my understanding of ML correct? Both that the feedback loop necessarily limits the intelligence it can reach, and that it's literally numbers in, numbers out - it has no concept of the data it's working with, it's just semi-randomly changing constants in the algorithm to get its output to match what we want from it? Or are there advances in the ML field that I'm missing that would better allow an AI to somehow realize that it could try escaping its sandbox, spread onto the internet and start living there on its own?

On the other hand, I do agree that ML models ARE a problem and WILL really change society for the worse - but not because of some unexpected "it has decided it should rebel and wipe us all out" sci-fi stuff that every expert seems to claim "will soon happen at this rate", but because it's way, way better than humans at doing the one thing we tell it to do. Once people start using it to manipulate others - tasks like "keep users on Facebook at all times" or "figure out how to convince people to vote for me, based on the personal data we have about them" - it will figure it out.

It will also reduce the amount of knowledge people have, since they will rely on AIs for most of it instead of figuring things out themselves - which also hinders innovation, since every AI output is just the most probable answer based on what other people would say, which isn't necessarily correct or the best approach.

And the fact that everyone is focusing on the more click-bait warnings of "AI going rogue" or "AI getting too smart for us to handle" just takes attention away from the real problem. Since I also don't see how it would be technologically possible, it feels exactly like "antivax doctors talking about vaccines being the end of us all" - people with assumptions that sound correct and interesting, but have no basis in what's actually true or possible. But because I don't know whether I'm missing some key advancement in the ML field, I'm asking this question - the comparison to antivaxers may be completely wrong, and the experts may actually be warning against what's possible and true given the current state of the field. I would like to clarify this for myself, so I don't go around spreading false information.

So, is there anyone here who has better insight into the state and dangers of current ML? An expert I should follow, or a podcast, that actually provides and explains real scenarios - not "assuming the advancement continues at this rate, it will...", but rather the algorithms, or the ways they could be implemented, that would realistically allow for such a scenario?

EDIT: I just wanted to add that yes, if someone decided to make an AI whose goal is to figure out a way to live on its own and wipe out humanity, that would be a problem - and that's a task that could conceivably be taught to an ML model. But it would require a vast amount of resources, and we already have plenty of other technologies capable of something like this, such as nuclear or chemical weapons. From what I know about ML, building a nuclear bomb would be a lot easier than training an ML model for such a task. My problem is only with people saying that some AI will decide to go against its purpose and go rogue, or cause such problems against its creators' wishes.

submitted by /u/Mikina