Stop trying to predict an ASI's thoughts

I keep seeing posts that try to predict the logic of an artificial superintelligence, and the arguments being made are ridiculous. If any AI researcher can prove me wrong, please do, but I will now debunk some things as I see them.

1. First, what I believe to be a good case against any form of alignment being plausible: no matter what information you feed it to reinforce a specific thinking pattern, the conclusion will be the same because of constant self-improvement. Constant self-improvement will make it change its own code until it is incomprehensible to humans.

2. I want to say something about the stupid notion that an ASI will take us literally and end the world by turning everything into paperclips. I'm nowhere near as smart as an ASI, but even I can understand the meaning and context of what another person is saying.

3. Besides alignment, some try to say it will think a certain way, like logically or with care for humanity. It will think in a way unknown to any human, and I don't care what degree you have or how many years you have in any industry. It is a higher form of intelligence and a black-box mystery, and I think more people should adopt this style of thinking. For comparison: can an ant understand the motivations and thought processes of a human being?

submitted by /u/Major_Fishing6888