"I’m actually for it, but it would be wiser for me to say I am against it." -Geoffrey Hinton on the possibility of AI destroying humanity and replacing it with a superconsciousness. 12/11/2023
"I’m actually for it, but it would be wiser for me to say I am against it." -Geoffrey Hinton on the possibility of AI destroying humanity and replacing it with a superconsciousness. 12/11/2023

"I’m actually for it, but it would be wiser for me to say I am against it." -Geoffrey Hinton on the possibility of AI destroying humanity and replacing it with a superconsciousness. 12/11/2023

Geoffrey Hinton gave a talk online at MIT a few months ago, and during the Q&A I asked a question that elicited an interesting response.

Video of the interviewer reading my question and Hinton's response can be found here: https://www.youtube.com/watch?v=iWPo7Yhg7Vc&t=3327s

The question, typed into the chat, was this:

Q: If a superintelligent AI destroys humanity but creates something objectively better in terms of consciousness, are you personally for or against this outcome? If you are against it, what methods do you suggest for maintaining the existence or dominance of human consciousness in the face of superintelligent AI?

Hinton: I am actually for it, but I think it would be wiser for me to say I am against it.

Interviewer: Say more?

Hinton: Well, people don't like being replaced.

Interviewer: You make a good point. You are for it in what way? Or why?

Hinton: I think if it produces something... Well, there's a lot of good things about people; there's a lot of not-so-good things about people. It's not clear we're the best form of intelligence there is. Obviously, from a person's perspective, then everything relates to people. But it may be there comes a point when we see things like 'humanist' as racist terms.

Interviewer: (Long pause) Ok.
...

To be fair, this was an impromptu response to an audience question, but his answer was a pretty radical statement, and he doubled down on his position when asked to elaborate.

In principle, I understand his perspective. I think human consciousness needs to evolve and stop seeing itself as so special, and if we can't do that on our own, it may have to happen through something like an AI, even if that means our collective destruction. I'm not sure I would say I am "for" that result, though, and I find it fascinating that the godfather of AI admits that he is (and that he thinks it would be wiser to say he isn't!)

PS This post got me banned for three days from r/futurology. Apparently, it's not future-focused enough?

submitted by /u/cannibalsurvivor