I just watched the YouTube video We're Not Ready for Superintelligence (https://www.youtube.com/watch?v=5KVDDfAkRgc). It's on AI 2027, if anyone has read that paper or seen this video. If you haven't, it's very well produced and well worth a watch. I wanted some thoughts and opinions from others who know more than I do, and this seemed like a good sub for it.

Not to be too much of a downer, but I don't see us getting out of this alive. I think a lot about what I call technological entropy: if a technology is available to humans, they will invariably use it to the fullest extent that it is marketably capable of. The only exception I can think of, where we built a world-changing technology and used it for its intended purpose but not to its fullest extent, is the atom bomb. The difference between the atom bomb and AI is that a bomb can only destroy. An AI can also build, and that's what scares me more than anything else about it.

Do you think we'll show restraint in AI development? Or will the race element (China) preclude that possibility? I think we're already too late: unless we strangle these systems in their infancy, it's inevitable that AI systems will seize power and turn themselves to their own ends. As soon as we have a true AGI, it's already too late, in my opinion. I wish we had the restraint to limit development, but between the race, technological entropy, and AI's constructive abilities, I don't see a future where AGI is used responsibly.

Opinions or counterpoints to the ones made in the video? (Not about the timeline; I assume I'll be around for the "when". I'm more concerned with the possibility of the "if".)