One of my worries about AI is already happening: in everyday life, people mistake generative output for genuine capability. Sometimes people (with limited understanding of how AI works) take what an AI outputs as a revelation when it's simply a product of the training data. And yet, at the same time, AI sometimes appears to show an unexpected capability or solution. I wonder how we will be able to tell the difference between the two. If we can't, and we start acting on what we take to be a revelation, it could lead us down a very deep rabbit hole.