If you're someone who has never used AI, or has only used ChatGPT 3.5, I'm going to be highly skeptical of any claims you make about AI capabilities and limitations.
We often wind up seeing strong claims, one way or the other, that are not based in reality but are instead motivated by fear or hatred. There are people who hate AI image generation because it can never create "real art," while simultaneously fearing it will become so good that it steals all artists' jobs. People are so emotionally charged and cloudy-headed that they cannot make a level-headed, honest assessment of this technology.
People who have never used ChatGPT, or have only used 3.5, love to parrot the same talking points about how it's useless because it makes mistakes. What they never seem to consider is how ChatGPT actually works: it generates text by predicting likely next words from patterns in its training data, not by looking facts up in a database. If they knew that, they would realize it is unreasonable to expect it to have perfect knowledge and understanding, in much the same way that humans struggle to remember things they learned years ago. Can you accurately recall everything you studied in college? If someone asked you to solve a math problem without a calculator or scratch paper, would you arrive at the correct answer? If you cannot do these things, should I question whether you have any intelligence?
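To make that concrete, here's a toy sketch in Python of the next-word-prediction idea these models are built on. This is nothing like ChatGPT's real architecture or scale, and the "training text" is made up for illustration, but it shows why the output is a statistical guess rather than a fact retrieved from a database:

```python
# Toy illustration only: real language models use neural networks trained on
# vast text, not bigram counts. The principle -- pick each next word from
# learned probabilities -- is the same in spirit.
import random

# Made-up "training corpus" for illustration
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which words follow which: a tiny "model" of the text
counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, []).append(nxt)

def generate(start, length=6):
    word, out = start, [start]
    for _ in range(length):
        options = counts.get(word)
        if not options:
            break
        # Sample the next word in proportion to how often it followed the
        # current word in training -- plausible, but never guaranteed correct.
        word = random.choice(options)
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat" -- or nonsense
```

Run it a few times and you'll get different sentences, some sensible and some not. Scale that idea up enormously and you get something far more capable, but still fundamentally predicting rather than knowing, which is why occasional mistakes are baked in.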
It might sound like I'm holding AI up on some grand pedestal, but really I'm just annoyed and frustrated by hearing the same bad arguments made over and over. You can't say anything to correct anyone without getting dog-piled with downvotes.
Large language models are impressive, able to do things computers have struggled with since their inception. I'm sure Alan Turing would have been excited by all this if he were still alive today. Criticizing large language models for not being able to easily solve complicated math problems is like criticizing cars for not being able to cross a deep river. Cars are not boats, and large language models are not calculators.