As NLP hype becomes more prevalent, we'd expect a (probably exponentially) increasing share of scraped data sources to become filled with AI-generated content, no? Then wouldn't AI end up being trained on this data without necessarily having a 'critical thinking' module to check its work?
Not just ChatGPT-level output either, but also lesser AI companies churning out cheap adware and upvote bots.
I wonder if ChatGPT et al. could have a 'quality sensor' module that does what I do on Reddit: run sentiment analysis on the most upvoted comments to judge whether the article/answer/assertion is full of shit. Not foolproof, but short of actual critical reasoning, it seems like a good start.
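To make the idea concrete, here's a toy sketch of that 'quality sensor': score an assertion by the upvote-weighted sentiment of its top comments. Everything here is made up for illustration (the tiny word lexicon, the sample comments, the function names); a real system would use a trained sentiment model and actual comment data.

```python
import re

# Hypothetical mini-lexicon; a real sensor would use a proper sentiment model.
NEG_WORDS = {"wrong", "misleading", "nonsense", "fake", "bullshit", "debunked"}
POS_WORDS = {"correct", "accurate", "insightful", "helpful", "true", "solid"}

def comment_sentiment(text: str) -> int:
    """Crude lexicon score: +1 per positive word, -1 per negative word."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(w in POS_WORDS for w in words) - sum(w in NEG_WORDS for w in words)

def quality_score(comments: list[tuple[str, int]]) -> float:
    """Upvote-weighted average sentiment across (text, upvotes) pairs."""
    total_votes = sum(votes for _, votes in comments) or 1
    return sum(comment_sentiment(text) * votes for text, votes in comments) / total_votes

# Made-up example thread: mostly positive reception with one strong objection.
comments = [
    ("This is accurate and helpful", 120),
    ("Totally misleading, already debunked", 45),
    ("solid explanation", 30),
]
print(round(quality_score(comments), 3))  # → 0.923 (net positive reception)
```

A positive score suggests the crowd vouches for the claim; a strongly negative one flags it for skepticism. Obviously gameable by the same upvote bots the post worries about, which is part of the arms-race point.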
Feels like we may soon enter an arms race where AIs need to detect AI-generated content in order to ensure their own quality.