As we augment our lives with increasing assistance from AI/machine learning, our contributions to society will become more and more similar.
No matter the job, whether writer, programmer, artist, student or teacher, AI is slowly making all our work feel the same.
Where I work, everyone using GPT seems to produce the same kind of work. And as that work enters the training datasets, the feedback loop will make their future output even more generic.
This is exacerbated by the fact that only a few monolithic corporations control the AI tools we're using.
And if we neuralink with the same AI datasets in the far future, talking and working with each other will feel depressingly interchangeable. It will be hard to hold on to unique perspectives and human originality.
What do you think? How can this be avoided?