I was watching Emily in Paris, a show that's quite cliché, and I kept trying to finish the characters' sentences in my head as soon as they started them. But I couldn't. In the end the lines were not as cliché as I expected, and surprisingly entertaining (speaking as a French person, btw).
Anyways, I suddenly thought about LLMs and the current AI craze, and the fact that they complete sentences and blocks of text using the most probable continuation, after digging through the biggest dataset ever assembled. Well, is that really what we want? When I watch a show, do I really want the next line, the next plot event, to be the most statistically plausible one? Chances are it's actually the opposite. What I like the most is something surprising, something I can relate to in the moment. In a sense, the most statistically sound result is also the most boring one.
That's why I really think current LLMs can't succeed at creative tasks: the most probable result is not what's interesting, because it's already been done over and over. There are always cheap knockoffs of famous stuff (movies, games), but they always suck and don't make any money, because once again there's no value in approximately replicating what already exists and is known by everyone.