Let's consider language as an "abstraction" of the physical world. We use language to define "x" by giving an abstract idea of it, yet we already have to know what "x" is in order to imagine it.
By that logic, we can consider an LLM a small abstract replica of the world. But here a problem arises:
How can such a model define a concept we haven't defined yet? Is language itself a limitation?