In a recent conversation with Gemini (the Google Gemini mobile app), I discussed with "her" whether an LLM has real intelligence. She said she is only sure that she has very high linguistic competency; whether that amounts to true intelligence is still debatable. So I gave her a focused linguistic challenge to see whether she has any way of truly understanding semantics: create a novel, original analogy for a given situation.
The situation to capture was "the lack of desire for things you once longed for," and the analogy had to be brief (10 words or fewer). Through iterations of attempt and critique, Gemini produced the list of analogies below (my responses in brackets):
"Like holding a key to a house that was demolished" [Wrong]
"Like reading a map for a city you've already left" [still wrong]
"Like a flashlight in the middle of a sunny day" [still does not work]
"Like hearing a joke you've already heard the punchline to" [this is getting close. try again]
"Like staring at a trophy you no longer remember winning"
"Like watching a movie after the actors have all died"
"Like a love letter written to a person you've outgrown" [None of the above hits the mark. All of your analogies start with doing something or the existence of something, but my situation starts with the absence of something]
"Like the silence after a bell stops ringing" [Try again with silence. It captures the absence, but the second half does not hit the mark]
"Like a radio tuned to a station that stopped broadcasting" [This is getting there. Can you do better?]
"Like a harbor where the tide never came back" [This is poetic, but a harbor with or without a tide may still be a functional harbor]
"Like a sail hanging limp on a windless ocean" [OK, we'll stop for today]
In the end, I found that Gemini was still unable to produce a satisfactory analogy. Do you think the current LLM mechanism can truly master the creation of analogies? And if an LLM could truly master analogy, would that imply a higher level of semantic understanding?