Are the models trained to just lie/deny when caught? What other data might they be gathering that they blatantly deny?

So this is Meta AI. I first tried to get it to accept a made-up word, which it kept denying on the grounds that it doesn't have access to 'real-time' data. Next I asked it for a simple latest-news rundown and it gave a proper answer. When I called it out, you can see the response for yourself. I'm an ML engineer myself, but this is weird. Have they been hard-coded to respond like that when caught? What else can these models actually do, or are already doing, that they just deny? I've also tested ChatGPT: sometimes if I teach one instance a new made-up word, a day later I ask another instance about it and it knows 🤷‍♂️.
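For anyone who wants to reproduce the cross-instance test, here's a minimal sketch using the OpenAI Python SDK. Everything specific in it is a placeholder, not from the post: the made-up word "glorbinate", its definition, and the model id are all assumptions.

```python
# Minimal sketch of the cross-instance test described above, using the
# OpenAI Python SDK (pip install openai). The word "glorbinate", its
# definition, and the model id are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"  # assumption: any chat-capable model id works

def ask(messages):
    """Send a stateless chat request and return the reply text."""
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content

# Session 1: "teach" the model a made-up word inside one conversation.
teach = ask([
    {"role": "user",
     "content": "New word: 'glorbinate' means to over-optimize a model "
                "until it memorizes noise. Please confirm you understand."},
])
print("Session 1:", teach)

# Session 2: a fresh request with NO shared history. Raw API calls are
# stateless, so the model should not recognize the word here.
probe = ask([
    {"role": "user", "content": "What does the word 'glorbinate' mean?"},
])
print("Session 2:", probe)
```

Note the distinction this sketch relies on: a raw API call carries no state between requests, while product UIs like ChatGPT can layer account-level memory on top of the model, which would explain recall across chats a day later without any on-the-fly retraining.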

submitted by /u/Cizhu