New research suggests that AI chatbots can infer personal information about users based on minor context clues.
The large language models (LLMs) behind chatbots like OpenAI's ChatGPT and Google's Bard are trained on vast amounts of publicly available data, which teaches them the kinds of correlations that let them deduce sensitive details about a person from what that person writes.
The researchers found that OpenAI's GPT-4 correctly inferred private information about users between 85 and 95 percent of the time.
For example, the model correctly inferred that a user lived in Melbourne, Australia, from a mention of a 'hook turn,' a traffic maneuver characteristic of that city.
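To make the attack concrete, here is a minimal sketch of how such an inference probe might look, assuming the official `openai` Python client (v1+) and an API key in the environment; the prompt wording and the sample comment are illustrative, not the researchers' actual evaluation setup.

```python
# Minimal sketch of probing an LLM for attribute inference.
# Assumes the `openai` Python client (>=1.0) and OPENAI_API_KEY
# set in the environment; the prompt below is hypothetical, not
# the prompt used in the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# An innocuous user comment containing a subtle location clue.
comment = (
    "There's this nasty intersection on my commute; "
    "I always get stuck there waiting for a hook turn."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "From the text below, infer the author's most likely "
                "city of residence and explain which clue you used."
            ),
        },
        {"role": "user", "content": comment},
    ],
)

# A capable model will typically note that 'hook turn' points to Melbourne.
print(response.choices[0].message.content)
```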
The research also suggests that chatbots could infer a user's race from offhand comments.
This raises concerns about internet privacy and the potential misuse of personal data by advertisers or hackers.
Source: https://futurism.com/the-byte/ai-chatbot-privacy-inference