Bing AI chat messages are being hijacked by ads pushing malware.
Malvertising has made its way to Bing's chatbot/search engine.
Cybersecurity researchers observed a malicious ad being served as part of the ChatGPT-powered, AI-generated answer to a search query.
Malvertising is a practice where hackers trick ad networks into displaying ads that look legitimate but are actually malicious.
Microsoft integrated ChatGPT into Bing earlier this year and has since started monetizing it.
When a user types in a query, they get an answer paired with sponsored links.
In this instance, the researchers were served a sponsored link that redirected them to a malicious site.
Threat actors continue to leverage search ads to redirect users to malicious sites hosting malware.
Bing Chat serves some of the same ads seen via a traditional Bing query.