LLMs (large language models) are currently being explored by threat actors to create specialized malware: they build their own versions of ChatGPT, for example, and engineer them to disregard the moral and ethical safeguards preprogrammed into their design. This isn’t some type of “hack” or way of manipulating AI; it’s a new era of cyberwar.
I’m convinced most of these tools are backdoored Trojans that threat actors use to pivot into a network (or even a honeynet), but they exist nonetheless, and from the perspective of someone in IT they’re quite scary.
What do you think of this concept?