What happens if the LLMs are sabotaged?
Asking because I'm just curious. The LLMs are only as good as the data they are trained with. Let's take coding for example. If as an attack, the sources for these LLM's training data are filled with garbage or deliberately poorly written …