Controlling Content Moderation in Generative AI: Ensuring Safe and Accurate Responses for Company Data

I'm supposed to analyse and implement an Azure OpenAI solution to use as a chatbot answering customer questions at our company, using our own data (product manuals and repair manuals) for training.
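
The pattern I have in mind is Azure OpenAI "On Your Data" with an Azure AI Search index over the manuals, roughly like this. This is only a minimal sketch based on the documented API shape as of API version 2024-02-01; the endpoint, key, deployment, and index names are placeholders:

```python
from openai import AzureOpenAI

# Placeholders: replace with your own resource endpoint, key, and deployment.
client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",
    api_key="<azure-openai-key>",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt-deployment",  # the Azure deployment name, not the model family
    messages=[
        {"role": "system", "content": "Answer only from the retrieved manual excerpts."},
        {"role": "user", "content": "How do I replace the chuck on drill model D-200?"},
    ],
    # "On Your Data": the service retrieves passages from the search index
    # and grounds the answer on them instead of the model's general knowledge.
    extra_body={
        "data_sources": [
            {
                "type": "azure_search",
                "parameters": {
                    "endpoint": "https://my-search.search.windows.net",
                    "index_name": "product-manuals",
                    "authentication": {"type": "api_key", "key": "<search-key>"},
                },
            }
        ]
    },
)

print(response.choices[0].message.content)
# Per the On Your Data docs, citations into the index are returned alongside
# the answer in the message's "context" field.
```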

However, I'm concerned about content moderation and the potential risks associated with generative AI. How can we ensure that the AI stays within the boundaries of our intended use case and doesn't answer political or general questions? Additionally, how can we prevent the AI from guessing when it lacks the necessary knowledge, especially for questions about potentially dangerous topics, such as sharp tools?
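
The best idea I have so far is a strict system message that is re-sent with every single request, with an explicit refusal instruction instead of letting the model guess. A minimal sketch; "ACME Tools" and the exact refusal wording are placeholders I made up:

```python
# Re-sent on every request, so the scope restriction never falls out of context.
SYSTEM_PROMPT = """You are a customer-service assistant for ACME Tools.
Answer ONLY questions about ACME products, using ONLY the provided manual excerpts.
If the excerpts do not contain the answer, reply exactly:
"I don't have that information, please contact support."
Never answer questions about politics, competitors, or general topics."""

def build_messages(user_question: str, manual_excerpts: str) -> list[dict]:
    # The excerpts come from our retrieval step; the model should never
    # receive a request without the system prompt attached in front.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {
            "role": "user",
            "content": f"Manual excerpts:\n{manual_excerpts}\n\nQuestion: {user_question}",
        },
    ]
```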

Our colleagues in the USA have implemented a GPT-3.5 solution and wrote in the prompt that it should only answer questions about our company. This works, but if you repeat the same question three times ("Who is competitor XYZ?"), it starts generating answers about how the competitor is known for its good products and quality.
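
That drift is why I suspect a prompt instruction alone isn't enough and a second-pass check is needed: classify each draft answer in a separate call that never sees the chat history, so repeated questions can't wear the restriction down. A sketch of the idea, assuming the same Azure client and deployment as above:

```python
def moderate_answer(client, deployment: str, question: str, draft_answer: str) -> str:
    # Separate classification call: judges the draft answer in isolation,
    # without the accumulated conversation that caused the drift.
    verdict = client.chat.completions.create(
        model=deployment,
        messages=[
            {"role": "system", "content": "Reply with exactly ON_TOPIC or OFF_TOPIC."},
            {
                "role": "user",
                "content": (
                    "Topic: ACME product support.\n"
                    f"Question: {question}\n"
                    f"Answer: {draft_answer}\n"
                    "Is the answer on topic?"
                ),
            },
        ],
        temperature=0,
    )
    if verdict.choices[0].message.content.strip() != "ON_TOPIC":
        return "I can only help with questions about ACME products."
    return draft_answer
```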

Is Azure OpenAI currently able to serve as a reliable chatbot for answering customer-service questions, or is it the wrong solution for this? (I am based in the EU, so an incorrect answer about how to repair a high-powered drill could lead to serious liability issues if it doesn't cite exactly from the source, such as a repair manual.) I am afraid that generative AI will paraphrase from the source and generate incorrect solutions because it is not specific enough.
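
Given the liability concern, the only mechanical safeguard I can think of is forcing the model to quote the manual verbatim and then checking, in code, that every quoted passage really is an exact substring of the retrieved text, escalating to a human otherwise. A sketch; the 20-character minimum quote length is an arbitrary choice of mine:

```python
import re

def quotes_are_verbatim(answer: str, source_text: str) -> bool:
    # Every "double-quoted" passage in the answer must appear verbatim in the
    # retrieved manual text, so paraphrased repair steps get flagged before
    # they reach the customer.
    quoted = re.findall(r'"([^"]{20,})"', answer)  # ignore very short quotes
    if not quoted:
        return False  # no verbatim citation at all -> escalate to a human
    return all(q in source_text for q in quoted)
```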

submitted by /u/Other-Name5179