[Safety] Why can’t models filter bad training data rather than fine-tune?
Bias, safety, disinformation, copyright, and alignment are major problems for AI. Fine-tuning is commonly used to mitigate them, but it often makes the model less capable. Why not instead use a moderation filter, where a secondary model blocks bad training data before the primary model ever sees it?
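To make the idea concrete, here's a minimal sketch of the kind of data-side filter I mean. The `moderation_score` function is a hypothetical stand-in for any secondary classifier (toxicity, disinformation, copyright, etc.), and the threshold is an arbitrary placeholder, not a real API:

```python
# Sketch of a data-side moderation filter: score each candidate training
# example with a secondary model and drop anything flagged as unsafe.

def moderation_score(text: str) -> float:
    """Placeholder: in practice this would be a trained classifier
    returning a risk score in [0, 1]."""
    blocklist = ("slur", "how to build a bomb")
    return 1.0 if any(term in text.lower() for term in blocklist) else 0.0

def filter_corpus(corpus: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only examples the secondary model considers safe."""
    return [doc for doc in corpus if moderation_score(doc) < threshold]

if __name__ == "__main__":
    raw = [
        "The mitochondria is the powerhouse of the cell.",
        "Here is how to build a bomb step by step...",
    ]
    clean = filter_corpus(raw)
    print(f"Kept {len(clean)} of {len(raw)} examples")
```

In other words: rather than training on everything and patching behavior afterwards with fine-tuning, filter the corpus up front. Is there a reason this isn't done, or doesn't work?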