Self-MoA: Single-Model Ensembling Outperforms Multi-Model Mixing in Large Language Models
This work investigates whether mixing different LLMs actually improves performance compared to ensembling a single model, and it finds counterintuitive results that challenge common assumptions in the field. The key technical elements:
– Systematic evalua…
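To make the contrast in the title concrete, here is a minimal sketch of the Self-MoA idea: instead of collecting proposals from several different LLMs (the classic Mixture-of-Agents setup), sample several proposals from one strong model and have that same model aggregate them. This is an illustration under assumptions, not the paper's reference implementation; `generate` is a hypothetical stand-in for whatever chat-completion call you actually use, and the aggregator prompt is paraphrased.

```python
from typing import Callable, List

def self_moa(
    prompt: str,
    generate: Callable[[str, float], str],  # hypothetical: (prompt, temperature) -> text
    num_samples: int = 4,
    sample_temperature: float = 1.0,
) -> str:
    """Single-model ensembling: sample diverse candidates, then synthesize."""
    # 1) Proposal stage: draw several samples from the SAME model,
    #    relying on sampling temperature for diversity.
    candidates: List[str] = [
        generate(prompt, sample_temperature) for _ in range(num_samples)
    ]

    # 2) Aggregation stage: ask the same model to merge the candidates
    #    into one improved answer (a generic MoA-style aggregator prompt).
    numbered = "\n\n".join(
        f"Candidate {i + 1}:\n{c}" for i, c in enumerate(candidates)
    )
    aggregator_prompt = (
        "You are given several candidate responses to the same query. "
        "Synthesize them into a single, higher-quality response.\n\n"
        f"Query:\n{prompt}\n\n{numbered}"
    )
    # Low temperature for the final synthesis step.
    return generate(aggregator_prompt, 0.0)
```

A multi-model MoA variant would differ only in step 1, drawing each candidate from a different model; the paper's question is whether that extra model diversity actually helps once candidate quality is held fixed.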