/u/juliensalinas

How To Fine-Tune LLaMA, OpenLLaMA, And XGen, With JAX On A GPU Or A TPU

Hello! Fine-tuning your own large language model is the best way to achieve state-of-the-art results on your own use case, results that can even surpass ChatGPT or GPT-4, especially if you fine-tune a modern model like LLaMA, OpenLLaMA, or XGen. Properly fine-tuning these models is …
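
To give a flavor of what fine-tuning code in JAX looks like, here is a minimal, hypothetical sketch of the usual training-step pattern (compute the loss, take gradients, apply an update). It uses a toy linear model and plain SGD purely for illustration; a real LLaMA, OpenLLaMA, or XGen fine-tune would involve the full transformer parameters, a proper optimizer, and sharding across your GPU or TPU devices.

```python
# Minimal, hypothetical JAX training-step sketch (toy model, not LLaMA itself).
import jax
import jax.numpy as jnp

def loss_fn(params, batch):
    # Toy regression loss standing in for the real language-modeling loss.
    preds = batch["x"] @ params["w"] + params["b"]
    return jnp.mean((preds - batch["y"]) ** 2)

@jax.jit
def train_step(params, batch, lr=1e-3):
    # Differentiate the loss w.r.t. the parameters and apply plain SGD.
    grads = jax.grad(loss_fn)(params, batch)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

# Hypothetical toy parameters and data, just to show the loop.
key = jax.random.PRNGKey(0)
params = {"w": jax.random.normal(key, (4, 1)), "b": jnp.zeros((1,))}
batch = {"x": jax.random.normal(key, (8, 4)), "y": jnp.ones((8, 1))}

for _ in range(10):
    params = train_step(params, batch)

print(float(loss_fn(params, batch)))
```

The same compute-gradients-then-update loop scales to large models: JAX's `jit` compiles it for the accelerator, and the parameter tree is simply much larger and distributed across devices.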