I’m a lay person with little to no knowledge of coding and tech.
I have a vision for a project that would require fine-tuning an LLM on very long examples, roughly 100k tokens each.
But, since I’m a layperson, I’ve been looking for some kind of platform that would EASILY allow someone to fine-tune an LLM with no code required whatsoever. I just want to upload my datasets and let the platform do the work of fine-tuning.
I’ve spent the last few days scouring the internet to no avail. All I’ve found are a few websites that allow fine-tuning LLMs, but they’re pretty shitty, barely usable, and not as intuitive as they should be for a layperson.
OpenAI offers fine-tuning through their playground, but it won’t work for my project because there’s a cap of around 4,096 tokens per training example. And it’s far too expensive.
The only one I’ve found that shows promise is Entry Point AI, which I was able to link to my Replicate account. Last night I started a fine-tune of Llama 2 with 23 examples, each of them at least 100,000 tokens, a few up to 300,000 tokens. It’s been over 12 hours and it’s still not done, and I feel like something is wrong: when I go to the Replicate website, it still lists the status of the fine-tune as “starting.” So clearly there’s some kind of bug, and Entry Point won’t work.
Not quite sure what to do at this point.
Can anyone point me to some resources that could help me out? Or is my vision unattainable at this time?