Genuine question.
Why does almost every AI training setup still feel extremely engineer-focused?
Most tools I’ve tried expect people to already understand things like:
CUDA
VRAM
LoRA settings
Docker
dependency issues
quantization
optimizers
terminal commands
training configs
Even simple fine-tuning workflows become confusing fast.
I’ve been thinking a lot about whether there’s room for a much more beginner-friendly approach where users could basically:
upload dataset → train → test → deploy
while the system handles things like:
GPU selection
safe limits
preventing huge billing mistakes
deployment setup
logs
model storage
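To make the idea concrete, here is a minimal sketch of what such a beginner-facing facade could look like. Every name here (`TrainingJob`, `budget_usd`, the stub methods) is hypothetical and invented for illustration; no real library is implied. The point is that GPU selection, safe limits, and deployment live behind defaults instead of configs:

```python
# Hypothetical sketch of a beginner-friendly training facade.
# All class/method names are made up for illustration; a real tool
# would do actual work where these stubs just record log lines.
from dataclasses import dataclass, field

@dataclass
class TrainingJob:
    dataset_path: str
    budget_usd: float = 10.0  # hard spend cap: refuse to exceed, no billing surprises
    logs: list = field(default_factory=list)

    def train(self) -> "TrainingJob":
        # A real implementation would pick a GPU, quantization,
        # and LoRA settings here so the user never has to.
        self.logs.append(f"training on {self.dataset_path} (cap ${self.budget_usd})")
        return self

    def test(self, prompt: str) -> str:
        # Stub inference so the user can sanity-check before deploying.
        return f"stub response to: {prompt}"

    def deploy(self) -> str:
        # Deployment setup and model storage handled internally;
        # the user only gets back an endpoint (placeholder here).
        self.logs.append("deployed")
        return "https://example.invalid/my-model"

job = TrainingJob("my_dataset.jsonl").train()
print(job.test("hello"))
print(job.deploy())
```

The whole surface is three calls; anything a beginner could get wrong (hardware, limits, storage) is a default rather than a required setting.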
Do people here actually want simpler AI training workflows, or do most users eventually learn the technical side anyway?
Curious what the biggest pain points are for people who’ve tried training models themselves.