Hey r/artificial,
I’ve been exploring tools for building and managing AI workflows, especially for applications powered by LLMs. Along the way, I’ve often felt the frustration of juggling multiple tools that don’t quite fit together seamlessly.
To address this, I ended up building something that simplifies the process end-to-end (it’s called Athina).
Here’s what it helps you do:
- Test & version control prompts
- Build multi-step AI workflows
- Manage datasets with a spreadsheet UI
- Run evaluations on datasets or in CI/CD
- Compare outputs across prompts/models
- Monitor traces, evaluations, & regressions
And so much more...
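For context on what I mean by "run evaluations on datasets": the core loop is the same in most tools. This is just a generic sketch in plain Python (not Athina's actual API — the function names and the toy model/metric here are made up for illustration):

```python
# Generic dataset-evaluation loop: run each row's input through a
# model function, score the output against the expected value, and
# aggregate into a pass rate. All names here are illustrative.

def evaluate(dataset, model_fn, score_fn):
    """Return per-row scores and the aggregate pass rate."""
    scores = [score_fn(model_fn(row["input"]), row["expected"])
              for row in dataset]
    return scores, sum(scores) / len(scores)

# Toy stand-ins for a real LLM call and a real metric.
def fake_model(text):
    return text.upper()  # placeholder for an actual model call

def exact_match(output, expected):
    return 1.0 if output == expected else 0.0

dataset = [
    {"input": "hi",  "expected": "HI"},
    {"input": "bye", "expected": "NO"},
]
scores, pass_rate = evaluate(dataset, fake_model, exact_match)
print(pass_rate)  # 0.5
```

In CI/CD you'd run something like this against a versioned dataset and fail the build if the pass rate drops below a threshold — that's the regression-catching part.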
I’d love to know: how are you all handling prompt testing, dataset management, and workflow automation in your AI projects? What tools or strategies do you use?