I see you have a dockerfile.combined - is this built and served via gh artifacts? I can try it out.
Pros: Open source, and focus on lightweight. This is good.
Cons: "customers" - Ugh, no offense, but smells of going down the same path as "open" webui, with the services expanding to fill enterprise use cases, and simplicity lost.
llms.py seems to be focusing purely on simplicity and is OK with rewriting for it. This + 3BSD is a solid ethos. Will await their story on a multi-user, hosted app. They have most of the things sorted anyway, including RAG, extensions, etc.
> I see you have a dockerfile.combined - is this built and served via gh artifacts? I can try it out.
Our recommended way of deploying is via Helm[0], with the latest version listed here[1].
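For anyone wanting to try it, a standard Helm install looks roughly like this; note the chart repo URL, release name, and namespace below are illustrative placeholders, not the real values — the linked docs[0] have the actual instructions:

```shell
# Sketch only: repo URL, release name, and namespace are assumptions;
# see the Helm deployment docs for the real chart source and values.
helm repo add erato https://charts.example.com/erato   # hypothetical repo URL
helm repo update

# Install (or upgrade in place) into a dedicated namespace,
# pinning the chart version listed on ArtifactHub.
helm upgrade --install erato erato/erato \
  --namespace erato --create-namespace \
  --version 1.2.3   # replace with the latest version from [1]
```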
> with the services expanding to fill enterprise use cases, and simplicity lost.
TBH, I don't think that simplicity was lost for OpenWebUI because of trying to fill enterprise needs. Their product has felt like a mess of too many cooks and no consistent product vision from the start. That's also where part of our origin story comes from: we started out as freelancers in the space and got inquiries to set up a chat UI for different companies, but didn't deem OpenWebUI and the other typical tools fit for the job, and they were too much of a mess internally to fork.
We are a small team (no VC funding), our customers' end users are usually on the low end of AI literacy, and there is about one DevOps/sysadmin at each company where our tool is deployed, so we have many factors pushing us towards simplicity. Our main avenue of monetization is also via SLAs, so a simple product, for which we can more easily maintain test coverage and feel comfortable about stability, is also in our best interest here.
[0]: https://erato.chat/docs/deployment/deployment_helm
[1]: https://artifacthub.io/packages/helm/erato/erato