
olivermuty · 05/14/2025

Hello, I have two projects in prod on Ash!

First of all, it's not "macro based", as that implies dark magic and sacrificed goats. The Spark DSL underlying all this is just structs all the way down, nested, just like what you would see if you looked under the covers of an Absinthe Blueprint produced by that DSL.

The DSL is declarative and lets you express a lot with less code, but saying it's "macro based" is a bit misleading, even if technically correct. You could achieve the same thing with plain functions returning structs.
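To make that concrete, here is a rough sketch of what I mean by "structs all the way down". MyApp.Ticket is a made-up resource name, and the exact struct fields vary by Ash version, but the resource introspection functions really do hand you plain nested data:

    # Sketch only: introspecting a hypothetical MyApp.Ticket resource.
    # The DSL sections compile down to structs you can inspect at runtime.
    Ash.Resource.Info.attributes(MyApp.Ticket)
    #=> [%Ash.Resource.Attribute{name: :id, ...}, %Ash.Resource.Attribute{name: :subject, ...}]

    Ash.Resource.Info.actions(MyApp.Ticket)
    #=> [%Ash.Resource.Actions.Read{name: :read, ...}, ...]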

I have replaced a biiiiig NestJS app that exposed GraphQL with an Ash app exposing GraphQL, and the boilerplate ratio for resolvers etc. is bordering on 1:999. Literally: across a 90-table application I have maybe 600 lines of specifically GraphQL-related code (5-10 lines per resource to expose selected actions as mutations and queries), as opposed to the NestJS codebase, which used an annotation-driven approach and had a gazillion lines of glue code for resolvers and data loading.
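Those 5-10 lines per resource look roughly like this (a sketch of the AshGraphql extension DSL; the type, query, and action names are made up, and details differ between versions):

    # Inside a resource module that uses the AshGraphql.Resource extension.
    graphql do
      type :ticket

      queries do
        get :get_ticket, :read       # expose the :read action as a get query
        list :list_tickets, :read    # and as a list query
      end

      mutations do
        create :create_ticket, :create
        update :update_ticket, :update
      end
    end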

Also, the authorization logic through policies is extremely composable and easy to write when combined with matching on resources. It's fantastic. Each resource "owns" its own authorization, so there is no song and dance about figuring out ACLs from the entry point and then down a tree. You just let the resolver resolve its way down the GraphQL tree, or feed a long loader path into Ash.load, and each resource is responsible for enforcing its own policies. You don't have to worry about accidentally leaking data because a new resolver reaches the data through an entry path that was never locked down.
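A rough sketch of what that looks like on a single resource (the check names are real built-ins as far as I recall, but the relationships and the Ash.load call style are illustrative and version-dependent):

    # Policies live on the resource itself, so they apply no matter
    # which entry path reaches the records.
    policies do
      # admins skip the rest of the checks
      bypass actor_attribute_equals(:admin, true) do
        authorize_if always()
      end

      # reads only pass for records that relate back to the actor
      policy action_type(:read) do
        authorize_if relates_to_actor_via(:owner)
      end
    end

    # Loading a deep path runs each resource's own policies along the way.
    Ash.load(ticket, [comments: [author: :profile]], actor: current_user)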

I kept reimplementing the same boring boilerplate every damn time I started a new project, and that pain is almost 100% gone.

It is a steep learning curve for sure, because the one downside of Ash is that you have to do things the "Ash way" for everything to compose as beautifully as it does. But once you really get into the groove of writing "expression calculations" (basically projections that reach into other resources or columns to produce computed data, but run in the database layer since you express them as expressions) that you can compose and make depend on each other, it becomes incredibly fast to build new functionality.
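Something along these lines, as a sketch (field and relationship names are made up):

    calculations do
      # evaluated in the data layer as part of the query, not in app code
      calculate :full_name, :string, expr(first_name <> " " <> last_name)

      # expressions can reach through relationships and build on other calculations
      calculate :display_name, :string,
        expr(full_name <> " (" <> organization.name <> ")")
    end

    # Anything can ask for it, and Ash works out the joins/loading:
    Ash.load(user, :display_name)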

You think about one thing at a time and let the framework take care of composing the loading and usage of what you build. A much simpler model than doing it all yourself in Ecto, which is what I had been doing for the 10 years prior.