Have you tested this under high concurrency with lots of short OLTP queries? I’m curious whether the much faster compile time actually moves the point where JIT starts paying off, or if it’s still mostly useful for heavier queries.
> By default, jit_above_cost parameter is set to a very high number (100'000). This makes sense for LLVM, but doesn't make sense for faster providers. It's recommended to set this parameter value to something from ~200 to low thousands for pg_jitter (depending on what specific backend you use and your specific workloads).
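For anyone wanting to try this, applying that recommendation is a one-liner; the value 500 below is illustrative, picked from the ~200-to-low-thousands range the docs suggest, not a tuned number:

```sql
-- Lower the JIT cost threshold for a fast-compiling provider
-- (default is 100000, tuned for LLVM's slow compile times).
SET jit_above_cost = 500;               -- session-level, for experimentation

ALTER SYSTEM SET jit_above_cost = 500;  -- persist in postgresql.auto.conf
SELECT pg_reload_conf();                -- apply without a restart
```

Session-level `SET` is handy for A/B testing a single query with `EXPLAIN (ANALYZE)` before committing to a cluster-wide change.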
It's not useful for sub-millisecond queries like point lookups, or other simple ones that process only a few records. The sljit option starts to pay off when you process (not necessarily return) hundreds of records; the more, the better. I'm still thinking about a caching option that would lift this requirement somewhat for cached plans. For non-cached ones it will stay.