I thought this was common practice: generated columns for JSON performance. I've even used this (although it was in Postgres) to maintain foreign key constraints where the key is buried in a JSON column. What we were doing was slightly cursed, but it worked perfectly.
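Roughly what that looks like as a sketch (table and column names are invented, and it assumes a users table with primary key id):

    -- Pull the buried key into a generated column so a normal FK can reference it
    CREATE TABLE events (
        id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        payload jsonb  NOT NULL,
        -- stored generated column: computed from the JSON on write
        user_id bigint GENERATED ALWAYS AS ((payload ->> 'user_id')::bigint) STORED
            REFERENCES users (id)
    );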
It works until you realize some of these usages would've been better as individual key/value rows.
For example, if you want to store settings as JSON, you first have to parse it through e.g. Zod and hope it doesn't fail due to schema changes (or write migrations and hope those succeed).
Meanwhile, simple key/value rows just work, and you can even do partial fetches/updates, something like:
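(Names invented, standard Postgres:)

    CREATE TABLE user_settings (
        user_id bigint NOT NULL,
        key     text   NOT NULL,
        value   text   NOT NULL,
        PRIMARY KEY (user_id, key)
    );

    -- Partial update: upsert one setting without rewriting the whole blob
    INSERT INTO user_settings (user_id, key, value)
    VALUES (42, 'theme', 'dark')
    ON CONFLICT (user_id, key) DO UPDATE SET value = EXCLUDED.value;

    -- Partial fetch: read only the settings you care about
    SELECT key, value
    FROM user_settings
    WHERE user_id = 42 AND key IN ('theme', 'locale');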
Doesn't sound very cursed: standard normalized relations for the things that need them, and jsonb for the big bags of attributes you don't care to split apart.
It is. I'd wondered whether STORED is necessary, and this example uses VIRTUAL.
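For anyone else wondering: in SQLite the index itself stores the computed values, so indexing a VIRTUAL generated column usually works fine and STORED isn't required just for query performance. A sketch (names invented):

    CREATE TABLE events (
        payload TEXT NOT NULL,
        -- VIRTUAL: computed on read; the index below materializes the values anyway
        user_id INTEGER GENERATED ALWAYS AS (json_extract(payload, '$.user_id')) VIRTUAL
    );
    CREATE INDEX events_user_idx ON events (user_id);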
This is the typical practice for most index types in SingleStore as well, except for the Multi-Value Hash Index, which is defined over a JSON or BSON path.
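If I'm remembering the SingleStore syntax right (this is from memory, check the docs), the "typical practice" version is a persisted computed column over the JSON path, which you can then index like any other column:

    CREATE TABLE events (
        payload JSON NOT NULL,
        -- ::$ extracts the path as text; PERSISTED materializes it so it's indexable
        user_id AS payload::$user_id PERSISTED TEXT,
        KEY (user_id)
    );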
If you're using postgres, couldn't you just create an index on the field inside the JSONB column directly? What advantage are you getting from extracting it to a separate column?
ref: https://www.crunchydata.com/blog/indexing-jsonb-in-postgres
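For comparison, both options from that link as a sketch (names invented): an expression index targets one known path with a B-tree, while a GIN index covers containment queries over the whole document.

    -- B-tree expression index on a single extracted path
    CREATE INDEX events_user_idx ON events (((payload ->> 'user_id')::bigint));

    -- GIN index for arbitrary containment queries
    CREATE INDEX events_payload_gin ON events USING GIN (payload);
    SELECT * FROM events WHERE payload @> '{"user_id": 42}';

The main extra thing the extracted column buys you is a typed value you can hang constraints on, like the foreign key upthread.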