There are use cases where it's better not to normalize the data.
I'm a fan of the sushi principle: raw data is better than cooked data.
Each process should take data from a golden source, not from a pre-aggregated or overly normalized non-authoritative source.
One day I hope to write about denormalization, explained explicitly via JOINs.
JSON is extremely fast these days. Gzipped JSON perhaps even more so.
I find that JSON blobs up to about 1 MB are very reasonable in most scenarios. You are looking at maybe a millisecond of latency overhead in exchange for much denser I/O for complex objects. If the system is very write-intensive, I would cap the blobs around 10-100 KB.
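A quick sketch of why gzipped JSON holds up: repeated keys across records compress extremely well. The record shape below is a hypothetical example, not from any real system.

```python
import gzip
import json

# Hypothetical blob: 1,000 records with repetitive keys and values,
# the typical shape of a denormalized JSON payload
blob = json.dumps([
    {"order_id": i, "status": "shipped", "region": "eu-west"}
    for i in range(1000)
]).encode()

compressed = gzip.compress(blob)

# gzip thrives on JSON's repeated structure; expect a large reduction
print(f"raw: {len(blob)} bytes, gzipped: {len(compressed)} bytes")
```

On payloads like this the compressed size is often an order of magnitude smaller, which is what makes shipping whole blobs competitive with chattier normalized reads.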
Typically it's better to take normalized data and denormalize it for your use case than to skip normalization in the first place. It really depends on your needs.