No mention of CRI, which seems kind of odd. LEDs for lighting are increasingly graded by how natural their emission spectrum is. Older lights are quite bad; newer ones sacrifice a tiny bit of performance for a more uniform spectrum.
CRI is a pretty bad rating system. They are showing the full spectrum graphs, which is what you'd want anyway. The Spectral Similarity Index (SSI) is the better number.
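If it helps make that concrete, here's a toy sketch of the spectrum-vs-spectrum idea behind SSI-style metrics: score a lamp's measured spectral power distribution against a reference spectrum bin by bin. To be clear, this is not the official SSI computation (which has its own wavelength range, weighting, and smoothing steps); the SPD shapes and the score scaling below are made up purely for illustration.

    import numpy as np

    def toy_spectral_score(wavelengths_nm, test_spd, ref_spd):
        """Toy spectral-fidelity score: 100 = same spectral shape as the reference.

        Not the official SSI algorithm -- just the underlying idea of comparing
        the whole spectrum instead of how a handful of color samples render.
        """
        wl = np.asarray(wavelengths_nm, dtype=float)
        # Normalize both spectra so only shape matters, not total brightness.
        test = np.asarray(test_spd, dtype=float)
        ref = np.asarray(ref_spd, dtype=float)
        test = test / np.trapz(test, wl)
        ref = ref / np.trapz(ref, wl)

        # RMS of the relative bin-by-bin difference, mapped onto a 0-100 scale.
        rms = np.sqrt(np.mean(((test - ref) / (ref + 1e-12)) ** 2))
        return max(0.0, 100.0 * (1.0 - rms))

    # Made-up example: a smooth daylight-ish reference vs. a spiky LED-ish spectrum.
    wl = np.arange(380, 701, 10)
    ref = np.exp(-((wl - 560.0) / 120.0) ** 2)
    led = ref * (1.0 + 0.4 * np.sin((wl - 380.0) / 25.0))
    print(toy_spectral_score(wl, led, ref))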
I think CRI is not important here, as that's a measure within the visible spectrum. The paper talks about all the missing wavelengths outside the visible spectrum.
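And that part is easy to quantify directly from a measured SPD, without any color-rendering index at all. A minimal sketch (the 380-780 nm cutoffs are just the conventional visible-band values and would need to match whatever range the paper actually uses):

    import numpy as np

    def fraction_outside_visible(wavelengths_nm, spd, visible=(380.0, 780.0)):
        """Fraction of a lamp's radiant power emitted outside the visible band.

        CRI (and Rf) say nothing about this region, which is the point above:
        a high-CRI LED can still emit essentially no near-IR, unlike an
        incandescent bulb or daylight.
        """
        wl = np.asarray(wavelengths_nm, dtype=float)
        power = np.asarray(spd, dtype=float)
        in_band = (wl >= visible[0]) & (wl <= visible[1])
        return 1.0 - np.trapz(power[in_band], wl[in_band]) / np.trapz(power, wl)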
Out of curiosity:
a) How do Philips Hue bulbs stack up?
b) Did Philips update them generationally, and assuming they're decent now, how recently?
They use Rf numbers, which come from a newer standard, so that's probably good.
However, the experimental group (extra light sources) got Rf 91 bulbs, and the control ("LED lighting") got Rf 85 bulbs.
The two scales aren't exactly comparable, but they both max out at 100. The only source I could find that discusses both says that CRI > 90 is "excellent" and just below that is "very good". It says Rf > 85 is "very good", which tells me it's comparable to a mid-80s CRI bulb.
If I accidentally buy a mid-80s CRI bulb, I either return it to the store or just throw it away.
So, I'd say this study's experimental setup doesn't support any useful conclusions. They showed that so-painfully-bad-California-won't-subsidize-them LEDs are worse than passable LEDs with supplementation from another light source.
The passable LEDs in the study are probably comparable to the cheap ones at our local hardware store, but worse than the ones that cost $10-20 on Amazon ten years ago.
This would have been much more interesting if they'd compared high-end LEDs with and without supplementation, and found a difference. (And by "high-end", I mean "still much cheaper than the electricity they save".)