General hallucination benchmarks tend to be knowledge-specific, like GPQA or MMLU, but none specifically measure structured output end-to-end, which is one of the biggest use cases for LLMs.
Many developer workflows use LLMs to produce structured artifacts because of their flexibility in consuming unstructured inputs.
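To make "structured output end-to-end" concrete, here's a minimal sketch of what those workflows typically do: parse the model's raw text and check it against a fixed schema, so a hallucinated or malformed field fails loudly instead of slipping through. The schema and field names here are hypothetical, just for illustration:

```python
import json

# Hypothetical schema a dev workflow might expect an LLM to extract
# from unstructured text (e.g. an invoice email).
REQUIRED_FIELDS = {"vendor": str, "total": float, "date": str}

def validate(raw: str) -> dict:
    """Parse the model's raw output and check it against the schema.
    A missing field or wrong type counts as a structured-output failure."""
    obj = json.loads(raw)
    for field, ftype in REQUIRED_FIELDS.items():
        if not isinstance(obj.get(field), ftype):
            raise ValueError(f"bad or missing field: {field}")
    return obj

# A well-formed response passes validation...
ok = validate('{"vendor": "Acme", "total": 12.5, "date": "2024-01-03"}')

# ...while a malformed one is caught rather than silently accepted.
try:
    validate('{"vendor": "Acme", "total": "twelve", "date": "2024-01-03"}')
    caught = False
except ValueError:
    caught = True
```

A benchmark for this use case is essentially measuring how often real model outputs survive checks like these.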
> "don't use an LLM"
Partially agree; that's what we're building toward at interfaze.ai: a hybrid between transformers (LLMs) and traditional CNN/DNN architectures to solve this problem of "deterministic" output. This gives devs the flexibility of custom schema definitions and unstructured input while still getting the kind of high-quality structured output you'd get from a CNN-based model like EasyOCR.
The industry is moving toward using LLMs for more and more deterministic tasks, so this benchmark lets us actually measure that.