The metaschemas are useful but not strict enough. They don't set `additionalProperties: false`, which is great if you want to extend a schema with your own properties, but it means they won't catch simple typos.
For example, the mistaken forms below pass under the metaschema just as readily as the correct ones.
{"foo": {"bar": { ... }}} # wrong
{"foo": {"type": "object", "properties": {"bar": { ... }}}} # correct
{"additional_properties": false} # wrong
{"additionalProperties": false} # correct

I'm working with JSON Schema through OpenAPI specifications at work. I think it's a bit of a double-edged sword: it's very nice to write things in, but it's a little too flexible when it comes to writing tools that do anything other than validate JSON documents.
I'm in the process of writing a toolchain of sorts, with the OpenAPI document as an abstract syntax tree that goes through various passes (parsing, validation, aggregation, analysis, transformation, serialization...). My immediate use-case is generating C++ type/class headers from component schemas, with the intent to eventually auto-generate as much code as I can from a single source of truth specification (like binding these generated C++ data classes with serializers/deserializers, generating a command-line interface...).
JSON schema is so flexible that I have several passes to normalize/canonicalize the component schemas of an OpenAPI document into something that I can then project into the C++ language. It works, but this was significantly trickier to accomplish than I anticipated.
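One flavor of such a normalization pass, as a toy sketch (the function name and the narrow allOf-merging rule are my own illustration, not the commenter's actual toolchain): flatten trivial `allOf` compositions so a downstream code generator only ever sees plain object schemas.

```python
# Toy sketch: merge allOf members into a single flat object schema so a
# downstream code generator (e.g. one emitting C++ structs) never has to
# reason about composition. A real pass must also handle $ref, oneOf, etc.
def merge_allof(schema: dict) -> dict:
    if "allOf" not in schema:
        return schema
    merged = {k: v for k, v in schema.items() if k != "allOf"}
    for part in map(merge_allof, schema["allOf"]):
        if part.get("properties"):
            merged.setdefault("properties", {}).update(part["properties"])
        required = set(merged.get("required", [])) | set(part.get("required", []))
        if required:
            merged["required"] = sorted(required)
    return merged
```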
JSON schema was nice when it was simple.
Now it feels like writing a validator is extremely complicated.
IMO, the built-in vocabularies were enough, and keeping it simple would provide more value.
JSON as a format didn't win because it supported binary number encoding or could be extended with custom data types -- but rather because it couldn't.
I think JSON Schema committed the same fallacy as, e.g., OWL did: the assumption of an open world. 99% of the time you want to express "this message should look like this, and everything else is wrong". Instead, JSON Schema went the way of making everything possible, at the price of rendering the default case unwieldy.
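The open-world default is easy to see with any validator; a quick sketch using Python's `jsonschema` package (my choice of tool, not the commenter's):

```python
from jsonschema import ValidationError, validate

# Open world by default: keys the schema never mentions are still valid.
schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}},
    "required": ["name"],
}
validate({"name": "a", "unexpected": 1}, schema)  # passes silently

# The "everything else is wrong" reading has to be opted into, object by
# object, via additionalProperties: false.
try:
    validate({"name": "a", "unexpected": 1},
             dict(schema, additionalProperties=False))
except ValidationError:
    print("rejected only once the world is closed")
```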
The deeper principle here: "stop lying about where truth lives."
I've been exploring how this generalizes beyond side effects. Every React state library creates a JavaScript copy of state that must sync with the DOM. This is the original sin. Two truths = lies.
The solution isn't better syncing, it's refusing to duplicate. The DOM is already a perfectly good state container. All you have to do is read it.
Releasing a paper (DATAOS) and a React implementation (stateless, <1KB) soon. It's the architecture behind multicardz (hyper-performing kanban on steroids: rows AND columns, 1M+ cards, sub-second searches, perfect Lighthouse scores, zero state-sync bugs). Because there's no state to sync.
JSON Schema is nice overall*, but most software only supports ancient versions like draft 4.
*even if I would prefer more transformation/conversion features that would make it more of a parser rather than only a validator
I think the post is generally pretty good. There are some things that I would have stated differently.
"Unfortunately, [the terms] leaked into the documentation that everyone reads" - We did this on purpose to align everyone's terms. It makes things so much easier when the people asking and answering questions are using the same language.
"The official JSON Schema website has a validator you can try: https://www.jsonschemavalidator.net/" - Would have been better to point to the actual official JSON Schema website's tools page (https://json-schema.org/tools) that lists many online validators.
There are some misconceptions about OpenAPI in here as well. Specifically, an OpenAPI document isn't a JSON Schema document. It's its own kind of document that has JSON Schemas embedded in it.
Still, it's a decent high-level summary. If you're interested in diving a bit deeper, definitely come visit us in Slack (https://json-schema.org/slack).