Some other fun facts about JSON, its mainstream implementations and using it reliably:

1. json.dump(s) in Python emits non-standards-compliant JSON by default, i.e. it will happily serialize NaN/Inf/-Inf. You want to set allow_nan=False to be compliant. Otherwise this _will_ annoy someone who has to consume your shoddy pseudo-JSON from a standards-compliant library.

2. JSON allows for duplicate/repeated keys, and allows the parser to do basically anything when that happens. Do you know how the parser implementation you use handles this? Are you sure there are no differences between that implementation and other implementations used in your system (eg. between execution and validation)? What about other undefined behaviour, like permitted number ranges?

3. Do you pass around user-provided JSON data across your system? How many JSON nesting levels does your implementation allow? What happens when that limit is exceeded? What happens if different parts of your processing system have different limits? What about other unspecified limits, like serialized size or string length?
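Point 1 is easy to demonstrate with the standard library: by default `json.dumps` happily emits the non-standard literal `NaN`, while `allow_nan=False` makes it fail loudly instead.

```python
import json

# Default behaviour: produces the non-standard literal "NaN",
# which a strict, standards-compliant parser will reject.
print(json.dumps({"x": float("nan")}))  # {"x": NaN}

# Compliant behaviour: refuse to serialize non-finite floats.
try:
    json.dumps({"x": float("nan")}, allow_nan=False)
except ValueError:
    print("rejected non-finite float")
```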
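On point 2, CPython's json module silently keeps the last occurrence of a repeated key; if that's not what you want, the documented object_pairs_hook parameter lets you see every pair and reject duplicates yourself. A minimal sketch:

```python
import json

doc = '{"amount": 1, "amount": 9999}'

# CPython's json.loads keeps the *last* value for a repeated key.
print(json.loads(doc))  # {'amount': 9999}

# object_pairs_hook receives every key/value pair in order,
# so duplicates can be detected before they are collapsed into a dict.
def reject_duplicates(pairs):
    keys = [k for k, _ in pairs]
    if len(keys) != len(set(keys)):
        raise ValueError(f"duplicate keys: {keys}")
    return dict(pairs)

try:
    json.loads(doc, object_pairs_hook=reject_duplicates)
except ValueError as e:
    print(e)
```

Other implementations may keep the first value, the last, or error out; the point is you have to check, per parser.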
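And on point 3, nesting limits are real and implementation-specific: CPython's parser gives up with a RecursionError at a depth tied to the interpreter's recursion limit, a boundary other implementations place elsewhere (or nowhere).

```python
import json

# 100,000 nested arrays: perfectly valid JSON per the grammar,
# but far deeper than CPython's recursive parser will accept.
deep = "[" * 100_000 + "]" * 100_000
try:
    json.loads(deep)
except RecursionError:
    print("parser hit its nesting limit")
```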

My general opinion is that it's extremely hard to use JSON reliably as an interchange format when multiple systems and/or parser implementations are involved. It's based on a set of underdefined specifications that leaves critical behaviour undefined, effectively making it impossible to have 100% interoperable implementations. It doesn't help that one of the mainstream implementations (in Python) is just non-compliant by default.

I highly encourage any greenfield project to look into well-designed and better-specified alternatives.

You want to set allow_nan=False to be compliant. Otherwise this _will_ annoy someone who has to consume your shoddy pseudo-JSON from a standards-compliant library

Funny (well, not really) thing is that NaN and Inf are perfectly valid floating point values according to most (?) standards used on computers, to the point that I don't understand why they were left out of JSON. So unless you're 100% sure you won't encounter these values, the choice is between not being able to use JSON, finding hacks around it (and using null isn't one of them, since you have three values to represent), or just using non-compliant-yet-often-accepted JSON and possibly annoying someone whose parser doesn't handle it.

And for me there have been quite a lot of cases where I just quickly needed something simple to interface between components, and when it turned out they all supported JSON+NaN/Inf, the choice was usually made quickly.

From a practical standpoint, defining numbers in JSON to be "whatever double precision binary floating point does, or optionally something more precise" would have been good enough, and capture what we end up having anyway.

Still, I prefer Crockford's choice: that JSON numbers are defined to be numbers. Infinity and the flavors of NaN are... not numbers.

In an extensible data interchange format, like [edn][1], people could define conventions about more specific interpretations of numbers, e.g.

    #ieee754/b64 45.6653 ; this is a double
We could build such a format on top of JSON (there are probably multiple), but I again agree with Crockford that this sort of thing does not belong in JSON.

Makes for a bunch of headaches, though, for sure.

One example is a data scientist I used to work with. He was working with lots of machine learning libraries that liked to use NaN to mean "nothing to see here." A fellow developer ended up writing code that used some sort of convention to work around it, e.g. number := decimal | {"magic-uuid": "NaN"}. I can see why some people are of the opinion "this is stupid, just allow NaNs." I disagree.
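A round-trip sketch of that kind of convention (the sentinel key below is made up; the actual "magic-uuid" scheme isn't specified): walk the structure before dumping, replace non-finite floats with a tagged object, and undo it with object_hook on load.

```python
import json
import math

SENTINEL = "$nonfinite"  # hypothetical tag; any sufficiently unlikely key works

def encode_nonfinite(obj):
    """Recursively replace NaN/Inf floats with a tagged object."""
    if isinstance(obj, float) and not math.isfinite(obj):
        return {SENTINEL: repr(obj)}  # 'nan', 'inf' or '-inf'
    if isinstance(obj, dict):
        return {k: encode_nonfinite(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [encode_nonfinite(v) for v in obj]
    return obj

def decode_hook(d):
    # Turn the tagged object back into the original float.
    if set(d) == {SENTINEL}:
        return float(d[SENTINEL])
    return d

payload = {"loss": float("nan"), "bound": float("inf"), "ok": 1.5}
text = json.dumps(encode_nonfinite(payload), allow_nan=False)  # valid JSON
restored = json.loads(text, object_hook=decode_hook)
print(restored)
```

The obvious cost is that every consumer has to know the convention, which is exactly the interoperability problem the thread is complaining about.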

[1]: https://github.com/edn-format/edn