> I highly encourage any greenfield project to look into well designed and better specified alternatives.
Like what?
Part of the problem is that there are at least half a dozen high-quality answers out of the gate (gRPC, FlatBuffers, Protocol Buffers, XML in some cases, Thrift), and an even longer long tail after that. It's made harder by the fact that four different teams who deeply loathe JSON and independently decide to use something "better" can legitimately end up with four completely different technologies if they don't talk to each other.
To your comment above – you can bodge around interop problems with JSON in ways that you cannot with some of these other technologies.
I like to joke that I invented ndjson over a decade ago when I accidentally forgot to put things in an array before `json.dumps`; I just wasn't smart enough to call it a standard. But when you do end up with ndjson when you wanted an array of results, or vice versa, JSON makes it easy to munge things into the shape you need.
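That munging really is just a couple of lines. A minimal Python sketch (the variable names and sample records are my own illustration):

```python
import json

records = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]

# What I meant to send: a single JSON array.
array_text = json.dumps(records)

# What I actually sent: one json.dumps per record, i.e. accidental ndjson.
nd_text = "\n".join(json.dumps(r) for r in records)

# Recovering either shape from the other is trivial:
from_nd = [json.loads(line) for line in nd_text.splitlines() if line.strip()]
from_array = "\n".join(json.dumps(r) for r in json.loads(array_text))

assert from_nd == records
assert from_array == nd_text
```

Because each line (or each array element) is independently parseable, you can convert in either direction without understanding anything about the payloads themselves.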
Compare that to something like protobuf: it's not a self-synchronizing stream, so if you send someone multiple messages without framing them (length-prefixing each message is the usual approach), they're going to decode them as a single message that doesn't make much sense on the other end. And they won't be able to fix it at all.
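For the curious, length-prefix framing looks something like this. A sketch using plain bytes rather than real serialized protobufs (the principle is identical; the function names are my own):

```python
import io
import struct

def write_framed(stream, payload: bytes) -> None:
    # Prefix each message with its length as a 4-byte big-endian integer.
    stream.write(struct.pack(">I", len(payload)))
    stream.write(payload)

def read_framed(stream):
    # Yield messages one at a time until the stream is exhausted.
    while True:
        header = stream.read(4)
        if not header:
            return
        (length,) = struct.unpack(">I", header)
        yield stream.read(length)

buf = io.BytesIO()
for msg in (b"first", b"second", b"third"):
    write_framed(buf, msg)
buf.seek(0)
assert list(read_framed(buf)) == [b"first", b"second", b"third"]
```

Without that prefix, the reader has no way to tell where one protobuf message ends and the next begins, which is exactly the failure mode described above.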
So I guess JSON is New Jersey style design[1].
FWIW, this is a conscious design decision in Protobuf: it allows easy upsert operations on serialized messages by appending another message with the updated field values. This is very useful for middleware that wants to either add its own context to a message it doesn't even parse [1], or handle protobuf messages serialized with unknown fields.
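You can see that upsert behaviour in the wire format directly. The bytes below are a hand-encoded message with a single varint field (field number 1); the toy decoder is just for illustration and only handles varint fields, whereas a real parser would be generated from a .proto file:

```python
def read_varint(data: bytes, pos: int):
    # Decode a base-128 varint starting at pos; return (value, next_pos).
    result = shift = 0
    while True:
        byte = data[pos]
        result |= (byte & 0x7F) << shift
        pos += 1
        if not byte & 0x80:
            return result, pos
        shift += 7

def parse_varint_fields(data: bytes) -> dict:
    # Toy parser: assumes every field uses wire type 0 (varint).
    # For scalar fields, later occurrences overwrite earlier ones,
    # which is what makes concatenation act like an upsert.
    fields, pos = {}, 0
    while pos < len(data):
        key, pos = read_varint(data, pos)
        field_number, wire_type = key >> 3, key & 0x07
        assert wire_type == 0, "toy parser handles varint fields only"
        value, pos = read_varint(data, pos)
        fields[field_number] = value
    return fields

original = b"\x08\x2a"   # field 1 = 42
update = b"\x08\x07"     # field 1 = 7
assert parse_varint_fields(original) == {1: 42}
# Appending a serialized message "merges" it in; the update wins:
assert parse_varint_fields(original + update) == {1: 7}
```

So middleware can tack an updated field onto the end of an opaque serialized blob without ever parsing it, and the final consumer sees the updated value.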
On the other hand, 'newline delimited protobuf' is much less useful day-to-day than ndjson, as gRPC provides message streaming, which solves the problem of wanting to stream small elements of a long response (the general use case of ndjson in my experience). For on-disk storage of sequential protobufs (or any other data, really), you should be using something like riegeli [2], as it provides critical features like seek offsets, compression and corruption resiliency.
[1] - e.g. passing a Request message from some web server frontend, through request routers, logging, ACL and ratelimit systems, up to the actual service handling the request.