So I've been writing shell scripts for about two decades, about 75% of that time professionally. Parsing the unstructured, text-based output of utilities is not a problem for anyone with even a few weeks of training. Most of the `... | grep ... | cut ... | sed ... | awk ...` abominations the post laments can be replaced by a single informed call to `awk`, making everything a lot more elegant and concise.
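For instance (my own invented example, not one from the post), here's a typical multi-stage chain next to the single `awk` call that does the same job:

```sh
# A typical grep | cut chain: list users whose login shell is bash
grep '/bin/bash$' /etc/passwd | cut -d: -f1

# The same query as one awk call: filter on the shell field, print the user field
awk -F: '$7 == "/bin/bash" {print $1}' /etc/passwd
```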

Having JSON as an intermediate representation to work on instead is not going to save anyone. What we'd really need is for the output of all tool versions and variants on all platforms (all GNU/Linux distros, all the BSDs, all embedded Linux variants, all the commercial Unices, etc.) to be the same, all the time. That's not going to happen, so shell scripting is going to stay messy.

Also, as for my INTERACTIVE shell: anyone can try to pry free-form, text-based, semi-structured output from my cold, dead hands. JSON or YAML output might be an acceptable compromise between easy to parse and bearable for human consumption, but for my daily work I would rather have my tools make things easy for me, the human in the equation, not for some parsing logic that might not even (need to) exist. Shell scripting derives most of its value from the fact that I'm in its REPL of sorts all the time, so I can translate that familiarity into scripts and executables effortlessly, and I would not want that to go away. But I am rather certain it would, if JSON (or some other, more structured data interchange syntax) were adopted as the "universal" interface between UNIX tools.

I don't think the issue is that the output is hard to parse manually. The problem is that it's hard for someone else to read your ad-hoc parser years later and reason about what you did if they need to modify it.

Disclaimer: I am the author of the article and of `jc`.
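For context, `jc` wraps the text output of common tools in JSON so it can be queried with `jq` or similar. A minimal sketch (the parser name and field layout follow `jc`'s documented `dig` parser; verify against your version):

```sh
# Convert dig's text output to JSON, then pull the first answer's address
dig example.com | jc --dig | jq -r '.[0].answer[0].data'
```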

This is even more true for the ungodly long `jq` incantations that people write.

Look, I get it: the old way is ugly and not always easy to decipher, but at least it's shorter, and your chances of understanding it are better.

I've had both -- the classic piped chains of UNIX commands, and various JSON-producing tools piped to `jq`. The former were still easier to work with.
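To make the comparison concrete (my own example, using iproute2's `-j` JSON flag on Linux; the field names follow `ip -j addr`'s output):

```sh
# Classic text route: first IPv4 address on eth0
ip addr show eth0 | awk '/inet / {print $2; exit}'

# JSON route: the same query through jq
ip -j addr show eth0 | jq -r '.[0].addr_info[] | select(.family == "inet") | .local'
```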

Yes, I have seen those too! That's why I also wrote Jello, which is like `jq` but uses pure Python without the boilerplate. Python is nearly universal now and typically easy to read, though more verbose. `jq` is just as much a write-once tool as `awk` and `perl` for more complex queries; for simple attribute lookups, though, it's both terse and readable.
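A minimal sketch of what that looks like (the `_` variable holding the parsed input is per Jello's documentation; the sample query itself is my own):

```sh
# Query JSON with ordinary Python; `_` is the parsed document
ip -j addr show eth0 | jello '[a["local"] for a in _[0]["addr_info"] if a["family"] == "inet"]'
```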

This is why I wrote murex shell (https://github.com/lmorg/murex). It's an alternative $SHELL, so you'd use it in place of Bash or Zsh, but it's optimised for modern DevOps tools, which means JSON and YAML are first-class citizens.

Its syntax isn't 100% POSIX compatible, so there is some new stuff to learn, but it works with all the existing POSIX tools and is more readable than AWK and Perl, while also being terse enough for one-liners.
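As a rough sketch from memory of murex's builtins (`open` parses a file into a typed data structure and `[` indexes into it; treat this as an assumption and check the project docs for your version):

```sh
# Parse a JSON file and pull a field; murex infers the type from the extension
open users.json | [ 0 ] | [ name ]
```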