I like the idea of piping structured data between processes instead of streams of bytes, but that's what PowerShell is for. I don't think I've ever met another dev who likes PowerShell.

I'd actually like the ability to do either raw bytes (traditional) or structured data (the Nushell/PowerShell idea) on an opt-in basis. That would also allow a smoother transition, without needing to fully commit right off the bat. I've considered writing wrappers for the standard utilities in Bash/Zsh that accept and emit structured data as JSON (or maybe a denser serialization format that converts easily to JSON?) instead of a raw byte stream, so you could keep using regular old pipes (with a lot of `jq` getting called in between...). The "structured" versions of the utilities would follow some namespacing convention, such as a "struct_" prefix (or, optionally/aliased for brevity, "s_"), or (another naming idea I just had... oooh, I like this one) they would be boxed in brackets, so "[ls]" or "]ls[" would call "structured ls" (note that brackets without surrounding spaces are valid characters in command names).
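For what it's worth, a wrapper like that doesn't need much machinery. Here's a minimal sketch of a hypothetical `struct_ls` in Bash; the name, the JSON field names, and leaning on `jq` for the encoding are all just assumptions for illustration:

```
#!/usr/bin/env bash
# struct_ls (hypothetical): emit one JSON object per directory entry
# instead of ls's raw text columns. jq is used purely for safe JSON encoding.
set -euo pipefail

dir="${1:-.}"
for entry in "$dir"/*; do
  [ -e "$entry" ] || continue          # empty directory: glob didn't expand
  if [ -d "$entry" ]; then
    kind=dir;  size=0
  else
    kind=file; size=$(wc -c < "$entry")
  fi
  jq -cn --arg name "$(basename "$entry")" --arg type "$kind" --argjson size "$size" \
    '{name: $name, type: $type, size: $size}'
done
```

The "regular old pipes" part then works as hoped, e.g. `struct_ls /tmp | jq -r 'select(.type == "file") | .name'`.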

To handle the transition to/from structured data, one idea was to omit one of the brackets in these names. For example, "ls[" would emit structured data but accept an unstructured byte stream (well, assuming "ls" were a command that took stdin), and something like "]cat" would take structured data on stdin but emit raw data... "cat[" would take raw data and... interpret it as JSON? or something? and output that as structured data? I don't know, it still has to be fleshed out, but this could maybe work!
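One way to picture the closing-side adapter: it's really just a projection from structured records back down to plain lines before handing off to the ordinary command. A sketch, assuming the JSON schema from the hypothetical `struct_ls` above (the adapter name and the field it extracts are made up):

```
# "]wc -l", roughly: take structured records on stdin, flatten them back
# to raw bytes, then run the ordinary command on those bytes.
unstructure() {
  jq -r '.name'              # structured in -> one raw line per record out
}

struct_ls /tmp | unstructure | wc -l    # count entries the traditional way
```

The opening-side variant ("cat[") would be the inverse, perhaps something like `jq -R '{line: .}'`, which wraps each raw input line in a minimal object; that matches the "interpret it as JSON, or something" instinct.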

To get the JSON data back to a visual format like a table, we'd probably have to explicitly do what Nushell implicitly calls at the end of a pipeline when you don't provide it (I forgot the name of it).
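In Nushell that implicit final step is, I believe, the `table` command. Without a shell doing it for you, a rough equivalent can be faked with jq plus the usual text tools, again assuming the hypothetical `struct_ls` output:

```
# Render the JSON records back into an aligned table for human eyes.
struct_ls /tmp \
  | jq -r '[.name, .type, .size] | @tsv' \
  | column -t
```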

Anyway, I haven't even begun a POC, but it's an idea I had. Anyone else like it?

This is similar to how my shell works. It still just passes bytes around, but additionally passes information about how those bytes could be interpreted. A schema, if you will. So it works as cleanly with POSIX / GNU / et al. tools as it does with fancy JSON, YAML, CSV and other document formats.

It basically sits somewhere between PowerShell and Bash: typed pipelines like PowerShell, but without sacrificing familiarity with all the CLI commands you already use day in and day out.

https://github.com/lmorg/murex

As an aside, I’m about to drop a massive update in the next few days that will make the shell even more intuitive to use.