Amazing work! Between this, Alacritty [1], and coreutils [2], we're getting pretty close to a plausible all-Rust CLI stack.

While Cicada is pretty clearly modeled on the "old" generation of shells (sh, Bash, Zsh, etc.), one has to wonder what a more modern shell might look like if it addressed some of the problems of its predecessors: pipes that are essentially text-only, poor composability (see `cut` or `xargs`), and terribly obscure syntax.

PowerShell seems to have been a flop outside of the Windows ecosystem, and between its overly elaborate syntax and COM baggage it was perhaps rightfully passed over, but it had the right idea. Imagine a future where we're piping structured objects between programs, scripting in a modern language, and have IntelliSense-quality auto-completion for every command.

---

[1] https://github.com/jwilm/alacritty

[2] https://github.com/uutils/coreutils

My structured objects aren't your structured objects. In that use case, why not use a dedicated program (like, say, a Python/JS/Clojure interpreter) to handle more complex pipes and keep the shell-level primitives... well, primitive.
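For example, a tiny Python filter dropped into the middle of a pipe can handle the structured part while the shell itself stays primitive. A toy sketch (the JSON-lines convention and the `name`/`size` fields are my invention for illustration):

```python
import json
import sys

def filter_records(lines, min_size):
    """Keep the names of JSON-per-line records whose size meets a threshold."""
    for line in lines:
        rec = json.loads(line)
        if rec.get("size", 0) >= min_size:
            yield rec["name"]

if __name__ == "__main__":
    # e.g.  some-producer | python filter.py  (threshold hard-coded for the sketch)
    for name in filter_records(sys.stdin, 1024):
        print(name)
```

The shell only ever sees lines of text; the interpreter in the middle is what understands the structure.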

Text is compatible with all systems, past and present, and can be used to model more complex objects. Let's not add features to the core on a "why not" please.

The so-called "plain text formats" are also just representations of complex objects. Often these formats are ad-hoc and hard to parse correctly. Multiple text files in a file system hierarchy (e.g. /etc) are also just nested structures.

So in principle, treating those as complex object structures is the right way to go. Also, I believe this is the idea behind D-Bus and similar modern Unix developments.

However, what's hard is providing good tooling with a syntax that is simple to understand:

* Windows Registry and PowerShell show how not to do it.

* The XML toolchain demonstrates that any unnecessary complexity in the meta-structure will haunt you through every bit of the toolchain (querying, validation/schema, etc.).

* "jq" goes somewhat in the right direction for JSON files, but still hasn't found wide adoption.

This appears to be a really hard design issue. Also, while deep hierarchies are easier to process by scripts, a human overview is mostly achieved by advanced searching and tags rather than hierarchies.

A shell needs to accommodate both, but maybe we really just need better command line tools for simple hierarchical processing of arbitrary text file formats (ad-hoc configs, CSV, INI, JSON, YAML, XML, etc.).
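As a sketch of what one such tool could do, here's the INI case in Python: lift the ad-hoc format into a nested structure so generic JSON tooling (jq and friends) applies downstream. Stdlib only; a minimal illustration, not a general converter:

```python
import configparser
import json

def ini_to_dict(text):
    """Parse an INI document into a nested dict: {section: {key: value}}."""
    cp = configparser.ConfigParser()
    cp.read_string(text)
    return {section: dict(cp[section]) for section in cp.sections()}

def ini_to_json(text):
    """Re-serialize as JSON, so JSON-aware tools can process it downstream."""
    return json.dumps(ini_to_dict(text), indent=2)
```

Note that everything comes out as strings; deciding whether `port = 5432` is a number is exactly the kind of ambiguity these ad-hoc formats leave open.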

Powershell does two things right: it uses structured output, and it separates producing the data from rendering the data.

It also does one thing wrong: it uses objects (i.e. the stuff that carries behavior, not just state). This ties it to a particular object model, and the framework that supports that model.
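Both points combined, sketched in Python: the producer returns plain data (state only, no behavior, no object model), and rendering is a separate step any consumer can swap out. The process list is obviously invented for illustration:

```python
def list_processes():
    """Producer: returns plain data, not objects carrying behavior."""
    return [
        {"pid": 1, "name": "init", "rss_kb": 1024},
        {"pid": 42, "name": "sshd", "rss_kb": 4096},
    ]

def render_table(rows):
    """Renderer: formatting is a separate concern from producing the data."""
    cols = list(rows[0])
    widths = {c: max(len(c), *(len(str(r[c])) for r in rows)) for c in cols}
    header = "  ".join(c.ljust(widths[c]) for c in cols)
    body = ["  ".join(str(r[c]).ljust(widths[c]) for c in cols) for r in rows]
    return "\n".join([header] + body)
```

Because the producer's output is plain data, another consumer can just as easily serialize it, filter it, or sort it, without any shared framework between producer and consumer.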

What's really needed is something simple that's data-centric, like JSON, but with a complete toolchain to define schemas and perform transformations, like XML (but without the warts and overengineering).

> What's really needed is something simple that's data-centric, like JSON, but with a complete toolchain to define schemas and perform transformations, like XML (but without the warts and overengineering).

What would it be? Is there a real alternative to XML with those features out there? I don't think so.

If you wanted the features of XML and designed it from scratch, I'm quite sure you would end up with the complexity of XML again.

Implementing some piece of functionality usually yields the same level of complexity regardless of how you implement it (given that none of the implementations is outright stupid, of course).

> If you wanted the features of XML and designed it from scratch, I'm quite sure you would end up with the complexity of XML again.

I don't think so. The problem with the XML stack is that it has been designed with some very "enterprisey" (for the lack of better term) scenarios in mind - stuff like SOAP. Consequently, it was all design by committee in the worst possible sense of the word, and it shows.

To see what I mean, take a look at XML Schema W3C specs. That's probably the worst part of it, so it should be readily apparent what I mean:

https://www.w3.org/TR/xmlschema11-1/

https://www.w3.org/TR/xmlschema11-2/

The other problem with XML is that it's rooted in SGML, and inherited a lot of its syntax and semantics, which were designed for a completely different use case - marking up documents. Consequently, the syntax is overly verbose, and some features are inconsistent for other scenarios - for example, if you use XML to describe structured data, when do you use attributes, and when do you use child elements? Don't forget that attributes are semantically unordered in XDM, while elements are ordered, but also that attributes cannot contain anything but scalar values and arrays thereof.
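A toy illustration of that attribute-vs-element ambiguity: the same record can be encoded either way, and a consumer that wants to be robust ends up checking both. Python stdlib, with an invented `user` record:

```python
import xml.etree.ElementTree as ET

# The same data, modeled two equally "valid" ways:
as_attrs = ET.fromstring('<user id="7" name="alice"/>')
as_elems = ET.fromstring('<user><id>7</id><name>alice</name></user>')

def user_id(elem):
    """A robust consumer has to accept both conventions."""
    if "id" in elem.attrib:
        return elem.attrib["id"]
    return elem.findtext("id")
```

JSON sidesteps this particular problem by having only one way to attach a named value to a record.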

Oh, and then don't forget all the legacy stuff like DTD, which is mostly redundant in the face of XML Schema and XInclude, except it's still a required part of the spec.

I guess the TL;DR version of it is that XML today is kinda like Java - it was there for too long, including periods when our ideas of best practices were radically different, and all that was enshrined in the design, and then fossilized in the name of backwards compatibility.

One important takeaway from XML - why it was so successful, IMO - is that having a coherent, tightly bound spec stack is a good thing. For example, with XML, when someone is talking about schemas, you can pretty much assume it's XML Schema by default (yes, there's also RELAX NG, but I think calling it a schema is a misnomer, because it doesn't delve much into the semantics of what it describes - it's more of a grammar definition language for XML). To transform XML, you use XSLT. To query it, you use XPath or XQuery (which is a strict superset). And so on. With JSON, there's no such certainty.

The other thing that the XML stack didn't quite see fully through, but showed that it could be a nice thing, is its homoiconicity: e.g. XML Schema and XSLT being XML. Less so with XPath and XQuery, but there they had at least defined a canonical XML representation for it, which gives you most of the same advantages. Unfortunately, with XML it was just as often a curse as it was a blessing, because of how verbose and sometimes awkward its syntax is - anyone who wrote large amounts of XSLT especially knows what I'm talking about. On the other hand, at least XML had comments, unlike JSON!

Hey, maybe that's actually the test case? A data representation language must be concise enough, powerful enough, and flexible enough to make it possible to use it to define its own schema and transformations, without it being a painful experience, while also being simple enough that a single person can write a parser for it in a reasonable amount of time.

> A data representation language must be concise enough, powerful enough, and flexible enough to make it possible to use it to define its own schema and transformations, without it being a painful experience, while also being simple enough that a single person can write a parser for it in a reasonable amount of time.

Seems like a standardized S-expression format would fit the bill.

You could even try to make it work with existing XML tools by specifying a way to generate XML SAX events from the S-expressions.
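A minimal sketch of that bridge in Python, assuming a nested-tuple encoding `(tag, child, ...)` where bare strings are text nodes - the encoding is my invention for the sketch, not any standard S-expression-to-XML mapping:

```python
def sexpr_to_sax(node, handler):
    """Walk an S-expression as nested tuples and emit SAX-style callbacks."""
    tag, children = node[0], node[1:]
    handler.startElement(tag, {})
    for child in children:
        if isinstance(child, tuple):
            sexpr_to_sax(child, handler)   # nested element
        else:
            handler.characters(child)      # text node
    handler.endElement(tag)

class Recorder:
    """Stand-in for a SAX ContentHandler that just logs the events."""
    def __init__(self):
        self.events = []
    def startElement(self, name, attrs):
        self.events.append(("start", name))
    def characters(self, text):
        self.events.append(("text", text))
    def endElement(self, name):
        self.events.append(("end", name))
```

Anything downstream that consumes SAX events (validators, serializers, XSLT engines with a SAX front end) could then run unchanged on the S-expression data.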

I'd prefer a format that distinguishes between sequences and associative arrays a bit more clearly. You can do that with S-exprs with some additional structure imposed on top, but then that gets more verbose than it has to be.

JSON is actually pretty decent, if only it had comments, richer data types, and some relaxed rules around syntax (e.g. allow trailing commas and unquoted keys).
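The comments-and-trailing-commas part of that wishlist is small enough to sketch as a preprocessing pass in Python. A toy only: these regexes don't protect `//` or `,` occurring inside string literals, and unquoted keys aren't handled at all:

```python
import json
import re

def loads_relaxed(text):
    """Tolerate // comments and trailing commas before handing off to json.loads."""
    text = re.sub(r"//[^\n]*", "", text)        # strip // to end of line
    text = re.sub(r",\s*([}\]])", r"\1", text)  # drop comma before } or ]
    return json.loads(text)
```

Doing this properly requires a real tokenizer, which is roughly what JSON5 and HJSON implementations are.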

Maybe something like EDN? https://github.com/edn-format/edn