What does HackerNews think of murex?

Bash-like shell and scripting environment with advanced features designed for safety and productivity (eg smarter DevOps tooling)

Language: Go

#17 in Bash
#42 in Go
#10 in JSON
#4 in JavaScript
#23 in Shell
#18 in SQL
#25 in Terminal
Stability is a problem because a lot of these shells don’t offer any guarantees against breaking changes.

My own shell, https://github.com/lmorg/murex, is committed to backwards compatibility, but even here occasional changes are made that might break older scripts. Though I push back on such changes as much as possible, to the extent that most of my scripts from 5 years ago still run unmodified.

In fact I’ve started using some of those older scripts as part of the automated test suites to check against accidental breaking changes.

This is similar to how my shell works. It still just passes bytes around but additionally passes information about how those bytes could be interpreted. A schema if you will. So it works as cleanly with POSIX / GNU / et al tools as it does with fancy JSON, YAML, CSV and other document formats.

It basically sits somewhere between Powershell and Bash: typed pipelines like Powershell but without sacrificing familiarity with all the CLI commands you already use day in and day out.
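As a rough sketch of what that schema-aware pipeline looks like in practice (using the `format` builtin shown in other comments here; exact output depends on your system, so none is shown):

```
# `ps` is a plain POSIX tool, but murex can re-type its tabulated
# output and re-serialise it as JSON further down the pipeline
ps aux | format json
```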

https://github.com/lmorg/murex

As an aside, I’m about to drop a massive update in the next few days that will make the shell even more intuitive to use.

It's possible without any kernel changes. My shell (https://github.com/lmorg/murex) already supports doing that.

The way it works is it uses fd3 to communicate schema information, so it can natively support all the existing "dumb" pipes without any modification, but any new tools can be written to send objects instead (albeit byte-encoded).

It's not as elegant as PowerShell sending .NET objects natively, but then PowerShell doesn't work with existing CLI tools natively (it needs wrapper scripts to convert them into PowerShell commands). Whereas my shell is fully backwards compatible while still supporting a suite of additional functionality too.

> getting the shell quoting hell right

Shameless plug coming, but this has been a pain point for me too. I found the issue with quotes (in most languages, but particularly in Bash et al) is that the same character is used to close the quote as is used to open it. So in my own shell I added support to use parentheses as quotes, in addition to the single and double quotation ASCII symbols. This then allows you to nest quotation marks.

https://murex.rocks/docs/parser/brace-quote.html

You also don’t need to worry about quoting variables, as each variable is expanded into a single argv[] item rather than being expanded out to a command line and then having any spaces split into new argv[]s (or in layman’s terms, variables behave like you’d expect variables to behave).
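A minimal sketch of the nested quoting idea (the strings are purely illustrative; see the brace-quote docs linked above for the exact rules):

```
# parentheses open and close with different characters, so quotes can nest
echo (He said "hello" and then (said some more))
```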

https://github.com/lmorg/murex

Shameless plug, but the problems you've described are exactly the problems I was looking to solve with my own shell (https://murex.rocks / https://github.com/lmorg/murex)

> 100% reliable argument passing. That is, when I run `system("git", "clone", $url);` in Perl, I know with exact precision what arguments Git is going to get, and that no matter what weirdness $url contains, it'll be passed down as a single argument. Heck, make that mandatory.

Variables are parsed as tokens so they're passed to the parameters whole, regardless of whether they contain white space or not. So you can still use the Bash terseness of parameters (eg `git clone $url`) but $url works in the same way as `system("git", "clone", $url);` in Perl.
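A sketch of that behaviour (the URL is illustrative, and I'm assuming murex's `set` builtin for the assignment):

```
set url = "https://example.com/repo name with spaces"
git clone $url    # $url is passed to git as a single argument
```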

> 100% reliable file iteration. I want to do a "for each file in this directory" in a manner that doesn't ever run into trouble with spaces, newlines or unusual characters.

Murex is inspired by Perl in that $ is a scalar and @ is an array. So if you use the `f +f` builtin (to return a list of files), it returns a JSON array. From there you can use @ to expand that array with each value being a new parameter, eg

  rm -v @{ f +f } # delete all files
or use $ to pass the entire array as a JSON string. Eg

  echo ${ f +f }
  # you could just run `f +f` without `echo` to get the list.
  # This is just a contrived example of passing the array as a string.
Additionally there are lots of tools that are natively aware of arrays and will operate on them. Much like how `sed`, `grep`, `sort` et al treat documents as lists, murex supports lists as JSON arrays. Or arrays in YAML, TOML, S-expressions, etc.

So you can have code that looks like this:

  f +f | foreach file { echo $file }
> No length limits. If I'm processing 10K files, I don't want to run into the problem that the command line is too long.

This is a kernel limit. I don't think there is any way to overcome this without using iteration instead (and suffering the performance impact from that).
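If you do hit ARG_MAX, a sketch of the iteration fallback using builtins from earlier in this thread:

```
# one `rm` per file: slower, but never exceeds the kernel's argv limit
f +f | foreach file { rm -v $file }
```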

> Excellent path parsing. Such as filename, basename, canonicalization, finding the file extension and "find the relative path between A and B".

There are a number of ways to query files and paths in murex:

- f: This returns files based on metadata, so it will return files, directories, symlinks, etc depending on the flags you pass. eg `+f` would include files, `+d` would include directories, and `-s` would exclude symlinks.

- g: Globbing. Basically `*` and `?`. This can run as a function rather than being auto-expanded. eg `g *.txt` or `rm -v @{ g *.txt }`

- rx: Like globbing but using regexp. eg `rx '\.txt$'` or `rm -v @{ rx '\.txt$' }` - this example looks terrible but using rx does sometimes come in handy if you have more complex patterns than a standard glob could support. eg `rm -v @{rx '\.(txt|rtf|md|doc|docx)$'}`

The interactive shell also has an fzf-like integration built in. So you can hit ctrl+f and then type a regexp pattern to filter the results. This means if you need to navigate through a complex source tree (for example) to a specific file (eg ./src/modules/example/main.c) you could just type `vi ^fex.*main` and you'd automatically filter to that result. Or even just type `main` and only see the files in that tree with `main` in their name.

> Good error handling and reporting / Excellent error reporting. I don't want things failing with "Command failed, aborted". I want things to fail with "Command 'git checkout https://....' exited with return code 3, and here's for good measure the stdout and stderr even if I redirected them somewhere".

A lot of work has gone in here:

- `try` / `trypipe` blocks supported

- `if` and `while` blocks check the exit code. So you can do

  if { which foobar } else {
    err "foobar does not exist!"
  }

  # 'then' and 'else' are optional keywords for readability in scripting. So a one liner could read:
  # !if {which foobar} {err foobar does not exist!}
- built in support for unit tests

- built in support for watches (IDE feature where you can watch the state of a variable)

- unset variables error by default

- empty arrays will error by default when passed as parameters in the next release (hopefully coming this week)

- STDERR is highlighted red by default (can be disabled if that's not to your taste) so you can clearly see any errors if they're muddled in with STDOUT.

- non-zero exit numbers automatically raise an error. All errors return a line and cell number so you can find the exact source of the error in any script
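As a sketch of the `try` / `trypipe` bullet (my reading of the semantics: `trypipe` aborts the block as soon as any stage of a pipeline fails, rather than only checking the last command):

```
trypipe {
    command1 | command2 | command3
    out "only reached if every stage above succeeded"
}
```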

Plus lots of other stuff that's covered in the documentation

> Easy capture of stdout and stderr, at the same time. Either together or individually, as needed.

Murex handles named pipes a little differently: they're passed as parameters inside angle brackets, <>. STDERR is referenced with a preceding exclamation mark. eg a normal command will appear internally as:

  command1 <out> <!err> parameter1 parameter2 etc | command2 <out> <!err> parameter1 parameter2 etc
(You don't need to specify <out> nor <!err> for normal operation.)

So if you want to send STDERR off somewhere for later processing you could create a new named pipe. eg

  pipe example # creates a new pipe called "example"
  command1 <!example> parameter1 parameter2 | command2 <!null> parameter1 parameter2
This would say "capture the STDERR of command1 and send it to a new pipe, but dump the STDERR of command2".

You can then later query the named pipe:

   <example> | grep "error message"
> Give me helpers for common situations. Eg, "Recurse through this directory, while ignoring .git and vim backup files". Read this file into an array, splitting by newline, in a single line of code. It's tiresome to implement that kind of thing in every script I write. At the very least it should be simple and comfortable.

This is where the typed pipelines of murex come into their own. It's like using `jq` but built into the shell itself, and it works transparently with multiple different document types. There's also an extensive library of builtins for common problems, eg `jsplit` will read STDIN and output an array split based on a regexp pattern. So your example would be:

  cat file | jsplit \n
> That's the kind of thing I care about for shell scripting. A better syntax is nice, but actually getting stuff done without having to work around gotchas and issues is what I'm looking for.

I completely agree. I expect this shell I've created to be pretty niche and not to everyone's tastes. But I wrote it because I wanted to be a more productive sysadmin ~10 years ago, and since moving into DevOps I've found it invaluable. It's been my primary shell for around 5 years and every time I run into a situation where I think "I wish I could do this easily", I just add it. The fact that it's a typed pipeline makes it really easy to add context-aware features, like SQL query support against CSV files.

https://github.com/lmorg/murex

(Don't be afraid to self promote!)

One obvious benefit over what's suggested in the article is that you can use it interactively first, with autocomplete and such goodies, and transition smoothly to a script later.

A quick Google for "murex cli" yields: https://murex.rocks and https://github.com/lmorg/murex, looks like a promising tool

To those who are wishing something would fill this void, I'm working on something that does overlap with this. I've already matured the shell somewhat over the last ~8 years. That shell can inline images, manage and manipulate different document formats and convert between them, understanding data types and structures like Powershell but working natively with POSIX CLI tools (which Powershell cannot).

Last year I also started playing around with WASM and building a terminal emulator in it. The idea being that if I can manage the terminal emulator then I can add more features / a friendlier UI/UX that might complement non-CLI users who have been tasked with using the terminal; while power users still have access to the features of the $SHELL in their preferred terminal emulator. There is definitely overlap between what I'm doing and TermKit.

The idea of doing it this way means I'm still leveraging the foundations already in common use. Opinions about whether they're ready to be deprecated aside, it's the standard everyone already uses, so trying to displace that would be an even bigger hurdle than just trying to displace Bash. But at least if the standards are reused and extended, people can pick and choose the items that work for them rather than being forced into an "all or nothing" type scenario.

I'm not far down the terminal emulator route yet and it's definitely a task that is bigger than the time I personally have to commit to it. So I welcome any time and/or support anyone is willing to donate.

I should write a blog post about this at some point too....

The shell (in its current form): https://github.com/lmorg/murex (docs on Github too but also accessible via https://murex.rocks )

Shameless self promotion but this is already possible using my shell https://github.com/lmorg/murex

It works with YAML, TOML, JSON, jsonlines, CSV, and regular shell command output. You can import from any data format, convert to any other data format, and even run inline SQL relational lookups too.

Since SQL inlining literally just imports the data into an in-memory sqlite3 database, it means you can do your JSON import into sqlite3 using this shell. And in fact I did literally this last month when using a cloud service RESTful API which returned two different JSON docs that needed to be restructured into two different tables, with a relational query then run between them.
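As a sketch (file and field names hypothetical), the same `select` syntax shown elsewhere in this thread works against imported JSON too:

```
# imports the JSON into the in-memory sqlite3 backend, then queries it
open users.json | select department, count(*) GROUP BY department
```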

If it is any help, I've written a $SHELL in Go and have been using it as my primary shell for around 5 years now. So while it's still largely beta, it is pretty stable these days.

It's intentionally not designed to be a drop-in replacement for Bash though. I saw no point replicating all of the problems with Bash when writing a new shell. So it does a lot of things differently -- better, in most cases -- but that does also create a learning curve (I mean, you're literally learning a new language).

https://github.com/lmorg/murex

Powershell is available on Linux. Albeit it doesn't always play too nicely with existing CLI tools.

There are also other typed shells which aren't Powershell but have behaviours similar to Powershell, and which play a lot more nicely with the wider Linux / UNIX ecosystem:

- murex https://github.com/lmorg/murex

- Elvish https://github.com/elves/elvish

- NGS https://github.com/ngs-lang/ngs

I'm the maintainer of murex so happy to answer any questions on that shell here. I know the maintainers of Elvish and NGS pop on HN too.

That paper misses a lot. Including:

- murex https://github.com/lmorg/murex

- Elvish https://github.com/elves/elvish

- NGS https://github.com/ngs-lang/ngs

- Powershell

- And any language REPLs turned into shells, such as Python, LISP, Lua and others.

The author did say there is more to follow though, so maybe the scope of this article was intentionally a little narrow.

Such a shell already exists

https://github.com/lmorg/murex

You’d have to learn a new shell syntax but at least it’s compatible with existing CLI tools (which Powershell isn’t)

> I've always had trouble getting `for` loops to work predictably, so my common loop pattern is this:

for loops were exactly the pain point that led me to write my own shell > 6 years ago.

I can now iterate through structured data (be it JSON, YAML, CSV, `ps` output, log file entries, or whatever) and each item is pulled intelligently rather than having to consciously consider a tonne of dumb edge cases like "what if my file names have spaces in them"

eg

    » open https://api.github.com/repos/lmorg/murex/issues -> foreach issue { out "$issue[number]: $issue[title]" }
    380: Fail if variable is missing
    379: Backslashes and code comments
    378: Improve testing facility documentation
    377: v2.4 release
    361: Deprecate `swivel-table` and `swivel-datatype`
    360: `sort` converts everything to a string
    340: `append` and `prepend` should `ReadArrayWithType`

Github repo: https://github.com/lmorg/murex

Docs on `foreach`: https://murex.rocks/docs/commands/foreach.html

Shameless plug, but so does my shell, https://github.com/lmorg/murex

  $ open test/example.csv | format generic
  Login email         Identifier  One-time password  Recovery code  First name  Last name  Department   Location
  [email protected]  9012        12se74             rb9012         Rachel      Booker     Sales        Manchester
  [email protected]   2070        04ap67             lg2070         Laura       Grey       Depot        London
  [email protected]   4081        30no86             cj4081         Craig       Johnson    Depot        London
  [email protected]    9346        14ju73             mj9346         Mary        Jenkins    Engineering  Manchester
  [email protected]   5079        09ja61             js5079         Jamie       Smith      Engineering  Manchester
My shell also aims to have closer compatibility with POSIX (albeit it's not a POSIX shell) so you can use all the same command line tools you're already familiar with too (which, for me at least, was the biggest hurdle in my adoption of PowerShell).

It also supports other file types out of the box too. eg jsonlines

  $ open test/example.csv | format jsonl
  ["Login email","Identifier","One-time password","Recovery code","First name","Last name","Department","Location"]
  ["[email protected]","9012","12se74","rb9012","Rachel","Booker","Sales","Manchester"]
  ["[email protected]","2070","04ap67","lg2070","Laura","Grey","Depot","London"]
  ["[email protected]","4081","30no86","cj4081","Craig","Johnson","Depot","London"]
  ["[email protected]","9346","14ju73","mj9346","Mary","Jenkins","Engineering","Manchester"]
  ["[email protected]","5079","09ja61","js5079","Jamie","Smith","Engineering","Manchester"]
Shameless plug but this is exactly what I'm working on with my shell https://github.com/lmorg/murex

It's designed to work with JSON, YAML, CSV, etc so you don't need to remember jq syntax for JSON, awk for CSV, something else for S-Expressions, and so on and so forth. The same commands (like grep) work with all formats and are aware of the format.

It can work with tmux to do stuff like: if you hit `F1` it will open up a pane to the right with the man page of the command you're typing. So it's been built around taking UI lessons from IDEs.

But above all else, it's a $SHELL so it works fine with all your existing CLI core utils. Which is more than can be said for the likes of Powershell.

There is a learning curve though because it's not POSIX syntax. I've tried to keep some POSIX-isms where they've made sense but redesigned things where POSIX was a little counter-intuitive (though that's a subjective opinion).

I've been using it as my primary shell for about 4 years now.

This is why I wrote murex shell (https://github.com/lmorg/murex), it's an alternative $SHELL, so you'd use it in place of Bash or Zsh, but it's optimised for modern DevOps tools. Which means JSON and YAML are first class citizens.

Its syntax isn't 100% POSIX compatible so there is some new stuff to learn, but it works with all the existing POSIX tools and is more readable than AWK and Perl while also being terse enough to write one-liners.

There'll always be multiple ways to skin the proverbial cat.

Shameless plug but I've written my own $SHELL called `murex`, as I kept running into pain points with Bash as a DevOps engineer. The shell doesn't have `tree` built in but it does have FZF-like navigation built in.

https://github.com/lmorg/murex

I've been using it as my primary shell for a few years now and, while I'm not going to pretend that it isn't beta, it does work. However it's not POSIX and some of the design decisions might rub people the wrong way (given how opinionated people's workflows are). But if you're curious then check it out.

Murex does this. eg

Take a plain text table, convert it into an SQL table and run an SQL query:

  » ps aux | select USER, count(*) GROUP BY USER
  USER                   count(*)
  _installcoordinationd  1
  _locationd             4
  _mdnsresponder         1
  _netbios               1
  _networkd              1
  _nsurlsessiond         2
  _reportmemoryexception 1
  _softwareupdate        3
  _spotlight             5
  _timed                 1
  _usbmuxd               1
  _windowserver          2
  lmorg                  349
  root                   134
The builtins usually print human-readable output when STDOUT is a TTY, or JSON (or JSONlines) when STDOUT is a pipe.

  » fid-list:
    FID   Parent    Scope  State         Run Mode  BG   Out Pipe    Err Pipe    Command     Parameters
    590        0        0  Executing     Normal    no   out         err         fid-list    (subject to change)

  » fid-list: | cat
  ["FID","Parent","Scope","State","RunMode","BG","OutPipe","ErrPipe","Command","Parameters"]
  [615,0,0,"Executing","Normal",false,"out","err",{},"(subject to change) "]
  [616,0,0,"Executing","Normal",false,"out","err",{},"cat"]
and you can reformat to other data types, eg

  » fid-list: | format csv
  FID,Parent,Scope,State,RunMode,BG,OutPipe,ErrPipe,Command,Parameters
  703,0,0,Executing,Normal,false,out,err,map[],(subject to change)
  704,0,0,Executing,Normal,false,out,err,map[],csv
and you can query data within those data structures using tools that are aware of that structural format. Eg Github's API returns a JSON object and we can filter through it to return just the issue IDs and titles:

  » open https://api.github.com/repos/lmorg/murex/issues | foreach issue { printf "%2s: %s\n" $issue[number] $issue[title] }
  348: Potential regression bug in `fg`
  347: Version 2.2 Release
  342: Install on Fedora 34 fails (issue with `go get` + `bzr`)
  340: `append` and `prepend` should `ReadArrayWithType`
  318: Establish a testing framework that can work against the compiled executable, sending keystrokes to it
  316: struct elements should alter data type to a primitive
  311: No autocompletions for `openagent` currently exist
  310: Supprt for code blocks in not: `! { code }`
  308: `tabulate` leaks a zero length string entry when used against `rsync --help`

Source: https://github.com/lmorg/murex

Website: https://murex.rocks

This is the approach murex tries to take. It breaks POSIX compliance without remorse but retains compatibility wherever practical. This means it can have some of the flexibilities of Powershell (typed pipelines), the flexibilities of IDEs (events, better auto-completions, etc) but still works with all of your everyday command line tools.

My ultra long term aim is to integrate the shell into a media rich terminal emulator so using the command line will have the speed and precision of a TUI, the power of an IDE but the rich content of a GUI.

I'm a loooong way off achieving that but the shell is already usable and has been my primary shell for ~4 years now. And I welcome any and all feedback on the current and any future builds.

At some point I'll publish a paper with my ideas.

https://github.com/lmorg/murex

> Command line programs are black boxes (they could be pure bash, a compiled C++ binary, an npm package, etc), which means there is no programmatic standard for extracting the command set.

You say that but it's actually an easier problem to solve than it first appears. Both my shell, murex[0], and Fish[1] parse man pages to provide automatic autocompletions for flags.

I also have a secondary parser for going through `--help` (et al) output for programs that don't have a man page. This isn't something that is enabled by default (because of the risk of unintended side effects) but it does work very effectively.

The way I look at it, though, is you want to cover as many things automatically as you can, but there will always be instances where you might want to manually fine-tune the completions. For that, I've also defined a vastly simplified autocompletion description format. It's basically a JSON schema that covers 99% of use cases out of the box and supports shell code being included to cover the last 1% of edge cases.

This means you can write autocompletions to cover common usecases and have the shell automatically fill in any gaps you might have missed.
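Purely as an illustrative sketch (the command name and keys here are hypothetical, not the documented schema), such a JSON description might look something like:

```
# declare flag completions for a hypothetical `foobar` command
autocomplete set foobar { [{
    "Flags": [ "--verbose", "--output", "--help" ]
}] }
```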

[0] https://github.com/lmorg/murex

[1] https://fishshell.com

I'm guessing you're on a phone? The desktop experience is better because you get links on the left, but I haven't yet worked out how to replicate that on mobile.

Apologies for the frustration there. The project about page is https://murex.rocks and the content there mirrors the README on the git repo which can be found at https://github.com/lmorg/murex

This is something I've been working on with my shell, https://github.com/lmorg/murex

The problem isn't so much finding a way of passing metadata but how do you retain compatibility with existing GNU / UNIX tools? It would be easy to create an entirely new shell (like Powershell) that throws out backwards compatibility but that wouldn't be very practical on Linux/UNIX when so much of it is powered by old POSIX standards.

I addressed this by having a wrapper around pipes. Anything that connects out to a classic executable is a traditional POSIX byte stream with no type data sent. Anything sent to a command that is written to support my shell can make use of said meta data.

As for handling graphical components, that's a little easier. Many command line programs already alter their behaviour depending on whether STDOUT is a TTY or not. I take things a little further and have a media handler function, called `open`, that inlines content where it can (it will render images in the terminal), but if STDOUT isn't a TTY then it will pipe the raw data instead.
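A sketch of that behaviour (file name illustrative):

```
open photo.jpg            # STDOUT is a TTY: the image renders inline
open photo.jpg | wc -c    # STDOUT is a pipe: raw bytes are passed through
```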

It's worth adding mine to the list too:

[4]: https://github.com/lmorg/murex

Superficially it's quite similar to Elvish (purely by coincidence) but it's aimed at local machine use (eg by developers and devops engineers), so it's as much inspired by IDEs as it is by shells.

I've written one specifically to solve if and looping: https://github.com/lmorg/murex (docs: https://murex.rocks)

Others I've not used are Oil shell, which aims to be a better Bash, and Elvish, which has a heck of a lot of similarities with my own project (and is arguably more mature too). There are REPL shells in LISP, Python and all sorts too.

So there are options out there if you want a $SHELL and are happy to break from POSIX and/or Bash compatibility.

`if` and iterations were two things that annoyed me about Bash enough to write my own $SHELL. It's not a transpiler though, so I can't realistically use it for production services. However, for quickly hacking stuff together for testing / personal use it's been a life saver, due to having the same prototyping speed as Bash but without the painful edge cases.

Not suggesting it's suitable for everyone though. But it's certainly solved a lot of problems for me.

https://github.com/lmorg/murex

I've spent the last ~5 years working on a cross platform $SHELL and scripting language that sits somewhere between Bash, Fish and Windows Powershell.

The idea being it has UX improvements and sane defaults like fish (eg man page parsing for better auto completions), it supports structured data types like Powershell but yet still works fine with traditional POSIX byte streams like Bash.

I've also worked hard on the syntax to try and keep it as familiar to POSIX users as I can for ease of use, while throwing out POSIX where it's counter-intuitive. Likewise, the syntax tries to balance terseness (since the REPL is a write-many, read-once environment) with readability (to make shell scripts less cryptic).

Target audience is basically myself but I think this fits anyone who spends a lot of time in the terminal using sysadmin or developer tools. Particularly DevOps tooling which are often JSON / YAML / etc heavy.

This is a personal project but I'm very much open to feedback, suggestions, feature requests, pull requests, etc...

https://github.com/lmorg/murex

Actually there are several shells out there that fix that problem and still support existing UNIX tools too (which Powershell doesn't play nice with).

My own shell, https://github.com/lmorg/murex does this by passing type information along with the byte stream. So _murex_ aware tools can have structured data passed and POSIX tools can fall back to byte streams. Best of both worlds.

The problem, however, is that as long as Bourne Shell and Bash are installed everywhere, people will write scripts for it. This is less about the popularity of UNIX tools and more about the ubiquity of them (though the two points aren't mutually exclusive).

As long as you re-serialise that data when piping to coreutils (etc) you shouldn't have an issue.

This is what my shell (https://github.com/lmorg/murex) does. It defaults to using JSON as a serialisation format (that's how arrays, maps, etc are stored, how spaces are escaped, etc) but it can re-serialise that data when piping into executables which aren't JSON aware.

Other serialisation formats are also supported, such as YAML, CSV and S-Expressions. I had also considered adding support for ASCII records, but it's not a use case I personally run into (unlike JSON, YAML, CSV, etc) so I haven't written a marshaller to support it yet.

Shameless plug but this is what I wrote murex for. Murex is a "UNIX" shell that keeps enough similarities with POSIX that you can use it like a Bash REPL with (hopefully) minimal disruption, but it breaks from POSIX compatibility where it makes sense. And one significant area where it does break compatibility is how it handles structured data formats like JSON, S-Expressions, CSVs, and other tabulated data (to name a few).

It's designed from the ground up to support object manipulation while still retaining compatibility with the UNIX pipeline.

I do this via a suite of builtin tools that are aware of structured data formats (primarily because that information is passed down the pipeline as a data-type), but it still falls back to a normal pipeline when forking an external executable.

https://github.com/lmorg/murex

In Linux and UNIX you enable or disable the use of ctrl+c (effectively setting the tty into a raw mode) via syscalls against the tty file descriptor - it is not handled by the terminal emulator at all (in fact if you read the man page for `stty` you'd see it's changing your terminal's fd and not the terminal emulator).

Sure, theoretically terminal emulators could capture and even rebind those keys via the APIs of whatever graphical toolkit they're built in... but if you wanted to rebind SIGINT to another key in the terminal emulator, that terminal emulator would still have to transmit ^C to the tty.

As for rebinding those keys in the kernel, Linux simply doesn't support doing that, nor does it support binding other keys to different signals. In fact I've tried to do this on a tool I was working on to emulate BSD's SIGINFO on Linux (turned out not to be possible), as well as part of the job control (SIGTSTP et al) support in my $SHELL (https://github.com/lmorg/murex).

This is also why you can throw signals over an SSH (or other remote shell) session when the terminal emulator itself would have no knowledge of the commands running on the remote host.

Shameless plug but I've also written a language (murex) for shell scripting which can be used for this too:

    open test.json -> [ countries ]
    open test.json -> [[ /countries/0 ]]
    open test.json -> [ countries ] -> [ 0 2 ]
Single brackets only return the next level deep but can retrieve multiple items (and accept negative integers to count from the end of an array). Double brackets specify a path. So [[/foo/bar]] is literally the same as [foo]->[bar]

Lastly the 4th example:

    open test.json -> [ countries ] -> foreach c { echo $c[name] }
Unfortunately this one doesn't output JSON. If you needed that, you could chain another command to reformat it:

    open test.json -> [ countries ] -> foreach c { echo $c[name] } -> cast str -> format json
This works because murex can auto-convert between lists, JSON, YAML, CSV and a few other structured formats. However it's fair to say it is a lot more verbose than jsonnet and jql in that last example.

Github repo: https://github.com/lmorg/murex

I've been using this as my primary shell for a few years now and, like yourself, I've never bothered to learn jq because of that.

NB all of the above examples are run from inside the murex interactive command line (like bash). If you wanted to run it like jq/jql then you'd need to do the same sort of thing as with bash:

    murex -c 'open test.json -> [ countries ]'
Coincidentally, just this week I've been working to resolve point 1 in my own shell (https://github.com/lmorg/murex). The compromise I came up with was:

- The shell would attempt to return all suggestions immediately

- However, a subset of functions known to be slower queries (like recursive directory lookups on larger file system hierarchies) are given a "soft timeout" -- once that timeout is reached, the faster subset of functions are returned as the suggestions while the slower subset continues to run in the background

- The slower subset is also given a "hard timeout": when that point is reached, if the function still hasn't completed then it is simply killed. Any results it has produced by that point are appended to the auto-completion suggestions.

- If the slower subset finishes before the hard timeout then its results are appended to the suggestions. If it finishes before the soft timeout then they're just part of the suggestions from the outset.

At the moment this has only been implemented for file and directory suggestions. The fast function is just the files and directories at the current directory level; whereas the slow function returns a larger directory hierarchy (for quick navigation akin to fzf). But the plan is to extend it further to support other dynamic completers as well.

This wouldn't completely solve your point though. If the "fast" query actually runs slow (eg you're trying to complete paths inside a network-mounted filesystem while the network is dropping) then the shell would still hang. At some point I'll add timeouts on those as well. But I am inching towards that goal.

Be warned though, because this feature is less than a week old, it's still only in a feature branch. :)

Your point on 3 is apt too. One of the things I built early on was man page parsing for auto-completion suggestions. I've since learned that I'm not alone in that regard either (Fish does it as well -- and from some of the demos I've watched it looks like they've done a nicer job of it too).

More recently I've also been writing tools that parse binaries for flags (for instances where you download a statically compiled binary - eg terraform - and thus don't have any corresponding man pages). That work is very much in its infancy though.

If you'll pardon the shameless self-promotion, I'm working on something just like that:

https://github.com/lmorg/murex

It currently has:

* Proper error handling (eg try and catch blocks)

* unit testing and debugging frameworks to help with development and maintainability

* data-type aware, including complex types: CSV, JSON and YAML are all handled as in-memory structures, so the same tools can query any structured data format without needing to understand its contents

* while still ostensibly working the same way as a traditional POSIX shell

There's also some work on improving the REPL experience too where I've included:

* automatic man page parsing for flags

* a "tool tip text"-like hint line which tells you where a command resides on the file system, what it does, etc. This is handy if you're trying to debug an existing command.

* if you paste multiline text into the console you get offered a chance to preview the text before executing it (handy if, like me, you're pretty useless at copy/pasting content reliably)

* a package management system so you know exactly which functions and imported scripts are loaded from which sources (no more "where did that autocomplete suggestion / alias / etc get loaded from?")

There's a few other features like support for events and such like, but they're not yet documented.

The shell is currently beta but I've been using it as my daily driver for about 18 months now. There are still quite a few bugs, plenty of places where code needs to be rewritten for performance and lots of stuff isn't yet documented (though the documentation is pretty good already considering it's only me working on it). So don't expect a finished product. However I do think I'm at the stage where I'm ready for more users to have a play and I welcome PRs, issues raised, general comments and feedback, etc.

# end of shameless self promotion :D

Indeed. https://github.com/lmorg/murex

However I'd still recommend WSL or PowerShell over my shell for Windows use as I haven't invested a great amount of time in the Windows port (and certainly haven't used or tested it extensively on Windows like I do on Linux and FreeBSD). Though at some point - probably after I've finished documenting the thing - I do plan to go back and revisit the Windows port.

Thank you.

The source can be found at https://github.com/lmorg/murex

Happy to discuss any questions or comments you might have on it.

https://github.com/lmorg/murex

It's not without its bugs but I now use it everyday as my primary shell. Documentation is about 30% there but that's something I'm actively working on at the moment.

Happy to receive any feedback, positive or negative :)

Shameless self-promotion, but I'm writing my own shell, murex, as well.[1]

The goals of mine are akin to Fish in terms of REPL use but with a greater emphasis on scripting.

Like Fish, murex also does man page parsing (in fact I wrote mine before realising Fish did the same), but unlike Fish, autocompletions can also be defined in a flat JSON file (much like Terraform) as well as dynamically with code.

Currently I'm working on murex's event system so you can have the shell trigger code upon events like file system changes.

My ultimate aim is to make murex a go-to systems administration tool - if just for myself - but the project is still young.

[1] https://github.com/lmorg/murex

> This is an area where most open-source developers have avoided, as it's a "solved" problem, but it really isn't.

I've found the complete opposite to be the case. This is one area where there are hundreds of different shells out there as it seems a "cool" thing to try since it's basically just an extension of writing your own language. Most never go beyond pet projects though.

If you're curious about Go shells, I'm writing one too[1]. It's also alpha but you can already write some reasonably complex scripts in it since it supports message passing, network sockets and background processes. It's definitely not fast though. Fast enough for what I need it to do but there's plenty of room for performance tuning once in beta.

[1] https://github.com/lmorg/murex

I don't know anything about PTYs either, since my shell is mostly a batch program like Perl right now. I'm viewing it first from a programming language perspective.

As mentioned, I'm deferring to GNU readline for interactive features. I'd be interested in what problems you had with PTYs and what module you used?

Are PTYs standardized by POSIX? It feels like you should be able to write a shell against only POSIX APIs (and ANSI C).

I'd also be interested to see your shell. There are a bunch of alternative shells and POSIX shell implementations here:

https://github.com/oilshell/oil/wiki/ExternalResources

EDIT: I saw the sibling comment linking to https://github.com/lmorg/murex. Taking a look! A few months ago I started a thread with the authors of elvish, oh, mash, and NGS shells to exchange ideas. (elvish and oh are also written in Go.) I didn't know about your shell or I would have included you!

Please excuse the self promotion but I'm also writing a Unix shell in Go. It's not designed to be a drop-in replacement for Bash (ie it follows a few Bash-isms but is essentially my own bespoke scripting language), however it does spawn processes in their own PTY and supports interactive commands such as vi and top, so it works as a proper $SHELL. It's still very much alpha but at a stage where I'm ready for feedback.

https://github.com/lmorg/murex

I'm working on a shell scripting language that, like Tisp, is rooted in functional and concurrency paradigms while still offering some degree of backwards compatibility with traditional shells like Bash.

It's very immature at the moment but the repo is below if anyone else is curious:

https://github.com/lmorg/murex

I'm not suggesting my level of coding is comparable with the others mentioned though. Murex was created to scratch an itch rather than out of any delusion that I could write a powerful next-gen programming language. I intend murex to be more of a REPL shell than something one would use to write scalable applications.