"In those days it was mandatory to adopt an attitude that said:
• “Being small and simple is more important than being complete and correct.”
• “You only have to solve 90% of the problem.”
• “Everything is a stream of bytes.”
These attitudes are no longer appropriate for an operating system that hosts complex and important applications. They can even be deadly when Unix is used by untrained operators for safety-critical tasks."
Very much so.
Where is this "complete and correct" code that came at the expense of being large and complex? I've certainly never seen it.
Richard Gabriel's "The Rise of Worse is Better," written around the same time as the Unix-HATERS Handbook, gives some clues. Unix was contrasted with environments such as Scheme (which is also small, but small because it was designed as a "crown jewel"), Common Lisp (the exemplar of a "big complex system" that is complete and correct, but large and complex), and the ITS operating system (https://en.wikipedia.org/wiki/Incompatible_Timesharing_Syste...). Thankfully there are many open-source Scheme and Common Lisp implementations, and ITS is also available as open source (https://github.com/PDP-10/its).
Of course, modern Unix-like systems are large and complex too, though Plan 9 and Inferno are architecturally more refined, avoiding some of the complexity you'll find in contemporary Unix-like systems.
Sometimes I wonder what an operating system like ITS would look like if it were built for modern platforms.
ITS is written in PDP-10 assembly. What if someone wrote a translator that read PDP-10 assembly language and emitted C code? Could that be a first step toward porting it?
It is surely a lot more complicated than that. ITS contains self-modifying code, which would obviously break that translation strategy. A lot of hardware-specific code would have to be rewritten. Six-character filenames without nested directories may have been acceptable in the 1970s, but few could endure them today. A multi-user system with a near-total absence of security was acceptable back then, but obviously not in today's very different world.
ITS does have some interesting features contemporary systems don't:
One was that a process could submit commands to be run by the shell that spawned it – MS-DOS COMMAND.COM actually had that feature too (INT 2E), but I haven't seen anything else with it. A Unix shell could implement this by creating a Unix domain socket and passing its path to subprocesses via an environment variable – but I've never seen that done.
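The socket-plus-environment-variable idea can be sketched in a few lines of Python. Everything here is hypothetical – the variable name SHELL_CMD_SOCK is made up for illustration – but it shows the mechanism: the "shell" listens on a Unix domain socket, the child finds the socket via the environment, and sends back a command for the shell to run.

```python
import os
import socket
import subprocess
import sys
import tempfile

# The "shell" side: listen on a Unix domain socket and advertise its
# path to children via a (hypothetical) SHELL_CMD_SOCK environment variable.
sock_path = os.path.join(tempfile.mkdtemp(), "shell.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)
server.listen(1)

# The child side: connect to the socket named in the environment and
# submit a command for the parent shell to execute.
child_src = (
    "import os, socket\n"
    "s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)\n"
    "s.connect(os.environ['SHELL_CMD_SOCK'])\n"
    "s.sendall(b'echo hello-from-child')\n"
    "s.close()\n"
)
env = dict(os.environ, SHELL_CMD_SOCK=sock_path)
child = subprocess.Popen([sys.executable, "-c", child_src], env=env)

# Back in the shell: accept the connection and read the submitted command.
conn, _ = server.accept()
command = conn.recv(4096).decode()
conn.close()
child.wait()
server.close()

print(command)  # a real shell would now run this as if typed at the prompt
```

A real implementation would of course want to loop over connections, authenticate the peer (e.g. via SO_PEERCRED), and clean up the socket on exit.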
Another was that a program being debugged could call an API to run commands in its own debugger. I've never seen that in any other debugger, although I suppose you could write a GDB plugin to implement it: define a magic do-nothing function, have a GDB Python script set a breakpoint on that function, and interpret its argument as commands for GDB. Actually, in ITS these two features were one and the same, since the debugger was used as the command shell.
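The GDB-plugin approach could look something like the sketch below. It only runs inside GDB (loaded with `source debugger_cmd.py`), and the function name `__debugger_cmd` and its parameter name `cmd` are invented for illustration – the debugged C program would define something like `void __debugger_cmd(const char *cmd) { }` and call it with a command string.

```python
# debugger_cmd.py – load inside GDB with: (gdb) source debugger_cmd.py
import gdb


class DebuggerCmdBreakpoint(gdb.Breakpoint):
    """Breakpoint on the magic do-nothing function __debugger_cmd().

    When the debugged program calls __debugger_cmd("some gdb command"),
    we read the argument out of the stopped frame, hand it to GDB's
    command interpreter, and resume the program without stopping.
    """

    def __init__(self):
        super().__init__("__debugger_cmd", internal=True)

    def stop(self):
        # At this point the current frame is __debugger_cmd's frame,
        # so its parameter is visible by name.
        cmd = gdb.parse_and_eval("cmd").string()
        gdb.execute(cmd)
        return False  # don't stop; let the program keep running


DebuggerCmdBreakpoint()
```

With that loaded, a call like `__debugger_cmd("watch counter")` in the program would set a watchpoint from inside the debuggee – roughly the ITS feature, rebuilt on GDB's Python API.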
Another was that a program's subprocesses had names (by default a subprocess's name was the same as the executable's name, but it didn't have to be). Compare that to most Unix shells, where it is easy to forget what you are running as background jobs 1, 2, or 3.