Software is gradually accumulating in layers, like sediment. Even most modern "hardware-level" applications still have layers of OS magic happening under the hood.

We're now maybe four-ish decades into software dependency.

There was a scene in one of Alastair Reynolds's books where a character was basically a computational archaeologist. That resonates with me a lot.

Looking a couple of centuries out, it's not a terrible prediction that software stacks will keep accumulating cruft, and that debugging certain issues will require immense financial effort: digging through the strata of commits and historically proposed merges, then adding extra tests on top of the bedrock code and its fixes.

No idea what this will look like. I imagine the same easily written functions will keep popping up across the pips and npms of the world, recreated every decade regardless of prior art. Every new programmer wants to leave their stamp on the world.

There's some saying about history repeating itself, but I'm dumb and don't remember.

Not only that: open-source development is not the panacea we believe it to be. Bear with me.

Software is extremely complex. Even when it's open-source, no one except the original developers and a few very dedicated people will attempt to patch the myriad issues and bugs they encounter daily. And even if we do spend the time to track down and fix a bug, there's a political and diplomatic game to convince the maintainers to incorporate your fix. It is not uncommon for a PR to just sit, unreviewed, for years. Open-source does not and never will scale, because software is orders of magnitude too complex.

Outside of software, this problem is lessened because maintainership is distributed: if your car engine breaks, you don't depend on the manufacturer having enough time and energy to fix it; there are thousands of licensed garages that can do it for you. And, not least, the real world is much simpler than any piece of software, which is effectively completely ad-hoc: knowing how Chrome works will not help you fix this Firefox issue, whereas if you can fix the carburettor on a Honda car, you probably can do the same on a FIAT.

Open-source/distributed development and bug fixing worked much better when computers had 64 kB of RAM and programs were no more than 10 pages long.

EDIT TO CLARIFY: I'm not talking about open-source vs. commercial, or other types of governance. I'm talking more abstractly about the fact that having source available and open contributions does not noticeably increase the number of bugs that get fixed. This comment is about software complexity and the logistics of distributed bugfixing.

All of your arguments work much better in favor of Open Source and against closed-source. After all, in Open Source, maintainership can be distributed, but a single closed-source shop is much more likely to simply declare bug bankruptcy and refuse to even consider a fix, at which point absolutely nobody else can do it.
I haven't mentioned anything about closed-source development. I'm talking about software complexity here. I've updated my comment to clarify.
Still:

> And even if we do spend the time to track down and fix a bug, there's a political and diplomatic game to convince the maintainers to incorporate your fix.

That's why forking is one of the Four Freedoms. It's written into the licenses.

Granted, you need to be dedicated to even attempt to fix complex software. However, Open Source can draw from a larger pool of potential talent, and it's more likely that someone out there will care than that someone in a company will. What's that saying? "If you're one in a million, there are three of you in New York."

> And, not least, the real world is much simpler than any piece of software, which is effectively completely ad-hoc: knowing how Chrome works will not help you fix this Firefox issue, whereas if you can fix the carburettor on a Honda car, you probably can do the same on a FIAT.

Aside from the difficulty of finding a carburetor on a modern car, this is about software complexity, not Open Source vs. closed-source per se. Fixing problems in a badly architected codebase is always difficult, time-consuming, and likely to introduce more bugs. Closed source doesn't make it any better.

I have never said that closed source makes it better. I don't know how to make that more clear.

You're focusing too much on politics; I'm focusing on Stallman wanting the source code for his printer to be available so he could change it to better suit his needs. I'm just saying that in 2023, even if your printer is open-source, ain't nobody got time to dive into hundreds of thousands of lines of code to change it.

> I'm just saying that in 2023, even if your printer is open-source, ain't nobody got time to dive into hundreds of thousands of lines of code to change it.

I disagree. I disagree wholeheartedly, based on both practical projects and the retrocomputing world.

For example:

https://github.com/PDP-10/its/

This is the repo for the Incompatible Timesharing System, ITS to its friends. ITS ran on 36-bit mainframe hardware from Digital Equipment Corporation (DEC) that went out of production in the 1980s. DEC was acquired by Compaq in 1998, and Compaq ceased to exist as a company in 2002. Commercially, ITS is dead. It is dead-dead. It is old-university-project-with-no-grants dead. Doornails evince more metabolic activity than ITS, at least in the commercial world. Developing on ITS means reading and writing assembly language, TECO, and a Lisp dialect that only runs on ITS and a few other OSes of similar vintage and commercial utility. However, it is still under active development, because people are interested in it.

Besides: Digging into a codebase to fix a dumbass printer? People will do that out of spite. People will do that for the blog post and Hacker News thread.