Trusting trust all the way down: How do you trust your assembler to output the exact machine code corresponding to the source? How do you trust whatever program you might use to do the validation? Ok, let's skip the assembler and write machine code directly. How do you trust the program you use to write that machine code? How do you even trust the firmware will execute your machine code in the way you intend?

This is what I don't understand about Stallman. Even if we all agree to use free software, that's only the beginning. The next step is nobody is going to use web applications anymore. Okay, now that we're all liberated from corporate espionage, what about how the software is made? You can't trust package maintainers any more than you can trust companies. So we all use GNU/Gentoo. As you said, the compiler may be compromised too. Well, you learn the dark art of compiling a compiler. But wait, your entire computer is a black box with no schematics whatsoever. Rinse and repeat.

> The next step is nobody is going to use web applications anymore.

Well, we should certainly be wary of falling into the trap known as "Service as a Software Substitute (SaaSS)"[0], but in principle it is possible for a web app to work entirely on the client side and be distributed under a Free licence (and for the browser to enforce that, if the web developer is careful).[1]

> You can't trust package maintainers any more than you can trust companies.

The point is, if you have the source code, you don't have to trust the package maintainers. Instead you can trust whoever you choose to audit the source code for you, which might be yourself (if you're very skilled, and very untrusting), or it could be the community.

I admit that "trust the community" usually means "assume that someone somewhere will find any critical bugs before they affect you, and assume that developers won't destroy their reputation when they know they'll eventually get caught", but we are slowly moving towards a system of community code reviews for all Free software[2]. (Obviously reproducible builds and bootstrappable builds are necessary steps to take full advantage of this.)
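The check that reproducible builds enable is simple in principle: build the same source twice, in independent environments, and compare the resulting bits. Here is a minimal sketch of that comparison; the function names are my own for illustration, not part of any real tool:

```python
import hashlib


def sha256_file(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def builds_reproduce(artifact_a, artifact_b):
    """A build is reproducible only if two independent builds of the
    same source yield bit-for-bit identical artifacts."""
    return sha256_file(artifact_a) == sha256_file(artifact_b)
```

The hard part, of course, is not the comparison but getting the build itself deterministic (timestamps, file ordering, embedded paths); the hash check is just the verdict at the end.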

> But wait, your entire computer is a black box with no schematics whatsoever. Rinse and repeat.

Nope, that's the final step (unless you think the aliens that built this simulation put backdoors into the laws of physics). As for how we trust hardware, fortunately there are projects to make computers out of chips that can be safely reasoned about. You have to ask yourself what your threat model is.

Do you think the NSA is hiding a hardware backdoor in every FPGA, which detects when someone is running a compilation process and makes sure to install a software backdoor in any compiler or kernel it detects? What if multiple people on multiple homebrew computers all carried out the same build process and hashed the results, and all the hashes agreed? What if you ran these processes in virtual machines that implement custom architectures that have never been seen before? Eventually you start to run into information-theoretic problems trying to explain how such a backdoor could remain both hidden and effective.
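The "many independent builders" argument above can be made concrete: each builder publishes a hash of their output, and a backdoor on any one machine shows up as a dissenting digest. A minimal sketch of that cross-check (the function and the builder names are hypothetical, for illustration only):

```python
from collections import Counter


def check_agreement(reported):
    """reported: dict mapping builder name -> hex digest of their build.

    Returns (majority_digest, dissenters), where dissenters is the set
    of builders whose digest differs from the most common one. An empty
    dissenter set means every independent build agreed bit-for-bit."""
    counts = Counter(reported.values())
    majority_digest, _ = counts.most_common(1)[0]
    dissenters = {name for name, d in reported.items() if d != majority_digest}
    return majority_digest, dissenters
```

For a compromised machine to evade this check, its backdoor would have to produce exactly the same bits as every clean build, which is the information-theoretic corner the argument backs it into.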

[0] https://www.gnu.org/philosophy/who-does-that-server-really-s...

[1] https://www.gnu.org/software/librejs/index.html

[2] https://github.com/crev-dev/crev