What does HackerNews think of crev?
Socially scalable Code REView and recommendation system that we desperately need. See http://github.com/crev-dev/cargo-crev for a real implementation.
I've been meaning to try it for a while on Rust projects but never committed the time. Any feedback?
It makes a lot more sense to use cryptography to directly verify that releases are not malicious. Tools like crev [1], vouch [2], and cargo-vet [3] allow you to trust your colleagues or specific people to review packages before you install them. That way you don't have to trust the authors or package repositories at all.
That seems like a much more viable path forward than expecting package repositories to audit packages or trying to assign trust onto random developers.
[1]: https://github.com/crev-dev/crev [2]: https://github.com/vouch-dev/vouch [3]: https://github.com/mozilla/cargo-vet
I think it makes more sense to verify individual releases. There are tools in that space like crev [1], vouch [2], and cargo-vet [3] that facilitate this, allowing you to trust your colleagues or specific people rather than the package authors. This seems like a much more viable solution to scale trust.
[1]: https://github.com/crev-dev/crev [2]: https://github.com/vouch-dev/vouch [3]: https://github.com/mozilla/cargo-vet
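The model these tools share boils down to a simple gate: a release is installable only if someone in *your* trusted set has reviewed that exact version. A minimal sketch of the idea (all names and data shapes here are hypothetical illustrations, not the actual crev/vouch/cargo-vet data model):

```python
# Hypothetical sketch of the "trust reviewers, not authors" model shared by
# crev/vouch/cargo-vet. Package names and reviewer IDs are illustrative only.

# Reviews published by various people: (package, version) -> set of reviewer IDs
reviews = {
    ("serde", "1.0.190"): {"alice", "mallory"},
    ("rand", "0.8.5"): {"bob"},
    ("leftpad", "0.1.0"): set(),  # nobody has reviewed this release
}

# The reviewers *you* have chosen to trust (e.g. colleagues).
trusted = {"alice", "bob"}

def ok_to_install(package: str, version: str) -> bool:
    """Allow installation only if a trusted reviewer vetted this exact release."""
    return bool(reviews.get((package, version), set()) & trusted)

print(ok_to_install("serde", "1.0.190"))  # True: alice reviewed it
print(ok_to_install("leftpad", "0.1.0"))  # False: no trusted review exists
```

Note the author of the package never appears in the check: trust attaches to reviewers and to specific versions, not to whoever published the code.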
- https://github.com/crev-dev/crev
- https://github.com/vouch-dev/vouch
Anyone know of any more alternatives or similar tools already available?
I suppose the people you trust to audit some code will likely not be the same people you trust to do build verification for you, but it might be nice to manage those trust relationships in a single UI/config.
Well, we should certainly be wary of falling into the trap known as "Service as a Software Substitute (SaaSS)"[0], but in principle it is possible for a web app to work entirely on the client side and be distributed under a Free licence (and for the browser to enforce that, if the web developer is careful).[1]
> You can't trust package maintainers any more than you can trust companies.
The point is, if you have the source code, you don't have to trust the package maintainers. Instead you can trust whoever you choose to audit the source code for you, which might be yourself (if you're very skilled, and very untrusting), or it could be the community.
I admit that "trust the community" usually means "Assume that someone somewhere will find any critical bugs before they affect you, and assume that developers won't destroy their reputation when they know they'll eventually get caught", but we are slowly moving towards a system of community code reviews for all Free software[2]. (Obviously reproducible builds, and bootstrappable builds, are necessary steps to take full advantage of this.)
> But wait, your entire computer is a black box with no schematics whatsoever. Rinse and repeat.
Nope, that's the final step (unless you think the aliens that built this simulation put backdoors into the laws of physics). As for how we trust hardware, fortunately there are projects to make computers out of chips that can be safely reasoned about. You have to ask yourself what your threat model is.
Do you think the NSA is hiding a hardware backdoor in every FPGA, which detects when someone is running a compilation process and makes sure to install a software backdoor in any compiler or kernel it detects? What if multiple people on multiple homebrew computers all carried out the same build process and hashed the results, and all the hashes agreed? What if you ran these processes in virtual machines that implement custom architectures that have never been seen before? Eventually you start to run into information-theoretic problems trying to explain how such a backdoor can remain hidden and effective.
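The cross-checking step described above is mechanically simple: each independent builder hashes the artifact it produced, and the build is accepted only if every hash agrees. A toy sketch (simulating the builders' outputs with byte strings rather than real compilations):

```python
import hashlib

def digest(artifact: bytes) -> str:
    """Content hash of a build artifact."""
    return hashlib.sha256(artifact).hexdigest()

# Simulated outputs from independent builders running the same deterministic
# build on different machines. A real setup would hash the actual binaries.
builder_outputs = [b"ELF...binary", b"ELF...binary", b"ELF...binary"]

hashes = {digest(a) for a in builder_outputs}
print("reproducible" if len(hashes) == 1 else "MISMATCH: possible tampering")
# prints "reproducible"
```

A backdoor would have to compromise every builder identically to survive this comparison, which is exactly the information-theoretic corner the comment is pointing at.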
[0] https://www.gnu.org/philosophy/who-does-that-server-really-s...
As you note, this doesn't require a blockchain. crev uses a web-of-trust model which is pretty well suited to the task.
It is really good, allowing for distributed reviews, not necessarily from the people authoring your dependencies. The OP suggests that if you depend on A, which depends on B (you -> A -> B), the package manager should only install versions of B vetted by A (or someone close); I think this doesn't scale well in practice (all the more so if the only way A can vet a new version of B is by doing a new release). Having the possibility to rely on other people to vet releases (possibly in your company) opens a lot of doors, such as the ability to not trust the author of A at all.
As soon as you are no longer implicitly trusting all future versions of your dependencies, things become much more sane.
Perhaps there needs to be a Platinum level which involves storing the hash of each release in a distributed append-only log, with multiple third parties vouching that they can build the binary from the published source.
Obviously I'm thinking of something like sigstore[0], which is being experimentally integrated with the Arch Linux package ecosystem.[1] Then there's crev for distributed code review.[2]
[0] https://docs.sigstore.dev/
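The "Platinum level" log described above could look roughly like this toy version (not sigstore's actual format): each entry commits to the hash of the previous entry, so history can't be silently rewritten, and each entry records which third parties reproduced the build from source.

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Deterministic hash of a log entry (canonical JSON encoding)."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

log = []  # the append-only log, oldest entry first

def append_release(package: str, version: str, artifact_hash: str, vouchers: set):
    prev = entry_hash(log[-1]) if log else "0" * 64  # genesis sentinel
    log.append({
        "package": package,
        "version": version,
        "artifact_sha256": artifact_hash,
        "vouchers": sorted(vouchers),  # parties who reproduced the build
        "prev": prev,                  # chains this entry to its predecessor
    })

append_release("serde", "1.0.190", "ab" * 32, {"alice", "bob", "ci-server"})
append_release("rand", "0.8.5", "cd" * 32, {"bob"})

# Verify the chain: each entry must reference the hash of the entry before it.
consistent = all(log[i]["prev"] == entry_hash(log[i - 1])
                 for i in range(1, len(log)))
print("log consistent:", consistent)  # log consistent: True
```

A real system would distribute the log across independent operators and have vouchers sign their entries, but the core property is already visible here: altering any past entry changes its hash and breaks every link after it.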