What does HackerNews think of crev?

Socially scalable Code REView and recommendation system that we desperately need. See http://github.com/crev-dev/cargo-crev for a real implementation.

Looks like there's an implementation of it for npm: https://github.com/crev-dev/crev

I've been meaning to try it for a while for Rust projects but never committed to spending the time. Any feedback?

If you don't know the author, signatures do nothing. Anybody can sign their package with some key. Even if you could check the author's identity, that still does very little for you, unless you know them personally.

It makes a lot more sense to use cryptography to verify that releases are not malicious directly. Tools like crev [1], vouch [2], and cargo-vet [3] allow you to trust your colleagues or specific people to review packages before you install them. That way you don't have to trust their authors or package repositories at all.

That seems like a much more viable path forward than expecting package repositories to audit packages or trying to assign trust to random developers.

[1]: https://github.com/crev-dev/crev [2]: https://github.com/vouch-dev/vouch [3]: https://github.com/mozilla/cargo-vet
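The web-of-trust idea behind these tools can be sketched as transitive trust propagation over reviews. This is a hypothetical, simplified model, not crev's actual data format: trust edges form a graph, and a release counts as vetted if some reviewer reachable through your trust graph (within a hop limit) has reviewed it.

```python
from collections import deque

# Hypothetical trust graph: who -> set of identities they trust
trust = {
    "me":    {"alice", "bob"},
    "alice": {"carol"},
}

# Hypothetical reviews: reviewer -> set of (package, version) they approved
reviews = {
    "carol": {("leftpad", "1.2.0")},
    "bob":   {("serde", "1.0.193")},
}

def trusted_reviewers(root, max_depth=2):
    """Collect reviewers reachable from `root` within `max_depth` hops."""
    seen, queue = {root}, deque([(root, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue
        for peer in trust.get(node, set()):
            if peer not in seen:
                seen.add(peer)
                queue.append((peer, depth + 1))
    return seen

def is_vetted(package, version, root="me"):
    """A release is vetted if any reviewer in our web of trust approved it."""
    return any((package, version) in reviews.get(r, set())
               for r in trusted_reviewers(root))

print(is_vetted("leftpad", "1.2.0"))  # True: carol is reachable via alice
print(is_vetted("leftpad", "9.9.9"))  # False: nobody reviewed this version
```

Note that trust is scoped per release, not per author: a new version of a trusted package starts out unvetted until someone in the graph reviews it.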

I don't think it makes much sense to verify pypi authors. I mean, you could verify corporations and universities, but that would only get you so far: most of the packages you use are maintained by random people who signed up with a random email address.

I think it makes more sense to verify individual releases. There are tools in that space like crev [1], vouch [2], and cargo-vet [3] that facilitate this, allowing you to trust your colleagues or specific people rather than the package authors. This seems like a much more viable solution to scale trust.

[1]: https://github.com/crev-dev/crev [2]: https://github.com/vouch-dev/vouch [3]: https://github.com/mozilla/cargo-vet

Alternatives to cargo-vet that have been mentioned before here on HN:

- https://github.com/crev-dev/crev

- https://github.com/vouch-dev/vouch

Anyone know of any more alternatives or similar tools already available?

Are there plans to integrate it with something like Crev[0] for tying trusted code/security reviews to the binary artefacts?

I suppose the people you trust to audit some code will likely not be the same people you trust to do build verification for you, but it might be nice to manage those trust relationships in a single UI/config.

[0] https://github.com/crev-dev/crev

> The next step is nobody is going to use web applications anymore.

Well, we should certainly be wary of falling into the trap known as "Service as a Software Substitute (SaaSS)"[0], but in principle it is possible for a web app to work entirely on the client side and be distributed under a Free licence (and for the browser to enforce that, if the web developer is careful).[1]

> You can't trust package maintainers any more than you can trust companies.

The point is, if you have the source code, you don't have to trust the package maintainers. Instead you can trust whoever you choose to audit the source code for you, which might be yourself (if you're very skilled, and very untrusting), or it could be the community.

I admit that "trust the community" usually means "Assume that someone somewhere will find any critical bugs before they affect you, and assume that developers won't destroy their reputation when they know they'll eventually get caught", but we are slowly moving towards a system of community code reviews for all Free software[2]. (Obviously reproducible builds, and bootstrappable builds, are necessary steps to take full advantage of this).

> But wait, your entire computer is a black box with no schematics whatsoever. Rinse and repeat.

Nope, that's the final step (unless you think the aliens that built this simulation put backdoors into the laws of physics). As for how we trust hardware, fortunately there are projects to make computers out of chips that can be safely reasoned about. You have to ask yourself what your threat model is.

Do you think the NSA is hiding a hardware backdoor in every FPGA, which detects when someone is running a compilation process and makes sure to install a software backdoor in any compiler or kernel it detects? What if multiple people on multiple homebrew computers all carried out the same build process and hashed the results, and all the hashes agreed? What if you ran these processes in virtual machines that implement custom architectures that have never been seen before? Eventually you start to run into information-theoretic problems trying to explain how such a backdoor can remain hidden and effective.
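The cross-checking step described above is, mechanically, just comparing hashes reported by independent builders. A toy sketch (the builder names and payloads are made up):

```python
import hashlib
from collections import Counter

# Hypothetical: each independent builder reports the hash of its build output.
reports = {
    "homebrew-fpga-1": hashlib.sha256(b"kernel-image-v1").hexdigest(),
    "homebrew-fpga-2": hashlib.sha256(b"kernel-image-v1").hexdigest(),
    "vm-custom-arch":  hashlib.sha256(b"kernel-image-v1").hexdigest(),
    "tampered-box":    hashlib.sha256(b"kernel-image-v1-with-backdoor").hexdigest(),
}

def consensus(reports, threshold=0.75):
    """Return the majority hash if enough builders agree, plus dissenters."""
    counts = Counter(reports.values())
    digest, votes = counts.most_common(1)[0]
    if votes / len(reports) < threshold:
        return None, []
    dissenters = [b for b, h in reports.items() if h != digest]
    return digest, dissenters

digest, dissenters = consensus(reports)
print(dissenters)  # ['tampered-box']
```

This only works if builds are reproducible: without bit-for-bit identical outputs, honest builders produce different hashes and the comparison tells you nothing.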

[0] https://www.gnu.org/philosophy/who-does-that-server-really-s...

[1] https://www.gnu.org/software/librejs/index.html

[2] https://github.com/crev-dev/crev

I plug this every time, but here goes: https://github.com/crev-dev/crev solves this by providing code reviews, scales via a web-of-trust model, and relies on cryptographic identities. That way, you can depend on a package without having to trust its maintainers and all future versions.

This exists, it's called crev: https://github.com/crev-dev/crev

As you note, this doesn't require a blockchain. crev uses a web-of-trust model which is pretty well suited to the task.

This exists, it's called crev: https://github.com/crev-dev/crev

It is really good, allowing for distributed reviews, not necessarily from the people authoring your dependencies. The OP suggests that if you -> A -> B, the package manager should only install versions of B vetted by A (or close); I think this doesn't scale well in practice (all the more so if the only way A can vet a new version of B is by doing a new release). Having the possibility to rely on other people to vet releases (possibly in your company) opens a lot of doors, such as the ability to not trust the author of A at all.
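The policy contrast in this comment can be made concrete. A strict "only the direct dependent may vet" rule versus a delegated-reviewer rule might look like this (hypothetical, simplified; the reviewer names and versions are invented):

```python
# Hypothetical vetting records: reviewer -> {(package, version), ...}
vetted = {
    "A":             {("B", "2.0")},   # A, the direct dependent, vetted B 2.0
    "security-team": {("B", "2.1")},   # a delegated reviewer vetted B 2.1
}

def strict_ok(parent, package, version):
    """OP's rule: only the direct dependent's own vetting counts."""
    return (package, version) in vetted.get(parent, set())

def delegated_ok(reviewers, package, version):
    """crev-style rule: any reviewer you chose to trust counts."""
    return any((package, version) in vetted.get(r, set()) for r in reviewers)

# B 2.1 fails the strict rule (A never re-vetted it) but passes delegation:
print(strict_ok("A", "B", "2.1"))                        # False
print(delegated_ok(["A", "security-team"], "B", "2.1"))  # True
```

Under the strict rule, every new release of B is blocked until A ships an update, which is the scaling problem the comment describes; delegation decouples vetting from A's release cadence.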

The idea of transparency logs for software is also being developed by Google with its Trillian project[0], but Gossamer seems to go further by also logging code review and security audit results, which is similar to the idea of Crev[1] from the Rust ecosystem.

[0] https://transparency.dev/#trillian

[1] https://github.com/crev-dev/crev

This is easily fixable by using tooling to help review. For example, I have high hopes for the crev project: https://github.com/crev-dev/crev

As soon as you are no longer implicitly trusting all future versions of your dependencies, things become much more sane.

It's good that having a reproducible build process is a requirement for the Gold rating, as are signed releases.

Perhaps there needs to be a Platinum level which involves storing the hash of each release in a distributed append-only log, with multiple third parties vouching that they can build the binary from the published source.
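A minimal sketch of such a log, assuming a simple hash chain rather than any particular transparency-log design (production systems like sigstore's log use a Merkle tree instead; the release hashes and voucher names below are made up):

```python
import hashlib
import json

def entry_hash(entry):
    """Canonical hash of a log entry (deterministic JSON encoding)."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

log = []

def append(release_hash, vouchers):
    """Append a release record, chaining it to the previous entry's hash."""
    prev = entry_hash(log[-1]) if log else "0" * 64
    log.append({"prev": prev, "release": release_hash, "vouchers": vouchers})

def verify_chain(log):
    """Recompute the chain; any retroactive edit breaks a `prev` link."""
    prev = "0" * 64
    for entry in log:
        if entry["prev"] != prev:
            return False
        prev = entry_hash(entry)
    return True

append("sha256:aaaa", ["builder-1", "builder-2"])
append("sha256:bbbb", ["builder-1", "builder-3"])
print(verify_chain(log))   # True
log[0]["release"] = "sha256:evil"
print(verify_chain(log))   # False: entry 1 no longer links to entry 0
```

The "multiple third parties vouching" part is the `vouchers` list: each entry records which independent builders reproduced the binary, and the chain makes it expensive to quietly rewrite history after the fact.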

Obviously I'm thinking of something like sigstore[0] which the Arch Linux package ecosystem is being experimentally integrated with.[1] Then there's Crev for distributed code review.[2]

[0] https://docs.sigstore.dev/

[1] https://github.com/kpcyrd/pacman-bintrans

[2] https://github.com/crev-dev/crev