What does HackerNews think of cargo-crev?

A cryptographically verifiable code review system for the cargo (Rust) package manager.

Language: Rust

#3 in P2P
#8 in Security

This problem is far more wide-reaching than just extensions, and goes further than the question of which entity is in charge of something. For instance, the worst company imaginable may be in charge of software that was once FOSS and change absolutely nothing about it, in which case it should be fine. However, if a small update is added that does something bad, you should know about it immediately.

The solution seems to be much more clearly in the realm of things like crev: https://github.com/crev-dev/cargo-crev/

With crev, users can get a clear picture of what dependencies are used in the full chain, and of how they have been independently reviewed for security and privacy. That's the real solution for the future: a quick score displayed every time you upgrade, with large warnings for anything above a certain threshold.
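
For a concrete sense of what that could look like today, cargo-crev can already summarize review coverage across a project's full dependency tree. A rough sketch of the workflow (exact subcommand names have shifted between versions, so treat this as approximate):

```
# Install the tool, then ask how much of the dependency tree
# has been reviewed by people you (transitively) trust.
cargo install cargo-crev
cargo crev crate verify   # one line per dependency with its review status
```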

There is a similar idea being explored with https://github.com/crev-dev/cargo-crev - you trust a reviewer who reviews crates for trustworthiness, and who can in turn vouch for other reviewers.
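
To make those trust relationships concrete, here is a hedged sketch of the first-time setup, loosely following the cargo-crev getting-started guide (the proof-repo URLs are placeholders/examples, and subcommand names may differ slightly between versions):

```
# Create your reviewer identity, backed by your own fork of a crev-proofs repo.
cargo crev id new --url https://github.com/YOUR-USERNAME/crev-proofs

# Declare trust in another reviewer's proof repository, then pull in
# proofs from everyone in your resulting web of trust.
cargo crev trust https://github.com/dpc/crev-proofs
cargo crev repo fetch trusted
```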
Well, Deno by itself won't solve the supply chain attack problem. Crowd review might[0]. It's cute that npm/yarn/github/dependabot/renovate all warn if there's a security issue, but at that point it's a bit too late. (Better than nothing, definitely, a step in the right direction, but not a solution.)

Clean reboots are always hard and usually don't work out. Just look at the Py2-Py3 transition. Also there's a Perl6 story somewhere here. (Relevant xkcd[1], relevant c2 entry[2].)

[0] https://github.com/crev-dev/cargo-crev

[1] https://xkcd.com/224/

[2] https://wiki.c2.com/?SecondSystemEffect

You don't need to read the code yourself, but ideally it should be vetted or reviewed by sources you trust. Maybe that's Debian / Ubuntu / Red Hat, or maybe it's through a review system like Rust's cargo-crev: https://github.com/crev-dev/cargo-crev

Minimizing the number of dependencies helps a lot too.

But don't blindly npm or pip install something unless you trust the developers. npx/pipx are even worse. All it takes is one typo-squatter to steal your ssh keys, and maybe even saved browser passwords or cookies.

A blockchain isn't needed for that. Authentication needs "crypto"graphy, but not "crypto"currency.

This wouldn't be a complete thread without someone mentioning Rust, so I'll do it. cargo-crev is a nice web-of-trust type code review system for Rust crates. https://github.com/crev-dev/cargo-crev
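
As an illustration of that review loop (treat the exact subcommands as approximate, since the CLI layout has changed across releases, and `serde` is just an example crate):

```
# Review a dependency and record a cryptographically signed proof,
# then publish the proof so others in the web of trust can fetch it.
cargo crev crate review serde
cargo crev repo publish
```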

Dependabot (GitHub's free(?) dependency-update notifier) is probably the biggest risk factor in npm supply-chain attacks. Because who audits the actual diffs?

"npm-crev" can't come soon enough...

https://web.crev.dev/rust-reviews/ https://github.com/crev-dev/cargo-crev

Not to mention that unless someone is very familiar with the code of the dependencies, it's very hard to review hundreds of small, near-meaningless changes unrelated to your actual functional/business requirements.

Something like cargo-crev for npm might be a long-term solution:

https://github.com/crev-dev/cargo-crev

Reproducible builds are an important part of efforts to secure the software supply chain. Ideally you want multiple independent parties vouching that a given package (whether a compiled binary, or a source tarball) corresponds to a globally immutably published revision in a source code repository.

That gives you Binary Transparency, which is already being attempted in the Arch Linux package ecosystem[0], and it protects the user from compromised build environments and software updates that are targeted at a specific user or that occur without upstream's knowledge.
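
A toy illustration of the underlying idea (not pacman-bintrans itself, just the principle): independent builders rebuild the same pinned source and only vouch for an artifact when their hashes agree.

```
# Builder A and builder B each rebuild the package from the same tagged
# source revision, then compare the resulting artifact hashes.
sha256sum example-package-1.2.3-x86_64.pkg.tar.zst
# If the build is bit-for-bit reproducible and the hashes match, each
# builder can publish a signed attestation for that exact artifact.
```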

Once updates can be tied securely to version control tags, it is possible to add something like Crev[1] to allow distributed auditing of source code changes. That still leaves open the questions of who to trust for audits, and how to fund that auditing work, but it greatly mitigates other classes of attack.

[0] https://github.com/kpcyrd/pacman-bintrans

[1] https://github.com/crev-dev/cargo-crev

Bootstrapping is always a harder problem. But for updates, guix git authenticate [0] has definitely solved the problem. The idea is that if you specify a certain commit that serves as a trust anchor, the .guix-authorizations file at that commit lists the public keys allowed to sign commits from that point on: the file can be amended only by an already-authorized key.

So if the repo has that file, and as long as you don't allow history rewrite when pulling (disabled by default), you can be pretty confident all commits have been signed only by authorized keys from the point you checked out initially.
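
In practice that check is a single command; roughly (the commit and fingerprint below are placeholders for your channel's actual introduction):

```
# Verify that every commit since the trust-anchor commit was signed
# by a key authorized in .guix-authorizations at that point in history.
guix git authenticate <introduction-commit> "<signer-openpgp-fingerprint>"
```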

About bootstrapping, several strategies are possible:

- [crev](https://github.com/crev-dev/cargo-crev) is an alternative (not PGP/SSH based) web of trust for vetting code

- [radicle](https://radicle.xyz/) is a p2p forge to do away entirely with web-based centralized forges (like github)

- we could imagine a sort of public-key-addressed DHT where a forge (such as GitHub) advertises the public keys of its members, for example based on IPNS (although a history-preserving system would be better)

- of course, source-based distros like NixOS and GNU Guix are also a very good answer for bootstrapping trust in the code you run; Guix in particular does intensive research and development on bootstrappability and reproducibility [1], which you can read about on their blog [2]

Shameless plug: i wrote an article earlier this year about some of the challenges surrounding decentralized forging: https://staticadventures.netlib.re/blog/decentralized-forge/

[0] https://guix.gnu.org/en/blog/2020/securing-updates/

[1] I'm personally really amazed and slightly disappointed by both: they're really cool on paper, but although Guix has a much better CLI UX than Nix (and in my opinion Guile Scheme is much easier than the Nix programming language), both have very cryptic errors

[2] https://guix.gnu.org/en/blog/

The answer for "what's the source of this version of this crate?" is always "what was published to crates.io?", the content of the git repository should be irrelevant. All serious code review efforts should always operate on the code that's actually going to be used instead of the upstream git repository, for reasons outlined by the blog post.

This is how cargo-crev already works: https://github.com/crev-dev/cargo-crev
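
Concretely, cargo-crev lets you inspect exactly what was published to the registry rather than the repository. A small sketch (assuming a recent CLI layout, with `serde` as an example crate):

```
# Open the crate's source as downloaded from crates.io, not its git repo,
# so the review covers the code that will actually be compiled.
cargo crev crate open serde
```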

Tangentially related:

Many companies have auditing requirements for external dependencies, with increasing strictness for more sensitive domains.

It would be immensely helpful to distribute this effort.

We could have a platform that pays top domain experts and security researchers for audits. Companies could get access via a subscription model or by paying for audits of specific dependencies.

Vulnerabilities would also be reported and fixed, helping everyone, and companies would benefit from having a trustworthy source for audits while saving internal work.

Ideally the platform would be successful enough to open up a good amount of audits publicly to benefit the whole community.

Also related: cargo-crev [1] explores a concept for shared auditing and trust for Rust crates.

[1] https://github.com/crev-dev/cargo-crev

There are projects like crev [0] which attempt to get around this by using a web of trust to audit dependencies. Rust has cargo-crev [1] as an implementation.

Here's the previous HN discussion [2].

[0] https://github.com/crev-dev/crev/

[1] https://github.com/crev-dev/cargo-crev

[2] https://news.ycombinator.com/item?id=18824923

Maybe using crev?

https://github.com/crev-dev/crev https://github.com/crev-dev/cargo-crev

Probably just publish it, and eventually you will get some reviews or bug reports.

Isn't this chain of trust essentially what crev [0] tries to do? FWIW, Rust has an implementation in cargo-crev [1], but I suppose you could extend this to AUR packages with a bit of work.

[0] https://github.com/crev-dev/crev/

[1] https://github.com/crev-dev/cargo-crev

cargo-geiger will recursively warn you of unsafe code in dependencies: https://github.com/anderejd/cargo-geiger
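
Usage is straightforward (the output format varies between cargo-geiger versions):

```
# Scan the current project and its whole dependency tree for `unsafe` usage.
cargo install cargo-geiger
cargo geiger
```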

It's not a silver bullet: it will show you areas where memory-safety issues could arise, but it doesn't necessarily prove their presence. Memory-safety issues also aren't the only type of security bug, so ruling them out doesn't mean the code is entirely safe; think of issues like injection attacks or faulty ACL logic, which can happen without any memory unsafety.

There's also cargo-crev, which is an attempt at building a web of trust for reviews of third-party packages: https://github.com/crev-dev/cargo-crev - for a longer-form article on what crev's goals are and how it works, I found this one explained it well: https://wiki.alopex.li/ActuallyUsingCrev

No, you are basically right, but the number of nodes in the dependency tree doesn't mean you really have to review all of them. Usually you end up with a big basket of actual dependency projects, each with a few versions (which is what leads to the big explosion in the number of nodes in the dep tree).

Naturally it should be easy to specify a whitelist of licenses. (Of course then one has to decide whether to trust the package.json-s.)

That said, security review is hard for any ecosystem. Go probably has inherent advantages compared to the JS ecosystem, simply by virtue of being younger, having a real standard library, being more focused (no browser vs nodeJS issues) etc.

PS: there are projects that aim to do collaborative audit/review for Rust (https://github.com/crev-dev/cargo-crev); there should be something like that for the JS world. Also, there's npm's "report vulnerability" feature.