Can I ask, is this a real, actual concern? Why do I need to sign and verify my software is my software? Why is a hash not sufficient integrity verification?

I have never heard a good argument for this besides the Apple-esque control of remotely disabling software by revoking its certificate, which is not a feature I'm interested in.

Further, I'd rather this not be possible at all, since year after year more and more software companies seem to think they're entitled to more and more control.
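For background on the hash question above: a hash published alongside an artifact only detects accidental corruption, because an attacker who can swap the artifact can usually swap the published hash too. A signature ties the artifact to a secret only the author holds. A minimal sketch of the distinction, using HMAC as a stand-in for a real public-key signature (all values here are illustrative):

```python
import hashlib
import hmac

artifact = b"original release"
digest = hashlib.sha256(artifact).hexdigest()

# An attacker who controls the download server replaces both files:
tampered = b"backdoored release"
tampered_digest = hashlib.sha256(tampered).hexdigest()

# The verifier's hash check still "passes" -- it proves nothing about origin:
assert hashlib.sha256(tampered).hexdigest() == tampered_digest

# A keyed check cannot be forged without the author's secret.
# (A real system would use asymmetric signatures, e.g. Ed25519, so the
# verifier needs only the public key; HMAC just keeps this sketch stdlib-only.)
author_key = b"author-secret"
tag = hmac.new(author_key, artifact, hashlib.sha256).hexdigest()

forged_tag = hmac.new(b"attacker-guess", tampered, hashlib.sha256).hexdigest()
assert not hmac.compare_digest(tag, forged_tag)
```

The point is not that hashes are useless, but that a bare hash fetched from the same place as the artifact verifies only transport integrity, not who produced the bytes.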

Fifteen years ago the principal security worry in running a web application was that some "script kiddie" would break in and deface your homepage.

Today the threats are much more serious: ransomware, cryptocurrency miners, even state actors.

An enormous point of weakness in modern software is the supply chain - many projects now have thousands of nested dependencies.

Most of those dependencies represent at least one human being who can be threatened with a crowbar and forced to ship an exploit, which can then infect vast numbers of production applications.

So yes, for me this is a very real concern!

Why can't they be "threatened with a crowbar" to sign the exploit?

Ultimately the system will need to support signatures which represent not just "I made this" but "I reviewed this", and people will need to set policies for whose reviews they trust, and how many reviews they require for each component.
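The policy idea above can be sketched concretely: a component carries signed review attestations, and each consumer decides which reviewers they trust and how many concurring reviews a component needs before it is accepted. This is a hypothetical sketch, not any existing tool's API; all names are illustrative.

```python
def approved(verified_reviewers: set[str],
             trusted: set[str],
             minimum: int) -> bool:
    """Accept a component only if enough trusted reviewers vouched for it.

    verified_reviewers: IDs whose review signatures cryptographically verified
    trusted: reviewer IDs this consumer has chosen to trust
    minimum: how many trusted reviews this consumer requires
    """
    return len(verified_reviewers & trusted) >= minimum

# One compromised or unknown reviewer ("mallory") doesn't help an attacker
# unless they can also compromise enough reviewers the consumer trusts.
reviews = {"alice", "bob", "mallory"}
my_trusted = {"alice", "bob", "carol"}

print(approved(reviews, my_trusted, minimum=2))  # True: alice and bob suffice
print(approved(reviews, my_trusted, minimum=3))  # False: only two trusted reviews
```

Raising `minimum` is exactly the "multiple crowbars" defense: the attacker must coerce several independent reviewers rather than one maintainer.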

If reviewers can build up a reputation anonymously, that will make it harder to find the human who needs to be crowbarred, but I'm not sure how you prove you are a good reviewer in a way which isn't gameable.

Alternatively, the reviewers could be well known teams in multiple jurisdictions, such that an attacker would need to buy multiple crowbars and multiple plane tickets.

The reviewing aspect sounds a lot like cargo-crev. [1]

[1] https://github.com/crev-dev/cargo-crev