People from outside Google freak out about this because at their company, as at 99.9% of companies, running code on an engineer's workstation would immediately be the highest possible level of breach. Such a process could silently insert code into repos, corrupt the build environment, replace packages in production, or whatever. If you have never worked at a company that took it seriously, it is hard to imagine that there are people who do take it seriously, and that it is possible to have technical defenses against committing unreviewed and unapproved code, poisoning the official build toolchain, or surreptitiously changing production software images.

If the report is accurate, at least one of the hosts in question showed the user running as root. Perhaps it was running in a container. Regardless, this malicious package could be used for data exfiltration or other nefarious things.

But they have already explained (at the end) that in any event, "Googlers can download and run arbitrary code on their machines", the implication being that Google thought about and had to deal with this general issue a long time ago. What this exploit inverts is how the "arbitrary code" ends up running on Google machines, but it makes perfect sense that Google's security protections couldn't care less how the code got onto some dev's machine, since the dev could just as well have explicitly pulled it from the net.

Not directly relevant but interesting...

https://github.com/google/santa

This is a product developed by Google that has seen at least some internal use. It's not perfect, but my previous company used it, and it does prevent unexpected, unknown code from running in the background.
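To make that concrete, here is a minimal sketch of the kind of policy Santa enforces, written as a fragment of its configuration profile. The key names (`ClientMode`, `StaticRules`, `rule_type`, `policy`) are from Santa's deployment docs as I remember them, and the SHA-256 value is a placeholder, so treat this as illustrative rather than a drop-in config:

```xml
<!-- Fragment of a Santa configuration profile payload.
     ClientMode 2 = Lockdown: only explicitly allowed binaries may execute;
     everything else is blocked (with a prompt/log), which is what stops
     unknown code from quietly running in the background. -->
<key>ClientMode</key>
<integer>2</integer>
<key>StaticRules</key>
<array>
  <dict>
    <!-- Allow one specific binary, identified by its SHA-256 (placeholder). -->
    <key>identifier</key>
    <string>PLACEHOLDER-SHA256-OF-APPROVED-BINARY</string>
    <key>rule_type</key>
    <string>BINARY</string>
    <key>policy</key>
    <string>ALLOWLIST</string>
  </dict>
</array>
```

In practice rules are usually managed dynamically via a sync server or `santactl` rather than baked in statically, and that human approval loop is exactly where the fatigue problem below creeps in.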

What it does not do is prevent someone from intentionally downloading and executing a library, unless whoever approves the request actually pushes back and demands justification. I found it quickly turned into a bit of "alert fatigue", where you approve things your coworkers send you so they can get back to work, without properly vetting them.