Second, and more importantly, many people carry a copy of a trusted compiler around, though it’s rarely mentioned in attacks like these: their head. In a pinch, people can spot-check the generated code to see whether it looks correct, unless the backdoor is incredibly subtle. But experience shows that the more complex and hidden a backdoor is, the more likely it is to break when subjected to unfamiliar examination.
Timestamps are one of the biggest sources of nondeterminism, which is also why reproducible builds are important. Nondeterministic codegen is just scary.
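For a concrete sense of what the reproducibility check amounts to, here is a minimal sketch: build the same source twice, independently, and compare the artifacts bit for bit. The paths are hypothetical placeholders, and real projects use richer tooling (diffoscope, for instance) to explain any differences, but the core check is just this.

# Minimal sketch: compare two independently produced builds of the same
# source for bit-for-bit identity. Artifact paths are hypothetical.
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    # e.g. the same package built on two separate machines or build runs
    a, b = sys.argv[1], sys.argv[2]
    da, db = sha256_of(a), sha256_of(b)
    print(a, da)
    print(b, db)
    if da == db:
        print("builds are bit-for-bit identical")
    else:
        print("builds differ: nondeterminism (timestamps?) or tampering")
        sys.exit(1)

If the digests differ, an embedded timestamp is usually the first suspect to rule out before worrying about anything more sinister.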
This is also why I'm against inefficient bloated software in general: the bigger a binary is, the easier it is to hide something in it.
Along the same lines, a third idea I have for defending against such attacks is better decompilers: ideally, repeatedly decompiling and recompiling should converge to a fixed point, whereas a backdoor in a compiler should cause it to decompile into source that's noticeably divergent.
Not exactly easy, but probably easier than a decompiler that produces human-equivalent source code.
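A rough sketch of what that fixed-point check might look like, assuming a decompiler and compiler that can be driven from the command line. The `decompile` invocation and the file names here are hypothetical stand-ins, not a real toolchain:

# Sketch of the decompile/recompile fixed-point idea. The `decompile`
# command and the file names are hypothetical placeholders.
import hashlib
import subprocess

def digest(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def round_trip(binary: str, source_out: str, binary_out: str) -> None:
    """Decompile a binary to source, then recompile that source."""
    subprocess.run(["decompile", binary, "-o", source_out], check=True)
    subprocess.run(["cc", source_out, "-o", binary_out], check=True)

def converges(binary: str, max_rounds: int = 5) -> bool:
    """Return True if repeated decompile+recompile reaches a fixed point."""
    current = binary
    seen = digest(current)
    for i in range(max_rounds):
        src, out = f"round{i}.c", f"round{i}.bin"
        round_trip(current, src, out)
        d = digest(out)
        if d == seen:          # the binary reproduced itself exactly
            return True
        seen, current = d, out
    return False               # still drifting after max_rounds: worth a look

if __name__ == "__main__":
    # "target.bin" is an illustrative input path
    print("fixed point reached" if converges("target.bin") else "divergent")

Convergence alone doesn't prove the binary is clean, of course; the interesting signal is a binary that keeps drifting, or one whose decompiled source looks noticeably different from what you expected.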
It could be an interesting exercise to bootstrap up from something like this to a working Linux environment based solely on source-code compilation: no binary inputs. Of course, a full Linux environment has way too much source code for one person or team to audit, but at least it rules out RoTT-style binary compiler contamination.