The arguments in this blogpost are fundamentally flawed. The fact that they opened a bug based on them and got it shut down should have raised red flags.

When compiling and running a C program, the only thing that matters is "what the C abstract machine does". Programs that exhibit UB in the abstract machine are allowed to do "anything".

Trying to scope that down with arguments of the form "but what the hardware does is X" is fundamentally flawed: "anything" means anything, and what the hardware happens to do doesn't narrow that down, so it simply doesn't matter.

This blogpost "What The Hardware Does is not What Your Program Does" explains this in more detail and with more examples.

https://www.ralfj.de/blog/2019/07/14/uninit.html
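To make it concrete, here's a minimal sketch (my own, not taken from either post) of why "but the hardware just wraps" doesn't save you: signed overflow is UB in the abstract machine, so the optimizer may assume it never happens and delete the check, even though the x86 add instruction would happily wrap.

    #include <limits.h>
    #include <stdio.h>

    int will_overflow(int x) {
        return x + 1 < x;   /* UB when x == INT_MAX; commonly folded to 0 at -O2 */
    }

    int main(void) {
        /* "The hardware wraps", yet an optimizing build may print 0 here. */
        printf("%d\n", will_overflow(INT_MAX));
        return 0;
    }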

Honestly, I think you are both incorrect.

C has always had a concept of implementation-defined behavior, and unaligned memory accesses used to be defined to work correctly on x86.

Intel added instructions that can’t handle unaligned access, so they broke that contract. I’d argue that it is an instruction set architecture bug.

Alternatively, Intel could argue that compilers shouldn’t emit vector instructions unless they can statically prove the pointer is aligned. That’s not feasible in general for languages like C/C++, so that’s a pretty weak defense of having the processor pay the overhead of supporting unaligned access on some, but not all, paths.
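For concreteness, a sketch of the kind of pattern at issue (mine, not the code from the actual bug report): the pointer's type already promises suitable alignment, so an auto-vectorizer is entitled to rely on it, e.g. by peeling to a 16-byte boundary and then using aligned SSE loads such as movaps.

    #include <stdint.h>
    #include <stddef.h>

    uint32_t sum(const uint32_t *p, size_t n) {
        uint32_t s = 0;
        for (size_t i = 0; i < n; i++)
            s += p[i];      /* the compiler may assume p is aligned for uint32_t */
        return s;
    }

    /* Calling it with a misaligned pointer is UB in the abstract machine, even
       though a plain scalar mov on x86 would have tolerated it:

           char buf[64];
           uint32_t s = sum((const uint32_t *)(buf + 1), 8);
    */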

Unaligned memory accesses are undefined behavior in C. If you're writing C, you should be abiding by C's rules. "Used to work correctly" is more guesswork and ignorance than "abiding by C rules". In C, playing fast and loose with the definitions hurts, BAD.
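And abiding by the rules costs nothing here; a sketch of the usual conforming pattern (mine, not something from the thread): route the unaligned read through memcpy, which mainstream compilers lower to a single unaligned load on x86 anyway.

    #include <stdint.h>
    #include <string.h>

    uint32_t load_u32(const void *p) {
        uint32_t v;
        memcpy(&v, p, sizeof v);   /* defined for any alignment */
        return v;
    }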

Frankly, I'd be ashamed to write this blog post since the only thing it accomplishes is exposing its writers as not understanding the very thing they're signaling expertise on.

What makes you think they don't understand it? They acknowledge that it is UB. I read them as realistic, since they know that people rely on C compilers working a certain way. They even wrote an interpreter that detects UB: https://github.com/TrustInSoft/tis-interpreter

I understand why people like the compiler being able to leverage UB. I suspect this philosophy actually makes Trust-In-Soft more money: you could argue that if there were no UB, there would be no need for the tis-interpreter.

So isn't it in fact quite selfless of them to encourage the world to optimize a bit less (spending more money on 'compute'), while standing to profit from the unintended behaviour they'd otherwise be contracted to help debug?