Because C is slower than C.
That is, they are compiling the C with GCC, but Rust uses LLVM as a backend. Compiling their C code with clang reveals that it, too, is slightly slower than the C code compiled by GCC, and no faster than Rust.
The actual conclusion here is that the GCC toolchain is slightly faster than the LLVM toolchain for this particular use case, but that isn't really news - it happens all the time.
If one wants to focus this on Rust, then the downside of Rust vs C here is that C has a GCC toolchain that happens to be faster for this use case, but Rust does not.
Yeah, their initial explanation is that it's the "safety features", but I was under the impression that literally everything related to Rust's safety happens at compile time. Their big selling point is the "zero cost abstractions".
Their initial explanation is bounds checking specifically. While most abstractions are zero-cost, bounds checking isn't.
I do wish there were a way to track expectations about integer ranges in a type system, but there currently isn't one.
Bounds checking is a “zero cost abstraction” in the sense that it is both “pay for what you use” and “you couldn’t implement it yourself any better.” That said, the implementation is not as fast as it theoretically could be, because the compiler doesn’t always remove bounds checks that are provably unnecessary.
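For example (just a sketch; whether LLVM actually drops the checks depends on the exact code shape and compiler version), hoisting a single assertion about the length often lets the optimizer prove the per-iteration checks redundant:

    // Sketch only: measure before relying on the optimizer doing this.
    pub fn sum_checked(data: &[u64], n: usize) -> u64 {
        let mut total = 0;
        for i in 0..n {
            total += data[i]; // a bounds check may be emitted on every iteration
        }
        total
    }

    pub fn sum_hinted(data: &[u64], n: usize) -> u64 {
        // One up-front check the optimizer can use to prove every `data[i]`
        // below is in bounds, often eliminating the per-iteration checks.
        assert!(n <= data.len());
        let mut total = 0;
        for i in 0..n {
            total += data[i];
        }
        total
    }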
When const generics land, you will be able to make those assertions! A basic implementation is in nightly.
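Roughly what that might look like (hypothetical sketch using the nightly feature as it exists today; the syntax that eventually stabilizes may differ):

    // The length N is part of the type, so the compiler knows it statically.
    struct Ring<const N: usize> {
        buf: [u8; N],
    }

    fn main() {
        let r: Ring<8> = Ring { buf: [0; 8] };
        // A constant index that is out of range is rejected at compile time,
        // because the array length is known to the compiler:
        // let _ = r.buf[9]; // error: this operation will panic at runtime
        let x = r.buf[3]; // fine
        assert_eq!(x, 0);
    }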
How do you not use bounds checking?
(Unsafe methods like get_unchecked don't count, because then you could say that about almost every Rust feature.)
You're misunderstanding what "you don't pay for what you don't use" means. It means that a program that does not contain [] in it does not include code for []. There's no "mandatory runtime" code for []. It doesn't mean you can use [] in some special way and somehow get less code.
And even if we were talking about your understanding, I don't see why get_unchecked would be disqualified. unsafe is a part of the language for a reason.
I know, the meaning of "zero cost abstraction" is really "zero cost to not use the abstraction."
But that's like saying that if you don't use Rust, you don't pay for it. Just because the language has an unsafe escape hatch, you don't get to label it as having zero-cost abstractions, because practically speaking there is a lot more runtime cost to Rust than people will tell you.
Many languages (almost anything without a runtime) meet that definition of zero cost abstraction, then.
You still seem to be missing the point.
If you use a Rust construct that does bounds-checked accesses, you're explicitly using bounds checks. You're not paying for anything beyond the bounds checks that you're using.
If you use a Rust construct that elides the bounds checks (e.g. get_unchecked), there are no bounds checks, and you don't pay any cost for the fact that Rust supports a bounds-checked subscript operator.
If you want to compare Rust's subscript operator with C, do so against an explicitly bounds-checked version of the C code. That's the appropriate comparison for "zero-cost abstraction": the whole point of an "abstraction" is that it doesn't change the observed runtime behavior, it only changes how you express it in the language.
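Concretely, the three flavors under discussion look like this (just an illustration, not a benchmark):

    // Bounds-checked subscript: the abstraction you pay for only when you use it.
    fn read_checked(v: &[u32], i: usize) -> u32 {
        v[i] // panics if i >= v.len()
    }

    // Also bounds-checked, but reports failure as an Option instead of panicking.
    fn read_get(v: &[u32], i: usize) -> Option<u32> {
        v.get(i).copied()
    }

    // Unchecked: the caller must uphold i < v.len(), which is the same contract
    // a raw C array access has - this is the fair comparison for unchecked C.
    unsafe fn read_unchecked(v: &[u32], i: usize) -> u32 {
        *v.get_unchecked(i)
    }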
Consider memory management as an analogy. Traditionally, you have two options:

1. Use a C-style raw pointer. This has no overhead but is unsafe.
2. Use a reference counted or garbage collected pointer. This provides safety but is a non-zero-overhead abstraction. In other situations, reference counting can be necessary to manage unpredictable object lifetimes, but in this case we're assuming it would only be used to guard against mistakes, so the cost represents overhead. Even if some language implements reference counting just as fast as you can implement reference counting in C, that doesn't make it zero-overhead.
But Rust adds a new option:
3. Use a borrow-checked reference. This is safe, yet zero-overhead compared to the unsafe option, as long as your usage pattern can be expressed in terms of Rust lifetimes.
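In toy code (the struct and field names here are made up just for illustration):

    use std::rc::Rc;

    struct Config { retries: u32 }

    fn main() {
        let cfg = Config { retries: 3 };

        // Option 1: raw pointer. No overhead, but nothing stops a dangling use.
        let p: *const Config = &cfg;
        let retries_raw = unsafe { (*p).retries };

        // Option 2: reference counting. Safe even with unpredictable lifetimes,
        // but pays for an allocation plus count bookkeeping at runtime.
        let shared = Rc::new(Config { retries: 3 });
        let shared2 = Rc::clone(&shared);
        let retries_rc = shared2.retries;

        // Option 3: a borrow-checked reference. Safe, and compiles down to the
        // same thing as the raw pointer; the checking happens at compile time.
        let r: &Config = &cfg;
        let retries_ref = r.retries;

        assert_eq!((retries_raw, retries_rc, retries_ref), (3, 3, 3));
    }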
Going back to array indexing, the analogous situation is that you want to access an index that you, the programmer, expect to always be in bounds. Option 1 corresponds to unchecked indexing, and option 2 corresponds to bounds-checked indexing. But Rust brings nothing new in this area: there is no option 3.
Yet an option 3 is theoretically possible: safe unchecked indexing if you can prove to the compiler that the index can never be out of bounds. This can be done in languages with dependent types or built-in proof systems. (It can even be done in Rust with a certain neat hack [1], but with a lot of limitations.)
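For the curious, here's a very rough sketch of the general "check once, then index without re-checking" idea (not necessarily how the hack in [1] works, and it glosses over soundness subtleties that real implementations of this pattern handle much more carefully):

    use std::marker::PhantomData;

    // The invariant lifetime 'id "brands" indices so they can only be used
    // with the slice they were checked against.
    pub struct BrandedSlice<'id, 'a, T>(&'a [T], PhantomData<*mut &'id ()>);
    pub struct BrandedIndex<'id>(usize, PhantomData<*mut &'id ()>);

    impl<'id, 'a, T> BrandedSlice<'id, 'a, T> {
        // The one place a runtime bounds check happens.
        pub fn check(&self, i: usize) -> Option<BrandedIndex<'id>> {
            if i < self.0.len() {
                Some(BrandedIndex(i, PhantomData))
            } else {
                None
            }
        }

        // No bounds check: the brand says the index was validated against this
        // slice, and a slice's length never changes.
        pub fn get(&self, idx: BrandedIndex<'id>) -> &T {
            unsafe { self.0.get_unchecked(idx.0) }
        }
    }

    // The higher-ranked closure makes 'id fresh on every call, so indices from
    // one slice can't be smuggled over to another.
    pub fn with_branded<'a, T, R>(
        slice: &'a [T],
        f: impl for<'id> FnOnce(BrandedSlice<'id, 'a, T>) -> R,
    ) -> R {
        f(BrandedSlice(slice, PhantomData))
    }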
I'm not saying Rust is bad for not having dependent types. As far as I know, it's an open research problem how to make those features feel ergonomic and easy to use, even to the extent that Rust lifetimes are (i.e. not totally). And on the flip side, bounds checks usually don't cause much overhead in practice, thanks to CPU branch prediction, plus the optimizer can sometimes remove them.
But I'd say that Rust's choice not to go there (...yet...) justifies calling its current approach a non-zero-overhead abstraction.