Memory, memory, memory, and memory again. All 4 of them.

Yet everyone’s idea of safety still seems to be “just write bug-free C code, bro”.

> Yet everyone’s idea of safety still seems to be “just write bug-free C code, bro”.

Does it though? I think "the industry" has pretty much come around on C being unsafe, and generally not a great choice of language for new projects.

But the fact is that there's a lot of C code out there, and we can't just ditch or rewrite it all overnight. In an ideal world, we wouldn't be starting from here. But we are, and it's going to take time to get to where we need to be.

Heck, take a look at the recent story about "Considering C99 for curl": https://news.ycombinator.com/item?id=33704054

From the linked article:

> The slowest of the “big compilers” to adopt C99 was the Microsoft Visual C++ compiler, which did not adopt it properly until 2015 and added more compliance in 2019. A large number of our users/developers are still stuck on older MSVC versions so not even all users of this compiler suite can build C99 programs even today, in late 2022.

What are the chances that the kinds of people who are still doing development on systems where the C compiler is from 2015 or earlier, or on a more niche system than Windows that lots of new safe languages haven't been ported to yet (and such ports aren't even on the radar), are going to be able to start moving to developing in Rust or whatever anytime soon?

> Does it though? I think "the industry" has pretty much come around on C being unsafe, and generally not a great choice of language for new projects.

If you look at some of the Rust threads on HN right now [0], you find many commenters claiming that there is no reason to replace C with Rust. It could be a vocal minority, but there's still a sense that "not everyone is on board".

[0] https://news.ycombinator.com/item?id=34788714

> there is no reason to replace C with Rust.

If you have some reason to choose C today, I'm not sure Rust is a contender to stand in as a replacement. They are vastly different languages. That doesn't mean people deny that C's memory model is unsafe, just that there isn't a recognized "better C" to choose from.

IMHO Rust could be a good replacement for C when (if) it gets an actual spec, or at least when (if) there are multiple real implementations of the compiler (meaning, with borrow checker, compiler errors, and all).

Because otherwise, you could write Rust 2018 edition code (for example), and there's nothing preventing a situation where your currently-valid Rust-2018-edition program stops compiling (or behaves differently) on a future version of rustc, even when compiling to that specific edition, just because some behavior is later deemed a compiler bug or something. And it would be perfectly valid to cause such breakage, because there's no definition of "what is Rust" other than "what this specific compiler, in this specific version, accepts".
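To make that concrete, here's a minimal sketch (my own illustration, nothing official) of how the answer to "is this valid Rust?" already depends on the edition, with the single rustc you happen to have installed as the only arbiter:

```rust
// Whether this file is "valid Rust" depends on the edition in use, and today
// the only authority on what each edition means is the rustc implementation:
//
//   rustc --edition 2015 main.rs   // compiles: `async` is an ordinary identifier
//   rustc --edition 2018 main.rs   // error: `async` became a keyword in edition 2018
fn main() {
    let async = 1;
    println!("{}", async);
}
```

The edition mechanism itself works well; the point is only that its meaning is defined by one implementation rather than by a spec.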

I'm not talking about whether such a backwards-incompatible change in a Rust edition would be good or bad. Just that with the current monoculture around a single implementation, there aren't many guarantees.

A specification, or multiple implementations, would at least add some (good) friction.

I'm not denying its current value as a safe language; just that, as it is now, I don't consider it as a good choice for programs that must stay alive for decades, like `curl` or an operating system.

> need multiple implementations

This thinking is understandable, but it’s basically pattern matching: there were two successful languages that followed this model, so (the reasoning goes) a language that seeks to replace them should adopt the same model of design by committee followed by multiple independent implementations.

But there’s no reason to think that a language needs multiple implementations to succeed. Python has effectively had one implementation (CPython) used by nearly all its users, and that hasn’t prevented it from becoming a top-3 language by usage. Ditto for other very popular languages like Go, Ruby, TypeScript, and C#.

I don’t understand the concern that your code will stop compiling with a future version of the compiler. Firstly, Rust hasn’t broken backwards compatibility in 8 years and doesn’t plan to in the future. Secondly, why do you need to upgrade the compiler? If it works on the current version, you can continue to use that version in perpetuity.

Honestly, having multiple implementations would be a curse. I see nothing but negatives. Even the smallest proposals (like embedding data in a binary) take a decade to get buy-in and 3-4 years to be implemented. Convincing everyone of the benefit of a feature is just too difficult, so a lot of good ideas are never put forward, let alone implemented.
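For a concrete contrast, here’s a minimal sketch of the “embedding data in a binary” case in Rust, where it has long been a built-in macro; the file path is just an illustrative placeholder:

```rust
// Embed a file's bytes into the binary at compile time using the built-in
// `include_bytes!` macro. The path is a hypothetical placeholder and is
// resolved relative to this source file when the program is compiled.
static LOGO: &[u8] = include_bytes!("assets/logo.png");

fn main() {
    println!("embedded {} bytes", LOGO.len());
}
```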

And then there’s the difficulty of getting a feature working across compilers and operating systems. How hard is it to write C++20 code that will compile with all major compilers on all major OSs? In Rust it’s trivial. To put it simply - it’s trivial to answer the question “is this Rust code valid?” If rustc compiles it, it’s valid today and will always be valid (modulo bugs).

You want alternate implementations? There’s a WIP - gcc-rs. But thankfully they haven’t forked the semantics of the language: it should compile exactly the code that rustc compiles and reject exactly the code that rustc rejects; anything else is a bug in gcc-rs, and would make the life of Rust users painful.

In summary, having multiple implementations is a bad idea for many reasons. If you can express the benefit beyond “that’s the way C and C++ do it”, I’m eager to listen. But please explain how Python and Go managed to succeed with one impl.

The trouble with a single, unstandardised implementation is that it's only available on the OSs and CPUs that the lone implementation's developers choose to target.

By creating a standard, and allowing multiple implementations, someone creating a new OS or a new CPU architecture can create their own implementation of the language without having to wait for anyone else to deign to create the port.

Maybe a new implementation won't compile very fast, or produce great code, or it might not even catch all the incorrect constructs that it ought to. But if it can make all the existing code written in that language suddenly available for the new platform, that platform instantly becomes orders of magnitude more useful and powerful.

> How hard is it to write C++20 code that will compile with all major compilers on all major OSs?

That depends on the code. You want to compile Firefox? You might be out of luck.

You want to compile curl? Based on how portable it already is, chances are it might work out of the box once you've got a libc implementation working.

You want to compile GNU bash and coreutils, to get a basic POSIX shell environment up and running? It might be a bit of work, but easier than having to reimplement a whole userland from scratch.

> But please explain how Python and Go managed to succeed with one impl.

It's not that the language can't succeed - it can, at least on the popular platforms that the devs are interested in targeting. It's that those devs get to dictate which platforms are even capable of making use of the software written in those languages.

And that just rubs me the wrong way.

> It's that those devs get to dictate which platforms are even capable of making use of the software written in those languages.

Adding new targets is welcome. Take a look at the supported platforms (https://doc.rust-lang.org/nightly/rustc/platform-support.htm...) and see if there's a platform you'd like to target that isn't at least a tier 3 target. There's Haiku OS, the PlayStation 1, etc. I'm on a Tier 2 platform myself and I'm really happy.

If you'd like even more platforms, there's an ongoing project to add a GCC backend to rustc (https://blog.antoyo.xyz/rustc_codegen_gcc-progress-report-20).

> if it can make all the existing code written in that language suddenly available for the new platform, that platform instantly becomes orders of magnitude more useful and powerful.

I agree with this, but creating a frontend that can actually compile all existing code correctly is a massive task. If the compiler is modular enough, you should be able to contribute just a backend, rather than reimplementing the frontend. I'm strongly against reimplementing the frontend specifically, because then it's no longer clear what "valid" Rust code is.

In summary - I completely agree with the importance of targeting niche platforms. I think supporting multiple backends (LLVM now, libgccjit and Cranelift in future) gets us there without fracturing the ecosystem.

> but easier than having to reimplement a whole userland from scratch.

Probably. But once it's reimplemented (https://github.com/uutils/coreutils), I think it's really cool that you can get binaries for Linux, macOS, and Windows with one build command.