What does Hacker News think of dotnet/runtime?
.NET is a cross-platform runtime for cloud, mobile, desktop, and IoT apps.
C# and .NET are most heavily invested in by Microsoft, which owns and steers their development; that is true. It is also true that the JVM world sees investment from multiple MSFT-sized corporations.
And yet, despite the above, .NET keeps moving forward and outperforming Java on user experience, performance, and features while being worked on by much smaller teams. I think that stands on its own as a measure of a well-made technology.
In addition, you can look at the source code and contribute yourself; roughly 90% of what makes .NET run lives in the repositories below. Almost all development happens in the open:
https://github.com/dotnet/runtime
https://github.com/dotnet/installer
https://github.com/dotnet/roslyn
https://github.com/dotnet/aspnetcore
Could Microsoft do a better job at making it even more community-facing, and at making the .NET Foundation the sole owner and steering committee of the language itself? Sure. But it's not that bad today either. Quick reminder: Oracle is not exactly a saint, perhaps even worse (MSFT has never gotten into any litigation even remotely related to .NET or C#).
As for career opportunities: as other commenters have noted, this is highly specific to a region and does not translate globally. Again, we are discussing "how good the language/platform is" first and foremost. I don't see startups adopting Go because of the market, or because they trust Google not to rug-pull them... so perhaps we can do a better job so that the next language of choice they pick is C#, which has much higher ROI in the hands of good developers (for example, it can be very easy to adopt as a second language if you are well versed in Rust).
It's certainly true that most things aren't AOT-compatible out of the box -- this is basically due to the heavy ecosystem reliance on reflection.
BUT, it definitely shouldn't be silently incompatible. We've spent a lot of time building "trimming warnings" (https://learn.microsoft.com/en-us/dotnet/core/deploying/trim...) that should flag any and all incompatibilities.
If there's some incompatibility, a warning should be generated; if there isn't, that's likely a bug on our side (https://github.com/dotnet/runtime). Conversely, if there are no warnings, the app should be guaranteed to be AOT-compatible.
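To make that concrete, here is a minimal sketch of the annotation pattern behind those warnings (the loader method and message are illustrative; the RequiresUnreferencedCode attribute and the IL2026 warning it produces are the real mechanism):

    using System;
    using System.Diagnostics.CodeAnalysis;

    public static class PluginLoader
    {
        // Reflection like this is invisible to the trimmer: it cannot tell which
        // type will be requested at runtime, so that type may have been trimmed
        // out of the published app. The attribute makes every call site emit
        // trim warning IL2026 instead of failing silently at runtime.
        [RequiresUnreferencedCode("Loads a type by name; it may be removed by trimming.")]
        public static object LoadByName(string typeName)
        {
            Type type = Type.GetType(typeName)
                        ?? throw new InvalidOperationException($"Type '{typeName}' not found.");
            return Activator.CreateInstance(type)!;
        }
    }

Publishing with <PublishTrimmed>true</PublishTrimmed> (or <PublishAot>true</PublishAot>, which implies trimming) is what surfaces these annotations as build-time warnings at every call site.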
For example, the Go runtime, and I believe much of the C# runtime as well, are written in their respective languages.
Which is mostly (~95%) developed by Oracle, and is about the same in its openness and community participation as https://github.com/dotnet/runtime/ is.
Or maybe you meant OpenJ9?
FWIW I also use Linux exclusively, develop (and host) dotnet applications on it, and have my own gripes with it (mostly with Linux still being treated as a second-tier platform which is only good for servers as far as MS is concerned — I'm not talking about the abomination that VS is — try to compare the official profiling & debugging tooling).
I don't have a single, definitive, clear solution -- as pointed out by others -- nobody does. It's not a simple problem.
That doesn't mean that steps can't be taken to improve the situation, perhaps dramatically in some cases.
1) Enforced MFA to publish a crate -- credential theft is semi-regularly seen as an attack vector.
2) Strong links between the "source ref" and the specific crate versions. An example of this done super badly is NuGet. All of the hundreds (thousands?) of Microsoft ASP.NET packages point to the same top-level asp.net or .net framework URLs. E.g.:
https://www.nuget.org/packages/Microsoft.Extensions.Configur...
Links to "https://dot.net" as the Project Website, and "https://github.com/dotnet/runtime" as the repository. This couldn't be more useless. Where is the Git hash for the specific change that "7.0.0-preview.4.22229.4" of this library represents? Who knows...
3) Namespaces. They're literally just folders. If you can't code this, don't run a huge public website. This is more important than it sounds, because wildly unrelated codebases might have very similar names, and it's all too easy to accidentally drag in entire "ecosystems" of packages. Think of the Apache Project. It's fine and all if you've "bought in" to the Apache way of doing... everything. But imagine accidentally importing some Google thing, some Netflix thing, some Apache thing, and some Microsoft thing into the same project. Now your 2 KLOC EXE is 574 megabytes and requires 'cc', 'python', and 'pwsh' to build. Awesome.
For example, in ASP.NET projects I avoid anything not officially published by Microsoft and with at least 10M downloads because otherwise it's guaranteed to be a disaster in 5-10 years. Ecosystems diverge, wildly, and no single programmer or even a small group could possibly stitch them back together again. Either it's a dead end of no further upgrades, or rip & replace an entire stack of deeply integrated things.
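Going back to the "literally just folders" point above, a toy sketch of how little namespacing demands of the registry (all names hypothetical):

    using System.IO;

    static class Registry
    {
        // Hypothetical: a namespaced package ID is just one more path segment,
        // so "microsoft/extensions.configuration" can never collide with
        // "contoso/extensions.configuration".
        public static string PackagePath(string ns, string name, string version) =>
            Path.Combine("packages", ns, name, version, $"{name}.{version}.nupkg");
    }

    // Registry.PackagePath("microsoft", "extensions.configuration", "7.0.0")
    //   -> packages/microsoft/extensions.configuration/7.0.0/extensions.configuration.7.0.0.nupkg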
4) Publisher-verified crate metadata / tags. You just cannot rely on the authors to be honest. It's not even about attacks; it's also about consistency and quality. All crates should be compiled by the hosting provider in isolated Docker containers or VMs using a special "instrumented build" flag. Every transitive dependency should be indexed. Platform compatibility should be verified. Toolchain version compatibility should be established for both the min and max of the supported range. Flags like "no-std" or whatever should be automatically checked. CPU and platform compatibility would also be very helpful for a lot of users. The most important one in the Rust world would be the "No unsafe code" tag.
This would stop "soft attacks" such as the guy spamming C++ libraries as Rust crates. Every such crate should have been automatically labelled as: "Requires CC" and "Less than 10% Rust code".
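A sketch of what registry-side labelling could derive from such an instrumented build (everything here is hypothetical; no registry computes tags like this today):

    using System.Collections.Generic;

    // Hypothetical output of an isolated, instrumented build: the registry,
    // not the author, derives the tags shown on the package page.
    record BuildReport(bool InvokesCCompiler, double RustFraction, bool UsesUnsafe,
                       string MinToolchain, string MaxToolchain);

    static class Labeller
    {
        public static IEnumerable<string> DeriveTags(BuildReport r)
        {
            if (r.InvokesCCompiler) yield return "Requires CC";
            if (r.RustFraction < 0.10) yield return "Less than 10% Rust code";
            if (!r.UsesUnsafe) yield return "No unsafe code";
            yield return $"Toolchain {r.MinToolchain} to {r.MaxToolchain}";
        }
    }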
Similarly, if a crate/package changes its public signature in a breaking way, then the publishing system should enforce the right type of semantic versioning bump.
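A sketch of that publish-time gate (hypothetical; a real implementation would diff the full public API surface rather than comparing signature strings):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class SemverGate
    {
        // Hypothetical check: compare the previously published public API surface
        // with the new one and reject the publish if the version bump is too small.
        public static bool BumpIsSufficient(ISet<string> oldApi, ISet<string> newApi,
                                            Version oldVer, Version newVer)
        {
            bool breaking = oldApi.Except(newApi).Any(); // public signatures removed or changed
            bool additive = newApi.Except(oldApi).Any(); // public signatures added

            if (breaking) return newVer.Major > oldVer.Major;  // breaking change: major bump required
            if (additive) return newVer.Major > oldVer.Major
                              || (newVer.Major == oldVer.Major && newVer.Minor > oldVer.Minor);
            return newVer > oldVer;                            // otherwise a patch bump is enough
        }
    }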
Essentially, what I would like to see is something more akin to a monorepo, but not technically a single repository. That is, a bunch of independent developers doing their own thing, but with a cloud-hosted central set of tooling that helps gain the same benefits as a monorepo.
I'm expecting a lot of arguments along the lines of "that sounds like a lot of work, etc..." Meanwhile Mozilla had a large team for this, millions of dollars of funding, and did not do even 0.1% of what Matt Godbolt did in his spare time...
Debugging is one area where the form factor will likely demand different behavior vs. JIT, so it will be an evolving scenario. It would be great to know what you would expect vs. what you saw, and importantly what you expect to be different from a traditional native debugging experience (like in C++).