What does HackerNews think of rust?
Empowering everyone to build reliable and efficient software.
The foundation is now nine months old, but I don't see any announcement on its website saying that the trademarks have been transferred.
The Rust website [1] says that the Rust trademark is still owned by the Mozilla Foundation.
The README in Rust's git repository [2] says it's owned by the Rust Foundation.
I am distinctly unimpressed by the foundation's communication skills so far.
[1] https://www.rust-lang.org/policies/media-guide [2] https://github.com/rust-lang/rust
This project is using libgccjit, which is basically a library interface to GCC (intended for JITs, but here being used for ahead-of-time compilation), as an alternative to LLVM for the standard Rust compiler ("rustc" aka https://github.com/rust-lang/rust). This allows reusing all the logic inside the Rust compiler for compiling code, checking types and lifetimes, etc. while targeting platforms that LLVM does not support.
The GCC Rust project https://rust-gcc.github.io/ is an alternative implementation of Rust inside the GCC project, in much the same way that, say, gccgo is an alternative implementation of Go or GCJ is an alternative implementation of Java. It's adding a Rust frontend to GCC, instead of adding a GCC backend to rustc.
From a pure functionality standpoint, the libgccjit approach is preferable, because as soon as a feature is in rustc, it's usable. You share the same implementation.
Personally, I'm very happy to see it approved, because it immediately addresses a difficult issue with getting Rust into the Linux kernel: the kernel supports many more architectures than LLVM does, and rustc doesn't even support all of the ones LLVM does (https://github.com/fishinabarrel/linux-kernel-module-rust/is... is where things stood last time I looked in detail). Without some answer for how to get things to compile on every architecture, Rust kernel code will have to be limited to drivers that are only used on architectures Rust supports.
From an avoiding-monoculture standpoint, GCC Rust is preferable specifically because it's an alternative implementation. I think it will be good, long-term, to have that as an option too.
Also, as I understand it, a reason that the company behind grsecurity is sponsoring GCC Rust is that they have various GCC compiler plugins for hardening, and those would apply straightforwardly to GCC Rust, but they wouldn't necessarily apply to libgccjit. From https://opensrcsec.com/open_source_security_announces_rust_g... :
> As the source of the GCC plugin infrastructure in the Linux kernel and nearly all of the GCC plugins adapted for inclusion in the upstream Linux kernel, we too immediately spotted the importance of this problem and set out to ensure both those plugins as well as the security features built-in to GCC itself are able to instrument code from all languages supported by the Linux kernel with compatible and consistent security properties.
https://github.com/rust-lang/rust
The git repo has over 150,000 commits, 3,000 contributors, and 9 years of release history, the last 6 of which have been post-1.0.
Also, Mozilla has been writing large parts of Firefox in Rust since ~2017. There are some interesting write-ups on hacks.mozilla.org, including this one on reducing CVEs with Rust:
https://hacks.mozilla.org/2019/02/rewriting-a-browser-compon... https://research.mozilla.org/rust/
All the commercial service providers recommend keeping total repository sizes under ~1 GB or so, and from those who foolishly exceed those limits I hear nothing but performance complaints and how much they miss Perforce, even when self-hosting on solid hardware. That is 100% the fault, or at least a limitation, of git - I believe you'll agree.
LFS is suggested as an alternative by several commercial service providers, not just one, and seems to be one of the least horrible options with git. You're certainly not suggesting any better alternatives, and I really wish you would, because I would love for them to exist. LFS brings a second auth system on top of my regular git credentials, recentralization that defeats most of the point of using a DVCS in the first place, and a second set of parallel commands to learn, use, and remember. I got tired enough of explaining to others why they had a broken checkout after cloning an LFS repository before installing the LFS extension that I wrote a FAQ entry somewhere I could link people to. If you don't think these are problems with "git", we must simply agree to disagree, for there will be no reconciling of viewpoints.
When I first hit the quota limits, I tried to set up caching. Failing that, I tried setting up a second LFS server and having CI pull blobs from that first when pulling simple incremental commits not touching said blobs. Details escape me this long after the fact - I might've tried to redirect LFS queries to GitLab? After a couple of hours of failing to get anywhere with either, despite combing through the docs and trying things that looked like they should've worked, I tried to pay GitHub more money - on top of my existing monthly subscription - as an ugly business-level kludge to solve a technical issue of using more bandwidth than should really have been necessary. When that too failed... now you want to pin the whole problem on GitHub? I must disagree. We can't pin it on the CI provider either - I had trouble convincing git to use an alternative LFS server for blobs when fetching upstream, even when testing locally.
I've tried GitLab. I've got a Bitbucket account and plenty of tales of people trying to scale git on that. I've even got some Microsoft-hosted git repositories somewhere. None of them magically scale well. In fact, so far in my experience, GitHub has scaled the least poorly.
> Github the company is not interested in providing you (or anyone else) with free storage for arbitrary data.
I pay github, and tried to pay github more, and still had trouble. Dispense with this "free storage" strawman.
> You were unable to pay for the storage options they do provide because you did not have admin rights to the github account you wanted to work with.
To be clear - I was also unable to pay to increase LFS storage on my fork, because the LFS quota still counted against the original repository. Is this specific workaround-for-a-workaround-for-a-workaround failing GitHub's fault? Yes. When git and git LFS both failed to solve the problem, GitHub also failed to solve the problem. But don't overgeneralize the one anecdote of a failed GitHub-specific solution, out of a whole list of git problems, into it being the whole problem and the whole answer and all of it being GitHub's fault.
> None of this is a problem with git, be it GUI git clients or command line ones.
My complaints about git GUIs are a separate issue, which I apparently shouldn't merely summarize for this discussion.
Clone https://github.com/rust-lang/rust and run your git GUI client of choice on it. git and gitk (ugly, buggy, and featureless though it may be) handle it OK. Source Tree hangs/pauses frequently enough that I uninstalled it, but not so frequently as to be completely unusable. I think I tried a half dozen other git GUI clients, and they all repeatedly hung or showed progress bars for minutes at a time, without ever settling down, when doing basic local work involving local branches and local commits - not interacting with a remote. Presumably due to insufficient lazy evaluation or insufficient caching. These problems were not unique to that repository either, and occurred on decent machines with an SSD holding both the git GUI install and the clone. These performance problems are 100% on those git GUI clients. Right?
> This isn’t just "technically correct".
Then please share how to simply scale git in practice. Answers that include spending money are welcome. I haven't figured it out, and neither has anyone I know. You can awkwardly half-ass it by making a mess with git LFS. Or git-annex. Or maybe the third-party git LFS Dropbox or git BitTorrent stuff, if you're willing to install more unverified, unreviewed, never-upstreamed random executables off the internet to maybe solve your problems. I remember using BitTorrent over a decade ago for gigs/day of bandwidth, back when I had much less of it to spare.
> It’s the "a commercial company doesn’t have to provide you with a service if they don’t want to" kind of correct.
If it were just one company not providing a specific commercial offering to solve a problem, you'd have a point. When no company offers to solve my problem for git to my satisfaction, despite a few offering it for Perforce, that's what I'd call a git ecosystem problem.
Not an ideal example. The rust.vim plugin supports the same feature (though, naturally, via keys instead of the mouse).
I do think, though, that going down that route might be challenging if you try to do it the first time you are developing in Rust. Furthermore, even then you are starting out with a pre-compiled toolchain and trusting quite a few crates not to do the kinds of things you are expressing worry about.
If you really insist on doing everything manually, the first question becomes: Do you trust the officially provided pre-compiled Rust toolchain [1]?
If not, you will first have to build the toolchain from source.
That means downloading and building at least the following two from source:
https://github.com/rust-lang/rust
https://github.com/rust-lang/cargo
That includes building the bundled bits of LLVM from source. If your computer is beefy, I think that alone will take about 20 to 30 minutes, which is not too bad, assuming that it builds successfully. If you are using, say, a laptop from 2012 or thereabouts, I think the LLVM part alone is probably going to take somewhere around 3 to 6 hours. (This is based on numbers from having compiled upstream LLVM from source in the past -- not a fun experience. I don't know how much of LLVM is bundled with Rust compared to upstream LLVM, so take these numbers with a grain of salt.) And the point about whether it builds successfully relates, among other things, to the amount of RAM and swap you have available on your machine.
But if you don't trust the officially provided pre-compiled Rust toolchain then the question is, why not? Is it the Rust project itself you distrust or do you fear that their infrastructure might have been compromised?
If you distrust the Rust project you will need to do a full code review of the Rust toolchain sources before you build it.
If you distrust the integrity of their infrastructure -- well, then someone might have snuck malicious code into their repos. So better do a full code review of the Rust toolchain sources in that case as well.
I have no idea how much time that would take. It is not something I would willingly embark on myself. It's too much code for me, or anyone I know, to realistically do a full code review of in any conceivable amount of time.
I do not have experience in compiler writing. And even if I did, how could I truly know that all of the complex things going on really only did what they appeared to? How could I know that certain combinations of seemingly benign instructions weren't exploiting a weakness in my CPU?
Anyway, once you've got all of that out of the way, or if you do decide to trust the officially provided pre-compiled Rust toolchain, you will then have to move on to a full code review of your dependencies, all of their dependencies, and so on. Only then can you build those and use them. And reviewing all of those is likely to be a lot of work as well.
Because that is what it would take. I am sure we are all aware of that [2].
Otherwise, it doesn't help that your development VM is air gapped. If the compiler or any of your dependencies are really malicious then you can't trust the compiler output that was produced inside of your development environment either.
Although, if not just the environment that you develop in but also the environment that you run your software in is air-gapped, then you could be pretty confident that your concerns are addressed.
But then, if the environment that you run your software in is air-gapped and you are sufficiently confident that nothing malicious could cause harm, why would you have to go through all of the trouble of manually reviewing everything and putting it together yourself?
Instead, I would think that in order to address your concerns you should do the following: Start from a clean slate in terms of what data you have on your development system -- that is, start with a computer that has a completely clean drive (either by having wiped it with multiple passes of random data, or, probably preferably, by having bought a new drive that you haven't put any of your data on in the first place). Then install the operating system. Then install the officially provided pre-compiled Rust toolchain. Then install all of your dependencies. Then power the system off and physically remove the wireless NIC from your computer. Then put your data onto the system, either by typing it in, by using read-only storage media, or by using read-write storage media that will only ever be in contact with air-gapped systems in the future. Then keep the system air-gapped.
When you need to update your toolchain or dependencies, or add new dependencies, put your data on storage media that will only ever be in contact with air-gapped systems. Then wipe the drives of your system, or physically destroy and replace them. Then put the wireless NIC back in your computer, or use a network cable, and install the operating system, the Rust toolchain, and your dependencies. Then power the system off and remove the wireless NIC / unplug the network cable. Then put your data back on the system.
Even all of that is a lot of work and takes time, though. So strict firewall rules and monitoring of network traffic might suffice.
Even that is a burden, though. And I think that is why, even though ideally we should all be far more careful, most of us will leave it to the open source community to catch the malicious code and bet on that being enough to protect the data we keep on our personal systems.
My threat model is that none of my personal systems hold data interesting enough that it would make sense for anyone to target me specifically. So the types of attacks that my systems are likely to be exposed to are the same kinds that anyone and everyone is exposed to. And because those kinds of threats hit everyone, they are discovered by others and remedied before they ever hit me.
That all being said, if you do decide to go on a code review spree I am all for it -- you will help us all if you do :)
And also, just because I don't do full code reviews of everything I use, and I don't compile all of it myself, doesn't mean I never read any of the code that I run on my system. I read a lot of it -- just not all of it, and only to a certain level of depth. And I don't install just any random binaries either. But anyway, a bit of reading other people's code, especially code you depend on, and being conscious of what you install and from where, goes a long way in my experience. And reading code, as we know, is also a great way to become a better programmer.
[1]: https://rustup.rs/
It's not well documented, but there's a much simpler way, which I used until recently (newer versions of vim already come with syntax highlighting for Rust out of the box):
Check out https://github.com/rust-lang/rust.vim and copy everything there except the README/LICENSE/.gitignore to your ~/.vim directory, keeping the exact same directory structure.
I attempted to reproduce your report by downloading the 1.2 compiler and compiling this code on both 1.2 and 1.17. Here is what I got:
---
First, the Vec::join report. I attempted to compile a crate with this body on both versions:
fn main() {
    let v: Vec<&str> = vec!["hello, ", "world"];
    v.join("");
}
On 1.2 this failed to compile with exactly the error you stated. This isn't surprising to me since I recall us adding .join to Vec several months after the 1.2 release. So you certainly weren't compiling that code on 1.2. In contrast, on 1.17, this code compiled just fine. This is exactly what I expected, since I use vec.join all the time on vectors of string slices.
In conclusion, I was unable to reproduce this bug.
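(As an aside from me, not part of the report above: for anyone unfamiliar with the API, here is a minimal sketch of what that join call actually produces on a current toolchain.)
fn main() {
    let v: Vec<&str> = vec!["hello, ", "world"];
    // join concatenates the string slices, inserting the separator
    // between elements, and returns an owned String.
    let joined: String = v.join("");
    assert_eq!(joined, "hello, world");
    // A non-empty separator goes between the two elements.
    assert_eq!(v.join("-"), "hello, -world");
}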
---
Second, the Cargo star dependencies report. Manishearth is wrong: we disallow those dependencies from being uploaded to crates.io, but end users are allowed to use them just fine (it's strongly recommended that you don't, though!).
To attempt to reproduce, I built a crate with this dependencies section:
[dependencies]
chrono = "*"
On 1.2, I got exactly the error you stated. I don't know what the source of it is, but I know that star dependencies are risky because they imply indefinite forward compatibility, which is impossible to guarantee. On 1.17, this successfully resolved to the 0.3.0 version of chrono, which compiled just fine.
Once again, I was unable to reproduce your bug report.
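(My suggestion, not something from the original report: if you want to avoid the star-dependency problem entirely, use a bounded requirement instead, e.g.:)
[dependencies]
chrono = "0.3"
Cargo treats a bare version like that as a caret requirement, so it accepts semver-compatible updates (anything below 0.4 here) but nothing that could break compatibility.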
---
If you have any more bug reports, please post them on https://github.com/rust-lang/rust . And you can check which version of the compiler you're using with `rustc --version` (supported since before 1.0).
Look at the number of contributors for Scala [0], Go [1], and Rust [2].
[0] https://github.com/scala/scala/
IRC is used for quick casual conversation. The internals forum is used for discussion, "pre-RFC"s, and the like. The rfcs repo is used for actually discussing formal RFCs, as well as wishlist language and library bugs. The rust repo is for the actual implementation of rustc and the standard library itself.
I will be hiring for two positions in the coming month, associated primarily with Servo (https://github.com/servo/servo ) and Rust (https://github.com/rust-lang/rust/ ).
1) Senior browser engineer. I am looking for a developer with deep familiarity with web platform standards, especially related to the implementation of the DOM and integration with the JS engine, to help build out this support in Servo. Experience developing systems software required.
2) Experienced operations engineer. Working in concert with the larger Mozilla release and build teams, build out the Mozilla Research continuous automation, release, testing, etc. systems, focusing first on Servo and Rust. We explicitly want candidates with a history of reuse and contribution to existing projects. Experience with build systems, automation, and cloud systems preferred.
Please feel free to contact me directly with more questions - larsberg AT mozilla DOT com. Job postings with more details should be coming online soon...