Coming from C/C++. Is working with the online package manager (Cargo?) mandatory? Or is there a sustainable way of working/developing with Rust while completely offline?

I'd like to start a project, manually import libraries (downloaded manually, no dependency hell), read documentation, etc. Is it possible?

> is there a sustainable way of working/developing with Rust while completely offline?

Yes. Once you have downloaded your dependencies the first time while online, you can work with them completely offline.
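For example, a minimal sketch of that workflow (assuming a reasonably recent cargo; `cargo fetch` and the `--offline` flag are what do the work here):

    # while online: download and cache the sources of everything in Cargo.toml
    cargo fetch

    # later, fully offline: build and test without touching the network
    cargo build --offline
    cargo test --offline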

> read documentation

Rust docs:

rustup downloads the docs for Rust itself alongside the toolchain when you install it.

`rustup doc --book` will open the locally downloaded copy of the book The Rust Programming Language in your browser.

`rustup doc` will open the locally downloaded copy of the overview of Rust Documentation in your browser. The locally downloaded docs include things like the docs for the Rust Standard Library.
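For example (assuming a default rustup install, where the rust-docs component is normally included already):

    # make sure the offline docs are installed for the active toolchain
    rustup component add rust-docs

    # open the standard library docs from the local copy
    rustup doc --std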

Project and dependencies docs:

`cargo doc --open` will build the docs for your project and for your dependencies as offline HTML files and open the locally built docs in your web browser for you.

Subsequently running the same command while offline will open the already built docs in your web browser again.

You will find the built docs under target/doc/ in your project. This includes the docs for your dependencies and their dependencies and so on.

And even if you delete the built docs, for example by running `cargo clean`, cargo can rebuild the docs offline because it has cached the source code of your dependencies and their dependencies and so on.
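So, for example, a sequence like this should keep working with no network connection at all (a sketch; the `--offline` flag just makes cargo error out instead of reaching for the network):

    cargo clean                   # deletes target/, including target/doc/
    cargo doc --offline --open    # rebuilds the docs from cached sources and opens them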

> manually import libraries (downloaded manually, no dependency hell)

Rather than attempt to do it manually, I would advise that you run `cargo build` once while online, so that your dependencies are fetched and made available offline. Attempting to do it completely manually has no benefit that I can see; it would only waste time and probably introduce problems that would not happen if you leave it to cargo to fetch everything for you.

And you can write crates of your own locally, never publish them online and import them by relative local path.
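For example, in the consuming project's Cargo.toml (crate name and path made up for illustration):

    [dependencies]
    my_local_utils = { path = "../my_local_utils" }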

Thank you for the detailed answer.

> Attempting to do it completely manually has no benefit that I can see

Package managers and build tools can do basically whatever they want - run commands, execute binaries, download data, upload data, send telemetry - whatever any one of the package maintainers wants.
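In Rust's case this is concrete: any dependency can ship a build.rs build script, which cargo compiles and runs on the developer's machine as part of the build. A deliberately trivial sketch of what such a script looks like:

    // build.rs -- compiled and executed by cargo before the crate itself is built,
    // so in principle it can run arbitrary code, not just configure the build
    fn main() {
        println!("cargo:rerun-if-changed=build.rs");
    }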

I prefer to develop in an offline VM. When a new library is required I just download it using the host machine, copy it into the VM and use it there. It's easy with make/CMake.

I found a comment on /r/rust that details the setup that one guy is using for doing Rust development in an air gapped environment. https://www.reddit.com/r/rust/comments/793evq/using_cargo_on...

I do think, though, that going down that route might be challenging if you try to do it the first time you are developing in Rust. Furthermore, even then you are starting out with a pre-compiled toolchain and trusting quite a few crates not to do the kinds of things you are expressing worry about.
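For completeness: the usual building block for that kind of air gapped setup is `cargo vendor`, which copies the source of every dependency into the project directory so that later builds never need the network. A rough sketch (run the first step once while online; the config snippet is what `cargo vendor` itself suggests):

    # while online: copy all dependency sources into ./vendor/
    cargo vendor

    # then point cargo at the vendored copies instead of crates.io
    # by adding the printed snippet to .cargo/config.toml:
    #
    #   [source.crates-io]
    #   replace-with = "vendored-sources"
    #
    #   [source.vendored-sources]
    #   directory = "vendor"

After that the whole project directory, vendor/ included, can be moved into the air gapped VM and built there with `cargo build --offline`.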

If you really insist on doing everything manually, the first question becomes: Do you trust the officially provided pre-compiled Rust toolchain [1]?

If not, you will first have to build the toolchain from source.

That means downloading and building at least the following two from source:

https://github.com/rust-lang/rust

https://github.com/rust-lang/cargo

That includes building the bundled bits of LLVM from source. If your computer is beefy I think that alone will take about 20 to 30 minutes, which is not too bad, assuming that it builds successfully. If you are using, say, a laptop from 2012 or thereabouts, I think the LLVM part alone is going to take somewhere around 3 to 6 hours. (Based on numbers from having compiled upstream LLVM from source in the past -- not a fun experience. I don't know how much of LLVM is bundled with Rust compared to upstream LLVM, so take these numbers with a grain of salt.) And whether it builds successfully depends, among other things, on the amount of RAM and swap you have available on your machine.
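For reference, the from-source build is driven by the x.py script in the rust repo. A rough sketch of the happy path (details like which tools get built alongside rustc, cargo included, are controlled by the generated build config):

    git clone https://github.com/rust-lang/rust
    cd rust
    ./configure          # writes the build config; enable cargo/extended tools here
    ./x.py build
    ./x.py install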

But if you don't trust the officially provided pre-compiled Rust toolchain then the question is, why not? Is it the Rust project itself you distrust or do you fear that their infrastructure might have been compromised?

If you distrust the Rust project you will need to do a full code review of the Rust toolchain sources before you build it.

If you distrust the integrity of their infrastructure -- well, then someone might have snuck malicious code into their repos. So better do a full code review of the Rust toolchain sources in that case as well.

I have no idea how much time that would take. It is not something I would willingly embark on myself. It is far too much code for me, or anyone I know, to realistically do a full code review of in any conceivable amount of time.

I do not have experience in compiler writing. And even if I did, how could I truly know that all of the complex things that were going on really only did what they appeared to? How could I know that certain combinations of seemingly benign instructions weren't exploiting a weakness in my CPU?

Anyway, once you've got all of that out of the way, or if you do decide to trust the officially provided pre-compiled Rust toolchain, you will then have to move on to a full code review of your dependencies, all of their dependencies, and so on. Then you can build those and use them. And even reviewing all of those is likely to be a lot of work.

Because that is what it would take. I am sure we are all aware of that [2].

Otherwise, it doesn't help that your development VM is air gapped. If the compiler or any of your dependencies is really malicious, then you can't trust the compiler output produced inside your development environment either.

Although, if not just the environment that you develop in but also the environment that you run your software in is air gapped, then you could be pretty confident that your concerns are attended to.

But then, if the environment that you run your software in is air gapped and you are satisfied that nothing malicious could cause harm, why would you have to go through all of the trouble of manually reviewing everything and putting it together?

Instead, I would think that in order to address your concerns, you should do the following: Start from a clean slate in terms of what data you have on your development system -- that is, start with a computer that has a completely clean drive (either by having wiped it with multiple passes of overwrites consisting of random data, or, probably preferably, by having bought a new drive that you haven't put any of your data on in the first place). Then install the operating system. Then install the officially provided pre-compiled Rust toolchain. Then install all of your dependencies. Then power the system off and physically remove the wireless NIC from your computer. Then put your data into the system, either by typing it in, by using read-only storage media, or by using read-write storage media that will only ever be in contact with air gapped systems in the future. Then keep the system air gapped.

When you need to update your toolchain or dependencies, or add new dependencies, put your data on storage media that will only ever be in contact with air gapped systems. Then wipe the drives of your system, or physically destroy and replace them. Then put the wireless NIC back in your computer, or use a network cable, and install the operating system, the Rust toolchain and your dependencies. Then power the system off and remove the wireless NIC / unplug the network cable. Then put your data back on the system.

Even all of that is a lot of work and takes time as well, though. So strict firewall rules and monitoring of the network traffic might suffice.

Even that is a burden, though. And I think that is why, even though ideally we should all be far more careful, most of us leave it to the open source community to catch the malicious code and bet on that being enough to protect the data that we keep on our personal systems.

My threat model is that none of my personal systems hold any data interesting enough that it would make sense for anyone to target me specifically. So the types of attacks that my systems are likely to be exposed to are the same kind that anyone and everyone is exposed to. And because those kinds of threats hit everyone, they are discovered by others and remedied before they ever hit me.

That all being said, if you do decide to go on a code review spree I am all for it -- you will help us all if you do :)

And also, just because I don't do full code reviews of everything I use, and I don't compile all of it myself, doesn't mean I never read any of the code that I run on my system. I read a lot of it -- just not all of it, and only to a certain level of depth. And I don't install just any random binaries either. But anyway, a bit of reading other people's code, especially when you depend on that code, and being conscious of what you install and from where, goes a long way in my experience. And reading code, as we know, is a great way to become a better programmer also.

[1]: https://rustup.rs/

[2]: http://wiki.c2.com/?TheKenThompsonHack