What does HackerNews think of spack?

A flexible package manager that supports multiple versions, configurations, platforms, and compilers.

Language: Python

#61 in Linux
#42 in macOS
#3 in npm
#79 in Python
In Spack [1] we can express all these constraints for the dependency solver, and we also try to always re-cythonize sources [2]. The latter is because bundled cythonized files are sometimes forward-incompatible with Python, so it's better to just regenerate them with an up-to-date Cython.

[1] https://github.com/spack/spack/
[2] https://github.com/spack/spack/pull/35995
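
For a sense of what those constraints look like in practice, here is a minimal, hypothetical package.py fragment (the package name, versions, and checksums are made up; real recipes live in the Spack repo linked above):

    # Hypothetical Spack package.py fragment illustrating solver constraints.
    from spack.package import *

    class PyExample(PythonPackage):
        """Made-up package, used only to illustrate the constraint syntax."""

        pypi = "example/example-1.2.0.tar.gz"

        version("1.2.0", sha256="0" * 64)  # placeholder checksum
        version("1.1.0", sha256="1" * 64)  # placeholder checksum

        # Lower and upper bounds on the interpreter.
        depends_on("python@3.8:3.12", type=("build", "run"))

        # Always re-cythonize instead of trusting bundled generated C files.
        depends_on("py-cython@0.29.21:", type="build")

        # A conditional dependency, only needed for newer versions.
        depends_on("py-numpy@1.22:", when="@1.2:", type=("build", "run"))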

Have you looked at Spack (https://github.com/spack/spack)? Disclaimer: I lead the project.

Spack is essentially Nix with dependency resolution. Spack packages can make (nearly) anything conditional: versions, options, compiler choices, and dependencies (including virtual dependencies). The resolver can make choices about all of those.

See here for more details: https://arxiv.org/abs/2210.08404
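
A rough, made-up example of what "conditional (nearly) anything" looks like in a package recipe (names, versions, and variants are illustrative, not taken from a real package):

    # Hypothetical Spack package.py fragment with conditional constructs.
    from spack.package import *

    class Example(CMakePackage):
        """Made-up package, used only to illustrate conditional constraints."""

        variant("mpi", default=True, description="Build with MPI support")
        variant("cuda", default=False, description="Build CUDA kernels")

        # Virtual dependency: any MPI provider (openmpi, mpich, ...) can satisfy it.
        depends_on("mpi", when="+mpi")

        # Conditional dependency with a version bound, only when +cuda is requested.
        depends_on("cuda@11.2:", when="+cuda")

        # A constraint the resolver must respect when making compiler choices.
        conflicts("%gcc@:7", when="+cuda", msg="the CUDA variant needs a newer GCC")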

You might as well try Spack [1]; it's Python plus a DSL that lets you customize a build in a single line. Guix package descriptions look very daunting to me.

[1] https://github.com/spack/spack/
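
That "single line" is Spack's spec syntax on the command line; a hypothetical example (package, variants, compiler, and dependency names are just illustrative):

    # One-line spec: version, variants, compiler choice, and a swapped-in MPI.
    spack install hdf5@1.14 +mpi ~shared %gcc@12 ^mpich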

You may be interested in Spack [1] if you're fine with Linux/macOS. It gives you all the versions, and you can specify lower and upper bounds for dependencies. Not only that, it also gives you conditional dependencies through variants, compilers, architectures, etc. It also lets you compile everything from source for your particular microarchitecture.

[1] https://github.com/spack/spack
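
For example (hypothetical package, version range, and target, just to show the syntax), version bounds and the target microarchitecture can go straight on the spec:

    # Constrain a dependency to a version range and build for a specific
    # microarchitecture (illustrative package and target names).
    spack install py-scipy ^python@3.10:3.12 target=zen3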

You should really try out the Spack package manager (https://github.com/spack/spack). Feed it a very simple description of what you want to install, and it will manage all of these constraints between packages, variants, and dependencies for you.
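
That "very simple description" can be an environment file; a hypothetical spack.yaml sketch (the spec list is made up):

    # Hypothetical spack.yaml environment: list what you want, then `spack install`.
    spack:
      specs:
      - hdf5@1.14 +mpi
      - py-numpy@1.24:
      - cmake
      view: true
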
You could consider using https://github.com/spack/spack and building Python packages and binaries from source, optimized for your particular microarchitecture. Clearly there are no sources for CUDA itself, but the binaries are downloaded from the NVIDIA website or a mirror.

Also, NVIDIA could consider breaking up their packages into smaller pieces. But then again, they're still doing better than Intel, which ships 15GB+ images for its oneAPI.

> I guess I just don’t see the use case where this is compelling. If I write a handy Unix utility in C, I’ll just keep the source code around, and compile it as needed.

Isn't this exactly why you don't see the use case? You're willing to compile.

As someone working on a cross-platform, cross-language packaging tool (https://github.com/spack/spack), I find it very appealing not to have to build for every OS and Linux distro. Currently we build binaries per-OS/per-distro. This would eliminate a couple of dimensions from our combinatorial builds.

We still care a lot about non-x86_64 architectures, so that's still an issue, but the work here is great for distributors of binaries. It seems like it has the potential to replace more cumbersome techniques like manylinux (https://github.com/pypa/manylinux).

I recently came across "spack" [1], another unfortunately named repo, as well.

Another good one is "nonce" - seen frequently in OAuth and crypto - it always gets a laugh out of the juniors.

[1] https://github.com/spack/spack

In Spack (https://github.com/spack/spack), we encounter this when users install packages in deep paths within their home directories, or in deep paths within shared project directories (e.g., in NFS or Lustre on HPC machines). So we patch installed scripts that have long shebangs.

We want installed packages to work exactly as built, with the right versions of dependencies, and we don't want to rely on the user getting their environment right to do that. Spack users may install several versions of python or other interpreters. This ensures that scripts work without a special environment.

There is another use case that is less emphasized here. sbang also lets you pass arbitrarily many arguments on the shebang line. If you do this:

    #!/usr/bin/perl arg1 arg2 arg3
At least on Linux, you'll get one argument: "arg1 arg2 arg3". I think perl gets around this by parsing the shebang line itself, but sbang is more general. You can use it with /usr/bin/env to pass more arguments than would otherwise be allowed.
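
Roughly, an sbang-style wrapper hides the real (long or multi-argument) interpreter line on the script's second line and re-launches it. Here is a simplified, hypothetical sketch in Python (Spack's actual sbang is a shell script and handles more edge cases):

    #!/usr/bin/env python3
    # Simplified sketch of an sbang-style wrapper (not Spack's real sbang).
    # A patched script starts with a short shebang pointing at this wrapper,
    # and the original long shebang is kept on the second line:
    #
    #   #!/short/prefix/bin/sbang
    #   #!/very/long/install/prefix/bin/python3 -E
    #   ...rest of the script...
    import os
    import sys

    script = sys.argv[1]                  # the kernel passes the script path
    with open(script) as f:
        f.readline()                      # skip the short wrapper shebang
        real = f.readline().strip()       # "#!/very/long/.../python3 -E"

    argv = real[2:].split()               # drop "#!", split interpreter + args
    # Re-exec the real interpreter on the script, forwarding extra arguments.
    os.execv(argv[0], argv + [script] + sys.argv[2:])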

See https://www.in-ulm.de/~mascheck/various/shebang/ for an extremely comprehensive list of limitations on shebangs.

I am going to be hated and downvoted for that but let's go...

Bazel, like Buck and others, tries to bring to the table a build system / deployment system that is multi-language, multi-platform, and developer-oriented: a holy grail that many developers (like me) have looked for over decades, and that many (large) organizations have more or less tried to build at some point (and most failed).

It is a good idea, and a tool like this is needed to improve productivity. However, while the idea is good on paper, the implementation of Bazel gets it badly wrong.

- Bazel is centered around "mono-repo" culture, making it much harder to integrate with the multi-source, multi-repo, multi-version projects many of us have. While I have no doubt that it is great at Google, the external world is not Google.

- Bazel is written in Java and requires the JVM, and this is a problem. It makes Bazel a heavyweight tool that is not easy to deploy in a fresh VM or in a container.

- Bazel mixes the concepts of a build system (like Make, Ant, and co.) and a deployment system (like rpm, pkgsrc, etc.). That makes Bazel pretty hard to integrate with projects that have an existing build system, and almost impossible to integrate INSIDE another deployment system (a regular package manager or deployment pipeline). The problems Bazel faces with some languages (Python, Go) are a consequence of that.

- Bazel venerates and follows the cult of "DO NOT INSTALL": compile and execute in the workspace; there is no "make install", no installation phase. While "convenient" in a mono-repo, this is often a nightmare because the boundaries between components can easily be violated... and you end up with many projects that use internal headers or interfaces.

- Bazel makes it (almost) mandatory to have internet access to compile. This is a major problem in many organizations (like mine) where downloading random sources and binaries from the web is not acceptable for security reasons. Try to run Bazel in a sandbox... and cry.

- Related to what I said before, Bazel mixes the build system and the deployment system. In doing so, it makes the same mistake as many "language-specific" package managers and makes it needlessly hard to depend on an already installed, local library / component.

- And finally, last but not least... the options... Bazel throws away 30 years of conventions and naming from the GNU / BSD world to create its own... That makes the learning curve difficult, especially with (until recently) very sparse and outdated documentation.

I have no doubt that inside Google or Facebook, Bazel and Buck are amazing. But in my mind, they were released too late for the external world.

Nowadays, platform-independent package managers like Spack (https://github.com/spack/spack), Nix (https://nixos.org/nix/), and Guix (http://guix.gnu.org/) give 95% of the advantages of Bazel without its pains.

Lawrence Livermore National Laboratory (LLNL) | Livermore, CA ONSITE | Spack Developer | ability to obtain a Q Clearance (US citizenship) required

Want to work on open source for science? Come work on the Spack package manager (https://spack.io, https://github.com/spack/spack) at LLNL!

Spack is a tool for building and installing scientific software on laptops, clusters, and the world’s largest supercomputers. It allows users to build optimized packages with many different compilers, build options, optimization flags, and dependency versions. Spack facilitates individual development workflows, but also allows supercomputing facilities to deploy large suites of software for their users.

LLNL (https://llnl.gov) is home to the world’s 2nd fastest supercomputer, Sierra (see https://www.nextplatform.com/2018/06/26/peeling-the-covers-o...), as well as ~25 other large HPC clusters. Your work will directly support simulations run on these machines, and you’ll get to collaborate with other DOE national laboratories via the Exascale Computing Project (https://exascaleproject.org). Spack has an international community, and you'll also collaborate with major supercomputing sites around the world.

We’re looking for the following skills/experience (not all required): strong Python skills, experience with lower-level languages (C, C++, Fortran), strong systems programming skills, experience building HPC and scientific libraries, and familiarity with build systems (CMake, autotools, make). Experience developing any package manager is a plus.

Bonus skills: experience with large OSS projects, experience with SAT, SMT, ILP solvers or Prolog.

Email [email protected] with your resume, and a bit about yourself and relevant experience.

HPC has the same problem (ancient, bespoke Linux distributions with strange tooling); one way to solve it is Spack (https://github.com/spack/spack). The ops person where I work is so much in love with it that he tried to get it to work on macOS; two days and several patches later, it took only 8 hours to compile all the necessary transitive dependencies. In my opinion, the only good way to solve this is to ignore the distribution package manager and keep track of all dependencies explicitly. Most build systems for C++ make that relatively easy. To choose the compiler / standard library, you can then use http://modules.sourceforge.net.