What does HackerNews think of fpm?
Effing package management! Build packages for multiple platforms (deb, rpm, etc) with great ease and sanity.
There are a load of SaaS companies that allow you to build for multiple targets though, so perhaps some integrations there would work.
[1]: https://github.com/jordansissel/fpm [2]: https://github.com/goreleaser/nfpm
I then scp the file into the pool directory on my server, and re-run a script that calls apt-ftparchive and regenerates the contents of the repo (https://manpages.debian.org/buster/apt-utils/apt-ftparchive....).
My web server hosts that directory with indexing enabled, but I don't use apache for it like most examples do. There's nothing special about it, it's just a directory tree built in a way that apt likes. (https://pkg.kamelasa.dev/). In fact, the entire configuration of the repo is visible there.
There's a step in the middle where I sign the packages with my GPG key, and the public key is available on Ubuntu's keyserver (http://keyserver.ubuntu.com/).
I don't need to run this workflow very often, as it'll take about 40 minutes to rebuild and push. But if I do update my SSG I know it'll end up in my debian repo with a version bump, so I'm happy.
On a second pipeline I can just do a simple 'add-apt-repository' and 'apt install'.
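The regeneration script described above might look roughly like this. This is a hedged sketch, not the author's actual script: the repo path, suite name, and architecture are illustrative assumptions.

```shell
#!/bin/sh
# Sketch: regenerate a small apt repo after new .debs land in pool/.
# Paths and suite names here are assumptions, not the author's real layout.
set -e
cd /var/www/repo

# Index everything under pool/ into the Packages list apt expects.
apt-ftparchive packages pool > dists/stable/main/binary-amd64/Packages
gzip -kf dists/stable/main/binary-amd64/Packages

# Rebuild the Release file and produce a detached GPG signature.
apt-ftparchive release dists/stable > dists/stable/Release
gpg --yes -abs -o dists/stable/Release.gpg dists/stable/Release
```

Clients can then consume it with the usual `add-apt-repository` / `apt install` flow once the signing key is imported.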
A lot of Docker is a response to how difficult it is to build packages and how invasive package installation is to the base system (Nix gets this right).
If in the future you need to build packages again, I'd take a look at fpm.
The goal of fpm is to make it easy and quick to build packages such as rpms, debs, OSX packages, etc.
If you haven't seen it, I highly recommend looking at fpm for packaging. Unless you're doing something weird or need an obscure format, it is the tool you want.
It doesn't have to. Packaging using FPM [0] allows many targets (deb, rpm, etc) and using ELF2deb [1] (shameless plug) allows packaging any files to a .deb with no effort.
I mean, the what and the how and the why?
It's supposed to be a team decision. If the team decides to use packages, then at least someone has to be able to implement that into the workflow, release and deploy process. If the team decides to not do RPMs or DEBs, then they have to solve build/deploy/release some other way.
For example since Docker I haven't even touched spec files or fpm ( https://github.com/jordansissel/fpm ).
https://linuxconfig.org/easy-way-to-create-a-debian-package-...
And then, as a quality-of-life enhancement, push the package building off to FPM.
https://github.com/jordansissel/fpm
Personal Repo
https://www.digitalocean.com/community/tutorials/how-to-use-...
Then you proceed with bundling in higher-level OS dependencies, because each app is not just a Python package but also a collection of shell scripts, configs, configuration of paths for caches, output directories, system variables, etc. For this we throw everything into one directory tree and run an FPM [3] command on it, which turns the directory tree into an RPM package. We use an fpm parameter to specify the installation location of that tree, e.g. /opt or elsewhere.
The way to bundle it properly is to actually use two rpm packages linked together by an rpm dependency: one for the app and the other for the deployment configuration. The reason is that you only have one executable, but many possible ways of deploying it, e.g. depending on the environment (dev, staging, prod), or you simply want to run the same app in parallel with a different configuration.
E.g. one rpm package for the app executable and static files:
my_app_exe-2.0-1.noarch.rpm
and many other related config rpms:
my_app_dev-1.2-1.noarch.rpm (depends on my_app_exe > 2.0)
my_app_prod-3.0-1.noarch.rpm (depends on my_app_exe == 2.0)
You then install only the deployment packages, and the rpm system fetches the dependencies for you automatically.
There are other mature places who use similar techniques for deployments, for example [4].
All of this then can be wrapped in even higher level of packaging, namely Docker.
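That two-package layout can be sketched with fpm roughly as follows. Package names, versions, and staging paths here are hypothetical, invented to match the example above.

```shell
#!/bin/sh
# Sketch of the app/config package split using fpm (hypothetical names/paths).
set -e

# 1) The app package: the executable and static files, staged in a dir tree.
fpm -s dir -t rpm -n my_app_exe -v 2.0 -a noarch \
    --prefix /opt/my_app \
    ./staging/app/

# 2) A deployment package that depends on the app package, so installing
#    my_app_prod pulls in my_app_exe automatically via rpm's resolver.
fpm -s dir -t rpm -n my_app_prod -v 3.0 -a noarch \
    --depends 'my_app_exe = 2.0' \
    --prefix /etc/my_app \
    ./staging/prod-config/
```

Swapping `--depends 'my_app_exe = 2.0'` for `'my_app_exe > 2.0'` gives the looser dev-style constraint from the example.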
[1] https://github.com/pantsbuild/pex [2] https://code.fb.com/data-infrastructure/xars-a-more-efficien... [3] https://github.com/jordansissel/fpm [4] https://hynek.me/articles/python-app-deployment-with-native-...
If it was just about packaging, everyone would just have a build server that creates their binary once and then slams it through https://github.com/jordansissel/fpm.
But there are more fundamental differences between OSes than how the insides of their packages look. The packages are "differently shaped" to prevent you doing something stupid and damaging your OS by installing a package that will spew files in all the wrong places, and rely on classes of protections that aren't there.
Or, you know, developers could package their own software. It's some upfront effort, but usually set and forget. Things like FPM [1] make this even easier. Personally, I don't know why developers find packaging so hard. I've had to package hundreds of bits of software for different distros (and versions of said distros) over my career, and it's usually set and forget, with some changes when there are big underlying changes to the OS like sysv > systemd. Granted, my experience is with non-GUI apps, so I can imagine there are likely some pain points between different distros/versions when it comes to the hot mess that is DEs.
> Good? Why does it matter to you.
Because I have to go from trusting 1 vendor to install 1 package (and dependencies) to 1 3rd party repo that anyone can push to. That is a huge change in the trust model.
> Nothing stops you or any distribution from having a repo with reviewed or specially selected software.
We already have those.
> Also, the vast majority of packagers would not have found that malware anyway.
This isn't about trusting the software in the package; it's about trusting the package maintainer, who could now be absolutely anyone, with no verification or validation. See malware in other user-run repos like NPM, pip, AUR, etc.
* Creating standalone executables / installers for the app itself is already not so easy (I use - and recommend - PyInstaller [1]).
* Code signing the executables so users don't get an ugly "this app is untrusted" warning is tedious for the three different platforms
* Auto-updating is a pain to implement as well. I'm using Google Omaha (same as Chrome) on Windows [2], Sparkle on Mac [3] and Debian packages / fpm on Linux [4]. In total, I probably spent two to three months just on auto-update functionality.
* You really can tell that Qt is "drawing pixels on screen". Sometimes you have to draw pixels / perform pixel calculations yourself. The built-in "CSS" engine QSS works to some extent, but often has unpredictable results and weird edge cases.
I considered Electron as well. But its startup performance is just prohibitive. I blogged about this (and which other technologies I considered) [5].
I've been wondering for a while whether I should open source my solutions to all of the above problems, to save other people the months required to get everything working. Would anybody be interested in that? It would be something like a PyQt-based alternative to Electron.
[edit] People are very interested so I'm starting a MailChimp list. If you want to know if/when I open source a solution then please subscribe at http://eepurl.com/ddgpnf.
[0]: https://fman.io
[1]: http://www.pyinstaller.org
[2]: https://fman.io/blog/google-omaha-tutorial/
[3]: https://sparkle-project.org/
[4]: https://github.com/jordansissel/fpm
[5]: https://fman.io/blog/picking-technologies-for-a-desktop-app-...
A .deb is basically a tarball with some manifest information. You can build 'non-standard' packages in this way (also see FPM [1], which will do this and more, e.g. rpm). However, if you ever want to upstream a package, there are guidelines that Debian produces around this.
Here's a quick command to build a golang-1.8.3 package with fpm (download and extract go1.8.3.linux-amd64.tar.gz first; get fpm from https://github.com/jordansissel/fpm):
#!/bin/bash
DEBIAN_REVISION=1
fpm -s dir -t deb -n golang-go -v 1.8.3-$DEBIAN_REVISION \
    go1.8.3.linux-amd64/bin/go=/usr/local/bin/go \
    go1.8.3.linux-amd64/bin/gofmt=/usr/local/bin/gofmt \
    go1.8.3.linux-amd64/bin/godoc=/usr/local/bin/godoc \
    go1.8.3.linux-amd64/=/usr/local/go
For what it's worth though FPM is awesome, and has made my life better a number of times. If you have to have software that isn't packaged and you aren't familiar with packaging, look into FPM.
I'm not sure that really changes with an install script. You've got several major operating systems, hundreds of flavours with all kinds of quirks. And you don't even know what shell you're really running on. How do you know your install script will work in any reasonable way?
For example, all reasonable package managers will make sure existing files aren't overwritten, existing configs are not modified, all ownership/modes are reasonable by default. Sure, you can override that in post-install script, but it will stand out that you're doing something non-standard, because there's a post-install script.
> how can we make it easy to install something, while still being safe and maintainable?
Have you seen FPM? (https://github.com/jordansissel/fpm) It provides a nice, simple(ish) abstraction over all the packaging craziness.
> Are you crazy!? This isn't an issue. If you don't trust the installer, you sure as hell can't trust the product.
I do not trust either the installer or the app. If I have a simple package to deploy, I can: 1) check that there are no post/pre-install scripts 2) install the files on the system 3) contain/sandbox them using selinux / grsec / apparmor / chroot / separate user. I cannot easily do the same thing with an installer script, which by definition wants to merge foreign files into my running system.
Even better, it's in the interest of app creator to care about this and provide sandboxing by default, even if they trust the app.
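The "check that there are no post/pre-install scripts" step above can be done with the stock package tooling; the package filenames here are placeholders.

```shell
# Inspect a package's maintainer scripts before installing it
# (filenames below are placeholders).

# rpm-based systems: print any pre/post install/uninstall scriptlets.
rpm -qp --scripts some-package.rpm

# deb-based systems: show control metadata, including which control
# files (preinst, postinst, ...) the package carries.
dpkg-deb --info some-package.deb
```

An empty `--scripts` output is exactly the "plain files only" case the comment describes, where sandboxing with selinux/apparmor/chroot is straightforward.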
If you want to do that, you will be better off using Jordan Sissel's fpm tool: https://github.com/jordansissel/fpm It can take any project and quickly package it into a .deb, .rpm, .whatever package.
That being said, it's a three letter name. It's very unlikely NOT to run into naming conflicts here.
edge - taken by Microsoft
jpm - JPMorgan
ppm - taken by Perl package manager: https://en.wikipedia.org/wiki/Perl_package_manager
fpm - taken by Effing package management: https://github.com/jordansissel/fpm
bpm - beats per minute
ayp - terrible to type, although taken by "Adequate Yearly Progress": https://en.wikipedia.org/wiki/Adequate_Yearly_Progress
nnm - What happens when it's no longer new?
Just in Germany for example there are a ton of companies called ISIS (just google "ISIS GmbH"). Being offended by a three letter shell command seems a bit over the top to me to be honest.
Edit: I won't respond to further comments on the naming issue. It wasn't my intention to name it after a weapon. As I said earlier, I will change the name as soon as anyone proposes a better one.
I have some hopes to wrap a pex into deb/rpm, but I would not call this approach simple.
That's unfortunate, since Python is a wonderful language for many data-sciency tasks - Python makes possible and pleasant things that would be a pain in other languages.
* [1] https://github.com/jordansissel/fpm
* [2] https://github.com/kevinconway/rpmvenv
* [3] https://pex.readthedocs.org/en/latest/
* Another tool: https://github.com/spotify/dh-virtualenv
https://github.com/jordansissel/fpm
I didn't use the build system heavily and it may be more overhead than you'd like but I thought it was a pretty neat way of doing things. Repo is here:
About a year and a half ago, I was playing around with Docker and made a build of memcached for my local environment and uploaded it to the registry [2] and then forgot all about it. Fast-forward to me writing this post and checking on it: 12 people have downloaded this! Who? I have no idea. It doesn't even have a proper description, but people tried it out and presumably ran it. It wasn't a malicious build but it certainly could have been. I'm sure that it would have hundreds of downloads if I had taken the time to make a legit-sounding description with b.s. promises of some special optimization or security hardening.
The state of software packaging in 2015 is truly dreadful. We spent most of the 2000's improving packaging technology to the point where we had safe, reliable tools that were easy for most folks to use. Here in the 2010's, software authors have rejected these toolsets in favor of bespoke, "kustom" installation tools and hacks. I just don't get it. Have people not heard of fpm [3]?
[1] http://output.chrissnell.com/post/69023793377/stop-piping-cu...
http://stackoverflow.com/questions/15104089/packaging-golang...
packager.io (which upstream Gogs uses) seems to be a nice way to just get packages out there, but as far as I can tell it's pretty well walled-off behind a service, so there's no easy way to build locally, offline, or without using packager.io, etc. In that sense it strikes me as a poor choice for Free software, as there is no promise that things will continue to work, or can be made to work, long term.
Not really, it's just a tar file with some metadata. Using fpm¹, making packages from a directory is extremely simple. I've been building internal packages from our different components, and the build script only has three or four lines. And besides, even Dockerfiles often use apt/yum.
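A three-or-four-line build script of the kind mentioned above might look something like this; the component name, paths, and version variable are made up for illustration.

```shell
#!/bin/sh
# Hypothetical internal-component build script: the staged directory tree
# on the left of '=' is installed to the path on the right.
set -e
fpm -s dir -t deb -n mycomponent -v "${VERSION:-1.0.0}" \
    ./build/mycomponent/=/opt/mycomponent
```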
- The "puts things on disk" part (posix "install" or a low level tool like dpkg or rpm is most like this)
- The "determine what needs to be installed" part, (aptitude, yum)
I'm all for a per-language or system version of the latter, but once that's done, you should generate packages installed by the former.
Heck, wrap all the commands (cp/mv/install/chmod/chown/etc.) that write stuff to permanent places on disk to actually do "add to a package", give it a basic name/version number, and have the low level tool handle adding/removing it from the system (or multiple systems, or deploy it, etc.). All the dependencies, compatibility, etc. are handled by the higher level system.
This gets you the best of both worlds - system level packages, and the ability to install whatever you need. FPM (https://github.com/jordansissel/fpm) is a pretty good example of this philosophy.
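One way to sketch that "wrap the commands" idea: point the file-writing commands at a staging root instead of the live filesystem, then hand the finished tree to the low-level packager. Everything below (the helper name, the files, the paths) is hypothetical.

```shell
#!/bin/sh
# Stage files into a scratch root rather than writing to the live system.
set -e
STAGING=$(mktemp -d)

pkg_install() {
    # Minimal stand-in for `install -D src dest`: writes under $STAGING.
    src=$1; dest=$2
    mkdir -p "$STAGING$(dirname "$dest")"
    cp "$src" "$STAGING$dest"
}

# Hypothetical build artifacts, created here so the sketch is self-contained.
SRC=$(mktemp -d)
printf '#!/bin/sh\necho hello\n' > "$SRC/mytool"
printf 'listen=8080\n' > "$SRC/mytool.conf"

pkg_install "$SRC/mytool" /usr/local/bin/mytool
pkg_install "$SRC/mytool.conf" /etc/mytool.conf

# The staged tree can then become a real OS package, e.g.:
#   fpm -s dir -t deb -n mytool -v 1.0 -C "$STAGING" .
ls "$STAGING"
```

The point is that nothing touches the running system until the package manager installs the result, so removal, upgrades, and conflicts are all handled by the higher-level tooling.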
But, instead, we get every punk-ass CPAN descendant spraying its crap all over the filesystem, needing the whole build environment installed, touching the internet in weird ways that don't guarantee repeatable behavior, etc. Sigh.
You wouldn't do this for a production deployment, right? Application startup that may or may not require access to the artifact repository to complete successfully. When that idea bounces around my developer neocortex, my sysadmin hindbrain starts reaching forward to strangle it.
And if you're not going to do it in production, doing it in development means having a gratuitous difference between development and production, which, again, is something I have learned to fear.
A zip with startup scripts is OK, but it requires installation.
'gradle installApp' works out of the box, and 'installs' the jars and scripts in your build directory, which is all you need to run locally. It's the work of minutes to write an fpm [1] invocation that packages the output of installApp as an operating system package, which you can then send down the pipe towards production. This is simple, easy, standard, and fully integrated with a world of existing package management tools. Why would I use Capsule instead of doing this?
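Such an fpm invocation over installApp's output could look like this; the project name, version, and install prefix are hypothetical.

```shell
#!/bin/sh
# Package the tree that `gradle installApp` leaves in build/install/<name>
# as a .deb. Name, version, and prefix are illustrative assumptions.
fpm -s dir -t deb -n myservice -v 1.0.0 \
    --prefix /opt/myservice \
    -C build/install/myservice .
```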
> Open Source doesn't mean that you cannot put a price tag on it.
The other things could be done, but you're right, they don't fall under the strict definition of open source. It would require a different license :)
Oh, btw, the op should take a look at https://pkgr.io/ and https://github.com/jordansissel/fpm
One edge-case, where fpm cannot help by itself, occurs in the python (w/ C-bindings) world, when your dev and prod environments use different libc versions. I got around that issue by running fpm inside a VM, which matches the libc version of the target system.
But it's still not as easy as it can be - so pkgr.io looks promising.
This.
Has anyone ever tried FPM (https://github.com/jordansissel/fpm) yet?
There are some pitfalls, however:
* It can be time-consuming dealing with the arcane details of Debian package metadata or RPM spec files. If you're deploying your own application code, you're likely better off using fpm to generate a package from a directory tree:
https://github.com/jordansissel/fpm
* If you have a complex stack, e.g., a specific version of Ruby and a large number of gem dependencies, you should avoid trying to separate things into individual packages. Just create a single Omnibus-style package that installs everything into a directory in /opt:
https://github.com/opscode/omnibus-ruby
* Maintaining build machines and repository servers takes ongoing effort. Shameless plug: This is why I created Package Lab---a hosted service for building packages and managing repositories. Currently in private beta and would love feedback:
If you're running any form of Unix, it's very likely that you already have a package management system. It's also likely that system has more features, and is better designed from a management and consistency perspective, than any of CPAN and its descendants (gem, cabal, etc.).
A much better solution - either make your own packages, or use a tool like FPM (https://github.com/jordansissel/fpm) to make native packages, then deploy the result as you would any other package.
I hope for an era when running CPAN or gem interacts with the package manager, building a real OS-level package and installing/deploying it, rather than the current "you need to run this script incantation on every production machine, oh, and you need the whole toolchain too" idiocy.
Although for just installing rails, using rubygems is the way to go, IMHO.
I'm not sure what the issue is with MongoDB. But if you aren't aware, there is the EPEL (Extra Packages for Enterprise Linux) repository for RHEL and CentOS. It's a semi-official and safe repository run by the Fedora Project to add additional packages to EL. MongoDB is in there for EL5 (pretty old) and EL6. Also, easy_install is available for Red Hat and Debian in the 'python-setuptools' package.
I know it's common for people to install things from source on their own systems, but I feel that if you are doing something on someone else's system you need to be extra careful. And that means asking permission before installing, using package management, and if you can't use package management, suggesting ways to resolve the dependencies (easy_install, pip, gem) without actually doing it. See the Homebrew install script as a good example (https://gist.github.com/323731). Another thing is following convention. The way you are using /usr/local is how /opt is supposed to be used. If you use /usr/local, the files should be in /usr/local/{bin,etc,lib}. If you want to create a package directory and have things like 'bin/' in it, use something like /opt/amon/{bin,etc,sbin,var}. http://refspecs.linuxfoundation.org/FHS_2.3/fhs-2.3.html
Jordan Sissel, who created logstash, also created fpm (Effing Package Management) (https://github.com/jordansissel/fpm). It makes it easy to package things. It might help avoid bash script hell.
Create an rpm from the files in /tmp/install:
fpm -s dir -t rpm --name amon --version 0.2.0 --depends mongodb --maintainer "Martin Rusev" -C /tmp/install etc/init.d/amon etc/init.d/amond opt/amon
Create a deb of tornado from PyPI: fpm -s python -t deb tornado
It makes it incredibly easy to create an rpm/deb/etc. from a directory, rpm, npm, gem, python module, etc.
Here's a BayLISA video about it: http://vimeo.com/23940598