What does HackerNews think of fpm?

Effing package management! Build packages for multiple platforms (deb, rpm, etc) with great ease and sanity.

Language: Ruby

If you ever revisit that decision, check out FPM. It can shave off a few of the rough edges related to packaging: https://github.com/jordansissel/fpm
Making a .deb isn't as simple as it might seem, especially if you want to use shared libraries. Which version of Ubuntu/Debian/Mint/... whatever are you targeting? You can use tools like FPM[1] (which is awesome btw, used it for some great hacks in the past), but that won't necessarily make you .debs that follow the Debian guidelines, though they will be usable.

There are a load of SaaS companies that allow you to build for multiple targets though, so perhaps some integrations there would work.

[1]https://github.com/jordansissel/fpm

Since this is written in Go, it's almost trivial to use fpm [1] to generate a variety of packages. Alternatively you can use nfpm [2] if you don't want to have to deal with installing Ruby & a gem.

[1]: https://github.com/jordansissel/fpm [2]: https://github.com/goreleaser/nfpm
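
As a sketch of how little it takes, packaging a prebuilt Go binary with fpm can be a single command. The binary name, version, and paths below are hypothetical placeholders, not from the original comment:

```shell
# Hypothetical example: wrap a statically linked Go binary in a .deb.
# "myapp" and its paths are placeholders.
fpm -s dir -t deb \
  -n myapp -v 1.0.0 \
  --description "Example Go service" \
  ./myapp=/usr/local/bin/myapp
```

The `src=/dest` mapping on the command line tells fpm where each file should land on the target system.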

I rely on FPM, the Effing Package Manager[0]. Occasionally, I run into php-fpm [1]. There's an unfortunate naming collision, at least with the first.

[0]: https://github.com/jordansissel/fpm

[1]: https://www.php.net/manual/en/install.fpm.php

But there is a lot of work to be done before you can do the simple apt install. I (gladly) don't know how it is nowadays, but before Dockerfiles/Docker, creating your own packages according to the various standards was a PITA. Most companies needed a 'packaging specialist/release engineer' role, as most developers were not up to the task. Solutions like FPM[0] did help somewhat, but it was still hard when dealing with non-homogeneous environments. Containers solved that problem universally for all distributions.

[0] https://github.com/jordansissel/fpm

After compiling the binary (with haskell-stack) I use fpm to package it up as a .deb (https://github.com/jordansissel/fpm).

I then scp the file into the pool directory on my server, and re-run a script that calls apt-ftparchive and regenerates the contents of the repo (https://manpages.debian.org/buster/apt-utils/apt-ftparchive....).

My web server hosts that directory with indexing enabled, but I don't use apache for it like most examples do. There's nothing special about it, it's just a directory tree built in a way that apt likes. (https://pkg.kamelasa.dev/). In fact, the entire configuration of the repo is visible there.

There's a step in the middle where I sign the packages with my GPG key, and the public key is available on Ubuntu's keyserver (http://keyserver.ubuntu.com/).

I don't need to run this workflow very often, as it'll take about 40 minutes to rebuild and push. But if I do update my SSG I know it'll end up in my debian repo with a version bump, so I'm happy.

On a second pipeline I can just do a simple 'add-apt-repository' and 'apt install'.
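
A regeneration script along the lines described might look like this sketch; the repo root path and key ID are hypothetical, not taken from the author's actual setup:

```shell
# Hypothetical sketch of regenerating a flat apt repo with apt-ftparchive.
cd /var/www/pkg                          # repo root (placeholder path)
apt-ftparchive packages pool > dists/stable/main/binary-amd64/Packages
gzip -kf dists/stable/main/binary-amd64/Packages
apt-ftparchive release dists/stable > dists/stable/Release
# Detached signature so apt can verify the repo (key ID is a placeholder)
gpg --default-key "DEADBEEF" --detach-sign --armor \
    -o dists/stable/Release.gpg dists/stable/Release
```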

Devs might consider something like FPM (https://github.com/jordansissel/fpm) to make DEB/RPM/whatever packages that are a lot easier to install and manage.
Making a deb _is_ painful; the politics of how packages are folded, bent, welded, drilled and smashed is not a simple landscape to navigate.

A lot of Docker is a response to how difficult it is to build packages and how invasive package installation is to the base system (Nix gets this right).

If in the future you need to build packages again, I'd take a look at fpm.

The goal of fpm is to make it easy and quick to build packages such as rpms, debs, OSX packages, etc.

https://fpm.readthedocs.io/en/latest/

https://github.com/jordansissel/fpm

Yeah, there's more than a whiff of entitlement to that phrasing.

If you haven't seen it, I highly recommend looking at fpm for packaging. Unless you're doing something weird or need an obscure format, it is the tool you want.

https://github.com/jordansissel/fpm

> distributing binaries on Linux is an enormous pain.

It doesn't have to be. Packaging using FPM [0] allows many targets (deb, rpm, etc), and using ELF2deb [1] (shameless plug) allows packaging any files into a .deb with no effort.

[0] https://github.com/jordansissel/fpm

[1] https://github.com/NicolaiSoeborg/ELF2deb

I still go to FPM (https://github.com/jordansissel/fpm) for any distro-native packaging needs I have.
It's been a long time since I've used it, but you may be interested in https://github.com/jordansissel/fpm if you haven't seen it before.
Huh?

I mean, what and the how and the why?

It's supposed to be a team decision. If the team decides to use packages, then at least someone has to be able to implement that into the workflow, release and deploy process. If the team decides to not do RPMs or DEBs, then they have to solve build/deploy/release some other way.

For example since Docker I haven't even touched spec files or fpm ( https://github.com/jordansissel/fpm ).

Probably won't help with Debian official packages but I've found success with these materials.

https://linuxconfig.org/easy-way-to-create-a-debian-package-...

And then a quality of life enhancement says push the package building off to FPM.

https://github.com/jordansissel/fpm

Personal Repo

https://www.digitalocean.com/community/tutorials/how-to-use-...

Well my team does that for one. We use the Python packaging ecosystem, specifying Python dependencies using standard tools like setup.py, requirements.txt and pip. All Python dependencies are baked into a fat Python package using the PEX format[1]. We also tried Facebook's xar format[2], without success yet. What matters is to have statically linked dependencies packaged in one executable file, like a Windows exe file.

Then you proceed with bundling in higher-level OS dependencies, because each app is not just a Python package but also a collection of shell scripts, configs, configuration of paths for caches, output directories, system variables, etc. For this we throw everything into one directory tree and run an FPM [3] command on it, which turns the directory tree into an RPM package. We use an fpm parameter to specify the installation location of that tree, e.g. /opt or somewhere under /usr.

The way to bundle it properly is to actually use two rpm packages linked together by an rpm dependency: one for the app and the other for the deployment configuration. The reason is that you only have one executable but many possible ways of deploying it, e.g. depending on environment (dev, staging, prod), or when you simply want to run the same app in parallel with different configuration.

e.g. one rpm package for the app executable and static files

my_app_exe-2.0-1.noarch.rpm

and many other related config rpms

my_app_dev-1.2-1.noarch.rpm (depends on my_app_exe > 2.0)

my_app_prod-3.0-1.noarch.rpm (depends on my_app_exe == 2.0)

You then install only the deployment packages, and the rpm system fetches the dependencies for you automatically.
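
The two-package scheme described here might be produced with fpm roughly like this sketch; the names, versions, and staging directories are placeholders:

```shell
# App package: the executable and static files, installed under /opt.
fpm -s dir -t rpm -n my_app_exe -v 2.0 --iteration 1 \
  --prefix /opt/my_app -C ./app_staging .

# Config package: pins the app version via an RPM dependency.
fpm -s dir -t rpm -n my_app_prod -v 3.0 --iteration 1 \
  --depends 'my_app_exe = 2.0' -C ./prod_config_staging .
```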

There are other mature places who use similar techniques for deployments, for example [4].

All of this then can be wrapped in even higher level of packaging, namely Docker.

[1] https://github.com/pantsbuild/pex [2] https://code.fb.com/data-infrastructure/xars-a-more-efficien... [3] https://github.com/jordansissel/fpm [4] https://hynek.me/articles/python-app-deployment-with-native-...

Those aren't "package formats"; they're different names for a very small set of actual container formats (e.g. tar, zip), with the names there to namespace different incompatible OS file-layout hierarchies and sandboxing technologies.

If it was just about packaging, everyone would just have a build server that creates their binary once and then slams it through https://github.com/jordansissel/fpm.

But there are more fundamental differences between OSes than how the insides of their packages look. The packages are "differently shaped" to prevent you doing something stupid and damaging your OS by installing a package that will spew files in all the wrong places, and rely on classes of protections that aren't there.

> That's an insane thing to say. Literally 100s of people doing the same thing adding no value to the end user. That is a waste amount of resources waste by the open source community.

Or, you know, developers could package their own software. It's some upfront effort, but usually set and forget. Things like FPM[1] make this even easier. Personally I don't know why developers find packaging so hard; I've had to package hundreds of bits of software for different distros (and versions of said distros) over my career, and it's usually set and forget, with some changes when there are big underlying changes to the OS like sysv > systemd. Granted, my experience is with non-GUI apps, so I can imagine there are likely some pain points between different distros/versions when it comes to the hot mess that is DEs.

> Good? Why does it matter to you.

Because I have to go from trusting 1 vendor to install 1 package (and dependencies) to 1 3rd party repo that anyone can push to. That is a huge change in the trust model.

> Nothing stops you or any distribution from having a repo with reviewed or specially selected software.

We already have those.

> Also, the waste majority of packagers would not have found that malware anyway.

This isn't about trusting the software in the package; it's about trusting the package maintainer, who could now be absolutely anyone with no verification or validation. See malware in other user-run repos like NPM, pip, AUR etc...

1. https://github.com/jordansissel/fpm

I'm an indie dev and have been developing a cross-platform (Py)Qt app for the past 1.5 years (~2100 dev hrs) [0]. Given that Qt is cross-platform desktop development, it's very solid. But there are a lot of things one has to do that are not required for (say) web apps:

* Creating standalone executables / installers for the app itself is already not so easy (I use - and recommend - PyInstaller [1]).

* Code signing the executables so users don't get an ugly "this app is untrusted" warning is tedious for the three different platforms

* Auto-updating is a pain to implement as well. I'm using Google Omaha (same as Chrome) on Windows [2], Sparkle on Mac [3] and Debian packages / fpm on Linux [4]. In total, I probably spent two to three months just on auto-update functionality.

* You really can tell that Qt is "drawing pixels on screen". Sometimes you have to draw pixels / perform pixel calculations yourself. The built-in "CSS" engine QSS works to some extent, but often has unpredictable results and weird edge cases.

I considered Electron as well. But its startup performance is just prohibitive. I blogged about this (and which other technologies I considered) [5].

I've been wondering for a while whether or not I should open source my solutions to all of the above problems, to save other people the months required to get everything to work. Would anybody be interested in that? It would be something like a PyQt alternative to Electron.

[edit] People are very interested so I'm starting a MailChimp list. If you want to know if/when I open source a solution then please subscribe at http://eepurl.com/ddgpnf.

[0]: https://fman.io

[1]: http://www.pyinstaller.org

[2]: https://fman.io/blog/google-omaha-tutorial/

[3]: https://sparkle-project.org/

[4]: https://github.com/jordansissel/fpm

[5]: https://fman.io/blog/picking-technologies-for-a-desktop-app-...

No, generally there is a debian directory with things like rules, control file, copyright etc.

A .deb is basically a tarball with some manifest information. You can build 'non-standard' packages this way (also see FPM[1], which will do this and more: rpm, etc.). However, if you ever want to upstream a package, there are guidelines that Debian produces around this.

[1] https://github.com/jordansissel/fpm

There are a lot of ways to handle golang for Debian.

Here's a quick command to build a golang-1.8.3 package with fpm (download and extract go1.8.3.linux-amd64.tar.gz first; get fpm from https://github.com/jordansissel/fpm):

  #!/bin/bash
  DEBIAN_REVISION=1
  fpm -s dir -t deb -n golang-go -v 1.8.3-$DEBIAN_REVISION \
    go1.8.3.linux-amd64/bin/go=/usr/local/bin/go \
    go1.8.3.linux-amd64/bin/gofmt=/usr/local/bin/gofmt \
    go1.8.3.linux-amd64/bin/godoc=/usr/local/bin/godoc \
    go1.8.3.linux-amd64/=/usr/local/go

Others wrote "apt-get dist-upgrade" in sibling comments, which should do the trick. If you want to build packages for multiple managers, fpm is a nice front-end for some of them: https://github.com/jordansissel/fpm
I think it would make sense to provide pre-build packages for few popular distros (Ubuntu, CentOS, Arch). There are some tools that make it much easier, i.e. fpm (https://github.com/jordansissel/fpm).
Because then you also need to host a feed, and keep it updated when new releases become available, and keep it updated when new plugin releases become available, and ensure the feed stays up, and maintain patches, and maintain required dependencies. All the things that package maintainers (thank you!) do for the ecosystem. Unless there's substantial gain why not just stick with Nginx?

For what it's worth though FPM is awesome, and has made my life better a number of times. If you have to have software that isn't packaged and you aren't familiar with packaging, look into FPM.

https://github.com/jordansissel/fpm

FPM[1] will get you most of the way there, but your packages are still going to seem a bit out of place unless you make distro specific changes.

[1] https://github.com/jordansissel/fpm

> However, when you have something like RVM which is used across several major operating systems, and hundreds of different flavours, each with their own quirks and package managers it suddenly gets difficult to manage each of these.

I'm not sure that really changes with an install script. You've got several major operating systems, hundreds of flavours with all kinds of quirks. And you don't even know what shell you're really running on. How do you know your install script will work in any reasonable way?

For example, all reasonable package managers will make sure existing files aren't overwritten, existing configs are not modified, all ownership/modes are reasonable by default. Sure, you can override that in post-install script, but it will stand out that you're doing something non-standard, because there's a post-install script.

> how can we make it easy to install something, while still being safe and maintainable?

Have you seen FPM? (https://github.com/jordansissel/fpm) It provides a nice, simple(ish) abstraction over all the packaging craziness.

> Are you crazy!? This isn't an issue. If you don't trust the installer, you sure as hell can't trust the product.

I do not trust either the installer or the app. If I have a simple package to deploy, I can: 1) check that there are no post/pre-install scripts 2) install the files on the system 3) contain/sandbox them using selinux / grsec / apparmor / chroot / separate user. I cannot easily do the same thing with an installer script, which by definition wants to merge foreign files into my running system.

Even better, it's in the interest of app creator to care about this and provide sandboxing by default, even if they trust the app.

This is a tutorial for building quick one-off packages that won't ever be accepted by debian but that you can use with your own servers.

If you want to do that, you will be better off using Jordan Sissel's fpm tool: https://github.com/jordansissel/fpm It can take any project and quickly package it into a .deb, .rpm, .whatever package.

If on the other hand your goal is not to get a package in Debian, but only to quickly make a deb file, check out fpm: https://github.com/jordansissel/fpm.
As I said earlier, it wasn't my intention to name it after a weapon.

That being said, it's a three letter name. It's very unlikely NOT to run into naming conflicts here.

edge - taken by Microsoft

jpm - JPMorgan

ppm - taken by Perl package manager: https://en.wikipedia.org/wiki/Perl_package_manager

fpm - taken by Effing package management: https://github.com/jordansissel/fpm

bpm - beats per minute

ayp - terrible to type, although taken by "Adequate Yearly Progress": https://en.wikipedia.org/wiki/Adequate_Yearly_Progress

nnm - What happens when it's no longer new?

Just in Germany for example there are a ton of companies called ISIS (just google "ISIS GmbH"). Being offended by a three letter shell command seems a bit over the top to me to be honest.

Edit: I won't respond to further comments on the naming issue. It wasn't my intention to name it after a weapon. As I said earlier, I will change the name as soon as anyone proposes a better one.

Highly recommend FPM for creating packages (deb, rpm, osx .pkg, tar) from gems, python modules, and PEARs.

https://github.com/jordansissel/fpm

I tried a few things myself, like fpm[1], rpmvenv[2] and lately pex[3] - but so far, I haven't found something, that I would call solid and simple. We have constrained deployment environments that do not allow most of the tools, that would ease the process on a development machine. Basically, I need a deb or rpm.

I have some hopes to wrap a pex into deb/rpm, but I would not call this approach simple.

That's unfortunate since Python is a wonderful language for many data-sciency tasks - Python makes things possible and pleasant, that would be a pain in other languages.

* [1] https://github.com/jordansissel/fpm

* [2] https://github.com/kevinconway/rpmvenv

* [3] https://pex.readthedocs.org/en/latest/

* Another tool: https://github.com/spotify/dh-virtualenv

I added an RPM package to a project that used a Makefile to spin up a docker container which then built various packages via FPM:

https://github.com/jordansissel/fpm

I didn't use the build system heavily and it may be more overhead than you'd like but I thought it was a pretty neat way of doing things. Repo is here:

https://github.com/tutumcloud/tutum-agent

If you don't have experience with packaging, https://github.com/jordansissel/fpm is probably your best bet. And it is super easy to use.
I think you're wrong. I think most users are not installing trusted builds from their OS vendors. Piping curl to bash is incredibly common--many popular software packagers are doing it [1].

About a year and a half ago, I was playing around with Docker and made a build of memcached for my local environment and uploaded it to the registry [2] and then forgot all about it. Fast-forward to me writing this post and checking on it: 12 people have downloaded this! Who? I have no idea. It doesn't even have a proper description, but people tried it out and presumably ran it. It wasn't a malicious build but it certainly could have been. I'm sure that it would have hundreds of downloads if I had taken the time to make a legit-sounding description with b.s. promises of some special optimization or security hardening.

The state of software packaging in 2015 is truly dreadful. We spent most of the 2000's improving packaging technology to the point where we had safe, reliable tools that were easy for most folks to use. Here in the 2010's, software authors have rejected these toolsets in favor of bespoke, "kustom" installation tools and hacks. I just don't get it. Have people not heard of fpm [3]?

[1] http://output.chrissnell.com/post/69023793377/stop-piping-cu...

[2] https://registry.hub.docker.com/u/chrissnell/memcached/

[3] https://github.com/jordansissel/fpm

fpm or: How I Learned to Stop Worrying and Love Linux Package Management

https://github.com/jordansissel/fpm

See this stackoverflow q/a -- it appears to contain most of the current highlights. Basically Ubuntu has started packaging a few go apps, and fpm[f] seems to be an ok alternative in the meanwhile:

http://stackoverflow.com/questions/15104089/packaging-golang...

packager.io (which upstream googs uses) seems to be a nice way to just get packages out there, but as far as I can tell it's pretty well walled-off behind a service, so no easy way to build locally, off-line, or without using packager.io etc. In that sense it strikes me as a poor choice for Free software, as there is no promise that things will continue to work, or can be made to work, long term.

[f] https://github.com/jordansissel/fpm

I use FPM (https://github.com/jordansissel/fpm) for that and it works wonderfully. You can convert a tarball to RPM/deb pretty seamlessly most of the time.
DEB packages are very complicated

Not really, it's just a tar file with some metadata. Using fpm¹, making packages from a directory is extremely simple. I've been building internal packages from our different components, and the build script only has three or four lines. And besides, even Dockerfiles often use apt/yum.

¹ https://github.com/jordansissel/fpm
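
A "three or four line" build script like the one described could be as simple as this sketch; the component name, staging directory, and versioning scheme are hypothetical:

```shell
#!/bin/bash
set -e
# Stage the files under ./build, then let fpm do the rest.
VERSION="$(git describe --tags)"   # placeholder versioning scheme
fpm -s dir -t deb -n my-component -v "$VERSION" -C ./build .
```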

I think the solution is to separate the problem into two parts

- The "puts things on disk" part (posix "install" or a low level tool like dpkg or rpm is most like this)

- The "determine what needs to be installed" part, (aptitude, yum)

I'm all for a per-language or system version of the latter, but once that's done, you should generate packages installed by the former.

Heck, wrap all the commands (cp/mv/install/chmod/chown/etc.) that write stuff to permanent places on disk to actually do "add to a package", give it a basic name/version number, and have the low level tool handle adding/removing it from the system (or multiple systems, or deploy it, etc.). All the dependencies, compatibility, etc. are handled by the higher level system.

This gets you the best of both worlds - system level packages, and the ability to install whatever you need. FPM (https://github.com/jordansissel/fpm) is a pretty good example of this philosophy.

But, instead, we get every punk-ass CPAN descendant spraying its crap all over the filesystem, needing the whole build environment installed, touching the internet in weird ways that don't guarantee repeatable behavior, etc. sigh

debs built with fpm[1] have been working for me. Unless you need something particularly complex, it should just be a matter of setting up a directory with the right layout and calling fpm with a couple of parameters.

[1] https://github.com/jordansissel/fpm

There's nothing like a good old deb (or rpm) package. Learn fpm[1] and bundle your dependencies instead of hoping that they get there.

[1] https://github.com/jordansissel/fpm

It can point to Maven dependencies that are downloaded on the first launch

You wouldn't do this for a production deployment, right? Application startup that may or may not require access to the artifact repository to complete successfully? When that idea bounces around my developer neocortex, my sysadmin hindbrain starts reaching forward to strangle it.

And if you're not going to do it in production, doing it in development means having a gratuitous difference between development and production, which, again, is something i have learned to fear.

A zip with startup scripts is OK, but it requires installation.

'gradle installApp' works out of the box, and 'installs' the jars and scripts in your build directory, which is all you need to run locally. It's the work of minutes to write an fpm [1] invocation that packages the output of installApp as an operating system package, which you can then send down the pipe towards production. This is simple, easy, standard, and fully integrated with a world of existing package management tools. Why would I use Capsule instead of doing this?

[1] https://github.com/jordansissel/fpm
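
Assuming an application named 'myservice' (a placeholder, not from the original comment), that invocation might look like:

```shell
# Package the output of 'gradle installApp' as a .deb installed under /opt.
gradle installApp
fpm -s dir -t deb -n myservice -v 1.0.0 \
  build/install/myservice/=/opt/myservice
```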

@jarofgreen nice of you, but I think you got me a little wrong, or I wasn't clear enough. My main message regarding opensource was:

> Open Source doesn't mean that you cannot put a price tag on it.

The other things could be done, but you're right, they don't fall under the strict definition of opensource. It would require a different license :)

Oh, btw, the op should take a look at https://pkgr.io/ and https://github.com/jordansissel/fpm

Packaging is a real pain, kind of a schlep. One project, which solved a lot of problems for me was https://github.com/jordansissel/fpm.

One edge-case, where fpm cannot help by itself, occurs in the python (w/ C-bindings) world, when your dev and prod environments use different libc versions. I got around that issue by running fpm inside a VM, which matches the libc version of the target system.

But it's still not as easy as it can be - so pkgr.io looks promising.

So how do I rebuild the compost heap infrastructure that I used to build my environment?

This.

Has anyone ever tried FPM (https://github.com/jordansissel/fpm) yet?

Why not just build your own packages? fpm[1] is very simple to use.

[1] https://github.com/jordansissel/fpm

Packages are great because they simplify automation. Once you've got a package built and uploaded to a repository, you can install it across a large fleet of machines one-line of Chef or Puppet code.

There are some pitfalls, however:

* It can be time-consuming dealing with the arcane details of Debian package metadata or RPM spec files. If you're deploying your own application code, you're likely better off using fpm to generate a package from a directory tree:

https://github.com/jordansissel/fpm

* If you have a complex stack, e.g., a specific version of Ruby and a large number of gem dependencies, you should avoid trying to separate things into individual packages. Just create a single Omnibus-style package that installs everything into a directory in /opt:

https://github.com/opscode/omnibus-ruby

* Maintaining build machines and repository servers takes ongoing effort. Shameless plug: This is why I created Package Lab---a hosted service for building packages and managing repositories. Currently in private beta and would love feedback:

https://packagelab.com/

I'm lazy when it comes to these things, so I use FPM. It's great if you're dealing with RPM and .deb packages (and a few others). I saw someone else (vacri) mention it in this thread, but it definitely deserves consideration if you want to side-step setting up a packaging environment.

Link: https://github.com/jordansissel/fpm

I prefer native system packages to tarballs, so here's a script [1] to generate .deb and .rpm packages using Jordan Sissel's FPM [2] (gem install fpm).

[1] https://gist.github.com/rasschaert/8915657

[2] https://github.com/jordansissel/fpm

CentOS still has python2.6 as the default. Several people I know who use CentOS with python just build their own python2.7 anyway -- at $dayjob, I build python2.7 and package it up with fpm[1] and drop it in our internal repo.

[1]: https://github.com/jordansissel/fpm
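
The build-and-package step described might look roughly like this sketch; the install prefix, Python version, and staging path are placeholders:

```shell
# Build CPython into a staging root, then package the tree as an RPM.
./configure --prefix=/opt/python27 && make
make install DESTDIR=/tmp/py27root
fpm -s dir -t rpm -n python27 -v 2.7.18 -C /tmp/py27root opt
```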

I like to get the best of both pip and the distro's native package manager ...

https://github.com/jordansissel/fpm
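
fpm can read straight from PyPI, so converting a module into a native package is a one-liner; the module name here is just an example:

```shell
# Convert a PyPI module into a .deb via fpm's python source type.
fpm -s python -t deb requests
```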

This is stupid. The entire reason that people have issues with multiple versions of software is that they had to "roll their own" and don't bother to update it, thus they hit incompatibilities and need some sort of "bundling" utility like this.

If you're running any form of Unix, it's very likely that you already have a package management system. It's also likely that system has more features, and is better designed from a management and consistency perspective, than any one of CPAN and its descendants (gem, cabal, etc.).

A much better solution - either make your own packages, or use a tool like FPM (https://github.com/jordansissel/fpm) to make native packages, then deploy the result as you would any other package.

I hope for an era when running CPAN or gem interacts with the package manager, building a real OS-level package and installing/deploying it, rather than the current "you need to run this script incantation on every production machine, oh, and you need the whole toolchain too" idiocy.

A nice way to build apt repositories is to use fpm. https://github.com/jordansissel/fpm

Although for just installing rails, using rubygems is the way to go, IMHO.

Without Chef there would be no way for me to rollout a new server in our cluster. Investing time into Chef was one of the greatest things I ever did. Chef is the best documentation of our infrastructure. The second best tool I'm using is fpm[1] to make custom debian packages.

[1] https://github.com/jordansissel/fpm

I really appreciate the hard work you have done; like I said, it is something I want, and I haven't put in the effort to do it. I know how annoying it is to log JSON. I made a Tornado app that logged JSON through the logging module, and it was really annoying that the logging Formatter pickled objects into strings.

I'm not sure what the issue is with MongoDB. But if you aren't aware, there is the EPEL (Extra Packages for Enterprise Linux) repository for RHEL and CentOS. It's a semi-official and safe repository run by the Fedora Project to add additional packages to EL. MongoDB is in there for EL5 (pretty old) and EL6. Also, easy_install is available for Red Hat and Debian in the 'python-setuptools' package.

I know it's common for people to install things from source on their own systems, but I feel that if you are doing something on someone else's system you need to be extra careful. And that means asking permission before installing, using package management, and, if you can't use package management, suggesting ways to resolve the dependencies (easy_install, pip, gem) without actually doing it. See the Homebrew install script as a good example (https://gist.github.com/323731). Another thing is following convention. The way you are using /usr/local is how /opt is supposed to be used. If you use /usr/local, the files should be in /usr/local/{bin,etc,lib}. If you want to create a package directory and have things like 'bin/' in it, use something like /opt/amon/{bin,etc,sbin,var}. http://refspecs.linuxfoundation.org/FHS_2.3/fhs-2.3.html

Jordan Sissel, who created logstash, also created fpm (Effing Package Management) (https://github.com/jordansissel/fpm). It makes it easy to package things. It might help avoid bash script hell.

Create an rpm from the files in /tmp/install:

  fpm -s dir -t rpm --name amon --version 0.2.0 --depends mongodb --maintainer "Martin Rusev" -C /tmp/install etc/init.d/amon etc/init.d/amond opt/amon
Create a deb of tornado from PyPI:

  fpm -s python -t deb tornado
Check out fpm if you're concerned about packaging issues - https://github.com/jordansissel/fpm

It makes it incredibly easy to create an rpm/deb/etc. from a directory, rpm, npm, gem, python module, etc.
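
For example, cross-format conversions are one-liners; the package names below are placeholders:

```shell
# gem → rpm, and rpm → deb, using fpm's source (-s) and target (-t) flags.
fpm -s gem -t rpm rails
fpm -s rpm -t deb mypkg.rpm
```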

Here's a BayLISA video about it: http://vimeo.com/23940598