What does HackerNews think of Oban?
💎 Robust job processing in Elixir, backed by modern PostgreSQL or SQLite3
Oban's been great, especially if you pay for the Web UI and Pro for the extra features [3].
The main issue we've noticed, though, is that because of its simple lock-based fetching mechanism, jobs aren't distributed evenly across your workers due to the greedy `SELECT...LIMIT X` [2]
If you have long-running and/or resource-intensive jobs, this can be problematic. Let's say you have 3 worker nodes with a local limit of 10 per node. If there are only 10 jobs in the queue, the first node to fetch available jobs will grab and lock all 10, leaving the other 2 nodes sitting idle. (A sketch of that fetch pattern follows the links below.)
[1] https://github.com/sorentwo/oban
[2] https://github.com/sorentwo/oban/blob/main/lib/oban/engines/...
[3] https://getoban.pro/#feature-comparison
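For illustration only (this is not Oban's engine code, and the repo name is a stand-in), the lock-based fetch described above boils down to a query shaped roughly like this:

```elixir
import Ecto.Query

# Each node asks for up to its local limit of jobs in one go. The first node
# to run this grabs and row-locks everything that matches, so when only a
# handful of jobs are queued the other nodes come away empty-handed.
demand = 10

query =
  from(j in "oban_jobs",
    where: j.state == "available" and j.queue == "default",
    order_by: [asc: j.scheduled_at, asc: j.id],
    limit: ^demand,
    lock: "FOR UPDATE SKIP LOCKED",
    select: j.id
  )

MyApp.Repo.all(query)  # MyApp.Repo is a hypothetical Ecto repo
```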
This is a non-issue if you're using an Elixir/Erlang monolith given its fault-tolerant nature.
The noisy neighbour issue (resource hogging) is still something you need to manage though. If you use something like Oban[1] (for background job queues and cron jobs), you can set both local and global limits: local limits apply to a single node, global limits across the whole cluster.
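For context, a minimal sketch of what those limits look like in config (app and repo names are made up; the global limit mentioned is an Oban Pro feature, so that option is indicative only):

```elixir
# config/config.exs
config :my_app, Oban,
  repo: MyApp.Repo,
  # Local limits: at most 10 concurrent `default` jobs *per node*.
  queues: [default: 10, mailers: 5]

# With Oban Pro's SmartEngine the same list can also carry a cluster-wide cap,
# roughly: queues: [default: [local_limit: 10, global_limit: 30]]
```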
Operating in a shared cluster (vs. split workload deployments) gives you the benefit of being much more efficient with your hardware. I've heard many stories of massive infra savings from moving to an Elixir/Erlang system.
> We built a few services
So you never really committed to it in the first place. Also, this complicates the deployment problem.
> after a few years some of the original people that introduced it left the company
Probably left for a company that actually committed to Elixir. :P
> and it became very difficult to hire for
In a world where everyone is remote and where 10 Elixir people apply to every job, this product must have been pretty unappealing
> New hires were either people wanting to learn (so we had to spend a good bunch of resources into teaching + end up with a system built by noobs to the language) or very expensive developers with a lot of experience in erlang and elixir.
"We didn't want to pay employees their worth and instead bitched about what we couldn't get without hiring those employees"
(Why couldn't you hire an assortment? One experienced guy and a couple noobs?)
> We also found many times missing libraries, or found libraries which are incomplete, or unmaintained or just not well documented
Alright, fine. Sometimes you have to "roll your own" in this space, still.
> Tooling is just terrible. The VSCode plugin is crap
You should not use the word "tooling" here: VSCode is not an IDE, Elixir should not require an IDE, and Elixir should not be judged on the basis that "there is no good IDE for Elixir". "Tooling" should refer to the support libraries and tools that ship with the language, all of which are excellent.
> At that point you're just reinventing your crappy undocumented and untested version of delayed_job.
Spotted the guy who never heard of Oban https://github.com/sorentwo/oban Benefit of the doubt: Perhaps it didn't exist yet.
> Most of what you get from elixir in terms of redundancy, high availability, etc you can have that anyway from kubernetes, heroku or any PaaS
This is entirely missing the point. If a bug or runtime error crashes your Ruby interpreter, you better have another one ready to go from a pool (because Rails stacks can take a while to load), and then you better not exhaust that pool! If such an error crashes Elixir, it just restarts the process, which only takes a millisecond because forking an immutable language's state is trivial compared to a mutable language's state.
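A minimal sketch of the restart behavior being described (module names are invented for the example): a crash takes down one supervised process, and its supervisor immediately starts a fresh one without disturbing anything else on the node.

```elixir
defmodule MyApp.Counter do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, 0, name: __MODULE__)

  @impl true
  def init(count), do: {:ok, count}

  @impl true
  def handle_call(:count, _from, count), do: {:reply, count, count}
end

children = [MyApp.Counter]
{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)

# Kill the worker: only this one process dies...
Process.exit(Process.whereis(MyApp.Counter), :kill)
Process.sleep(50)

# ...and the supervisor has already restarted it with a clean state.
GenServer.call(MyApp.Counter, :count)  #=> 0
```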
> Liveview
I actually haven't played with it much yet so can't comment
> In the end, we are back to Rails and much happier
"We can underpay cheap devs again"
You also repurchased entire classes of bugs that are impossible in Elixir, such as mutation bugs, monkeypatch bugs, and concurrency bugs (just forget about running your test suites in parallel). These are also literally the hardest types of bugs to fix (nondeterministic behavior), and they will likely cost you more in the long run than any salary differential you balked at (I have spent entire months debugging this kind of thing in the Ruby space; you'll remember my comment once this bites you in the *** one day).
I am using VSCode too, and the experience is not as good as with TypeScript/JavaScript, but it is not that terrible either. So far I would compare it to the Rust experience.
> Also, I've read some comments where people mention "we don't need redis", "we don't need workers" everything is so much easier. That was our thinking at first. But then you realize on deployments you will lose your cache, or your background jobs, etc. So you have to persist them either in mnesia or in the database. At that point you're just reinventing your crappy undocumented and untested version of delayed_job.
Of course you need _other_ libraries to achieve some of those things. You do not have a queue built in, but there are some very good tools like Oban https://github.com/sorentwo/oban that do basically what Sidekiq does while relying on the main database you are already using. There are also very good caching libraries built on ETS that simply replace what you could do with Redis.
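As a rough sketch of what that looks like in practice (worker name and mailer call are hypothetical), an Oban worker is defined and enqueued like this:

```elixir
defmodule MyApp.Workers.SendWelcomeEmail do
  use Oban.Worker, queue: :mailers, max_attempts: 5

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"user_id" => user_id}}) do
    # Hypothetical application code; a raise or {:error, _} return triggers a retry.
    MyApp.Mailer.send_welcome(user_id)
    :ok
  end
end

# The job is persisted in the `oban_jobs` table of the database you already run.
%{user_id: 123}
|> MyApp.Workers.SendWelcomeEmail.new()
|> Oban.insert()
```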
> Most of what you get from elixir in terms of redundancy, high availability, etc you can have that anyway from kubernetes, heroku or any PaaS.... you will need more than 1 server anyway, so...
This is partially true. The BEAM's ability to cluster, execute processes across the cluster, handle internal communication between actors, and so on is not achievable with k8s alone, at least not without significant added complexity.
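For a flavor of what that clustering looks like out of the box (node names here are made up, and both nodes must share an Erlang cookie), a sketch:

```elixir
# Connect to another BEAM node and run code on it directly; no sidecars,
# service mesh, or extra infrastructure involved.
Node.connect(:"app@host-b")
#=> true

:erpc.call(:"app@host-b", String, :upcase, ["hello"])
#=> "HELLO"
```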
Anecdotally speaking, he hasn't been hands-on with Oban[1], yet he still offers advice and guidance around the project because it is in the Elixir community.
It's true but in practice this usually doesn't pan out.
For example, with background jobs alone there's the idea of queues, tracking failures/successes, exponential back-off retries, guaranteeing uniqueness, draining, periodic tasks, and everything else you'd likely want in a production-ready app.
Typically you'd use Redis, Postgres or something else to help with this. Fortunately https://github.com/sorentwo/oban exists and uses Postgres as a back-end with close to 10,000 lines of Elixir.
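To ground a couple of the items in that list: retries are per-worker configuration (`max_attempts`, with exponential back-off by default), while pruning and periodic jobs come from plugins. A sketch, with invented app and module names:

```elixir
# config/config.exs
config :my_app, Oban,
  repo: MyApp.Repo,
  queues: [default: 10],
  plugins: [
    # Delete completed/discarded jobs older than a day.
    {Oban.Plugins.Pruner, max_age: 60 * 60 * 24},
    # Run a periodic job every night at 02:00.
    {Oban.Plugins.Cron, crontab: [{"0 2 * * *", MyApp.Workers.NightlyDigest}]}
  ]
```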
That said, HN readers from outside the Elixir community should know there are great libraries for all of those use cases. Case in point: background job processors. We have plenty of options, though I'm especially fond of Oban[0] (full disclosure, I'm the author).
That is a broad generalization that assumes most applications are operating at mega scale. The benefits of simplified dependencies (a single database instance), transactional guarantees (a single database instance) and persistence (not using Redis) far outweigh the eventual possibility that the queue will place too large a load on your database.
As the author of Oban[0] (a PG-backed persistent queue in Elixir) I'm definitely biased. However, the level of adoption in the Elixir community seems to signal that a lot of companies favor simplicity and safety over a possible scale issue down the road. The primary application I work on processes ~500k-1m jobs a day and the queue overhead is virtually invisible.
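One place that transactional guarantee shows up concretely (the schema, changeset, and worker below are made up for the example): the job can be inserted inside the same database transaction as the business data, so neither one exists without the other.

```elixir
alias Ecto.Multi

# Hypothetical order placement: if either insert fails, both roll back together.
Multi.new()
|> Multi.insert(:order, Order.changeset(%Order{}, order_params))
|> Oban.insert(:fulfillment, fn %{order: order} ->
  MyApp.Workers.FulfillOrder.new(%{order_id: order.id})
end)
|> MyApp.Repo.transaction()
```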
After replicating most of Sidekiq's Pro and Enterprise behavior using older Redis data structures, I attempted to migrate to Redis streams. What I discovered is that all the features I really wanted were available in SQL (specifically PostgreSQL). I'm not the first person to discover this, but it was such a refreshing change.
That led me to develop a Postgres-based job processor in Elixir: https://github.com/sorentwo/oban
All the goodies that were only possible by gluing Redis structures together through Lua scripts were much more straightforward in an RDBMS. Who knows, maybe the recent port of Disque to a plug-in will change things.
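Unique jobs are one example of such a goodie: in Redis it takes scripts coordinating several keys, whereas with the jobs sitting in Postgres it is a declarative option on the worker (worker name and application call are invented here):

```elixir
defmodule MyApp.Workers.SyncAccount do
  # At most one job with the same worker and args within a 60 second window;
  # enforced with plain queries against the jobs table, no Lua involved.
  use Oban.Worker, queue: :default, unique: [period: 60]

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"account_id" => id}}) do
    MyApp.Accounts.sync(id)  # hypothetical application code
    :ok
  end
end
```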
It's incredibly well written and I am using it in a project.