It would be nice to have one less dependency in the tech stack, but honestly I have no complaints with Sidekiq so far.

I'm quite new to Ruby/Rails though, so it would be interesting to hear from others how they think it stacks up (unfortunately the author doesn't make that comparison in the blog post).

The standard advice used to be that neither Redis nor Postgres is very good for use as a queue.

The creator and former maintainer of Redis was, up until a few years ago, discouraging its use as a queue, I think mainly because of its lack of durability and high availability at the time. He built a prototype, Disque[0], to address those issues, but it never became production ready. The other downside is that Redis is in-memory, which means the queues have less capacity (or cost more for the same capacity) than an on-disk solution, though as memory gets cheaper over the years this becomes less and less of an issue. The upside is that Redis's throughput is very high.

I have personally worked on Rails apps using Redis-based queues like Resque (and to a lesser extent Sidekiq), and I actually haven't run into any Redis crashes or downtime in years of running them; Redis is very solid in general. You can also snapshot the Redis instance periodically to limit the number of jobs you would lose if it did crash.
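For what it's worth, here's a rough sketch of the persistence knobs involved, using the standard redis-rb gem (the connection URL and thresholds are made up, and in practice you'd normally set these directives in redis.conf rather than at runtime):

    # Sketch only: tighten Redis persistence so a crash loses at most roughly
    # a minute of enqueued jobs.
    require "redis"

    redis = Redis.new(url: ENV.fetch("REDIS_URL", "redis://localhost:6379/0"))

    # RDB snapshot to disk if at least 1,000 keys changed in the last 60 seconds.
    redis.config(:set, "save", "60 1000")

    # The append-only file narrows the loss window further (fsync about once a
    # second), at some cost in write throughput.
    redis.config(:set, "appendonly", "yes")
    redis.config(:set, "appendfsync", "everysec")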

In terms of using a primary DB like Postgres or MySQL as a queue, I have personally run into issues with this multiple times. I would recommend never doing it, except on the smallest of side projects.

The issue is that eventually your queues will back up, whether it's due to a bug, a surge of traffic, or just some complex interaction in your app that cascades a ton of jobs at once when you run a backfill or something. When your app starts to get overloaded it's pretty trivial to increase the number of web instances running, so your bottleneck in these situations is going to be DB performance. As your queues back up, your queue workers run at full speed processing jobs nonstop, which puts strain on your DB. On top of that, the act of enqueueing and dequeuing a job itself also hits the DB, so you can easily get into an unstable situation where each job added to the queue makes every other job take longer.
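To give a concrete (hypothetical) example of that kind of cascade: in a Rails app with a database-backed ActiveJob adapter, even a routine backfill turns into a flood of inserts into the very database that's already struggling (the model and job names here are made up):

    # Hypothetical backfill: with a DB-backed ActiveJob adapter, every
    # perform_later call below is itself an INSERT into the jobs table,
    # before the workers have done any of the actual work.
    User.where(avatar_digest: nil).find_each do |user|
      RegenerateAvatarJob.perform_later(user.id)
    end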

If you allocate a separate DB instance that only runs your queue, that is much safer. Still, a DB like Postgres is not great at doing constant writes and deletes; it creates additional autovacuum pressure, for instance. But this mostly manifests as worse throughput on the same hardware than you would get from a dedicated queue like RabbitMQ, so if you're not at large scale it's a fine option.
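To make the constant-writes-and-deletes point concrete, here's roughly what a hand-rolled Postgres-backed worker loop does on every poll (a sketch against a hypothetical jobs table using the plain pg gem, not any particular library's implementation): every claimed job is a locking read, and every finished job is a DELETE that leaves a dead tuple for autovacuum.

    # Hand-rolled sketch of a DB-backed worker loop; table name, columns, and
    # connection URL are all made up.
    require "pg"

    conn = PG.connect(ENV.fetch("DATABASE_URL", "postgres://localhost/app_development"))

    def perform(payload)
      # stand-in for whatever the job actually does
      puts "processed #{payload}"
    end

    loop do
      processed = false

      conn.transaction do |tx|
        # SKIP LOCKED lets several workers poll without blocking each other, but
        # each poll still costs the database a query even when the queue is empty.
        job = tx.exec(<<~SQL).first
          SELECT id, payload
          FROM jobs
          WHERE run_at <= now()
          ORDER BY run_at
          FOR UPDATE SKIP LOCKED
          LIMIT 1
        SQL
        next if job.nil?

        perform(job["payload"])
        # Deleting the finished job creates a dead tuple; at high volume this is
        # the autovacuum pressure mentioned above.
        tx.exec_params("DELETE FROM jobs WHERE id = $1", [job["id"]])
        processed = true
      end

      sleep 0.5 unless processed # back off a bit when there's nothing to do
    end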

Edit: One other thing to add: for a lot of web apps, the scope of what is needed from a queue these days is a lot smaller than it was in the past. It used to be (and in large enterprise systems it often still is) the case that when people talked about a message queue, they wanted something to facilitate passing messages between many completely separate apps. Now most apps just use a REST API for that (or perhaps protobufs or GraphQL, but still over HTTP). So I think historically an additional reason against using a simple datastore as a queue was that it didn't have enough features, so you'd end up reinventing the wheel with things like brokers, fan-out and broadcast patterns, at-most-once vs. at-least-once semantics, etc. But here I'm just considering the very limited use case of a Sidekiq-like queue, for processing jobs in the background for a single web app.

tl;dr: Never use your primary DB as a queue. Using a separate Postgres instance can work if you over-provision capacity and don't need to maximize throughput, and a Redis-based solution can work if you don't need high availability and can tolerate losing some messages if something goes wrong.

[0] https://github.com/antirez/disque

> Never use your primary DB as a queue. Using a separate Postgres instance can work if you over-provision capacity and don't need to maximize throughput, and a Redis-based solution can work if you don't need high availability and can tolerate losing some messages if something goes wrong.

That is a broad generalization that assumes most applications operate at mega scale. The benefits of simplified dependencies (a single database instance), transactional guarantees (a single database instance), and persistence (not using Redis) far outweigh the possibility that the queue will eventually place too large a load on your database.
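To make the transactional point concrete: with the queue in the same database, the business write and the enqueue commit or roll back together. A rough sketch of the idea (hypothetical tables, plain pg gem from Ruby to match the rest of this thread; this is not Oban's actual Elixir API):

    # Sketch of a transactional enqueue: if anything in the transaction fails,
    # neither the account row nor the job row exists afterwards. Table and
    # column names are made up.
    require "pg"
    require "json"

    conn = PG.connect(ENV.fetch("DATABASE_URL", "postgres://localhost/app_development"))

    conn.transaction do |tx|
      account = tx.exec_params(
        "INSERT INTO accounts (email) VALUES ($1) RETURNING id",
        ["new-user@example.com"]
      ).first

      tx.exec_params(
        "INSERT INTO jobs (queue, payload) VALUES ($1, $2)",
        ["mailers", { account_id: account["id"] }.to_json]
      )
    end

With a Redis-backed queue you can't get this: either you enqueue before the commit (and may enqueue a job for data that never lands), or after it (and may commit data whose job never gets enqueued).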

As the author of Oban[0] (a PG-backed persistent queue in Elixir) I'm definitely biased. However, the level of adoption in the Elixir community seems to signal that a lot of companies favor simplicity and safety over a possible scale issue down the road. The primary application I work on processes ~500k-1m jobs a day and the queue overhead is virtually invisible.

[0] https://github.com/sorentwo/oban