What does HackerNews think of que?
A Ruby job queue that uses PostgreSQL's advisory locks for speed and reliability.
IDK maybe <1000 messages per minute
Not saying SKIP LOCKED can't work with that many. But you'll probably want to do something with lower overhead.
FWIW, Que uses advisory locks [1]
pg_try_advisory_lock/pg_advisory_unlock can hold a lock across transactions, while FOR UPDATE SKIP LOCKED can't. So you'd either need to keep a transaction open for the duration of the job, or use a status column plus a job timeout (and in Postgres you should avoid long-running transactions).
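A minimal SQL sketch of that difference (table and column names are illustrative, not Que's actual schema):

```sql
-- Advisory-lock dequeue (the approach Que takes, heavily simplified).
-- The lock is session-scoped, so it survives across transactions;
-- the worker can commit freely while it holds the job.
SELECT id FROM jobs
WHERE pg_try_advisory_lock(id)
LIMIT 1;
-- (Naive: the planner may evaluate pg_try_advisory_lock on more rows
-- than the LIMIT returns; Que's real query is more careful about this.)

-- ... run the job, possibly across many transactions ...
SELECT pg_advisory_unlock(:job_id);

-- FOR UPDATE SKIP LOCKED dequeue: the row lock dies with the
-- transaction, so it must stay open for the whole job.
BEGIN;
SELECT id FROM jobs
ORDER BY id
FOR UPDATE SKIP LOCKED
LIMIT 1;
-- ... run the job inside this one transaction ...
DELETE FROM jobs WHERE id = :job_id;
COMMIT;
```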
Basically we use C#, but we looked into https://github.com/que-rb/que, which uses advisory locks. Since our jobs take anywhere from 1 minute to 2 hours, it was a no-brainer to use advisory locks. It's just not the best fit if you have thousands of fast jobs per second, but for a more moderate queue, where you have something like 10,000 jobs per minute/10 minutes/30 minutes and they each take 1 minute to 2 hours, it's fine.
We also don't delete jobs, and we don't care about storage since the job table doesn't take up much space. And we have plenty of time to catch up at night, since we only operate in Europe.
There are a lot of ways to implement a queue in an RDBMS, and a lot of those ways are naive about locking behavior. That said, with PostgreSQL specifically, there are techniques that yield an efficient queue without locking problems. The article doesn't really describe their implementation, so we can't know what they did, but one open source example is Que[1]. Que uses advisory locks rather than row-level locks, combined with notification channels, to great effect, as you can read in the README.
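The notification-channel half can be sketched in plain SQL (the channel name here is an arbitrary example, not Que's):

```sql
-- Worker connection: subscribe once, then sleep until a notification
-- arrives instead of polling the jobs table.
LISTEN job_inserted;

-- Producer side, e.g. from a trigger that fires after INSERT on jobs:
SELECT pg_notify('job_inserted', '123');  -- payload could be the job id
```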
This one seems to be the most performant, by a lot too, from my understanding (I haven't run any benchmarks myself, but the README shows some good Postgres knowledge).
There is a Go port of Que, but you can also easily port it to any language you like. I have a currently non-OSS implementation in Rust that I might open-source someday when I have time to clean it up.
We implemented a similar design to Que for a specific use case in our application that has a known low volume of jobs and for a variety of reasons benefits from this design over other solutions.
[1]: https://github.com/que-rb/que [2]: https://brandur.org/postgres-queues
With Postgres you also need to worry about high churn, especially since you are constantly creating/locking/deleting rows. This can be alleviated in a variety of ways; personally I would use per-day table partitioning and truncate older partitions via cron. There's also the sharp increase in database connections to the host to consider.
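A sketch of that partitioning idea (the schema and names are illustrative, not from the thread):

```sql
CREATE TABLE jobs (
    id      bigint GENERATED ALWAYS AS IDENTITY,
    run_at  timestamptz NOT NULL DEFAULT now(),
    payload jsonb
) PARTITION BY RANGE (run_at);

-- One partition per day; a cron job creates tomorrow's partition...
CREATE TABLE jobs_2024_01_01 PARTITION OF jobs
    FOR VALUES FROM ('2024-01-01') TO ('2024-01-02');

-- ...and drops partitions older than the retention window. Dropping
-- a partition is effectively instant and creates no VACUUM debt,
-- unlike deleting millions of individual rows.
DROP TABLE jobs_2023_12_25;
```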
That's ignoring the literal elephant in the room: synced writes to the store. Redis can be used quite effectively in a blocking manner with RPOPLPUSH/LMOVE (6.2+) for a reliable queue: because the pop from one list and the push onto another happen atomically, an item can't be lost in between.
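Sketched with redis-cli (key names are arbitrary):

```
LPUSH jobs '{"id": 1}'                 # producer enqueues
BLMOVE jobs processing RIGHT LEFT 0    # worker: atomic pop+push, blocks until a job exists
# ... process the job ...
LREM processing 1 '{"id": 1}'          # ack: remove from the in-flight list
# (pre-6.2 equivalent: BRPOPLPUSH jobs processing 0)
```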
[1] https://github.com/que-rb/que [2] https://gist.github.com/chanks/7585810