If I understand the article correctly, the two main features of pg_later are that a SQL batch won't be aborted when your Postgres connection dies, and that all results are retained until you retrieve them - all mediated by Tembo's Postgres message-queue extension.

The thing is, both of those things can be done today without any extensions: just run your SQL scripts under an Agent account and change every `SELECT` into a `SELECT INTO` statement dumping into a temp table - a technique that works on SQL Server and Oracle too.

(On the subject of Agents, I'm surprised pgAgent isn't built into Postgres, while MSSQL and Oracle have had equivalents since the start.)

I'm confused. Can I already do this today without extensions, or do I need to install pgAgent first?

Idk about pgAgent, but any table can serve as a resilient queue using the locking primitives Postgres already ships: `SELECT pg_advisory_lock(...)`, `SELECT ... FOR UPDATE` queries, and/or LISTEN/NOTIFY.
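For anyone who hasn't seen the pattern, a table-as-queue with `FOR UPDATE SKIP LOCKED` looks roughly like this (a minimal sketch - the `jobs` table and its columns are made up for illustration):

```sql
-- Illustrative queue table; names are hypothetical.
CREATE TABLE jobs (
  id      bigserial PRIMARY KEY,
  payload jsonb   NOT NULL,
  done    boolean NOT NULL DEFAULT false
);

-- Worker loop: atomically claim one unfinished job.
-- SKIP LOCKED makes concurrent workers pass over rows that
-- another transaction has already locked, so each job is
-- handed to exactly one worker without blocking.
BEGIN;

SELECT id, payload
FROM jobs
WHERE NOT done
ORDER BY id
FOR UPDATE SKIP LOCKED
LIMIT 1;

-- ...process the claimed job in application code, then:
-- UPDATE jobs SET done = true WHERE id = <claimed id>;

COMMIT;
```

Workers can poll this, or sleep on LISTEN and have producers NOTIFY after inserting, which cuts the queuing latency the libraries below mention.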

Several background-job libraries are built around this native locking functionality:

> Relies upon Postgres integrity, session-level Advisory Locks to provide run-once safety and stay within the limits of schema.rb, and LISTEN/NOTIFY to reduce queuing latency.

https://github.com/bensheldon/good_job

> |> lock("FOR UPDATE SKIP LOCKED")

https://github.com/sorentwo/oban/blob/8acfe4dcfb3e55bbf233aa...