I agree whole-heartedly that writing SQL feels right. Broadly speaking, you can take the following approaches to mapping database queries to Go code:

- Write SQL queries, parse the SQL, and generate Go from the queries (sqlc, pggen).

- Write SQL schema files, parse the SQL schema, and generate active records based on the tables (gorm).

- Write Go structs, generate the SQL schema from the structs, and use a custom query DSL (proteus).

- Write queries in a custom query language (YAML or other), and generate the SQL schema, queries, and a Go query interface (xo).

- Skip generated code and use a non-type-safe query builder (squirrel, goqu).

I prefer writing SQL queries so that app logic doesn't depend on the database table structure.

I started off with sqlc but ran into limitations with more complex queries. It's quite difficult to infer what a SQL query will output even with a proper parse tree. sqlc also didn't work with generated code.

I wrote pggen with the idea that you can just execute the query and have Postgres tell you what the output types and names will be. Here's the original design doc [1] that outlines the motivations. By comparison, sqlc starts from the parse tree, and has the complex task of computing the control flow graph for nullability and type outputs.

[1]: https://docs.google.com/document/d/1NvVKD6cyXvJLWUfqFYad76CW...
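For a concrete idea of what "have Postgres tell you" looks like: the prepare/describe step of the extended query protocol returns the parameter type OIDs and a row description for the output columns. Here's a minimal sketch using pgx; the query and table are made up, and this isn't pggen's actual code:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, "postgres://postgres@localhost:5432/postgres")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	// Preparing the statement without executing it makes Postgres report the
	// parameter type OIDs and a row description for the output columns.
	sd, err := conn.Prepare(ctx, "find_authors",
		"SELECT author_id, first_name FROM author WHERE last_name = $1")
	if err != nil {
		log.Fatal(err)
	}
	for _, oid := range sd.ParamOIDs {
		fmt.Printf("param type OID: %d\n", oid)
	}
	for _, f := range sd.Fields {
		fmt.Printf("column %q has type OID %d\n", f.Name, f.DataTypeOID)
	}
}
```

From there a generator can map the OIDs to Go types via the pg_type catalog instead of re-deriving them from a parse tree; nullability still takes extra catalog queries, but the shape of the result comes straight from the database.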

Disclaimer: author of pggen (https://github.com/jschaf/pggen), inspired by sqlc

I like this design! Asking the database to tell you the schema of your result does seem like the simplest, most reliable option.

However, it does require you to have a running database as part of your build process; normally you'd only need the database to run integration tests. Doable, but a bit painful.

Yep, that’s the main downside. pggen works out of the box with Docker under the hood if you give it some schema files. Notably, the recommended way to run sqlc also requires Docker.

I check in the generated code, so I only run pggen at dev time, not build time. I do intend to move to build-time codegen with Bazel, but I built tooling to launch new instances of Bazel-managed Postgres in 200 ms, so it's not that painful.

More advanced database setups can point pggen at a running instance of Postgres, meaning you can bring your own database, which is important for supporting custom extensions and advanced database hackery.

What does your Bazel tooling look like? We use Postgres and Bazel together in automated tests, and I'm curious how others start and manage Postgres instances in similar scenarios. Currently we're driving Postgres from Python (py.test) while running tests.

The issue with our current approach is that we need to keep the Bazel tests fairly coarse (e.g., one Bazel test for dozens of Python tests) to keep the overhead of starting Postgres instances down.

I uploaded most of our Workspace setup here: https://github.com/jschaf/bazel-postgres-sketch. The tooling is a bunch of Go to manage the Postgres process. Basically, give it schema files and receive a running database with a tear down command.

We make temp instances of Postgres quickly by (see the sketch after this list):

- Avoiding Docker, especially on Mac.

- Keeping the data dir on tmpfs.

- Disabling initdb cleanup.

- Disabling fsync and other data-integrity flags.

- Using unlogged tables.

- Using sockets instead of TCP on localhost.
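As a rough sketch of those knobs (not the actual bazel-postgres-sketch tooling; the paths are illustrative and the flags are standard initdb/postgres options), launching a throwaway cluster from Go might look like:

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Put the data dir on tmpfs (/dev/shm on Linux) so nothing hits disk.
	dataDir, err := os.MkdirTemp("/dev/shm", "pg-data-")
	if err != nil {
		log.Fatal(err)
	}
	sockDir, err := os.MkdirTemp("", "pg-sock-")
	if err != nil {
		log.Fatal(err)
	}

	// --no-sync skips initdb's fsync pass; fine for a disposable cluster.
	initdb := exec.Command("initdb", "--no-sync", "--auth=trust",
		"-U", "postgres", "-D", dataDir)
	initdb.Stdout, initdb.Stderr = os.Stdout, os.Stderr
	if err := initdb.Run(); err != nil {
		log.Fatal(err)
	}

	// Listen only on a unix socket and turn off the durability features a
	// throwaway test database doesn't need.
	postgres := exec.Command("postgres",
		"-D", dataDir,
		"-k", sockDir,
		"-c", "listen_addresses=",
		"-c", "fsync=off",
		"-c", "synchronous_commit=off",
		"-c", "full_page_writes=off",
	)
	postgres.Stdout, postgres.Stderr = os.Stdout, os.Stderr
	if err := postgres.Start(); err != nil {
		log.Fatal(err)
	}
	// Real tooling would poll the socket for readiness and register a
	// teardown that stops the process and removes the temp dirs.
	log.Printf("postgres starting; connect with: psql -h %s -U postgres", sockDir)
}
```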

For a test suite, it was 12x faster to call createdb with the same Postgres cluster for each test than to create a whole new database cluster. The trick was to create a template database after loading the schema and use that for each createdb call.
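A minimal sketch of that template trick, assuming a pgx connection to the cluster's maintenance database (the database names are illustrative; this is the same thing createdb does when given a template):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, "postgres://postgres@localhost:5432/postgres")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	// One-time setup: create a database, load the schema into it (omitted
	// here), then mark it as a template.
	mustExec(ctx, conn, "CREATE DATABASE schema_template")
	mustExec(ctx, conn,
		"UPDATE pg_database SET datistemplate = true WHERE datname = 'schema_template'")

	// Per test: cloning a template copies the database files instead of
	// replaying all the schema DDL, which is much cheaper.
	for i := 0; i < 3; i++ {
		mustExec(ctx, conn,
			fmt.Sprintf("CREATE DATABASE test_db_%d TEMPLATE schema_template", i))
	}
}

func mustExec(ctx context.Context, conn *pgx.Conn, sql string) {
	if _, err := conn.Exec(ctx, sql); err != nil {
		log.Fatalf("%s: %v", sql, err)
	}
}
```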

Cool, thanks for the link.

For what it's worth, we use rules_nixpkgs to source Postgres (for Linux and Darwin) as well as things such as C and Python toolchains, and it's been working really well. It does require that the machine have Nix installed, but that opens up access to Nix's wide array of prebuilt packages.

https://github.com/tweag/rules_nixpkgs