Getting very close in my large side projects to everything being just Go + PostgreSQL. Go is the API, the business logic, and very soon the entire web layer. This is all on Linode, with no containers: I just put a binary on an LTS Ubuntu box and use iptables to allow only those ports (with SSH configured in their Cloud Firewall to be reachable only from my home IP).

I have a long-term itch that I want to scratch soon: whether it's possible to dispose of PostgreSQL and just store my data in an S3-compatible object store while still having a SQL interface to it. https://blevesearch.com/ is on my list of things to look into for this. I'd already structured the PostgreSQL schema with the possibility of putting individual tables into an object store. But as my databases grow I'm keen to avoid becoming a DBA, and I don't use most PostgreSQL features, so if I get the chance to reduce my tech stack to just Go I may take it.

What I've learned from having a mix of Python + Django + some NPM: bitrot and maintaining dependencies are a debt you have to start paying far too soon. Originally only the API was in Go, but that is the one part that over the years has required virtually no debt paydown, thanks to the language, environment, and dependencies. You can leave a Go application for 8 years and all you need to do is compile it with the latest Go and everything works perfectly... I can't even run or upgrade Python apps from half that time ago without major work to bring them up to date. As the side projects proliferated and grew, it became less overall effort for me to rewrite in Go than to maintain everything else.

How do you handle DB interactions in Go? I've recently been interested in how people choose between ORMs and hand-written wrappers, and how they structure them.

Manually, via https://pkg.go.dev/database/sql with handwritten SQL in the majority of places... but with a wrapper to handle the more complex search scenarios.

For example, these from a multi-tenant SaaS forum platform (it's old, but it's also OSS so I can show you)...

This helper to get connections: https://github.com/microcosm-cc/microcosm/blob/master/helper... used like this for inserts: https://github.com/microcosm-cc/microcosm/blob/master/models... and this for reads: https://github.com/microcosm-cc/microcosm/blob/master/models...
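In sketch form the day-to-day pattern looks something like this (hypothetical table and column names, not the actual microcosm code, and the driver import is just an example, any Postgres database/sql driver works):

    package models

    import (
        "database/sql"

        _ "github.com/lib/pq" // example driver choice; any database/sql Postgres driver works
    )

    // db is the shared pool; *sql.DB is safe for concurrent use, so it is
    // opened once at startup and reused everywhere.
    var db *sql.DB

    // InitDB is called once from main.
    func InitDB(dsn string) (err error) {
        db, err = sql.Open("postgres", dsn)
        return err
    }

    // Comment is a trimmed-down example model.
    type Comment struct {
        ID       int64
        SiteID   int64
        Markdown string
    }

    // Insert: handwritten SQL, using Postgres RETURNING to get the new ID.
    func CreateComment(c *Comment) error {
        return db.QueryRow(
            `INSERT INTO comments (site_id, markdown)
             VALUES ($1, $2)
             RETURNING comment_id`,
            c.SiteID, c.Markdown,
        ).Scan(&c.ID)
    }

    // Read: same idea, scanning straight into the struct.
    func GetComment(id int64) (Comment, error) {
        var c Comment
        err := db.QueryRow(
            `SELECT comment_id, site_id, markdown FROM comments WHERE comment_id = $1`,
            id,
        ).Scan(&c.ID, &c.SiteID, &c.Markdown)
        return c, err
    }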

But searches, i.e. highly consistent SELECT queries with different WHERE clauses (and potentially different FROM clauses), are another matter. In each project I tend to have the idea of a search struct ( https://github.com/microcosm-cc/microcosm/blob/master/models... ) that validates the inputs and represents the search query, and then something that consumes that struct and builds the SQL for it ( https://github.com/microcosm-cc/microcosm/blob/master/models... ). This isn't pretty... but it's easy for me to tune and debug, and it keeps the rest of the code base very maintainable: all the complexity lives here in the search.
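Stripped right down, the shape of it is roughly this (hypothetical field and column names; the real ones are in the links above):

    package models

    import (
        "fmt"
        "strings"
    )

    // SearchQuery validates inputs and represents the search; each
    // optional field maps to a WHERE fragment.
    type SearchQuery struct {
        SiteID   int64
        AuthorID int64  // 0 = any author
        Query    string // empty = no full-text filter
        Limit    int64
        Offset   int64
    }

    // Validate normalises the inputs before any SQL is built.
    func (sq *SearchQuery) Validate() error {
        if sq.SiteID <= 0 {
            return fmt.Errorf("siteId is required")
        }
        if sq.Limit <= 0 || sq.Limit > 100 {
            sq.Limit = 25
        }
        if sq.Offset < 0 {
            sq.Offset = 0
        }
        return nil
    }

    // BuildSQL keeps the SELECT/FROM consistent and appends only the
    // WHERE fragments that apply, with args aligned to the $n placeholders.
    func (sq *SearchQuery) BuildSQL() (string, []interface{}) {
        where := []string{"c.site_id = $1"}
        args := []interface{}{sq.SiteID}

        if sq.AuthorID > 0 {
            args = append(args, sq.AuthorID)
            where = append(where, fmt.Sprintf("c.author_id = $%d", len(args)))
        }
        if sq.Query != "" {
            args = append(args, sq.Query)
            where = append(where, fmt.Sprintf("c.search_vector @@ plainto_tsquery($%d)", len(args)))
        }

        args = append(args, sq.Limit, sq.Offset)
        q := `SELECT c.comment_id, c.markdown
          FROM comments c
         WHERE ` + strings.Join(where, "\n           AND ") + fmt.Sprintf(`
         ORDER BY c.comment_id DESC
         LIMIT $%d OFFSET $%d`, len(args)-1, len(args))
        return q, args
    }

The caller validates, builds, and runs it: q, args := sq.BuildSQL(); rows, err := db.Query(q, args...) — and because it's just a string and a slice of args, it's trivial to log the exact SQL and EXPLAIN it when tuning.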

The vast majority of the SQL is very, very simple and needs no ORM; the complexity is all in the search scenario, where I want to be able to tune the performance more than an ORM would allow.

Is there a way to write queries in a SQL file and import them in Go? Like HugSQL (Clojure) or PugSQL (Python).

To benefit from SQL tooling (syntax highlighting, autocomplete, and so on).
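The closest thing I know of in the standard library is go:embed (Go 1.16+), which at least keeps each query in a real .sql file that editors treat as SQL, though it's one file per variable rather than HugSQL-style named queries (getUserByEmail.sql here is a hypothetical file next to the source):

    package queries

    import (
        _ "embed" // required to use //go:embed with a plain string variable
    )

    // The query lives in its own .sql file, so the editor gives it SQL
    // highlighting and autocomplete.
    //
    //go:embed getUserByEmail.sql
    var GetUserByEmail string

And then elsewhere: row := db.QueryRow(queries.GetUserByEmail, email).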