I'll never be comfortable with any tool that automatically generates schema changes, because I'm never sure at what point it will decide to delete parts of my prod DB.

All of my migrations are dumb DDL statements rolled up into a version. I know exactly what the final state is, because the same migrations get run and used in integration testing, staging, etc.

It's boring but pretty bulletproof. I can rename a table and be confident it'll work, rather than trusting some tool that might decide to drop the table and create a new one.

I can also continue to use SQL and not have to learn HCL for databases. This is useful when I want to control how updates are done, e.g. whether index updates are done concurrently vs. locking the table.
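For example, a concurrent index build is a one-liner in SQL, but PostgreSQL won't run `CREATE INDEX CONCURRENTLY` inside a transaction block, so it has to be issued as a standalone statement rather than inside the transaction a migration tool would wrap it in. A minimal sketch, assuming PostgreSQL and the pgx driver, with a hypothetical `users` table:

```go
// Minimal sketch: build an index without locking writes to the table.
// Assumes PostgreSQL; the table, index, and DSN names are made up.
package main

import (
	"database/sql"
	"log"

	_ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" driver
)

func main() {
	db, err := sql.Open("pgx", "postgres://localhost/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Exec runs in autocommit mode (no explicit transaction), which is
	// exactly what CREATE INDEX CONCURRENTLY requires.
	if _, err := db.Exec(
		`CREATE INDEX CONCURRENTLY users_email_idx ON users (email)`,
	); err != nil {
		log.Fatal(err)
	}
}
```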

I feel the same way. I'd prefer to program an SQL database in SQL.

What I personally do these days is to write migrations as normal (SQL files for the "up" and "down" steps), and then have a `go generate` step that creates a randomly-named database, applies each migration, and dumps the schema into a file that gets checked in (sketched after the list below). A test asserts that the generated file is up to date with the rest of the repository. This gives you 3 things:

1) PR reviewers can see what your migration does to the database in an easy-to-read diff. (A tiny bit of regexing is needed to make PostgreSQL dumps comparable between machines; pg_dump puts the OS name and the build time in there.)

2) You have the full database schema in an easy-to-read form. I open the SQL dump all the time to remind myself of what fields are named, what the default values are, etc.

3) Unit tests against the database are faster. Instead of creating a new database and applying 100s of migrations, you just apply one dump file. Takes milliseconds.
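Roughly, the generator looks like this (a sketch, not my exact code: it assumes the PostgreSQL CLI tools are on $PATH, and the `migrations/*.up.sql` naming and `schema.sql` output path are made up):

```go
// Sketch of the generate step: throwaway DB, apply migrations, dump schema.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"regexp"
	"sort"
)

// run executes a CLI tool, forwarding its stderr for debuggability.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func generate() error {
	// Throwaway database, named per-process so concurrent runs don't collide.
	db := fmt.Sprintf("schemagen_%d", os.Getpid())
	if err := run("createdb", db); err != nil {
		return err
	}
	defer run("dropdb", db) // best-effort cleanup

	// Apply every "up" migration in lexical (i.e. version) order.
	ups, err := filepath.Glob("migrations/*.up.sql")
	if err != nil {
		return err
	}
	sort.Strings(ups)
	for _, m := range ups {
		if err := run("psql", "-v", "ON_ERROR_STOP=1", "-d", db, "-f", m); err != nil {
			return fmt.Errorf("applying %s: %w", m, err)
		}
	}

	// Dump the schema, then strip pg_dump's comment lines; they embed
	// server/tool version strings that differ between machines.
	dump, err := exec.Command("pg_dump", "--schema-only", "--no-owner", db).Output()
	if err != nil {
		return err
	}
	dump = regexp.MustCompile(`(?m)^--.*\n`).ReplaceAll(dump, nil)
	return os.WriteFile("schema.sql", dump, 0o644)
}

func main() {
	if err := generate(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

Wired up with a `//go:generate` directive, the "is it up to date" test is then just a byte comparison of the checked-in `schema.sql` against a fresh run's output.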

In general, I think that database migrations are fundamentally flawed. The database and the application should each declare the schema version they expect, with rules for translating between versions. That way, you could upgrade your code to something that reads and writes "version 2" but understands "version 3", apply the database migration at your leisure, and then update the code to start writing "version 3" records.

But nobody does this. They just cross their fingers and hope they don't have to roll back the migration. And honestly, I don't think I've ever had to roll back a migration, because they're so scary that you test them a billion times before they ever make it to production. But that testing-a-billion-times comes at the cost of writing new features, and the team that only tests 999 million times no doubt has a horror story or two.
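To make that concrete with an entirely hypothetical example: say "version 3" splits a `users.name` column into `first_name`/`last_name`. The deployed code would read whichever shape the database currently has, and only start writing the new shape once the migration has landed. A sketch:

```go
// Hypothetical version-aware read path: schema v2 stores a single "name"
// column, v3 stores "first_name"/"last_name". The app ships understanding
// both, the migration runs at your leisure, and only afterwards does the
// write path flip over to v3.
package userstore

import (
	"database/sql"
	"strings"
)

type User struct {
	First, Last string
}

// readUser works against both schema versions; schemaVersion would come
// from a (hypothetical) schema_version table checked at startup.
func readUser(db *sql.DB, schemaVersion, id int) (User, error) {
	var u User
	if schemaVersion < 3 {
		var name string
		if err := db.QueryRow(`SELECT name FROM users WHERE id = $1`, id).Scan(&name); err != nil {
			return u, err
		}
		// v2 stored one field; split it into the shape the app uses.
		first, last, _ := strings.Cut(name, " ")
		u.First, u.Last = first, last
		return u, nil
	}
	err := db.QueryRow(`SELECT first_name, last_name FROM users WHERE id = $1`, id).
		Scan(&u.First, &u.Last)
	return u, err
}
```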

I thought the established wisdom was to make schema migrations compatible with both the old app and the new app whenever possible? E.g. you can safely add a nullable column, and it shouldn't trip up either the old app or the new app, unless you are using some crappy ORM that does "SELECT *", or you are on an older MySQL version that may take hours/days/weeks to rewrite the whole table just to add a nullable column.
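To illustrate the happy case (a hypothetical `orders` table; the names and DSN are made up), the new column is nullable by default and the old app's explicit column list never notices it:

```go
// Hypothetical example of a migration that's compatible both ways:
// adding a nullable column.
package main

import (
	"database/sql"
	"log"

	_ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" driver
)

func main() {
	db, err := sql.Open("pgx", "postgres://localhost/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Columns are nullable by default; on PostgreSQL this is a quick
	// catalog change, not a rewrite of the whole table.
	if _, err := db.Exec(`ALTER TABLE orders ADD COLUMN tracking_code text`); err != nil {
		log.Fatal(err)
	}

	// The old app keeps working because it names its columns. A
	// "SELECT *" scanned positionally would now fail with a column
	// count mismatch instead.
	var id int64
	var status string
	err = db.QueryRow(`SELECT id, status FROM orders WHERE id = $1`, 1).
		Scan(&id, &status)
	if err != nil {
		log.Fatal(err)
	}
}
```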

https://github.com/fabianlindfors/reshape, which was on the HN front page a couple of weeks ago, has some nice tricks to help with the incompatible cases.