If you haven't tried SQLite, please do. For years I ignored SQLite and used MySQL (it does the job), but once you see how fast SQLite is, and the advantages of having a DB contained in a single file... just go play around with SQLite instead of ignoring it for years like me. It's neat.
I'm using SQLite at the moment. On one side there's a 'legacy' (read: poorly written, circa-2012) application; on the other there's the new, rebuilt version. The old one was not built very well: it doesn't use foreign keys or any kind of database constraint (it references other entries by name, in a column of comma-separated values) and it runs like trash. But the performance problem is not in the dozen queries it runs to load the data; it's in the fact that it converts the query result to XML (via string concatenation, because of course) and then converts that XML to JSON. The conversion alone is at least 60% of each request. The other problem is that it writes and re-queries the data whenever you leave any of the hundreds of form fields in the application.
I'm rebuilding the application on a modern tech stack, still using SQLite but properly this time, along with Go and React. API requests take 20-40ms instead of 300-1500ms, and there are far fewer of them.
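For a sense of what "properly this time" means: the new API basically scans rows into a struct and encodes straight to JSON, with no XML detour. A minimal sketch, not the actual app (the table, columns, and endpoint are invented for the example, and I'm assuming the mattn/go-sqlite3 driver):

    package main

    import (
        "database/sql"
        "encoding/json"
        "log"
        "net/http"

        _ "github.com/mattn/go-sqlite3" // driver choice is an assumption; any SQLite driver works
    )

    type Entry struct {
        ID   int64  `json:"id"`
        Name string `json:"name"`
    }

    func main() {
        db, err := sql.Open("sqlite3", "app.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        http.HandleFunc("/entries", func(w http.ResponseWriter, r *http.Request) {
            rows, err := db.Query(`SELECT id, name FROM entries`)
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            defer rows.Close()

            var entries []Entry
            for rows.Next() {
                var e Entry
                if err := rows.Scan(&e.ID, &e.Name); err != nil {
                    http.Error(w, err.Error(), http.StatusInternalServerError)
                    return
                }
                entries = append(entries, e)
            }

            // Encode the rows straight to the response; no XML intermediary.
            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode(entries)
        })

        log.Fatal(http.ListenAndServe(":8080", nil))
    }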
The main downside to using SQLite is its limited ALTER TABLE support: you can add columns to an existing table (and newer versions can rename or drop them), but you can't change an existing column's type or constraints [1]. The database abstraction I'm using at the moment, Gorm (a different subject entirely), works around this by copying the data to a temporary table, recreating the table with the updated columns, and copying the data back, I believe.
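The copy-and-rename dance is simple enough to do by hand, too. A rough sketch of the pattern I believe Gorm uses under the hood (table and column names are invented; a real migration also has to recreate any indexes, triggers, and views on the table):

    package main

    import "database/sql"

    // migrate changes a column that ALTER TABLE can't touch by rebuilding the
    // table: create a new table with the desired schema, copy the rows over,
    // drop the old table, and rename the new one into place.
    func migrate(db *sql.DB) error {
        tx, err := db.Begin()
        if err != nil {
            return err
        }
        defer tx.Rollback() // no-op once Commit succeeds

        stmts := []string{
            // 1. New table with the updated column definition.
            `CREATE TABLE entries_new (
                id   INTEGER PRIMARY KEY,
                name TEXT NOT NULL DEFAULT ''
            )`,
            // 2. Copy the data across.
            `INSERT INTO entries_new (id, name) SELECT id, name FROM entries`,
            // 3. Swap the tables.
            `DROP TABLE entries`,
            `ALTER TABLE entries_new RENAME TO entries`,
        }
        for _, s := range stmts {
            if _, err := tx.Exec(s); err != nil {
                return err
            }
        }
        return tx.Commit()
    }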
Anyway, TL;DR: SQLite is not the bottleneck.
Anyways, the process you describe is also how online schema migrations are done in MySQL [2]; "proper" in-place database migrations lock the table and cause downtime, which is exactly what tools like gh-ost avoid by copying to a shadow table.
[1]: https://stackoverflow.com/questions/805363/how-do-i-rename-a...

[2]: https://github.com/github/gh-ost