This is vital if you are designing APIs or clients that deal with charging a user money. With the right design it should be literally impossible for a user to be accidentally charged twice because of a flaky connection.

The trick is to have the client generate a random 'idempotency key' (a UUID) at the start of each logical transaction, and have the server use that key to prevent double charges for the same transaction. By always passing that key, the client can request that the payment be processed 100 times with no fear of it being processed more than once.
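A minimal sketch of the idea in TypeScript -- the endpoint, store, and names are all hypothetical, and a real server would persist seen keys in the same transaction as the charge itself:

```typescript
// Minimal idempotency sketch. handleCharge() and the in-memory store are
// stand-ins for a real payments endpoint and a durable key store.
import { randomUUID } from "node:crypto";

type ChargeResult = { chargeId: string; amountCents: number };

// Server side: remember the result of every idempotency key we've seen.
const processed = new Map<string, ChargeResult>();

function handleCharge(idempotencyKey: string, amountCents: number): ChargeResult {
  const prior = processed.get(idempotencyKey);
  if (prior) return prior; // retry of a request we already handled
  const result: ChargeResult = { chargeId: randomUUID(), amountCents };
  processed.set(idempotencyKey, result);
  return result;
}

// Client side: one key per logical transaction, reused across retries.
const key = randomUUID();
const first = handleCharge(key, 500);
const retry = handleCharge(key, 500); // flaky connection, client retries
console.assert(first.chargeId === retry.chargeId); // charged exactly once
```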

This stripe blog post has as good a description as any: https://stripe.com/blog/idempotency

So I think this is actually the secret to creating dependable endpoints with zero-downtime transitions. It's just an idea that's been rolling around in my head, but:

- Express all operations as log messages (ez pz distribution)

- Ensure all operations are idempotent

- Record the operations (this is the log you can distribute if you please)

- Disallow modification of existing API code; only allow accretion of new API endpoints.

- Every API that comes up gets its own database, a bit of the CQRS model here (but without events -- just the actions performed)

- When you need to stand up new API servers, start them (with handling code for old operations completely unchanged) next to the old ones, and update the http-server code (the request handlers) to output the new commands. Older servers that don't understand the new commands will ignore (or redirect) them, and new servers that do understand will process them and append to the distributed log. New nodes just stream replications from the already existing nodes, so no one ever spends any time with an inconsistent view of the database (there's a rough sketch of this in code after the list).
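To make the shape of this concrete, a toy TypeScript sketch of the list above -- a single in-process array stands in for the distributed log, and all the command names are hypothetical:

```typescript
// Toy model: operations are append-only log entries keyed by a
// client-generated transaction ID, and each server only applies the
// commands it has handlers for. Everything here is deliberately simplified.
type LogEntry = {
  txId: string;      // client-generated idempotency key
  command: string;   // versioned command name, e.g. "charge.v1", "charge.v2"
  payload: unknown;
};

const distributedLog: LogEntry[] = []; // stand-in for a real replicated log

function makeServer(handlers: Record<string, (payload: unknown) => void>) {
  const applied = new Set<string>();
  return (entry: LogEntry) => {
    if (applied.has(entry.txId)) return;  // idempotent: safe to replay
    const handler = handlers[entry.command];
    if (!handler) return;                 // old server: ignore unknown commands
    handler(entry.payload);
    applied.add(entry.txId);
    distributedLog.push(entry);           // record the operation
  };
}

// An upgraded server keeps every old handler unchanged and only accretes new ones.
const oldServer = makeServer({
  "charge.v1": (p) => { /* original handling code, untouched */ },
});
const newServer = makeServer({
  "charge.v1": (p) => { /* original handling code, untouched */ },
  "charge.v2": (p) => { /* new command; only new servers understand it */ },
});
```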

Of course, writing to a distributed log is slow (pick whichever consensus algorithm you want: you get either durability with a quorum or best-effort without one), but this is only a huge deal if you're doing lots of writes, and for most web applications that's not what's happening; the vast majority of traffic is reads.

CRDTs might even fit in here, because if you want a multi-master setup, you could literally keep the log as a grow-only set keyed by transaction ID (I'm not quite sure how truncation of very old records would work). Assuming the same request doesn't go to multiple servers, their logs should be easily combinable at the end of the day -- API1 is gonna see events A, B, and E, API2 might see C and F, and API3 will likely see D, G, and H.
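Sketching that merge (reusing the hypothetical LogEntry type from the sketch above; the grow-only-set framing is my assumption about which CRDT fits):

```typescript
// Each server keeps its own log as a map keyed by transaction ID; merging
// is just set union. Because any given transaction ID lands on exactly one
// server, the union never conflicts -- which is what makes this a valid
// grow-only set CRDT. Truncation of old records is left open, as noted above.
type TxId = string;

function mergeLogs(...logs: Map<TxId, LogEntry>[]): Map<TxId, LogEntry> {
  const merged = new Map<TxId, LogEntry>();
  for (const log of logs) {
    for (const [txId, entry] of log) merged.set(txId, entry);
  }
  return merged;
}

// API1 saw A, B, E; API2 saw C, F; API3 saw D, G, H:
// mergeLogs(api1Log, api2Log, api3Log) yields all eight entries,
// and the result is the same whatever order you merge in.
```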

Honestly everything I've described here is really more like moving the coordination/distributed log problem to the application level (up until now all this action would just happen @ the Postgres/DB level), but I'm not yet convinced it's a terrible idea.

I haven't found the time to actually try to make what I'm describing here a thing, but I would love to hear thoughts.

You will enjoy the fact-based database Datomic, which is the reason Rich Hickey made Clojure: https://www.youtube.com/watch?v=Cym4TZwTCNU

Datalog is a much, much, much better query language than SQL: http://www.learndatalogtoday.org/

I do remember Datomic and I think it's a great tool but I fell out of love with the Clojure ecosystem and JVM-based languages as a whole and don't think I'll be getting back into it/them.

I do remember wanting to check out Datomic (I believe after seeing a talk on how it was being used at a bank in South America[0]), but I found it unreasonably hard to find and download/experiment with the community edition. Compare this to something like Postgres, which is much more obvious to get started with and more F/OSS compliant (I understand that they need to make money); Datomic doesn't really look that appealing to me these days.

At this point in my learning of software craftsmanship I can't do non-statically type-checked/inferred languages anymore -- I almost never use JS without TypeScript, for example. Typed Clojure was in relatively early stages when I was last actively using Clojure, and I'm sure it's not bad (probably way more mature now), but optional type declarations are a staple in other languages like Common Lisp (the `declare` form, IIRC). The prevailing mood in the Clojure community seemed to be against static type checking, and I just don't think I can jive with that anymore.

Thinking this way, Datomic wouldn't be a good fit for me personally right now, but I believe it's probably a high-quality paradigm.

[EDIT] - I found the talk: https://www.youtube.com/watch?v=7lm3K8zVOdY

[0]: https://www.datomic.com/nubanks-story.html

Yes, Datomic is the killer app for Clojure [^1]. Have a look at Datascript[^2] and Mozilla's Mentat[^3], which is basically an embedded Datomic in Rust.

Hickey's Spec-ulation keynote is probably his most controversial talk, but it finally swayed me toward dynamic typing for growing large systems: https://www.youtube.com/watch?v=oyLBGkS5ICk

The Clojure build ecosystem is tough. Ten years ago, I could not have wrangled Clojure with my skillset - it's a stallion. We early adopters are masochists, but we endure the pain for early advantages, like a stable JavaScript target, immutable filesets and hot-reloading way before anyone else had it.

Is it worth it? Only if it pays off. I think ClojureScript and Datomic are starting to pay off, but it's not obvious for whom -- certainly not for every organisation.

React Native? I tore my hair out having to `rm -rf ./node_modules` every 2 hours to deal with breaking dependency issues.

Whenever I try to use something else (like Swift), I crawl back to Clojure for the small, consistent language. I don't think Clojure is the end-game, but a Lisp with truly immutable namespaces and data structures is probably in the future.

[^1]: In 2014 I wrote down "Why Clojure?" - http://petrustheron.com/posts/why-clojure.html

[^2]: https://github.com/tonsky/datascript

[^3]: https://github.com/mozilla/mentat