> We considered tailing binlogs directly, but there's a lot of cruft and complexity in translating between type systems at that end, once you get past properly parsing the binlogs and maintaining the replication connection. Then you also have to deal with schema management across both systems. You run into a similar set of problems using PostgreSQL as a source of truth.
> In the end we decided to wrap the whole thing up and abstract away the schema behind a common set of types and a limited set of read APIs. The biggest missing piece I regret not getting in was support for secondary indexes.
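To make the binlog-tailing pain concrete, here's a rough Go sketch of consuming a MySQL binlog stream with the go-mysql-org/go-mysql library (my choice for illustration, not necessarily what they evaluated; the host, credentials, and starting position are placeholders). Row events arrive as positional values with no column names, which is exactly where the type/schema translation headaches start:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/go-mysql-org/go-mysql/mysql"
	"github.com/go-mysql-org/go-mysql/replication"
)

func main() {
	syncer := replication.NewBinlogSyncer(replication.BinlogSyncerConfig{
		ServerID: 1001,          // must be unique among replicas
		Flavor:   "mysql",
		Host:     "db.internal", // placeholder
		Port:     3306,
		User:     "repl",
		Password: "secret",
	})

	// Start from a known binlog position; real code persists this so it can
	// resume after restarts.
	streamer, err := syncer.StartSync(mysql.Position{Name: "mysql-bin.000001", Pos: 4})
	if err != nil {
		log.Fatal(err)
	}

	for {
		ev, err := streamer.GetEvent(context.Background())
		if err != nil {
			log.Fatal(err) // real code would reconnect and resume instead
		}
		switch e := ev.Event.(type) {
		case *replication.RowsEvent:
			// Rows come back as positional []interface{} slices; mapping them
			// to column names and application types means tracking the table
			// schema out of band.
			fmt.Printf("%s.%s: %d row(s) changed\n", e.Table.Schema, e.Table.Table, len(e.Rows))
		}
	}
}
```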
This comment from the GitHub project page is pretty important. Configuration data often changes slowly and isn't huge, so a custom approach seems viable. I wonder how close they are to that 100/s ceiling.
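A custom approach here doesn't need to be large, either. The "common set of types and a limited set of read APIs" described above could be as small as something like this hypothetical Go interface (names are illustrative, not ctlstore's actual reader API):

```go
package configstore

import "context"

// Value is the lowest common denominator across the source databases' type
// systems; callers never see MySQL or PostgreSQL column types directly.
type Value interface{}

// Row maps column name to a normalized Value.
type Row map[string]Value

// Reader is deliberately small: point lookups by primary key only, no joins,
// no ad-hoc queries, and (per the comment above) no secondary indexes.
type Reader interface {
	// GetRowByKey fetches one row from a (family, table) pair by primary key.
	GetRowByKey(ctx context.Context, family, table string, key ...Value) (row Row, found bool, err error)

	// Staleness reports approximately how far behind the source of truth the
	// local copy is, in seconds.
	Staleness(ctx context.Context) (float64, error)
}
```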
There's also an unmentioned transition to eventual consistency happening here [1]:
> The implications of this decoupling is that the data at each instance is usually slightly out-of-date (by 1-2 seconds).
> The reader API provides a way to fetch an approximate staleness measurement that is accurate to within ~5 seconds.
That could lead to more complex application logic, or to the risk of confusing users with stale behavior. No free lunch here.
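As a sketch of that extra logic: a caller might check the advertised staleness before trusting a local read and refuse (or fall back to the source of truth) when the replica is too far behind. This builds on the hypothetical Reader interface above; the import path and the 10-second threshold are arbitrary placeholders:

```go
package app

import (
	"context"
	"fmt"

	"example.com/configstore" // hypothetical import path for the sketch above
)

func lookupOverride(ctx context.Context, r configstore.Reader, customerID string) (configstore.Row, error) {
	lag, err := r.Staleness(ctx)
	if err != nil {
		return nil, err
	}
	// Replicas are usually only 1-2s behind, but the staleness measurement
	// itself is only accurate to ~5s, so any threshold here is coarse.
	if lag > 10 {
		return nil, fmt.Errorf("config replica %.0fs behind, refusing stale read", lag)
	}
	row, found, err := r.GetRowByKey(ctx, "flags", "overrides", customerID)
	if err != nil {
		return nil, err
	}
	if !found {
		return nil, nil // no override configured for this customer
	}
	return row, nil
}
```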
[1] https://segment.com/blog/separating-our-data-and-control-pla...