https://github.com/grafana/loki might work for you. It’s not a drop-in replacement for Splunk, FWIW.
Maybe https://github.com/grafana/loki, but I haven't tried it yet.
(Or https://github.com/phaistos-networks/TANK ..?)
> I think a better strategy is to store logs in flat files with several replicas
Agreed. We just used Beats + Logstash and put the files into Ceph.
> x-request-id and maybe a trigram index of messages, and actually be able to debug full request cycles in a handful of milliseconds when necessary.
Yes, yes, yes. That would be great.
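Not from the comments above, but as an illustration of the request-ID part: a minimal Go sketch of middleware that propagates an X-Request-Id through the context so every log line for one request cycle carries the same ID. The header name, ID generator, and logf helper are placeholders, not anything Loki-specific.

```go
package main

import (
	"context"
	"crypto/rand"
	"encoding/hex"
	"log"
	"net/http"
)

type requestIDKey struct{}

// newID returns a short random hex ID; a real service might instead reuse
// an upstream-provided X-Request-Id or a proper UUID.
func newID() string {
	b := make([]byte, 8)
	rand.Read(b)
	return hex.EncodeToString(b)
}

// withRequestID reads X-Request-Id from the incoming request (generating one
// if absent), echoes it back, and stores it in the request context.
func withRequestID(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		id := r.Header.Get("X-Request-Id")
		if id == "" {
			id = newID()
		}
		w.Header().Set("X-Request-Id", id)
		ctx := context.WithValue(r.Context(), requestIDKey{}, id)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

// logf prefixes every message with the request ID, so one grep (or one
// LogQL line filter) pulls up the whole request cycle.
func logf(ctx context.Context, format string, args ...interface{}) {
	id, _ := ctx.Value(requestIDKey{}).(string)
	log.Printf("request_id=%s "+format, append([]interface{}{id}, args...)...)
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		logf(r.Context(), "handling %s %s", r.Method, r.URL.Path)
		w.Write([]byte("ok\n"))
	})
	log.Fatal(http.ListenAndServe(":8080", withRequestID(mux)))
}
```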
There's a writeup on how it differs from the EFK stack here: https://github.com/grafana/loki/blob/master/docs/overview/co...
After several years working with a client who kept hitting bottlenecks and complexity with the EFK stack, I'm really looking forward to something different.
IIRC, the recommended way to integrate it with Grafana is via promtail, but we weren't keen on the added complexity of yet another service in the middle, so we wrote a custom client library in Go that sends the logs straight to Loki (which we should probably open source at some point); the sketch after this comment shows the general idea.
I don't think there's any fancy graph integration yet, but the Grafana Explore tab with log level/severity and label filtering works well enough, especially since they introduced support for pretty-printed JSON log payloads.
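For readers wondering what "sending logs straight to Loki" involves: a rough Go sketch, assuming Loki's documented HTTP push endpoint (POST /loki/api/v1/push) and its streams/values JSON shape. The host, labels, and log line are placeholders, and this is not the client library mentioned above.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// pushEntry sends a single log line to Loki's HTTP push endpoint.
// The labels map becomes the Loki stream selector; the log line itself
// is stored unindexed alongside a nanosecond timestamp.
func pushEntry(lokiURL string, labels map[string]string, line string) error {
	payload := map[string]interface{}{
		"streams": []map[string]interface{}{
			{
				"stream": labels,
				// values are [timestamp-in-nanoseconds-as-string, log line] pairs.
				"values": [][]string{
					{fmt.Sprintf("%d", time.Now().UnixNano()), line},
				},
			},
		},
	}
	body, err := json.Marshal(payload)
	if err != nil {
		return err
	}
	resp, err := http.Post(lokiURL+"/loki/api/v1/push", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode/100 != 2 {
		return fmt.Errorf("loki push failed: %s", resp.Status)
	}
	return nil
}

func main() {
	// Placeholder Loki address and labels -- adjust for your deployment.
	err := pushEntry("http://localhost:3100",
		map[string]string{"app": "myapp", "env": "dev"},
		`level=info msg="hello from a custom Loki client"`)
	if err != nil {
		panic(err)
	}
}
```

A real client would batch entries and retry on failure rather than issue one HTTP request per line, which is part of what promtail handles for you.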
There's also Loki (https://github.com/grafana/loki), which integrates natively with Grafana and may be the Kibana replacement you're looking for.
For logs without full indexing, Loki (https://github.com/grafana/loki) is a recent entry into the space, and it's probably a good option to look at. It indexes only metadata (labels), so it allows searching by labels but not via a full-text index. It is also supposed to be horizontally scalable, which is probably something you want in a log storage solution.
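To make the label-only indexing concrete, here is a hedged Go sketch of querying Loki over HTTP, assuming the documented /loki/api/v1/query_range endpoint and a LogQL selector with a |= line filter. The host, label names, and filter string are placeholders.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"time"
)

// queryRange runs a LogQL query against Loki's range-query endpoint
// (GET /loki/api/v1/query_range) and returns the raw JSON response.
// Only the labels in the {...} selector are indexed; the |= filter is
// applied by scanning the matching chunks.
func queryRange(lokiURL, logQL string, since time.Duration) (string, error) {
	params := url.Values{}
	params.Set("query", logQL)
	params.Set("start", fmt.Sprintf("%d", time.Now().Add(-since).UnixNano()))
	params.Set("end", fmt.Sprintf("%d", time.Now().UnixNano()))
	params.Set("limit", "100")

	resp, err := http.Get(lokiURL + "/loki/api/v1/query_range?" + params.Encode())
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	// Placeholder host and labels: narrow by indexed labels first,
	// then grep the unindexed log text with |=.
	out, err := queryRange("http://localhost:3100",
		`{app="myapp", env="prod"} |= "x-request-id=abc123"`, time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```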