What are the differences between using HAProxy or Envoy and using the cloud load balancers of AWS or Google Cloud?
I've found that the cloud load balancers lag behind the state of the art in features and that their assumptions and configurations can be pretty brittle.
I haven't used Amazon's ALB, but the legacy ELB can't speak ALPN, which means that if you use it to terminate TLS, you can't use HTTP/2. Their automatic certificate renewal silently broke for us as well, whereas using cert-manager to renew Let's Encrypt certificates continues to work perfectly wherever I use it. (At the very least, cert-manager produces a lot of logs and Envoy produces a lot of metrics, so when renewal does break, you know what to fix. With the ELB, we had to pray to the Amazon gods that someone would fix our stuff on Saturday morning when we noticed the breakage. They did! But I don't like the dependency.)
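For concreteness, the cert-manager setup I mean looks roughly like this. It's a minimal sketch, not a drop-in config: the issuer name, email, domains, and solver are placeholders, and it assumes cert-manager is already installed in the cluster.

```yaml
# ClusterIssuer pointing at Let's Encrypt's production ACME endpoint.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.com                 # placeholder
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx                   # placeholder; use whatever serves the ACME challenge in your setup
---
# Certificate that cert-manager keeps renewed; the resulting Secret is what
# the proxy (Envoy, in my case) mounts to terminate TLS.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com
  namespace: default
spec:
  secretName: example-com-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - example.com
    - www.example.com
```

The nice part is that when renewal fails, `kubectl describe certificate example-com` and the cert-manager logs tell you why, instead of a cert silently expiring behind a managed load balancer.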
I have also used Amazon's Network Load Balancer with EKS, and it interacts with Kubernetes very weirdly. The IP address that the load balancer takes on changes with the pods that back the service, and the change is propagated by the NLB updating a DNS record with a five-minute TTL. So you have a worst-case rollout latency of five minutes, and there is no mechanism in Kubernetes to keep the old pods alive until all cached DNS records have expired. The result is, by default, five minutes of downtime every time you update a deployment. Less than ideal! For that reason, I stuck with an ELB pointing at Envoy, which terminated TLS and did all the complicated routing.
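What I ended up with looks roughly like the sketch below; the names are made up, and it assumes Envoy pods labeled `app: envoy` that listen on 443 and terminate TLS themselves.

```yaml
# Service of type LoadBalancer; on EKS this provisions a classic ELB by default,
# which just forwards TCP to the Envoy pods that do the TLS and HTTP work.
apiVersion: v1
kind: Service
metadata:
  name: envoy
  namespace: edge
  annotations:
    # Keep it plain TCP so the ELB never touches TLS or HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
spec:
  type: LoadBalancer
  selector:
    app: envoy
  ports:
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
```

Because the ELB only passes TCP through, ALPN and HTTP/2 are negotiated end to end between the browser and Envoy, and all the routing logic lives in Envoy config that you can actually test.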
The ALB wouldn't have these problems. It's just an HTTP/2 server that you can configure using their proprietary and untestable language. It has some weak integration with the Kubernetes Ingress type, so in the simplest of simple cases you can avoid their configuration and use a generic thing. But Ingress misses a lot of things that you want to do with HTTP, so in my opinion it causes more problems than it solves. (The integration really is weak. You can serve your domain from Route 53, but if you add an Ingress rule for "foo.example.com", it's not going to create the DNS record for you. It's very minimum-viable-product. You will be writing custom code on top of it, or doing a lot of manual work. All in all, it's going to scale poorly to a large organization unless you write a tool to manage it, at which point you might as well write a tool to configure Envoy or whatever.)
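To make that concrete, here's roughly what an Ingress for the ALB controller looks like. This is a sketch: the hostnames and service names are placeholders, and the exact annotations vary between versions of the controller.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: alb                   # hand this Ingress to the ALB controller
    alb.ingress.kubernetes.io/scheme: internet-facing  # public-facing ALB
spec:
  rules:
    - host: foo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: foo
                port:
                  number: 80
```

This gets you an ALB and the routing rule, but nothing in it creates the foo.example.com record in Route 53, and anything beyond host/path matching falls outside what Ingress can express.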
In general, I am exceedingly disappointed by Layer 3 load balancers. For someone who only serves HTTPS, they are completely pointless. You should be able to tell browsers, via DNS, where all of your backends are and what algorithm they should use to select one (on a 503, try another one; if the connection fails, try another one; and so on). But... browsers can't do that, so you have to pretend that you only have one IP address and make that IP address highly available. Google does this very well with their Maglev-based VIPs. Amazon is much less impressive, with one IP address per AZ and a hope and a prayer that the browser does the right thing when one AZ blows up. Since AZs rarely blow up, you'll never really know what happens when one does. (Chrome handles it OK.)
For instance: https://github.com/kubernetes-sigs/aws-alb-ingress-controlle...
Also, you probably already know about this, but while it's true that an Ingress won't create the DNS record for you automatically, external-dns ( https://github.com/kubernetes-sigs/external-dns ) will. With the correct annotations (pretty simple), external-dns watches for changes to Ingress resources and publishes the DNS records to Route 53 (and many other DNS providers) for you. It works really well for us, even when the subdomain is shared with other infrastructure that external-dns doesn't manage.
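For anyone who hasn't set it up, the relevant part of the external-dns Deployment looks roughly like this; the domain, owner ID, and image tag are placeholders, and the flags can differ between versions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns    # needs IAM permissions to edit the Route 53 zone
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.14.0  # example version
          args:
            - --source=ingress             # watch Ingress hosts
            - --provider=aws               # publish to Route 53
            - --domain-filter=example.com  # placeholder domain
            - --registry=txt               # track ownership via TXT records
            - --txt-owner-id=my-cluster    # placeholder; lets it share a zone with unmanaged records
            - --policy=upsert-only         # create and update records, never delete
```

The TXT registry is what makes the shared-subdomain case work: external-dns only touches records it has marked as its own, so it coexists with records managed elsewhere.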