Do they talk at all about what they're using to provide the VPC overlay? I have a DO k8s cluster and it uses Cilium for the CNI, which turns out to be quite useful, so I guess I'm wondering if they're also using Cilium for this.

(Over in AWS land, they wrote a CNI for their own VPC networking. It turns out to have many strange limitations. For example, you can only run 17 pods on a certain type of node, because that node is only allowed to have 19 VPC addresses. I was quite surprised when pods stopped scheduling even though CPU and memory were available. Turns out internal IP addresses are a resource, too. DigitalOcean has the advantage of starting fresh, so it might be able to use something open source that can be played with in a dev environment and extended with other open-source projects.)
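For anyone who hasn't hit this: the AWS VPC CNI documents a formula for deriving a node's pod capacity from its ENI limits, and it works out to the kind of number above. A minimal sketch of that formula (the instance types and their limits below are examples from AWS's published per-type numbers):

```python
# Documented AWS VPC CNI pod-capacity formula:
#   max_pods = enis * (ipv4_addrs_per_eni - 1) + 2
# One IP per ENI is the ENI's own primary address, and the +2 accounts
# for host-networked pods that don't consume a VPC IP.

def max_pods(enis: int, ips_per_eni: int) -> int:
    return enis * (ips_per_eni - 1) + 2

print(max_pods(3, 6))   # t3.medium: 3 ENIs x 6 IPv4/ENI -> 17 pods
print(max_pods(3, 10))  # m5.large:  3 ENIs x 10 IPv4/ENI -> 29 pods
```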

> Turns out internal IP addresses are a resource, too.

That's not what is happening in AWS. IP addresses are resources (duh), but that's not the issue. With their CNI plugin, each pod gets an IP address from an Elastic Network Interface (ENI), and an instance can only attach so many ENIs, each with a limited number of IPs. ENIs aren't just virtio virtual NICs; they can be backed by ENA (up to 100 Gbps) or an Intel VF (10 Gbps). It's a hardware limitation of Amazon's virtualization stack, dating back to previous-generation instance types.
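If you want to see where the ceiling comes from for a given instance type, the ENI and per-ENI IP limits are queryable from the EC2 API. A rough sketch using boto3 (the region and instance type here are arbitrary examples):

```python
import boto3

# Query EC2 for an instance type's ENI and per-ENI IPv4 limits.
ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_instance_types(InstanceTypes=["t3.medium"])

info = resp["InstanceTypes"][0]["NetworkInfo"]
enis = info["MaximumNetworkInterfaces"]
ips = info["Ipv4AddressesPerInterface"]

# Same formula the CNI's per-type max-pods list is generated from.
print(f"{enis} ENIs x {ips} IPv4/ENI -> max pods {enis * (ips - 1) + 2}")
```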

> I was quite surprised when pods stopped scheduling even though CPU and memory were available.

This is well documented here: https://github.com/aws/amazon-vpc-cni-k8s