What does HackerNews think of automaxprocs?

Automatically set GOMAXPROCS to match Linux container CPU quota.

Language: Go

This is not very robust. You probably should use the cgroup cpu limits where present, since `docker --cpus` enforces its limit through the cfs quota rather than by restricting which CPUs are visible, so `nproc` alone won't reflect it:

    if [[ -e /sys/fs/cgroup/cpu/cpu.cfs_quota_us ]] && [[ -e /sys/fs/cgroup/cpu/cpu.cfs_period_us ]]; then
        GOMAXPROCS=$(perl -e 'use POSIX; printf "%d\n", ceil($ARGV[0] / $ARGV[1])' "$(cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us)" "$(cat /sys/fs/cgroup/cpu/cpu.cfs_period_us)")
    else
        GOMAXPROCS=$(nproc)
    fi
    export GOMAXPROCS
This follows from how `docker --cpus` works (https://docs.docker.com/config/containers/resource_constrain...), as well as https://stackoverflow.com/a/65554131/207384 to get the /sys paths to read from.

Or use https://github.com/uber-go/automaxprocs, which is very comprehensive, but is a bunch of code for what should be a simple task.
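
For what it's worth, the snippet above only reads the cgroup v1 files; on a cgroup v2 (unified hierarchy) host the quota lives in a single cpu.max file instead. Here is a minimal Go sketch of that case, assuming the unified hierarchy is mounted at /sys/fs/cgroup — it covers only the v2 half of what automaxprocs handles, with no cgroup v1 or CPU-set support:

    package main

    import (
        "fmt"
        "math"
        "os"
        "runtime"
        "strconv"
        "strings"
    )

    func main() {
        // cgroup v2 only: cpu.max holds "<quota> <period>" in microseconds,
        // or "max <period>" when no limit is set.
        if data, err := os.ReadFile("/sys/fs/cgroup/cpu.max"); err == nil {
            fields := strings.Fields(string(data))
            if len(fields) == 2 && fields[0] != "max" {
                quota, qerr := strconv.ParseFloat(fields[0], 64)
                period, perr := strconv.ParseFloat(fields[1], 64)
                if qerr == nil && perr == nil && period > 0 {
                    // Round up, and never go below 1.
                    procs := int(math.Ceil(quota / period))
                    if procs < 1 {
                        procs = 1
                    }
                    runtime.GOMAXPROCS(procs)
                }
            }
        }
        fmt.Println("GOMAXPROCS =", runtime.GOMAXPROCS(0))
    }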

We use https://github.com/uber-go/automaxprocs after we joyfully discovered that Go assumed we had the entire cluster's cpu count on any particular pod. Made for some very strange performance characteristics in scheduling goroutines.
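
For context, wiring automaxprocs in is just a side-effect import; its init reads the container's CPU quota and adjusts GOMAXPROCS before main runs (minimal sketch):

    package main

    import (
        "fmt"
        "runtime"

        // Side-effect import: the package's init() caps GOMAXPROCS at the
        // container's CPU quota instead of the node's CPU count.
        _ "go.uber.org/automaxprocs"
    )

    func main() {
        fmt.Println("GOMAXPROCS =", runtime.GOMAXPROCS(0))
    }
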
Besides GOMAXPROCS there's also GOMEMLIMIT in recent Go releases. You can use https://github.com/KimMachineGun/automemlimit to automatically set this limit, kinda like https://github.com/uber-go/automaxprocs.
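
Usage looks much the same, as far as I can tell: a side-effect import that reads the cgroup memory limit and sets GOMEMLIMIT at startup, which underneath comes down to the standard library's debug.SetMemoryLimit. A rough sketch, with the package path taken from the link above:

    package main

    import (
        "fmt"
        "runtime/debug"

        // Side-effect import: sets GOMEMLIMIT from the cgroup memory limit,
        // much like automaxprocs does for GOMAXPROCS.
        _ "github.com/KimMachineGun/automemlimit"
    )

    func main() {
        // A negative argument only queries the current limit without changing
        // it; with no container limit it stays at the default (effectively
        // unlimited).
        fmt.Println("GOMEMLIMIT =", debug.SetMemoryLimit(-1))

        // Manual equivalent if you already know the limit:
        // debug.SetMemoryLimit(someBytes), e.g. ~90% of the container's memory.
    }
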
AFAIK, it hasn't changed; this exact situation with cgroups is still something I have to tell fellow developers about. Some of them have started using [automaxprocs] to detect the quota and set GOMAXPROCS automatically.

[automaxprocs]: https://github.com/uber-go/automaxprocs

A lot of them do have things to do with k8s, though. Admission webhooks, Istio sidecar injection, etc.

The "CPU limits = weird latency spikes" issue also shows up a lot there, but it's technically a cgroups problem. (Set GOMAXPROCS=16, set the cpu limit to 1, and wonder why your program is asleep for 15/16ths of every cgroups throttling interval. I see that happen to people a lot; the key point is that GOMAXPROCS and the throttling interval are not things they ever manually configured, so it's surprising how they interact. I ship https://github.com/uber-go/automaxprocs in all of my open source stuff to avoid bug reports about this particular issue.) Fun stuff! :)

DNS also makes a regular appearance, and I agree it's not Kubernetes' fault, but on the other hand, people probably just hard-coded service IPs for service discovery before Kubernetes, so DNS issues are a surprise to them. When they type "google.com" into their browser, it works every time, so why wouldn't "service.namespace.svc.cluster.local" work just as well? (I also love the cloud providers' approach to this rough spot -- GKE has a service that exists to scale up kube-dns if you manually scale it down!)

Anyway, it's all good reading. If you don't read this, you are bound to have these things happen to you. Many of these things will happen to you even if you don't use Kubernetes!

Explicitly setting GOMAXPROCS is probably the cleanest way to limit CPU among the runtimes out there, though. For example, if you set requests = 1, limits = 1, and GOMAXPROCS=1, then you will never run into the latency-increasing cfs cpu throttling; you would only be throttled if you used more than 1 CPU, and since you can't (modulo forks, of course), it won't happen. There is https://github.com/uber-go/automaxprocs to set this automatically, if you care.

You are right that by default, the logic that sets GOMAXPROCS is unaware of the limits you've set. That means GOMAXPROCS will be something much higher than your cpu limit, and an application that uses all available CPUs will use all of its quota early on in the cfs_period_us interval, and then sleep for the rest of it. This is bad for latency.
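
To put illustrative numbers on that (assuming the default 100ms CFS period and a 1-CPU quota): 16 busy threads exhaust the quota about 6.25ms into each period, and the whole process is then throttled for the remaining ~93.75ms, i.e. asleep 15/16ths of the time — exactly the pattern described above. The arithmetic, as a toy Go snippet:

    package main

    import "fmt"

    func main() {
        // Illustrative values, not read from the system: the default CFS
        // period and a quota equivalent to 1 CPU, with 16 busy threads.
        const (
            periodMs   = 100.0 // cpu.cfs_period_us = 100000
            quotaMs    = 100.0 // cpu.cfs_quota_us  = 100000 ("1 CPU")
            gomaxprocs = 16.0
        )

        runnableMs := quotaMs / gomaxprocs // the quota is gone after this long
        throttledMs := periodMs - runnableMs

        fmt.Printf("runnable %.2fms, throttled %.2fms per %.0fms period (asleep %.2f%%)\n",
            runnableMs, throttledMs, periodMs, 100*throttledMs/periodMs)
    }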