Clutching pearls about binary size is and always will be hilarious to me.

Author doesn't even say why they object to the size. Are they aware that file-backed executables are paged on demand and only the active parts of the program will be resident?

Granted, these days everyone is used to applications consuming massive amounts of drive space. But perhaps they're using legacy hardware for a home lab, or an IoT device with limited disk space.

From a security standpoint, less application code means less risk. It was service discovery code he removed; if it reached out to discover services on application startup, that's a potential attack vector.

> From a security standpoint, less application code means less risk. It was service discovery code he removed; if it reached out to discover services on application startup, that's a potential attack vector.

Agreed. I've seen a similar pattern with certain open source libraries.

The first example I think of is the spf13/viper [1] library, used to load configuration into Go applications. Viper is equipped with code for reading config from various file formats, environment variables, as well as remote config sources such as etcd and Consul. If you introduce the viper library as a dependency of your application merely to read config from environment variables and YAML files in the local filesystem, then your Go application suddenly gains a bunch of transitive dependencies on modules related to remote config loading for various species of remote config provider. It's not uncommon for these kinds of remote config loading dependencies to have security vulnerabilities.

Beyond the potential increase in attack surface if a bunch of unnecessary code for loading application configuration from all manner of remote config providers ends up in your application binary [2], there's operational overhead too: if you work in an environment that monitors for vulnerabilities in open source dependencies, a library that drags in dozens of transitive dependencies you don't really need adds a fair bit of extra work detecting, investigating and patching the potential vulnerabilities.

I guess there's arguably a "Hickean" simple-vs-easy tradeoff in how such libraries are designed. The "easy" design, which makes it quick for developers to get started and achieve immediate success with a config loading library, is to include code for loading config from all popular config sources in the default configuration of the library, reducing the number of steps a new user has to take to get the library working for their use case. A less easy but arguably "simpler" design might be to include only a common config-provider interface in the core module, push all config-provider-specific client/adaptor code into separate modules, and force the user to think about which config sources they want to read from and then manually add and integrate the dependencies for the corresponding modules that contain the additional code they want.

edit: there has indeed been some discussion about the proliferation of dependencies, and what to do about them, in viper's issue tracker [3] [4]

[1] https://github.com/spf13/viper

[2] this may or may not actually happen, depending on which function calls you actually use and what the compiler figures out. If your application doesn't call any remote-config-provider library functions then you shouldn't expect to find any in your resulting application binary, even if the dependency is there at the coarser-grain module dependency level

[3] https://github.com/spf13/viper/issues/887

[4] https://github.com/spf13/viper/issues/707