In my eyes, Caddy is a lovely web server that works pretty well as ingress for container clusters (e.g. Nomad, Docker Swarm, etc.). That said, I can't help but feel that v1 was easier and in some ways nicer to use than v2, even though v1 is abandoned at this point.

Then again, I have certain grievances with most of the web servers out there.

Apache2/httpd - actually still decently usable nowadays, but if the fragmentation of service names between distros (httpd vs apache2, with additional scripts like a2enmod) doesn't hurt it, then the configuration format and how it does reverse proxying and path rewriting most certainly will. The performance is still passable, no matter what anyone says; my applications have been the bottleneck in roughly 95% of cases, though that might change with frameworks like Vert.x. The further down you scroll, the less user-friendly it becomes: https://httpd.apache.org/docs/2.4/rewrite/remapping.html Admittedly, the docs themselves are good, despite the syntax you're stuck with.
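For reference, a reverse proxy plus a redirect rule ends up looking roughly like this (hostnames, ports and paths are made up, and you still have to remember to enable mod_proxy, mod_proxy_http and mod_rewrite first, e.g. via a2enmod on Debian-likes):

    <VirtualHost *:80>
        ServerName app.example.com

        # pass /api/ through to a backend service
        ProxyPreserveHost On
        ProxyPass        /api/ http://127.0.0.1:8080/
        ProxyPassReverse /api/ http://127.0.0.1:8080/

        # and a path redirect on top of that
        RewriteEngine On
        RewriteRule "^/old/(.*)$" "/new/$1" [R=301,L]
    </VirtualHost>

Not terrible for a small example, but it gets unwieldy fast once you add websockets, multiple backends and per-path exceptions.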

Nginx - I recently migrated my ingress to it at work and it seems pretty okay so far; the configuration format makes a bit more sense and probably lies somewhere between Apache and Caddy in terms of ease of use and pleasantness. I no longer even need rewrite rules to get websockets working properly, which is nice. And my containers can have all of the necessary config in a single file, as opposed to the boilerplate fragmentation that httpd forces upon me. For example, both of these seem more approachable to me than the Apache2 equivalents: https://docs.nginx.com/nginx/admin-guide/web-server/reverse-... and https://www.nginx.com/blog/creating-nginx-rewrite-rules/
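Roughly the kind of single-file config I mean (container name and port are made up):

    server {
        listen 80;
        server_name app.example.com;

        location / {
            proxy_pass http://app:8080;

            # websockets work with just these headers, no rewrite rules needed
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }
    }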

Currently, my biggest gripe is that Nginx kills itself when it cannot resolve an upstream host; for example, while Docker containers are still starting, their health checks haven't passed and therefore their DNS records haven't been created yet: https://stackoverflow.com/questions/42720618/docker-nginx-st... The worst part is that none of the suggested answers actually work for me, so I can't put a single Nginx instance in front of a development environment with about 20 containers: if a few of them are down when Nginx restarts, many of the others can't be used until startup finishes. Unacceptable.
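For completeness, the workaround that usually gets suggested is forcing per-request DNS resolution by pointing Nginx at Docker's embedded DNS server and using a variable in proxy_pass (service name made up), which still didn't solve it for my setup:

    # resolve the upstream per request instead of once at startup
    resolver 127.0.0.11 valid=10s;

    server {
        listen 80;

        location / {
            set $upstream http://my-service:8080;
            proxy_pass $upstream;
        }
    }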

Caddy - as stated before, I liked v1 more than v2, though the project itself is pretty close to as good as a web server gets. What I don't enjoy is that they took the old docs offline, merely letting you download an archive, nor am I a fan of the current docs, since at this point they read a bit like running "tar --usage": https://caddyserver.com/docs/caddyfile/directives/reverse_pr...
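To be fair, basic usage itself is pleasantly short, something like this (host and upstream made up):

    example.com {
        reverse_proxy /api/* backend:8080
    }

It's the less common options where I end up wanting more worked examples.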

It's nice that there are a few examples for the common use cases, but there could probably be even more; just look at what the PHP documentation has at the bottom of each page for a good example: https://www.php.net/manual/en/function.str-replace.php (crowd-sourced, but I like the idea of letting the community contribute useful information like that).

Apart from that, some of the behavior is weird and you will get a 200 where you'd expect a 502/404 from most other web servers: https://caddy.community/t/why-does-caddy-return-an-empty-200... which will sometimes be misleading ("Huh, I'm not getting any data in the response to my request, even though the status is 200 in my log, weird...")

Also, I remember that v1 had this "fail-fast" habit of shutting down the entire server when obtaining or renewing a certificate failed, something that I utterly hate web servers doing: https://github.com/caddyserver/caddy/issues/642 Admittedly, things are a bit better now: https://caddyserver.com/docs/automatic-https#errors I just don't understand why web servers have to be so opinionated about this instead of providing something like "failure_action" in Docker Compose (https://docs.docker.com/compose/compose-file/compose-file-v3...), so that people can choose between stopping everything as soon as problems manifest and continuing with a "best effort" strategy.
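If I remember correctly, in the v3 Compose format that option lives under deploy.update_config (for swarm deployments) and looks roughly like this:

    deploy:
      update_config:
        failure_action: continue   # or "rollback" / "pause"

Something equivalent for "a certificate could not be obtained" would be all I'm asking for.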

If I'm hosting 100 sites behind a reverse proxy, I don't want 99 of them taken down just because 1 was misconfigured; the web server should be able to emit a warning about that one host if I tell it to, and proceed to serve the other 99 as instructed. The day no web server forces me to cope with such brittleness will be a good day.

Regarding Caddy directive docs, there are examples right at the bottom. What are you missing, exactly? If you could be more specific, we can address it. But as-is, your comment is too vague to be actionable. Feel free to open an issue on https://github.com/caddyserver/website with specific examples you think are missing.

Regarding empty 200 responses, this is because "Caddy worked as configured". A 404 Not Found would be incorrect, because there was no attempt to "find" anything. A 400 would be incorrect, because the request was probably fine. A 500 would also be incorrect, because there was no error. The only option remaining, really, is an empty 200 response. It's the user's responsibility to make sure the configuration handles all possible requests with a handler that does what they want.
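For example, a catch-all handle block (site and upstream names made up) makes anything the more specific handlers don't match return a 404 instead of an empty 200:

    example.com {
        handle /api/* {
            reverse_proxy backend:9000
        }

        # fallback for every request not matched above
        handle {
            respond "Not found" 404
        }
    }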

Regarding fail-fast on cert issues, the problem was that shutting down often triggers container restarts, causing Caddy to attempt issuance again, usually rapidly hitting rate limits. Caddy v2 no longer has this problem. I really can't imagine any situation where shutting down the server makes sense. Servers are, by design, supposed to be stable, and shutting down for any reason other than config/startup issues seems counterproductive. Do you have any specific use case where it would be useful? You're the first to bring up this point since v2 was released.