I think that switching on types in Go is generally a total anti-pattern.

If you need to handle errors, in the end the type doesn't matter to you at all. You're interested in what the implications of the error are, so the library exposing the error should just expose functions like IsTemporal(e error) bool. If you need to inform the user about the exact error, no problem: the returned error has the Error() method, which gives you the message regardless of the type, so use that.
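
To make that concrete, here's a rough sketch of what I mean (timeoutErr, IsTemporal, and fetch are made-up names, not any real library's API): the concrete error type stays hidden, and callers only ever ask about the implication.

    package main

    import "fmt"

    // timeoutErr would be unexported inside the library; callers never see
    // the concrete type. (Everything is in one file here only for the sketch.)
    type timeoutErr struct{ op string }

    func (e *timeoutErr) Error() string { return e.op + ": request timed out" }

    // IsTemporal reports whether the error is transient and worth retrying.
    // Callers ask about the implication of the error rather than its type.
    func IsTemporal(err error) bool {
        _, ok := err.(*timeoutErr)
        return ok
    }

    // fetch stands in for a library call that can fail.
    func fetch() error { return &timeoutErr{op: "GET /things"} }

    func main() {
        if err := fetch(); err != nil {
            fmt.Println(IsTemporal(err)) // true: retry logic, without naming the type
            fmt.Println(err.Error())     // the message, regardless of the concrete type
        }
    }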

If you're not matching on errors, then you shouldn't worry about the type either. The function returns an interface because all that should matter to you is that interface and its common behavior.

Otherwise you're just fighting the language, which doesn't make any sense; at that point you can just switch to Rust.

EDIT: Though I'm really happy that people make more tools for analyzing Go code. Good job!

"I think that switching on types in Go is gererally a total anti-pattern."

It's not.

A thing to bear in mind is that this tool is checking interface types that are strictly internal to a package. They contain a package-private implementation method. A very common pattern for me is to have a package internally implement a server that runs in a goroutine or set of goroutines, is communicated with over a "chan command", and does a type switch within its main loop to dispatch the commands. It is useful to have completeness checking on this sort of thing. It would also make sense to have AST nodes, header types, or any number of other internal usages handled this way. The addition of an external tool to do completeness checking goes 80% of the way towards resolving people's complaints with this pattern, as long as they don't mind an external tool. (Which I acknowledge is a matter of strong opinion.)
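
For illustration, a stripped-down version of that pattern (the names counter, server, incr, get, and stop are just placeholders): the interface is closed by an unexported method, the goroutine owns the state, and the type switch in the loop is exactly the spot where a completeness checker helps.

    package counter

    // command is a closed, package-private interface: only types with the
    // unexported isCommand method can satisfy it, so the full set of
    // variants lives inside this package.
    type command interface{ isCommand() }

    type incr struct{ by int }
    type get struct{ reply chan int }
    type stop struct{}

    func (incr) isCommand() {}
    func (get) isCommand()  {}
    func (stop) isCommand() {}

    // server owns its state; all access goes through the command channel.
    type server struct {
        cmds  chan command
        count int
    }

    // loop is the goroutine's main loop; the type switch dispatches commands.
    // A completeness checker like go-sumtype can warn when a new command
    // type is added but not handled here.
    func (s *server) loop() {
        for c := range s.cmds {
            switch c := c.(type) {
            case incr:
                s.count += c.by
            case get:
                c.reply <- s.count
            case stop:
                return
            }
        }
    }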

Your criticism seems to focus on how objects present themselves externally. From that point of view, it is correct and I generally agree with it... the only issue is that it's not relevant here, because we're talking strictly about internal objects.

Separately, the fact that Go is a very, very simple language that is picking up a lot of useful tooling makes me wonder about the possibility of designing a language that reifies that even more deliberately. What if you had something like a Lisp, but instead of the macros staying in the code, you had a culture of resolving the macros into the simple language and publishing the end result instead of the macros? Bearing in mind that the very purpose of such a structure would be to forcibly ensure that the core language and the shared libraries stay simple, the core idea here is to use a second-order effect to constrain the power of macros and macro-like things. So I'm not "surprised" that this makes macros or syntax checks less powerful, nor am I surprised that writing such a tool requires a higher "activation energy" than simply plopping a macro down. It is well known that macros don't stay simple and often grow quite complicated; some people cite this, and the fact that macros become their own language so that no two Lisp libraries are necessarily written in the "same language", as reasons Lisp hasn't managed to conquer the world.

I'm noodling around with ideas here, not promoting Go as this language; if it were designed for that, there are probably some things it would do differently. I just find it sort of intriguing that if you don't mind using tools, a lot of the common criticisms of Go can be largely mitigated without necessarily paying the price of putting the complexity in the language itself; even if that doesn't personally appeal to you, it's interesting to consider it as a deliberate design paradigm. Is this a good idea? It's almost like having a modular language where you only use the complex stuff where you really get an advantage from it, and don't pay for it where you don't. You couldn't use this to bodge in things like the Rust borrow checker, which really needs the whole program to use it to obtain the benefits, and a lot of what Haskell does with the typechecker would be infeasible, but it's not hard to see things like "sum type completeness checking" and "generics via code templating" and even things as large as "optional typing" being feasible this way, without the core language authors having to arbitrate all these things, and without the whole community necessarily having to adopt them to see advantages. Compare with, for instance, the rather large mess Python has become over the years by absorbing feature after feature into the core language. See also gometalinter [1], in which the community aggregates a collection of smaller tools, all written fully independently with little-to-no coordination and all standalone, and ties them into what is, frankly, not a half-bad language linter, especially if you play with the config a bit.

[1]: https://github.com/alecthomas/gometalinter

Another clear language-vs-tool split in Go concerns the error return type. The language could enforce that return values of type error must always be assigned (and used). But it doesn't, and now there's the errcheck tool.

https://github.com/kisielk/errcheck
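
For example, a trivial file of the kind errcheck complains about (the exact wording of its messages may differ):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        f, err := os.Create("out.txt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }

        // Both calls compile: Go never forces you to use a returned error.
        // errcheck reports each of them as an unchecked error.
        f.Write([]byte("hello"))
        f.Close()
    }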

I agree that having things like errcheck and go-sumtype all wired up using gometalinter is pretty novel and only partially gross.