Several of our clients have Go codebases. My general experience as a security auditor of Go projects has been that it is mostly akin to security auditing of Python projects (I'd say it's like Java auditing, but I'm very concerned/excited about deserialization bugs in Java programs and am not so optimistic about finding them in Go programs).
Which is to say, the low-hanging fruit won't be exploitable type safety problems, but rather application logic issues: SQL injection (Go's database integration is still the wild west), failure to properly authorize RPCs or HTTP endpoints, SSRF, and stuff like that. I'm probably not going to use property testing to find an SSRF, or to spot a static nonce, or something like that (somebody feel free to put me in my place over that! Maybe I should use more property testing!)
It's interesting that the strategies described in this post lean so heavily on theoretical program correctness; as I read it, it felt super useful to me as a Go developer, and less directly applicable to my assessment work.
Relatedly, this post was circulating on Twitter yesterday, and it is great: a race condition in Go code exploitable for RCE:
https://github.com/netanel01/ctf-writeups/blob/master/google...
I probably had your attention with that summary! Race conditions in Go code could be a broadly exploitable bug class! Except: not so much, no. The conditions making that bug exploitable are contrived and outlandish enough that no security reviewer, even one unfamiliar with Go, would have been comfortable with that design.
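(For anyone wondering how a data race in a garbage-collected language can turn into RCE at all: Go interface and slice values are multiple machine words, so an unsynchronized write can be observed half-updated. Here's a deliberately racy toy sketch of that general shape, not the actual bug from the writeup:)

    package main

    // Deliberately racy toy example: an interface value is two words
    // (type pointer, data pointer), so a reader can observe a torn
    // update that pairs one type's method table with the other type's
    // data -- i.e. type confusion, which is why races in Go are a
    // memory-safety issue and not just a logic issue.
    type a struct{ x uintptr }
    type b struct{ f func() }

    func (*a) call()   {}
    func (v *b) call() { v.f() }

    type caller interface{ call() }

    var shared caller = &a{}

    func main() {
        go func() {
            for {
                shared = &a{}             // writer flips the interface value
                shared = &b{f: func() {}} // between two concrete types
            }
        }()
        for {
            shared.call() // a torn read here is memory-unsafe
        }
    }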
> SQL injection (Go's database integration is still the wild west)
Could you elaborate on this? I thought you were safe if you used the '?' parameter replacement.
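To be concrete, I mean something like this (a minimal sketch using database/sql with the go-sql-driver/mysql driver; placeholder syntax varies by driver, e.g. Postgres drivers use $1 instead of ?):

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/go-sql-driver/mysql" // driver choice is just for illustration
    )

    func findUser(db *sql.DB, name string) (int64, error) {
        // Unsafe: concatenating user input into the SQL text, e.g.
        //   q := "SELECT id FROM users WHERE name = '" + name + "'"
        //
        // Safe: the '?' placeholder sends the value separately from the
        // query text, so it's never parsed as SQL.
        var id int64
        err := db.QueryRow("SELECT id FROM users WHERE name = ?", name).Scan(&id)
        return id, err
    }

    func main() {
        db, err := sql.Open("mysql", "user:pass@/app") // placeholder DSN
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        id, err := findUser(db, "alice")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("id:", id)
    }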