I see this point everywhere about Rust's Result type and it always kind of irks me:

> The point is, this [Result type] makes it impossible for us to access an invalid/uninitialized/null Metadata. With a Go function, if you ignore the returned error, you still get the result - most probably a null pointer.

It's all about framing. You can just as easily say it is "impossible" to access an invalid Go FileInfo, because you'll get a panic for dereferencing a nil pointer. Or you can just as easily say it is "possible" to access an invalid Rust Metadata, just by calling .unwrap(). Everyone knows an unchecked .unwrap() is just bad Rust code, but then again dereferencing a pointer without checking the returned error is just bad Go code.
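
To make the parallel concrete, here's roughly what the Go side of that looks like (my own sketch, not code from the article):

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Ignore the returned error: if the path doesn't exist, fi is nil,
    	// and the method call below panics with a nil pointer dereference.
    	// This is the Go analogue of calling .unwrap() on an Err in Rust.
    	fi, _ := os.Stat("does-not-exist")
    	fmt.Println(fi.Size())
    }

Both versions blow up at the same place; the difference is only how loudly the source code warned you beforehand.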

Anyway, the rest of the article reads like a criticism of Go's file system API, which seems fair but also a little niche given how difficult it is to build a good cross-platform file system API. This particular point irks me though:

> stat "$(printf "\xbd\xb2\x3d\xbc\x20\xe2\x8c\x98")"

> fmt.Printf("      %s\n", e.Name())

> It... silently prints a wrong version of the path.

What did you want it to do? The author even admits Go strings are just byte slices, not UTF-8, and then passes a non-UTF-8 string to a function that expects UTF-8. If there's a chance the file path your program works with might not be UTF-8, then you should validate it. I think moving the complexity of UTF-8 validity out of the type system was a necessary evil.
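
Concretely, validating means something like this (my sketch; the article's snippet presumably just printed the name straight through):

    package main

    import (
    	"fmt"
    	"unicode/utf8"
    )

    func main() {
    	// The byte sequence from the article's stat example; not valid UTF-8.
    	name := "\xbd\xb2\x3d\xbc\x20\xe2\x8c\x98"

    	// Go strings are just byte slices, so check before treating the
    	// path as text, and fall back to an escaped view if it isn't UTF-8.
    	if utf8.ValidString(name) {
    		fmt.Printf("%s\n", name)
    	} else {
    		fmt.Printf("%q (not valid UTF-8)\n", name)
    	}
    }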

The difference is what the language makes easy to do, and how it signals to you that you're about to do something dangerous.

If you call `.unwrap()`, that's a big yellow flag that you're going to be taking the gloves off and maybe touching something radioactive. Go has the maybe-radioactive thing sitting right there; safely touching it and unsafely touching it look exactly the same.

I generally enjoy using Go, but this is one of the pieces of the language design that I was surprised Go went with; we've known as an industry for decades that including bare null / nil / undefined / whatever we want to call it without type-system assistance is leaving a bare third rail lying around.

Rust's approach is definitely safer, but my point is that the concerns are overblown. The Go compiler raises an error if a variable (an error) goes unused, and just ignoring it by naming it "_" is obviously dangerous. Yes, Rust makes it easier to never ignore an error, but I don't think I've ever accidentally ignored an error that I shouldn't have in Go.

> The Go compiler raises an error if a variable (an error) goes unused

It doesn't though. It's not a warning or error to not use the return value of a function that only returns an error, for instance (https://go.dev/play/p/se6-zHHVezH).
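
The shape of it is something like this (not necessarily the exact playground snippet), and it compiles without a peep:

    package main

    import (
    	"fmt"
    	"os"
    )

    func mightFail() error {
    	return fmt.Errorf("boom")
    }

    func main() {
    	// Both calls silently drop the returned error; the compiler is
    	// only strict about unused variables, not unused return values.
    	mightFail()
    	os.Chdir("/definitely/not/a/real/dir")

    	fmt.Println("still running")
    }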

There are static analysis tools like https://github.com/kisielk/errcheck that you can use to work around this, but most people don't use them.

I've run into missing error checks in Go many times. Often it's just the trivial case, where the compiler doesn't warn about ignoring the result of an error-returning function.

But often it's subtler, and a result of Go's API design. One example is the file-writing API: to be correct you have to close the file and check the error Close returns. People often just `defer file.Close()`, but that isn't good enough - you're ignoring the error there.
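
The usual workaround is to route the Close error through a named return value, roughly like this (my sketch, one idiom among several):

    package main

    import "os"

    // writeFile surfaces the error from Close via the named return value,
    // so a failed Close isn't silently swallowed by the defer.
    func writeFile(path string, data []byte) (err error) {
    	f, err := os.Create(path)
    	if err != nil {
    		return err
    	}
    	defer func() {
    		if cerr := f.Close(); cerr != nil && err == nil {
    			err = cerr
    		}
    	}()

    	_, err = f.Write(data)
    	return err
    }

    func main() {
    	if err := writeFile("out.txt", []byte("hello\n")); err != nil {
    		panic(err)
    	}
    }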

Worse still is, for example, writing to a file through a bufio.Writer. To be correct, you need to remember to flush the writer, check that error, then close the file and check that error too. There's no type-level support to make sure you do either.
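
Roughly the full dance, as a sketch:

    package main

    import (
    	"bufio"
    	"os"
    )

    func writeLines(path string, lines []string) error {
    	f, err := os.Create(path)
    	if err != nil {
    		return err
    	}

    	w := bufio.NewWriter(f)
    	for _, line := range lines {
    		if _, err := w.WriteString(line + "\n"); err != nil {
    			f.Close() // best effort; report the write error
    			return err
    		}
    	}

    	// Forgetting either of these checks still compiles fine.
    	if err := w.Flush(); err != nil {
    		f.Close()
    		return err
    	}
    	return f.Close()
    }

    func main() {
    	if err := writeLines("out.txt", []string{"a", "b"}); err != nil {
    		panic(err)
    	}
    }

Drop the Flush or the Close check and it still compiles, it mostly still works, and it only bites you when the buffer or the OS decides not to cooperate.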