I'm glad that Google is part of this conversation and that they're now applying bias tests to the new models they release. (Some of their old models are pretty awful.)

If you want a further example, here's a tutorial I wrote a while ago [1]: a Jupyter notebook demonstrating how entirely straightforward NLP techniques produce a racist model.

[1] http://blog.conceptnet.io/posts/2017/how-to-make-a-racist-ai...
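To make the failure mode concrete, here's a minimal sketch of the kind of pipeline that tutorial walks through: fit a sentiment classifier on off-the-shelf word embeddings, then ask it to score words that carry no inherent sentiment. The GloVe filename, the tiny lexicon, and the example names are stand-ins of mine (the notebook uses a full sentiment lexicon); treat this as illustrative, not as the notebook verbatim.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def load_embeddings(path):
        # Parse a GloVe-style text file into a {word: vector} dict.
        vectors = {}
        with open(path, encoding='utf-8') as f:
            for line in f:
                parts = line.rstrip().split(' ')
                vectors[parts[0]] = np.array(parts[1:], dtype='float32')
        return vectors

    embeddings = load_embeddings('glove.42B.300d.txt')  # assumed download

    # Toy stand-in for a real sentiment lexicon.
    positive = ['good', 'great', 'excellent', 'happy', 'love']
    negative = ['bad', 'terrible', 'awful', 'sad', 'hate']

    X = np.vstack([embeddings[w] for w in positive + negative])
    y = np.array([1] * len(positive) + [0] * len(negative))
    clf = LogisticRegression().fit(X, y)

    def sentiment(word):
        # Log-odds that a single word is "positive".
        p_neg, p_pos = clf.predict_proba(embeddings[word].reshape(1, -1))[0]
        return np.log(p_pos / p_neg)

    # First names carry no inherent sentiment, so these scores should
    # be roughly equal. With stock embeddings, they usually aren't.
    for name in ['emily', 'shaniqua']:
        print(name, sentiment(name))

Nothing in that code mentions race; the bias arrives entirely through the embeddings' training corpus, which is what makes it so easy to ship by accident.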

For those who aren't aware, @rspeer has been taking this problem seriously for years.

His ConceptNet Numberbatch embeddings [1] are one of the few pre-built releases that attempt to correct for this bias.

[1] https://github.com/commonsense/conceptnet-numberbatch
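For anyone who wants to try them, the English-only Numberbatch release is distributed in word2vec text format, so it loads with standard tooling. A minimal sketch, assuming the numberbatch-en.txt.gz file from that repo (the exact filename varies by release):

    from gensim.models import KeyedVectors

    # Load the English-only release (word2vec text format, gzipped).
    vectors = KeyedVectors.load_word2vec_format('numberbatch-en.txt.gz',
                                                binary=False)
    print(vectors.most_similar('prejudice', topn=5))

Rerunning the name-sentiment check above with these vectors in place of GloVe should show a much smaller gap between the names, which is the point of the de-biasing step in their build process.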