What does HackerNews think of List-of-Dirty-Naughty-Obscene-and?

"The Colossal Clean Crawled Corpus, used to train a trillion parameter LM in [43], is cleaned, inter alia, by discarding any page containing one of a list of about 400 “Dirty, Naughty, Obscene or Otherwise Bad Words”. This list is overwhelmingly words related to sex, with a handful of racial slurs and words related to white supremacy (e.g. swastika, white power) included. While possibly effective at removing documents containing pornography (and the associated problematic stereotypes encoded in the language of such sites) and certain kinds of hate speech, this approach will also undoubtedly attenuate, by suppressing such words as twink, the influence of online spaces built by and for LGBTQ people. If we filter out the discourse of marginalized populations, we fail to provide training data that reclaims slurs and otherwise describes marginalized identities in a positive light"

from "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? " https://dl.acm.org/doi/10.1145/3442188.3445922

That list of words is https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and...

The good news is that things like https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and... exist, so getting a source of words to filter is easy enough. And converting numbers to letters isn't too bad.

The hardest problem with the implementation was that, with a long list, you can't just scan for a few dozen inappropriate words (as the Twitch implementation does). Running hundreds or even thousands of checks, one per inappropriate word, against every generated string would be very expensive.

The solution we came to was to truncate all the inappropriate words to either 3 or 4 letters and store them in a big set. We then take our generated strings, which are usually 11 characters, and break them up into all possible substrings of lengths 3 and 4. For example, 1a2b3c4d5e6 would be broken down into 1a2 a2b 2b3 b3c 3c4 c4d 4d5 d5e 5e6 1a2b a2b3 2b3c b3c4 3c4d c4d5 4d5e d5e6. An 11-character string always has 17 such substrings (nine of length 3 plus eight of length 4). We then check all 17 against the banned set. Seventeen lookups into a set are pretty cheap, and as we have expanded the word set over time (e.g. adding a new language) our performance hasn't changed.
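A minimal sketch of the approach described above (the function names and the choice to keep both the 3- and 4-letter prefixes of each banned word are my assumptions, not the actual implementation): truncate each banned word, store the truncations in a set, then test every 3- and 4-character substring of a generated string against that set.

```python
def build_banned_set(words):
    """Store the 3- and 4-letter prefixes of every banned word in one set.

    Keeping both prefix lengths is an assumption; the comment above only
    says words are truncated "to either 3 or 4 letters".
    """
    banned = set()
    for word in words:
        word = word.lower()
        if len(word) >= 3:
            banned.add(word[:3])
        if len(word) >= 4:
            banned.add(word[:4])
    return banned


def contains_banned(s, banned):
    """Check all length-3 and length-4 substrings of s against the set.

    For an 11-character string this is 9 + 8 = 17 set lookups, each O(1)
    on average, regardless of how large the banned set grows.
    """
    s = s.lower()
    for n in (3, 4):
        for i in range(len(s) - n + 1):
            if s[i:i + n] in banned:
                return True
    return False
```

Because the check only looks at truncated prefixes, any string containing one of those short fragments is rejected, which is exactly where the false positives mentioned below come from.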

One drawback to our approach is that we do get false positives, but we did the math: our space was still large enough, the cost of generating a new string was low, and customers never see the rejected candidates, so throwing out false positives just isn't a big deal.

I believe this is the word list that the authors are objecting to the use of:

https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and...

The list does seem a bit... umm... oddly specific in places, probably owing to its history of being first compiled for a photo-sharing site.

https://www.technologyreview.com/2020/12/04/1013294/google-a... is a summary of the paper that started it, full of actionable points, and co-authored by Emily Bender, no less!

Also, if you're into the LessWrong mindset and inclined to think that that which can be destroyed by the truth should be, you might find this tidbit interesting:

> Buried in the recent trillion parameter language model paper is how the dataset to train it was created. Any page that contained one of these words was excluded: https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and... Two sample banned words: "twink" and "sex" https://twitter.com/willie_agnew/status/1350551463718621184

> We removed any page that contained any word on the “List of Dirty, Naughty, Obscene or Otherwise Bad Words”. [https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and...]

Looking at that list, I wonder what the unintended consequences of a decision like this are. If you want to do something related to sentiment analysis, the swear words you discarded are a useful signal, not noise, right? If you wanted to use the dataset somehow for your tour guide business in Austria, how does it handle the village called Fucking? Does T5 understand the British colloquialism for cigarettes? Can ornithologists talk to it about penguins and eagles, but not about yellow-bellied tits and blue-footed boobies?