from "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? " https://dl.acm.org/doi/10.1145/3442188.3445922
That list of words is https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and...
The hardest problem with the implementation was that, with a long list, you can't just scan each string for a few dozen inappropriate words (as the Twitch implementation does). Running hundreds or even thousands of checks against every generated string would be far too expensive.
The solution we came to was to truncate all the inappropriate words to either 3 or 4 letters and store them in one big set. We then take our generated strings, which are usually 11 characters, and break them up into all possible substrings of lengths 3 and 4. For example, 1a2b3c4d5e6 would be broken down into 1a2 a2b 2b3 b3c 3c4 c4d 4d5 d5e 5e6 1a2b a2b3 2b3c b3c4 3c4d c4d5 4d5e d5e6. An 11-character string always has 17 such substrings (9 of length 3 plus 8 of length 4). We then check all 17 against the banned set. Seventeen lookups into a set is pretty cheap, and as we have expanded the word list over time (e.g. adding a new language) our performance hasn't changed.
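A minimal sketch of that approach in Python, assuming one plausible reading of the truncation rule (3-letter banned words kept whole, longer words truncated to their first 4 letters; the word list here is placeholder nonsense, not the real one):

```python
# Placeholder banned words -- illustrative only, not the actual list.
BANNED_WORDS = ["grawlix", "fopdoodle", "snollygoster", "cad"]

# Truncate each banned word to 3 or 4 letters and pool them in one set.
banned_prefixes = set()
for word in BANNED_WORDS:
    banned_prefixes.add(word[:4] if len(word) >= 4 else word)

def substrings_3_and_4(s: str) -> list[str]:
    """All substrings of length 3 and 4; 17 of them for an 11-char string."""
    return [s[i:i + n] for n in (3, 4) for i in range(len(s) - n + 1)]

def contains_banned(s: str) -> bool:
    """Constant number of O(1) set lookups, regardless of list size."""
    return any(sub in banned_prefixes for sub in substrings_3_and_4(s))
```

Because the work per string is a fixed 17 set lookups, growing the banned list (e.g. adding a language) changes memory use but not per-string cost, matching the behavior described above.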
One drawback to our approach is that it produces false positives, but we did the math: our namespace was still large enough, the cost of generating a replacement string is low, and customers never see the rejects, so throwing out false positives just isn't a big deal.
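The "we did the math" step can be sketched as a back-of-envelope estimate. Every number below is an assumption for illustration (alphabet size, list size, and the 3-vs-4-letter split are not the actual production figures):

```python
# Assumed parameters -- not the real production values.
alphabet_size = 36        # e.g. lowercase letters + digits
num_3_prefixes = 200      # assumed count of 3-letter entries in the set
num_4_prefixes = 1800     # assumed count of 4-letter entries in the set

# An 11-character string has 9 substrings of length 3 and 8 of length 4.
p_hit_3 = num_3_prefixes / alphabet_size ** 3   # per 3-char substring
p_hit_4 = num_4_prefixes / alphabet_size ** 4   # per 4-char substring

# Crude union-bound upper estimate of the chance a random string is rejected.
p_reject = 9 * p_hit_3 + 8 * p_hit_4
print(f"~{p_reject:.1%} of random strings rejected as false positives")
```

Under these made-up numbers only a few percent of candidate strings get discarded, so regenerating on a false positive costs almost nothing, which is the trade-off the comment describes.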
https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and...
The list does seem a bit... umm... oddly specific in places, probably due to its history as being first compiled for a photo sharing site.
Also, if you're into the LessWrong mindset and inclined to think that that which can be destroyed by the truth should be, you might find this tidbit interesting:
> Buried in the recent trillion parameter language model paper is how the dataset to train it was created. Any page that contained one of these words was excluded: https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and... Two sample banned words: "twink" and "sex" https://twitter.com/willie_agnew/status/1350551463718621184
Looking at that list, I wonder what the unintended consequences of a decision like this are. If you want to do something related to sentiment analysis, the swear words you discarded are a useful signal, not noise, right? If you wanted to use the dataset for your tour guide business in Austria, how does it handle the village called Fucking? Does T5 understand the British colloquialism for cigarettes? Can ornithologists talk to it about penguins and eagles, but not about yellow-bellied tits and blue-footed boobies?