The spontaneous toxic content is a little alarming, but in the future there will probably be GPT-Ns whose core training data is filtered so that all the insane Reddit comments aren't part of their makeup.
If you filter the dataset to remove anything that might be considered toxic, the model will have much more difficulty understanding humanity as a whole; the solution is alignment, not censorship.
While I share your belief, I am unaware of any proof that such censorship would actually fail as an alignment method.
Nor am I aware of how much impact it would have on capabilities.
Of course, for this to actually work, it would also need to filter out e.g. soap operas, murder mysteries, and action films, lest the model overestimate the frequency and underestimate the impact of homicide.
from "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? " https://dl.acm.org/doi/10.1145/3442188.3445922
That list of words is https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and...
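For context on how a list like that ends up shaping a corpus, here is a minimal sketch of document-level blocklist filtering, the blunt approach being debated above: a single listed word drops the whole page. The function names and the file path are illustrative, not taken from any particular pipeline.

    import re

    def load_blocklist(path: str) -> set[str]:
        # One term per line, lowercased; the linked repo ships one such file per language.
        with open(path, encoding="utf-8") as f:
            return {line.strip().lower() for line in f if line.strip()}

    def contains_blocked_term(doc: str, blocklist: set[str]) -> bool:
        # Whole-word match only: deliberately naive, so multi-word entries and
        # obfuscated spellings slip through, while any single hit condemns the page.
        tokens = set(re.findall(r"[a-z']+", doc.lower()))
        return any(term in tokens for term in blocklist)

    def filter_corpus(docs: list[str], blocklist: set[str]) -> list[str]:
        # Drop every document that triggers the blocklist; keep everything else.
        return [d for d in docs if not contains_blocked_term(d, blocklist)]

    # Usage (hypothetical file name for a local copy of the English list):
    # blocklist = load_blocklist("en.txt")
    # clean_docs = filter_corpus(raw_docs, blocklist)

Because the decision is per-document and context-blind, one flagged word removes the entire page regardless of what the rest of it says, which is roughly the bluntness the earlier comments are objecting to.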