I'm nearly 100% certain the scripts are generated through prompt engineering, with a random prompt (e.g. a "tell a joke" prompt, a "talk about a new restaurant" prompt) being selected for each scene.
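A minimal sketch of what that kind of scene-prompt selection might look like (the prompt list and function name here are my guesses, not their actual code):

```python
import random

# Hypothetical pool of scene prompts; the real list is unknown.
SCENE_PROMPTS = [
    "Write a scene where the characters tell jokes at the diner.",
    "Write a scene where the characters talk about a new restaurant.",
    "Write a scene where the characters argue about parking.",
]

def pick_scene_prompt(rng=random):
    """Pick one prompt at random for the next generated scene."""
    return rng.choice(SCENE_PROMPTS)

print(pick_scene_prompt())
```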
From what I can gather, they first used the older, cheaper GPT-3 models and only upgraded to davinci-003 once it was profitable. The older GPT-3 models proved fine and didn't generate edgy content for the several months they were up and running.
But I think the straw that broke the camel's back was that they added a "2006 Laugh Factory incident with edgy content" prompt and only tested it on the davinci-003 model - the newer models having been wiped clean of antisocial training data, while the older, smaller models still had contentious content encoded in them.
So davinci-003 did fine producing "politically aligned" text with the "edgy" prompt because it's "cleaned", but when the OpenAI API for davinci went down, the fallback was curie. The older, "unclean" curie model combined with an edgy prompt inevitably produced what we saw here.
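If that's right, the failure mode is just a naive model fallback with no per-model safety testing. Here's a generic sketch of the pattern (the stub functions stand in for the davinci/curie API calls; this is my assumption about how their pipeline was wired, not their actual code):

```python
def generate_script(prompt, primary, fallback):
    """Try the primary model; on any API failure, silently fall back.

    The hazard: the fallback model was never tested against the new
    "edgy" prompt, so the safety behavior of `primary` doesn't carry over.
    """
    try:
        return primary(prompt)
    except Exception:
        # davinci outage -> curie, reusing the same untested prompt
        return fallback(prompt)

# Stubs simulating the outage scenario (hypothetical names):
def davinci(prompt):
    raise RuntimeError("503: service unavailable")  # davinci API is down

def curie(prompt):
    return "script from the older, less filtered model"

print(generate_script("edgy 2006 Laugh Factory prompt", davinci, curie))
# -> "script from the older, less filtered model"
```

The fix would be either testing every prompt against every model in the fallback chain, or failing closed (no episode) instead of falling back.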