The way the wind's blowing, we'll have a GPT-4 level open source model within the next few years - and probably "unaligned" too. I cannot wait to ask it how to make nuclear weapons and psychedelic drugs, and to write erotica. If anyone has any other ideas to scare the AI safety ninnies, I'm all ears.
Isn't it possible to jailbreak GPT-4 with a prompt of some kind?