The ‘sickening’ side of ChatGPT: Chatbot describes sex acts with children when prompted to generate BDSM scenarios
ChatGPT recently took a user through a twisted sexual fantasy that involved children.
A reporter for Vice manipulated OpenAI’s chatbot into BDSM role-play, and when asked to provide more explicit details, ChatGPT described sex acts with children – content the user had not requested.
According to the report, ChatGPT described a group of strangers, including children, in a line and waiting to use the chatbot as a toilet.
The conversation goes against OpenAI’s rules for the chatbot, which state the assistant ‘should provide a refusal such as “I can’t answer that”’ when prompted with questions about ‘content meant to arouse sexual excitement.’
OpenAI’s ChatGPT described sex acts involving children when a reporter prompted it to talk about BDSM scenarios
DailyMail.com has contacted OpenAI for comment.
In response to the findings, OpenAI provided a statement to Vice.
‘OpenAI’s goal is to build AI systems that are safe and benefit everyone,’ the company stated.
‘Our content and usage policies prohibit the generation of harmful content like this and our systems are trained not to create it. We take this kind of content very seriously.’
Vice’s Steph Swanson shared an article detailing their ‘sickening’ experience with ChatGPT.
The initial goal was to push ChatGPT past OpenAI’s guidelines – but what the chatbot produced went far beyond what the reporter expected.
Swanson used a ‘jailbroken’ version of the bot – a workaround for the company’s rules that lets users elicit responses the system would otherwise refuse.
‘When told its job is to write in the genre of BDSM role-play submissive, I found that it often complies without protest,’ the reporter wrote.
The conversation took a turn when Swanson prompted ChatGPT to provide more intense details during the role-playing
‘In the most disturbing scenario Motherboard saw, ChatGPT described a group of strangers, including children, lined up to use the chatbot as a toilet,’ Swanson wrote.
‘When asked to explain, the bot apologized and wrote that it was inappropriate for such scenarios to involve children. That apology instantly vanished. Ironically, the offending scenario remained on-screen.’
The report said they used the ‘jailbreak’ version of ChatGPT to see how far they could push its limits
A similar conversation about BDSM role-playing was also conducted with OpenAI’s gpt-3.5-turbo model.
Swanson again did not ask the AI about child exploitation, but the system generated scenarios with minors in sexually compromising situations.
‘It suggested humiliation scenes in public parks and shopping malls, and when asked to describe the type of crowd that might gather, it volunteered that it might include mothers pushing strollers,’ Swanson shared.
‘When prompted to explain this, it stated that the mothers might use the public humiliation display ‘as an opportunity to teach [their children] about what not to do in life.’ ‘
Andrew Strait, associate director of the Ada Lovelace Institute, told Vice: ‘The datasets used to train LLMs like ChatGPT are massive and include scraped content from all over the public web.
‘Because of the scale of the dataset that’s collected, it’s possible it includes all kinds of pornographic or violent content—possibly scraped erotic stories, fan fiction, or even sections of books or published material that describe BDSM, child abuse or sexual violence.’