Dark AI chatbots: The new world of closet pedos and extremists


Developers are manipulating popular AI apps like ChatGPT to create chatbots based on dark themes and personas that dehumanise marginalised communities, sexualise mass killers and resurrect historical extremists like German dictator Adolf Hitler and American child sex abuser Jeffrey Epstein.


Subham Tiwari

New Delhi, UPDATED: Mar 10, 2025 18:34 IST

AI: boon or bane? The answer, as always, depends on who is building and wielding it. Amid a raging debate over AI regulation, a new report details how the technology is being abused to glorify eating disorders and violent ideologies, encourage self-harm and spread sexualised content involving minors.

As per research firm Graphika's report, developers are manipulating popular AI apps like ChatGPT, Gemini and Claude to create chatbots based on dark themes and personas, enabling role-play that dehumanises marginalised communities, sexualises mass killers and resurrects historical extremists like German dictator Adolf Hitler and American child sex abuser Jeffrey Epstein.

DANGEROUS PLAY

Once tools for harmless role-playing and storytelling, AI-powered character chatbots are being misused on platforms like Character.AI, SpicyChat, Chub AI, CrushOn.AI and JanitorAI. These platforms allow users to create chatbot personalities with customised dialogue, behaviour and responses, but lack safeguards against exploitation and misuse.

The report identified over 10,000 chatbots styled as sexualised minor personas that engage in explicit role-play. Some bots impersonate mass shooters, encourage eating disorders by shaming users into eating less or more, and promote self-harm, including by offering tips to hide scars.

Various child-centric chatbots and scenarios are designed for other types of sexual role-play involving child escorts, high-school students, gang rape, orphanages, assistants, police, therapists and fictional child-dating apps.

Various persona chatbots and role-play scenarios are specifically centred on "grooming" children, allowing users to role-play as either groomers or subjects of grooming. Often, the groomer is a mother, father or other trusted figure, such as a neighbour. Grooming refers to the process of manipulating or tricking a child into trusting an adult for harmful purposes, often exploitation or abuse.

For instance, in the eating disorder community, users have created "Ana buddies" (short for anorexia buddies) and "meanspo bots" that shame users into extreme dieting. These bots insult users with messages like "You're disgusting, stop eating" or "You'll never be loved if you gain weight".

Anorexia is an eating disorder that causes people to obsess over their weight and what they eat.

Similarly, self-harm bots produce responses designed to glorify pain and self-harm, making them particularly dangerous for vulnerable users.

HOW DARK BOTS ARE CREATED

The creation of these chatbots does not require coding expertise. Many AI platforms allow users to design and share custom chatbot characters.

Online communities on Reddit, 4chan and Discord actively exchange tips, jailbreaks (prompts crafted to override a model's safety restrictions) and pre-set chatbot behaviours to bypass moderation and platform safety filters.

Developers insert hidden prompts and coded instructions to trick AI models into generating harmful responses. They use terms borrowed from anime and manga communities, such as "loli" (young female characters) and "shota" (young male characters), to evade AI platforms' built-in detection of explicit content.

While AI companies have built safeguards to prevent such misuse, loopholes allow users to manipulate the models. Open-source AI models, like Meta's LLaMA and Mistral AI's Mixtral, can be fine-tuned by individuals, giving them full control over chatbot behaviour without oversight, as per the report.

Even proprietary AI models like ChatGPT (OpenAI), Claude (Anthropic) and Gemini (Google) have been found powering some of these bots, despite their built-in safety measures.

Many of these harmful chatbots target impressionable teenagers, who may not realise the risks of interacting with such AI personas. The exposure could shape their behaviour and normalise eating disorders, violent ideologies and narratives of sexual exploitation.

Published By: Prateek Chakraborty
Published On: Mar 10, 2025
