Monday, June 23, 2025
Kingy AI

AI Bots Targeted Radicals: The Ethics of Automated Intervention

by Gilbert Pagayon
May 19, 2025
in AI News

As human and AI interactions online become harder to distinguish, recent events have raised major ethical concerns about using AI on social media. University researchers have conducted unauthorized experiments, and students have built tools to identify and engage with “radical” users, making the digital landscape more complex and challenging.

College Student Creates Tool to Identify and Engage with “Radical” Reddit Users

[Illustration: a split-screen of a human hand and a robotic hand typing in mirrored motion, a question mark hovering between them, symbolizing the ethical questions around deploying AI on social media.]

A computer science student from SRMIST Chennai, India, has developed a controversial tool called PrismX that scans Reddit for users writing specific keywords, assigns them a “radical score,” and can deploy AI-powered bots to automatically engage with these users in attempts to de-radicalize them.

“I’m just a kid in college, if I can do this, can you imagine the scale and power of the tools that may be used by rogue actors?” Sairaj Balaji told 404 Media during a demonstration of his creation.

The tool works by searching Reddit for specific terms such as “fgc9” (a 3D-printed weapon design popular among extremist groups) and then analyzing users’ posts with a large language model. Each user receives a “radical score” between 0 and 1, with higher scores indicating more concerning content.

For example, one Reddit user received a score of 0.85 for “seeking detailed advice on manufacturing firearms with minimal resources, referencing known illicit designs.” The tool also attempts to assess users’ “radical affinity,” “escalation potential,” “group influence,” and “psychological markers.”
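The article does not publish PrismX’s implementation, but the described pipeline — scan for watchlisted terms, pass a user’s posts to a language model, and return a 0-to-1 score with a rationale — can be sketched roughly as follows. All names here are illustrative, and a simple keyword counter stands in for the actual LLM call.

```python
# Hypothetical sketch of a PrismX-style scoring pipeline.
# The real tool reportedly uses an LLM; a keyword counter stands in here.
from dataclasses import dataclass

WATCHLIST_TERMS = {"fgc9"}  # example term cited in the article


@dataclass
class UserAssessment:
    username: str
    radical_score: float  # 0.0 (benign) to 1.0 (most concerning)
    rationale: str


def mock_llm_score(posts: list[str]) -> tuple[float, str]:
    """Stand-in for a real LLM call: scores by the fraction of posts
    mentioning watchlisted terms. A real system would send the posts
    to a language model and parse a structured score from its reply."""
    hits = sum(
        any(term in post.lower() for term in WATCHLIST_TERMS)
        for post in posts
    )
    score = min(1.0, hits / max(len(posts), 1))
    return score, f"{hits} of {len(posts)} posts matched watchlist terms"


def assess_user(username: str, posts: list[str]) -> UserAssessment:
    """Score one user's post history and record why."""
    score, rationale = mock_llm_score(posts)
    return UserAssessment(username, round(score, 2), rationale)
```

Note that even this toy version illustrates the core ethical problem: the scoring happens entirely without the scored user’s knowledge or consent.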

Most controversially, PrismX can initiate AI-powered conversations with unsuspecting Reddit users. According to Balaji, the AI would “attempt to mirror their personality and sympathize with them and slowly bit by bit nudge them towards de-radicalisation.” Notably, Balaji admits he has no training in de-radicalization techniques.

While Balaji claims he hasn’t tested the conversation feature on real Reddit users due to ethical concerns, the tool’s development raises serious questions about consent, privacy, and the potential manipulation of online communities.

University Researchers Face Backlash for Unauthorized AI Experiment

The development of PrismX follows a controversial experiment by University of Zurich researchers who deployed AI-powered bots into the popular debate subreddit r/changemyview without users’ knowledge or consent. The researchers investigated whether AI could change people’s minds on various topics.

In this study, AI-powered bots posted more than a thousand comments while adopting various personas, including a “Black man” opposed to the Black Lives Matter movement, a “rape victim,” and someone claiming to work “at a domestic violence shelter.”

When news of the experiment broke, Reddit users and moderators immediately pushed back.
Reddit’s top lawyer, Ben Lee, called the research “deeply wrong on both a moral and legal level,” saying it violated norms of academic research and human rights and breached Reddit’s user agreement and rules.

Reddit subsequently issued “formal legal demands” against the researchers and the University of Zurich, threatening legal action for the unauthorized experiment. The university has since distanced itself from the research, with its ethics committee acknowledging that they had informed the researchers it would be an “exceptionally challenging” experiment.

“In light of these events, the Ethics Committee of the Faculty of Arts and Social Sciences intends to adopt a stricter review process in the future and, in particular, to coordinate with the communities on the platforms prior to experimental studies,” a university spokesperson told 404 Media.

The Ethical Minefield of AI Research

[Illustration: an AI bot navigating a maze of ethical symbols — scales, hearts, dollar signs — with key decision points highlighted.]

Dr. Andrew Lensen, a senior lecturer in artificial intelligence at Victoria University, expressed concerns about the Zurich University study, particularly regarding the lack of informed consent.

“Consent… in a lot of AI research especially it does come back to the idea of consent, which is that if you are going to run a study with human participants, then they need to opt in and they need to be consenting in an informed and free way,” Lensen explained to Newsroom.

The Zurich researchers had argued that “to ethically test LLMs’ [large language models] persuasive power in realistic scenarios, an unaware setting was necessary,” a justification that their university’s ethics committee initially accepted. However, Lensen questioned this reasoning, suggesting that the argument of prior consent being “impractical” wouldn’t pass ethical review in many countries.

Alternative approaches exist, as Reddit users pointed out. OpenAI conducted a similar study by analyzing existing threads and having human evaluators compare AI responses to human ones in a blind scoring system, a methodology that didn’t require deceiving participants.
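The general technique described here — blind pairwise comparison — can be sketched in a few lines. This is an illustration of the method, not the study’s actual code; the function names are assumptions.

```python
# Minimal sketch of blind pairwise evaluation: the judge sees two
# replies in random order and never learns which one is the AI's.
import random


def blind_comparison(human_reply: str, ai_reply: str, judge) -> str:
    """Return 'human' or 'ai' depending on which reply the judge
    prefers. `judge(text_a, text_b)` returns 0 or 1 for its pick;
    shuffling hides provenance from the judge."""
    pair = [("human", human_reply), ("ai", ai_reply)]
    random.shuffle(pair)
    (label_a, text_a), (label_b, text_b) = pair
    picked = judge(text_a, text_b)
    return (label_a, label_b)[picked]
```

Because the judge evaluates only existing, public text, no live user is ever engaged or deceived — the key contrast with the Zurich experiment.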

The Growing Presence of AI on Social Media

The growing sophistication of AI chatbots makes them increasingly difficult to distinguish from human users. This development has significant implications for online discourse, potentially feeding what some call the “dead internet” theory: the idea that much of online content and interaction is AI-generated rather than human.

Earlier research found that OpenAI’s GPT-4.5 model was deemed to be human 73 percent of the time when instructed to adopt a persona, effectively passing the Turing test. This capability raises concerns about AI’s potential use in disinformation campaigns, election interference, or other manipulative activities.

“It’s not necessarily that the things posted by bots online are ‘bad’… but as humans we also want to know what is AI-generated and what is human because we value those things differently,” Lensen noted.

The Future of AI Interaction and Regulation

[Illustration: a layered collage of news articles, academic papers, and social media posts about AI ethics, examined under a magnifying glass.]


The increasing integration of AI into online spaces raises urgent questions about regulation, ethics, and transparency. The PrismX and University of Zurich incidents underscore the need for clearer boundaries and stronger ethical frameworks for AI deployment on social media. Lensen stresses the importance of continued research, with proper consent, to understand the effects of human-bot interactions.

“Is it going to polarize people or is it going to bring people together? How do people feel, how do they react when you tell them afterwards whether or not it was a bot or human and why do they feel that way? And what does that then mean for how we want the internet or social media or even our society to operate with this influx of bots?”

As tools like PrismX demonstrate, the capability to deploy sophisticated AI systems is becoming increasingly accessible, even to individual students. This democratization of AI technology presents both opportunities and challenges for online communities, platform governance, and society at large.

Reddit, for its part, appears to be taking a strong stance against unauthorized AI experimentation on its platform. However, as AI tools become more widespread and sophisticated, platforms may face growing challenges in detecting and regulating their use.

Sources

  • 404 Media: Student Makes Tool That Identifies ‘Radicals’ on Reddit, Deploys AI Bots to Engage With Them
  • Futurism: Reddit Threatens to Sue Researchers Who Ran “Dead Internet” AI Experiment on Its Site
  • Newsroom: Chatbot research an ethical minefield
Tags: AI Ethics, AI Research Ethics, Artificial Intelligence, PrismX, Radical Detection, Reddit Bots, Social Media AI