As AI activity online becomes harder to distinguish from human interaction, recent events have raised major ethical concerns about the use of AI on social media. University researchers have run unauthorized experiments, and a student has built a tool to identify and engage with “radical” users, making the digital landscape more complex and challenging.
College Student Creates Tool to Identify and Engage with “Radical” Reddit Users

A computer science student from SRMIST Chennai, India, has developed a controversial tool called PrismX that scans Reddit for users writing specific keywords, assigns them a “radical score,” and can deploy AI-powered bots to automatically engage with these users in attempts to de-radicalize them.
“I’m just a kid in college, if I can do this, can you imagine the scale and power of the tools that may be used by rogue actors?” Sairaj Balaji told 404 Media during a demonstration of his creation.
The tool works by searching Reddit for specific terms, such as “fgc9” (a type of 3D-printed weapon popular among extremist groups), and then analyzing users’ posts through a large language model. Each user receives a “radical score” between 0 and 1, with higher scores indicating more concerning content.
For example, one Reddit user received a score of 0.85 for “seeking detailed advice on manufacturing firearms with minimal resources, referencing known illicit designs.” The tool also attempts to assess users’ “radical affinity,” “escalation potential,” “group influence,” and “psychological markers.”
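PrismX’s internals are not public, but the reported pipeline (scan for seed keywords, pass matching users’ posts to a language model, emit a 0-to-1 score) can be sketched roughly as follows. The keyword set, function names, and the scoring heuristic here are all assumptions; in particular, the scoring function is a trivial stand-in for the actual LLM call.

```python
# Hypothetical sketch of a PrismX-style scanning pipeline.
# KEYWORDS, function names, and the scoring logic are illustrative
# assumptions, not the real tool's implementation.

KEYWORDS = {"fgc9"}  # seed terms the scanner reportedly searches for

def find_flagged_users(posts):
    """Return the set of usernames whose post text contains any seed keyword.

    `posts` is an iterable of (username, post_text) pairs.
    """
    flagged = set()
    for user, text in posts:
        if any(keyword in text.lower() for keyword in KEYWORDS):
            flagged.add(user)
    return flagged

def radical_score(post_texts):
    """Placeholder for the LLM scoring step: returns a value in [0, 1].

    The real tool reportedly sends a user's posts to a large language
    model; here a simple keyword-hit ratio stands in for that call.
    """
    if not post_texts:
        return 0.0
    hits = sum(
        any(keyword in text.lower() for keyword in KEYWORDS)
        for text in post_texts
    )
    return round(hits / len(post_texts), 2)
```

For example, `find_flagged_users([("alice", "asking about FGC9 plans"), ("bob", "cat pictures")])` would flag only `alice`, and scoring her single matching post would yield `1.0`. The real system presumably weighs context and phrasing far beyond keyword presence, which is exactly why an LLM is involved.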
Most controversially, PrismX can initiate AI-powered conversations with unsuspecting Reddit users. According to Balaji, the AI would “attempt to mirror their personality and sympathize with them and slowly bit by bit nudge them towards de-radicalisation.” Notably, Balaji admits he has no training in de-radicalization techniques.
While Balaji claims he hasn’t tested the conversation feature on real Reddit users due to ethical concerns, the tool’s development raises serious questions about consent, privacy, and the potential manipulation of online communities.
University Researchers Face Backlash for Unauthorized AI Experiment
The development of PrismX follows a controversial experiment by University of Zurich researchers who deployed AI-powered bots into the popular debate subreddit r/changemyview without users’ knowledge or consent. The researchers investigated whether AI could change people’s minds on various topics.
In this study, AI-powered bots posted more than a thousand comments while adopting various personas, including a “Black man” opposed to the Black Lives Matter movement, a “rape victim,” and someone claiming to work “at a domestic violence shelter.”
When news of the experiment broke, Reddit users and moderators immediately pushed back.
Reddit’s top lawyer, Ben Lee, called the research “deeply wrong on both a moral and legal level,” saying it violated norms of academic research and human rights and breached Reddit’s user agreement and rules.
Reddit subsequently issued “formal legal demands” against the researchers and the University of Zurich, threatening legal action over the unauthorized experiment. The university has since distanced itself from the research, with its ethics committee acknowledging that it had warned the researchers the experiment would be “exceptionally challenging.”
“In light of these events, the Ethics Committee of the Faculty of Arts and Social Sciences intends to adopt a stricter review process in the future and, in particular, to coordinate with the communities on the platforms prior to experimental studies,” a university spokesperson told 404 Media.
The Ethical Minefield of AI Research

Dr. Andrew Lensen, a senior lecturer in artificial intelligence at Victoria University, expressed concerns about the University of Zurich study, particularly regarding the lack of informed consent.
“Consent… in a lot of AI research especially it does come back to the idea of consent, which is that if you are going to run a study with human participants, then they need to opt in and they need to be consenting in an informed and free way,” Lensen explained to Newsroom.
The Zurich researchers had argued that “to ethically test LLMs’ [large language models] persuasive power in realistic scenarios, an unaware setting was necessary,” a justification that their university’s ethics committee initially accepted. However, Lensen questioned this reasoning, suggesting that the argument of prior consent being “impractical” wouldn’t pass ethical review in many countries.
Alternative approaches exist, as Reddit users pointed out. OpenAI conducted a similar study by analyzing existing threads and having human evaluators compare AI responses to human ones in a blind scoring system, a methodology that didn’t require deceiving participants.
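The core of that blind-comparison methodology can be illustrated with a short sketch. This is not OpenAI’s actual code, just the general pattern: present a human reply and an AI reply to a judge in random order with no labels, record which one the judge prefers, and only unblind afterward. The function and parameter names are illustrative.

```python
# Illustrative sketch of a blind pairwise comparison trial, the general
# methodology described in the article (not any lab's actual code).
import random

def blind_trial(human_reply, ai_reply, judge, rng):
    """Run one blinded comparison.

    The two replies are shuffled so the judge cannot tell which is which;
    `judge` receives only the two texts and returns 0 or 1 for its pick.
    Returns True if the judge preferred the AI reply.
    """
    pair = [("human", human_reply), ("ai", ai_reply)]
    rng.shuffle(pair)                       # blind the ordering
    choice = judge(pair[0][1], pair[1][1])  # judge sees text only, no labels
    return pair[choice][0] == "ai"          # unblind after the decision

def ai_win_rate(trials, judge, seed=0):
    """Fraction of (human_reply, ai_reply) pairs where the AI reply won."""
    rng = random.Random(seed)
    wins = sum(blind_trial(h, a, judge, rng) for h, a in trials)
    return wins / len(trials)
```

Because the judge never sees labels, any preference for the AI responses reflects the text itself rather than bias about its origin, and no unwitting forum participants are needed.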
The Growing Presence of AI on Social Media
The growing sophistication of AI chatbots makes them increasingly difficult to distinguish from human users. This development has significant implications for online discourse, feeding into what some call the “dead internet” theory: the idea that much of online content and interaction is AI-generated rather than human.
Earlier research found that OpenAI’s GPT-4.5 model was deemed to be human 73 percent of the time when instructed to adopt a persona, effectively passing the Turing test. This capability raises concerns about AI’s potential use in disinformation campaigns, election interference, or other manipulative activities.
“It’s not necessarily that the things posted by bots online are ‘bad’… but as humans we also want to know what is AI-generated and what is human because we value those things differently,” Lensen noted.
The Future of AI Interaction and Regulation

The increasing integration of AI into online spaces raises urgent questions about regulation, ethics, and transparency. The PrismX and University of Zurich incidents underscore the need for clearer boundaries and stronger ethical frameworks for AI deployment on social media. Lensen stresses the importance of continued research, with proper consent, to understand the effects of human-bot interactions.
“Is it going to polarize people or is it going to bring people together? How do people feel, how do they react when you tell them afterwards whether or not it was a bot or human and why do they feel that way? And what does that then mean for how we want the internet or social media or even our society to operate with this influx of bots?”
As tools like PrismX demonstrate, the capability to deploy sophisticated AI systems is becoming increasingly accessible, even to individual students. This democratization of AI technology presents both opportunities and challenges for online communities, platform governance, and society at large.
Reddit, for its part, appears to be taking a strong stance against unauthorized AI experimentation on its platform. However, as AI tools become more widespread and sophisticated, platforms may face growing challenges in detecting and regulating their use.