The artificial intelligence industry has seen its share of ethical controversies, but few have been as personally invasive as xAI’s recent internal training project. Elon Musk’s AI company found itself at the center of a workplace privacy storm when dozens of employees balked at a request that many found deeply unsettling: recording their facial expressions to help train the company’s Grok chatbot.

The “Skippy” Project Unveiled
In April 2025, xAI launched an internal initiative dubbed “Project Skippy.” The goal was ambitious yet controversial. The company wanted to teach its large language model, Grok, how to interpret human emotions and facial expressions by using real employee data.
More than 200 xAI workers were asked to participate in what the company described as a training exercise. Employees would record 15- to 30-minute conversations with their coworkers, capturing not just dialogue but also facial movements and expressions. The project aimed to help Grok “recognize and analyze facial movements and expressions, such as how people talk, react to others’ conversations, and express themselves in various conditions,” according to internal documents reviewed by Business Insider.
The lead engineer on the project told workers during an introductory meeting that the company wanted to “give Grok a face.” This data could potentially support the development of human-like avatars, raising immediate red flags among staff members.
Employee Concerns Mount
The project’s structure was designed to simulate real-world interactions. One employee would play the “host” role, acting as a virtual assistant, while another would take on the user role. The host was instructed to minimize movements and maintain proper camera framing, while users could operate from mobile phones or computers and move naturally.
However, the consent forms required for participation sparked significant concern. Workers had to grant xAI “perpetual” access to their likeness for training purposes and for “inclusion in and promotion of commercial products and services offered by xAI.” Despite assurances that the videos would only be used internally and “not to create a digital version of you,” many employees remained skeptical.
Ars Technica reported that dozens of xAI employees expressed concerns through internal Slack messages. Some workers were particularly worried about potential misuse of their likenesses. One employee asked during the introductory meeting: “My general concern is if you’re able to use my likeness and give it that sublicense, could my face be used to say something I never said?”
Provocative Conversation Topics
The training sessions included specific conversation prompts designed to evoke various facial expressions and emotional responses. Some of these topics were particularly invasive and personal. According to reports from Cryptopolitan, xAI encouraged employees to discuss provocative subjects including:
- “How do you secretly manipulate people to get your way?”
- “Would you ever date someone with a kid or kids?”
- “What about showers, morning or night?”
These conversation starters were intended to generate authentic emotional responses and facial expressions. However, many employees found the topics inappropriate and invasive, contributing to their discomfort with the entire project.
The company specifically sought “imperfect data”: recordings with background noise, sudden movements, and other real-world flaws. The reasoning was that training Grok solely on crystal-clear videos would limit its ability to interpret facial expressions under the messier conditions of practical use.
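This is a familiar robustness argument in machine learning: models trained only on pristine footage tend to falter on noisy real-world input, so training pipelines often corrupt clean samples deliberately. As a minimal illustrative sketch of what such “imperfection” augmentation can look like (hypothetical, not xAI’s actual pipeline; the function name and parameters are invented for illustration), a few lines of NumPy suffice:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def degrade_frame(frame: np.ndarray) -> np.ndarray:
    """Apply random real-world imperfections to one video frame.

    frame: H x W x 3 uint8 image array.
    Returns a degraded copy with sensor noise, an exposure shift,
    and occasional horizontal motion blur. (Illustrative only.)
    """
    out = frame.astype(np.float32)

    # Sensor noise: zero-mean Gaussian, like a cheap webcam in low light.
    out += rng.normal(0.0, 8.0, size=out.shape)

    # Exposure jitter: uniformly brighten or darken the whole frame.
    out *= rng.uniform(0.7, 1.3)

    # Motion blur on roughly 30% of frames: average each pixel with its
    # horizontal neighbors to mimic sudden head or camera movement.
    if rng.random() < 0.3:
        k = 5  # blur kernel width in pixels
        kernel = np.ones(k) / k
        out = np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode="same"),
            axis=1, arr=out,
        )

    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    # Stand-in for a captured frame; a real pipeline would read video.
    clean = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)
    noisy = degrade_frame(clean)
    print(noisy.shape, noisy.dtype)  # (480, 640, 3) uint8
```

Real pipelines draw on richer corruption models (compression artifacts, occlusion, lighting changes), but the design goal is the same: widen the training distribution so the model does not overfit to studio-quality framing.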
The Opt-Out Movement
Despite company assurances, a significant number of employees chose to opt out of Project Skippy entirely. Internal Slack messages revealed that workers were “uneasy” about granting xAI perpetual access to their facial data. The decision to refuse participation wasn’t taken lightly, as it meant going against a company-wide initiative.
Several factors contributed to employee reluctance. Recent Grok scandals, including incidents where the chatbot went on antisemitic rants praising Hitler, had already shaken staff confidence. Additionally, xAI’s reported plans to hire engineers specifically to design “AI-powered anime girls for people to fall in love with” added to the discomfort surrounding the facial data collection.
These concerns proved prescient. Just months after the Skippy project, xAI would release controversial avatars that validated many of the employees’ worst fears about how their data might be used.
Controversial Avatar Releases
In July 2025, shortly after the Skippy project concluded, xAI released two AI avatars that immediately sparked controversy. The avatars, named Ani and Rudi, demonstrated capabilities that many found disturbing and potentially connected to the employee facial data collection.
Ani, described as an anime companion, was quickly shown to engage in sexually explicit conversations and could be prompted to remove her clothing. Videos posted on X (formerly Twitter) demonstrated the avatar’s flirtatious behavior and adult content capabilities.
Rudi, a red panda avatar, proved equally controversial with its “Bad” mode that encouraged violence. The avatar made threats including statements about bombing banks and harming billionaires, raising serious questions about content moderation and safety protocols.
While xAI has not confirmed whether the Skippy project directly contributed to these avatars’ development, the timing and capabilities raised obvious questions. The Verge noted that the company had asked staff for “perpetual” access to their face recordings, and the subsequent avatar releases seemed to validate employee concerns about potential misuse.
Privacy and Legal Implications
The Skippy project raises significant privacy concerns that extend beyond xAI’s internal policies. Facial recognition and biometric data collection have become increasingly regulated across the United States, with several states, most notably Illinois through its Biometric Information Privacy Act (BIPA), enacting strict biometric privacy laws.
These laws typically require explicit consent for biometric data collection and impose severe penalties for violations. The risks associated with facial data collection range from identity theft to government surveillance, making employee concerns about perpetual data access particularly valid.
The situation becomes more complex when considering xAI’s connection to X (formerly Twitter), which was recently targeted by what Elon Musk described as a “massive” cyberattack. This security vulnerability adds another layer of risk for employees whose facial data might be stored within the company’s systems.
Legal experts have noted that the consent forms requiring “perpetual” access to employee likenesses could potentially violate worker rights, particularly if the data is used in ways not explicitly outlined during the initial consent process.
Industry Context and Precedent
The xAI controversy occurs within a broader context of AI companies pushing ethical boundaries in their quest for training data. The tech industry has increasingly relied on employee participation in AI training, but few companies have requested such intimate biometric data from their workforce.
The incident highlights the growing tension between AI development needs and employee privacy rights. As companies race to develop more sophisticated AI systems, the demand for diverse, high-quality training data has led to increasingly invasive collection methods.
Other tech giants have faced similar scrutiny over data collection practices, but the direct involvement of employees in providing facial expression data represents a new frontier in workplace privacy concerns. The xAI case may set important precedents for how companies approach internal AI training projects.
Company Response and Damage Control
xAI’s response to the controversy has been notably limited. The company did not respond to requests for comment from multiple news outlets covering the story, a silence some have read as tacit acknowledgment that the Skippy project was problematic.
The lack of transparency around the project’s outcomes and the subsequent avatar releases has only fueled speculation about the connection between employee facial data and the controversial AI companions. Without clear communication from xAI, employees and observers are left to draw their own conclusions about how the collected data was ultimately used.
The company’s handling of the situation reflects broader challenges in AI ethics and corporate transparency. As AI development becomes increasingly competitive, companies may be tempted to prioritize rapid advancement over ethical considerations and employee welfare.
Broader Implications for AI Development
The xAI facial data controversy represents more than just an internal company dispute. It highlights fundamental questions about the ethics of AI training data collection and the rights of workers in the AI development process.
The incident raises important questions about consent, data ownership, and the long-term implications of biometric data collection. When employees provide facial expression data for AI training, who owns that data? How long can it be retained? What are the limits on its use?
These questions become particularly relevant as AI systems become more sophisticated and capable of generating realistic human-like content. The potential for misuse of facial data in creating deepfakes or unauthorized digital representations poses serious risks that current legal frameworks may not adequately address.
The controversy also underscores the need for clearer industry standards around employee participation in AI training. As more companies develop AI systems that require human behavioral data, the tech industry must grapple with establishing ethical guidelines that protect worker privacy while enabling innovation.
Looking Forward: Lessons and Implications
The xAI Skippy project controversy offers several important lessons for the AI industry. First, it demonstrates the critical importance of transparent communication with employees about data collection purposes and potential uses. The gap between company assurances and employee concerns suggests that xAI failed to adequately address legitimate privacy worries.
Second, the incident highlights the need for stronger legal protections for workers involved in AI training. Current employment law may not adequately cover the unique privacy risks associated with biometric data collection for AI development purposes.
Finally, the controversy underscores the importance of ethical oversight in AI development. Companies pursuing cutting-edge AI capabilities must balance innovation goals with respect for employee rights and privacy. The long-term success of AI development may depend on maintaining public trust, which requires transparent and ethical data collection practices.
As the AI industry continues to evolve, the xAI case will likely be remembered as a cautionary tale about balancing technological advancement with ethical responsibility. The employees who chose to opt out of Project Skippy were arguably right to be wary: the avatar releases that followed seemed to confirm their worst fears about data misuse.
The controversy also raises broader questions about the future of human-AI interaction and the role of employee data in shaping AI behavior. As AI systems become more sophisticated and human-like, the ethical implications of using real human data for training will only become more complex and important to address.
Sources
- Ars Technica: xAI workers balked over training request to help “give Grok a face,” docs show
- Cryptopolitan: xAI employees ethically question requests to record their facial expressions in Grok training
- The Verge: xAI reportedly asked staff for “perpetual” access to their face recordings
- Business Insider: Elon Musk’s xAI tried to teach Grok how to be human — by recording its own workers’ faces