X Under Fire for AI Chatbot Data Collection Practices
Elon Musk's social media platform X, formerly known as Twitter, is facing intense scrutiny from its lead European privacy regulator, the Irish Data Protection Commission (DPC), over its practice of collecting users' posts to train its AI chatbot, Grok.
This data harvesting has sparked significant controversy because it was implemented without notifying users or obtaining their consent, potentially infringing the data protection rules set out in Europe's General Data Protection Regulation (GDPR).
What Is Grok?
Grok, an AI chatbot developed by xAI, an artificial intelligence startup also owned by Musk, was first released in November 2023 as a rival to OpenAI’s ChatGPT.
Unlike the initial version of Grok, which was not trained on data from X, subsequent updates have incorporated user data from the platform to enhance its capabilities.
Grok, currently available only to premium subscribers on X, is designed to summarise news events and answer questions using real-time information gleaned from user interactions.
Default Data Collection Raises Privacy Concerns
The controversy erupted when it was discovered that X had silently enabled a setting allowing the collection of users' data for training Grok, defaulting to "yes" without any prior notice.
This move has been criticised for its lack of transparency and potential violation of user privacy rights.
Kevin Schawinski, CEO of a Swiss AI company, condemned the practice, stating,
"X added a setting for 'we'll take your data to train Grok' without any notice and just defaulted to 'yes' for everyone. This is BAD."
How to Disable Data Collection
In response to the backlash, X has provided users with a way to disable the data collection setting, although this option is currently only available on the web version of the platform, with support for mobile devices expected to roll out soon.
To prevent X from using their data for AI training, users need to work through the following steps (a rough automation sketch follows the list):
- Open X on a web browser and log in with their account details.
- Click on "More" in the left-hand sidebar.
- Navigate to "Settings and Privacy."
- Go to "Privacy and Safety."
- Scroll down to "Data sharing and personalisation."
- Click on "Grok" at the bottom.
- Uncheck the box next to the message, “Allow your posts as well as your interactions, inputs, and results with Grok to be used for training and fine-tuning.”
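For readers comfortable with scripting, the same steps can in principle be driven by a browser-automation tool. The sketch below uses Playwright in Python and is purely illustrative: the settings URL, the assumption that the page exposes a single checkbox for the training option, and the saved-session file name are all unverified assumptions rather than anything documented by X, so the selectors should be checked against the live page before relying on it.

```python
# Hypothetical sketch of automating the opt-out steps above with Playwright.
# Assumptions (not documented by X): the Grok settings path, a single checkbox
# on that page, and a previously saved, logged-in Playwright session file.
from playwright.sync_api import sync_playwright

GROK_SETTINGS_URL = "https://x.com/settings/grok_settings"  # assumed path


def disable_grok_training(storage_state: str = "x_session.json") -> None:
    """Open the Grok data-sharing settings and untick the training checkbox."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        # Reuse a saved, already-logged-in session so no credentials live in code.
        context = browser.new_context(storage_state=storage_state)
        page = context.new_page()
        page.goto(GROK_SETTINGS_URL)
        # Assumed selector: the opt-out is the first checkbox on the page.
        checkbox = page.get_by_role("checkbox").first
        if checkbox.is_checked():
            checkbox.uncheck()
        context.storage_state(path=storage_state)  # persist the session again
        browser.close()


if __name__ == "__main__":
    disable_grok_training()
```

Creating the saved session in the first place (for example with `playwright codegen --save-storage`) is left out of the sketch.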
Despite this option, there remains uncertainty about whether earlier posts and interactions will be excluded from future data usage even after disabling the setting.
Regulatory Scrutiny and Broader Implications
The DPC has been engaging with X for several months regarding its data practices and expressed surprise at the latest developments.
The regulator has followed up with the company and is awaiting further engagement.
This incident adds to X's existing troubles: the company is already under investigation by the DPC in at least five other cases, which could lead to fines of up to 4 percent of its annual global revenue.
The issue at X is not isolated.
Other major tech companies like Meta and Google have also faced similar scrutiny over their AI data practices in Europe.
Back in June, Meta had to pause its plans to collect European users’ posts and images on Facebook and Instagram for AI training following GDPR complaints.
Google delayed the launch of its generative AI tools due to privacy concerns raised by the Irish privacy authority.
These incidents highlight the growing regulatory challenges tech companies face in leveraging user data for AI advancements in Europe, where data protection laws are stringent and enforcement is robust.
Contradictions in Musk's AI Safety Stance Highlighted by X's Data Practices
Elon Musk’s vocal warnings about the perils of unregulated AI development stand in stark contrast to the practices of his own platform, X.
Despite Musk's public advocacy for rigorous oversight and safety measures in AI, X is currently facing criticism for its questionable approach to user privacy.
The platform’s decision to collect and utilise user data for training its AI chatbot, Grok, without explicit consent raises significant concerns about compliance with privacy laws.
This discrepancy between Musk’s stated concerns and X’s data practices reflects a troubling gap between the rhetoric of AI ethics and the reality of how AI technologies are implemented.
While Musk emphasises the need for caution and regulation in AI development, X's actions suggest a far more cavalier attitude towards user data, exposing a potential hypocrisy in the ongoing debate about AI governance.
Upcoming Launch of Grok 2 and Grok 3 by xAI
Elon Musk's xAI startup is preparing for the August release of Grok 2, the next generation of its AI language model.
Grok 2 is anticipated to deliver significant improvements over its predecessor, with Musk promising advancements across all performance metrics.
This new version builds on the foundation laid by the first Grok model, which debuted in November 2023 as part of xAI’s bid to rival OpenAI.
Following Grok 2, xAI is set to introduce Grok 3 by the end of the year.
Grok 3 aims to match or surpass OpenAI's upcoming GPT-5 and will require an enormous cluster of 100,000 Nvidia H100 GPUs for its training.
Musk has described Grok 3 as “really something special,” reflecting its expected advancements and scale, and has confidently declared:
“Grok 3 should be the most powerful AI in the world.”
To support the development of these models, xAI has been leveraging cloud services from Oracle and data centers provided by X, Musk's social media platform.
Grok 1.5, released in April, built on the original model with improved reasoning and a longer context window.
As the project scales, Musk is working with Nvidia, Dell, and Supermicro to establish a large-scale computing infrastructure, referred to as a “gigafactory of compute,” to build one of the world's largest supercomputers.
This effort highlights xAI’s commitment to pushing the boundaries of AI technology and competing robustly in the AI landscape.