Why we've deleted our X account
By Kaiesha Page
Talking Wales is leaving X because we no longer believe it is a platform for healthy political discourse.
Talking Wales has made the decision to leave X, formerly Twitter, after concluding that the platform no longer supports responsible, constructive political debate. This decision follows sustained concerns about moderation, platform governance, and the deployment of artificial intelligence tools without adequate safeguards.
There was a time when X sat at the very centre of the political ecosystem. For years, it acted as a primary source of news, commentary and real-time debate, connecting allies and opponents alike in ways that had never before existed in the digital world.
It was also where I first stepped into politics myself. I joined in the platform’s early days, bright-eyed, eager and full of optimism. Back then, X was genuinely eye-opening — exposing me to political ideas, movements and schools of thought I might never otherwise have encountered.
Debate existed, but it was largely constructive. Discourse was possible without voices being drowned out or deliberately silenced.
That all changed in 2022, when Elon Musk purchased the platform for $44 billion. What followed was not simply a rebrand, but a fundamental shift in how X operates — and in the type of discourse it now rewards.
Changes to moderation, the amplification of provocative content, and an increasingly adversarial culture have reshaped the platform into something markedly different from what it once was.
Where debate had previously been challenging but constructive, it has increasingly become hostile, polarised and performative.
"This was not simply a rebrand, but a fundamental shift in how X operates — and in the type of discourse it now rewards."
This shift alone is deeply concerning. But recent reporting on the misuse of Musk’s latest venture, Grok, to digitally undress and sexualise women marked a clear line for us: remaining on the platform was no longer compatible with our values.
What is Grok?
Grok is an artificial intelligence chatbot and image generator developed by xAI, the company founded by Elon Musk. It is embedded within X and positioned as an alternative to other large AI models, marketed as more irreverent, less constrained and more willing to engage with controversial topics.
The lack of meaningful constraints is a consistent theme throughout Musk’s politics.
While some critics have gone as far as to label Musk a “Nazi” — particularly following a widely shared hand gesture at a rally marking Donald Trump’s inauguration — the more relevant concern lies elsewhere.
What is evident is a growing alignment with fascistic tendencies — hostility towards regulation, contempt for institutional oversight, the concentration of power in private hands, and a recurring portrayal of dissent as illegitimate censorship.
When such tendencies are paired with ownership of a major communications platform — and the deployment of loosely constrained artificial intelligence tools — the implications extend far beyond personal ideology.
They shape whose voices are amplified, whose safety is compromised, and whose humanity is treated as collateral damage in the pursuit of “free speech”.
If these values shape the platform itself, it is reasonable to ask why we would expect his AI to operate any differently.
The most recent controversy centres on freelance journalist and commentator Samantha Smith. In an article for the BBC, she described feeling “dehumanised and reduced to a stereotype” after Grok was used to digitally remove her clothing from a photograph in which she was fully clothed.
After sharing her experience on X, Smith was met with a wave of responses from other women who reported having experienced similar forms of AI-generated sexual abuse — underscoring that her case was not isolated, but part of a wider and deeply troubling pattern.
In another reported case, the image of a 14-year-old actress was digitally “nudified”.
Against this backdrop, the British government has announced plans to criminalise the use of so-called “nudification” tools.
These measures were confirmed in the long-awaited Violence Against Women and Girls Strategy, published in December 2025 after multiple delays.
The strategy outlines a range of actions the UK government intends to take in response to technology-facilitated abuse, including AI-generated sexual exploitation.
Talking Wales has explored this strategy in greater detail in a recent podcast episode, including specific implications for Wales and how these measures may be implemented at a devolved level.
Legal experts have been clear that this harm is not inevitable, and that platforms are capable of preventing it.
Clare McGlynn, a law professor at Durham University, told the BBC that X or Grok “could prevent these forms of abuse if they wanted to”, adding that they “appear to enjoy impunity”.
"The platform has been allowing the creation and distribution of these images for months without taking any action, and we have yet to see any challenge by regulators."
This assessment is particularly striking given that xAI’s own acceptable use policy explicitly prohibits “depicting likenesses of persons in a pornographic manner”.
The issue, then, is not the absence of rules, but the apparent lack of will to enforce them.
As part of its reporting, the BBC contacted Grok’s media team for comment.
At the time of publication, no direct response had been provided. Instead, the BBC reported receiving an automated reply stating simply: “legacy media lies”.
If concerns over the misuse of AI are not taken seriously — even when they involve non-consensual sexual abuse — it surely raises the question: is AI safe in Musk’s hands?
Based on the evidence, the answer is increasingly clear.
When artificial intelligence is deployed without effective safeguards, meaningful oversight or a willingness to intervene when harm occurs, it cannot be considered safe.
That risk is magnified when responsibility is concentrated in the hands of individuals and organisations that repeatedly resist regulation, dismiss criticism and frame accountability as censorship.
For Talking Wales, this leaves little room for ambiguity.
Political discourse relies on trust — trust that platforms will act responsibly, protect users from harm and take concerns seriously when those protections fail.
X no longer meets that standard.
Our decision to leave the platform is not about disengaging from debate or avoiding challenge.
It is about refusing to legitimise an environment where harm is normalised, safeguards are inconsistently enforced, and serious concerns — particularly those affecting women and children — are met with indifference or hostility.
We remain committed to robust political discussion, scrutiny and accountability.
But we will pursue that work in spaces that align with our values, respect consent, and recognise that technology deployed at scale carries real-world consequences.