
X Says It Will Suspend Creators Over Unlabeled AI Posts on Armed Conflict

X says it will penalize creators who share AI-generated videos depicting armed conflict without clearly disclosing that the material was made with artificial intelligence.

On Tuesday, X’s head of product, Nikita Bier, announced that users who post undisclosed AI war footage will be removed from the platform’s Creator Revenue Sharing Program for 90 days. Repeat offenders who continue posting misleading AI content after that suspension will be permanently barred from monetization.


“During times of war, people must have access to authentic information on the ground,” Bier wrote. “With today’s AI technologies, it is trivial to create content that can mislead people.” He added that the policy takes effect immediately.

How X Plans to Enforce It

The company says it will rely on a mix of automated generative-AI detection tools and its crowdsourced fact-checking system, Community Notes, to flag violations. Creators whose posts are found to contain AI-generated footage of armed conflict without proper disclosure face temporary removal from the revenue program.

Importantly, the policy targets monetization — not platform access. Users can still post, but they won’t earn ad revenue if they violate the rule.

What’s at Stake for Creators

X’s Creator Revenue Sharing Program allows eligible users to earn a share of advertising revenue when their posts generate strong engagement. The initiative was designed to encourage higher-quality content and retain prominent creators.

However, critics argue the model can reward sensationalism, click-driven outrage, and emotionally charged posts. In an engagement-focused system, hyper-realistic AI videos showing explosions or battlefield scenes can spread rapidly before being verified.

By putting financial incentives at risk, X appears to be discouraging misleading AI war content without issuing outright bans.

A Limited Fix?

The update specifically addresses AI-generated depictions of armed conflict without disclosure. Other forms of AI-driven misinformation — including political deepfakes, manipulated campaign material, or deceptive influencer promotions — are not directly covered under this enforcement change.

That narrow scope suggests the crackdown is targeted rather than comprehensive.

The Bigger Picture

As generative AI tools become more advanced and accessible, social platforms face mounting pressure to distinguish between authentic footage and synthetic media — especially during geopolitical crises.

By tying enforcement to monetization penalties, X is signaling that creators who attempt to profit from unlabeled AI war content will face consequences.

Whether that’s enough to deter abuse remains to be seen.


Written by Hajra Naz