X Cracks Down on AI-Generated War Content with Monetization Suspension Policy
Elon Musk has taken a firm stance against users on X who profit from artificial intelligence-generated videos of war-torn regions in the Middle East. The social media company announced that users posting AI-made content without clear labels will face a 90-day suspension from its monetization program. Repeated violations could result in permanent removal from the program, according to Nikita Bier, X's head of product. Bier emphasized the ease with which AI tools can create misleading content, especially during times of war, when accurate information is vital. The new policy follows recent escalations in the conflict between the US, Israel, and Iran, which have fueled a surge of AI-generated misinformation online.

One of the most widely viewed fake videos showed Israeli soldiers allegedly weeping in fear after an Iranian strike. The clip, which gained over 1.4 million views, depicted scenes that appeared authentic but were entirely fabricated. Another video, viewed more than 2.1 million times, falsely claimed that Dubai's Burj Khalifa had been engulfed in flames due to an Iranian attack. A third clip purported to show Iranian missiles striking central Israel, complete with footage of a massive explosion engulfing a building. These examples highlight the growing challenge of AI-generated content that can quickly distort public perception.
X has announced new measures to identify and label AI-generated content. The platform plans to use crowdsourced Community Notes, metadata, and other signals to mark AI-made videos. Users will also be required to add a 'Made with AI' label to such posts by opening the post menu and selecting the 'Add Content Disclosures' option. These steps aim to ensure greater transparency and reduce the spread of misinformation during critical times. However, the question remains: should social media platforms regulate AI-generated content, or should the responsibility fall to users to identify fakes themselves?
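Whatever the answer, the metadata signals X mentions are straightforward to illustrate. The Python sketch below is not X's pipeline, which has not been published; it simply checks an image file for the kind of traces that Stable Diffusion-style generators often leave behind. The marker keys and the file name are assumptions for illustration only.

```python
# Minimal sketch: inspect embedded metadata for hints of AI generation.
# The marker keys below are assumptions; there is no universal standard.
from PIL import Image

AI_MARKER_KEYS = {"parameters", "prompt", "ai_generated"}  # assumed names

def looks_ai_generated(path: str) -> bool:
    """Return True if the image's metadata hints at AI generation."""
    with Image.open(path) as img:
        # PNG text chunks (img.text) and the generic info dict are the
        # usual places generator tools record their settings.
        metadata = dict(getattr(img, "text", {}) or {})
        metadata.update({k: v for k, v in img.info.items() if isinstance(v, str)})
    return any(key.lower() in AI_MARKER_KEYS for key in metadata)

print(looks_ai_generated("viral_frame.png"))  # hypothetical file
```

A clean result proves nothing, of course: metadata is trivially stripped on re-encoding, which is presumably why X pairs it with crowdsourced notes and other signals.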

Experts have noted several telltale signs that can help distinguish AI-generated videos. These include low picture quality, very short durations, strange textures, or an almost airbrushed appearance. Unnatural shadows, lighting, or physical inconsistencies in the footage are also common indicators. Interestingly, typos can be a sign of human involvement, as AI-generated content tends to be error-free. These cues, combined with ongoing efforts by platforms like X, may help users navigate the complex landscape of AI-generated media.
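A few of these cues, duration and resolution in particular, can even be checked mechanically. The Python sketch below is a deliberately crude illustration of that idea, not a real detector; the thresholds and the file name are assumptions, and cues like shadows, textures, and physical inconsistencies still require human judgment.

```python
# Toy heuristic inspired by the cues above: flag clips that are very
# short or low-resolution. Thresholds are illustrative assumptions,
# not published detection criteria.
import cv2  # pip install opencv-python

MAX_SUSPECT_SECONDS = 10.0  # many generators still produce short clips
MIN_EXPECTED_WIDTH = 1280   # assumed baseline for genuine news footage

def suspicious_cues(path: str) -> list[str]:
    """Return the list of heuristic cues a video clip trips."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
    width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
    cap.release()
    cues = []
    if frames and frames / fps < MAX_SUSPECT_SECONDS:
        cues.append("very short duration")
    if width and width < MIN_EXPECTED_WIDTH:
        cues.append("low picture quality")
    return cues

print(suspicious_cues("viral_strike_video.mp4"))  # hypothetical file
```

Heuristics like these produce false positives by design; they are a starting point for human review, not a verdict.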

Musk has long predicted that AI-generated content will become the norm in the coming years. In October, he said that within five to six years, possibly even sooner, the majority of online content would be produced by AI. This vision of the future has placed Musk at the center of a broader debate about the role of AI in society. While X tightens its policies to combat misinformation, Musk continues to push forward with innovations that shape the trajectory of AI development.

The new policy has been praised by the Trump administration, which views it as a necessary step in combating online falsehoods. Sarah Rogers, the Under Secretary of State for Public Diplomacy, commended the move, stating that X's Community Notes system encourages truth by reducing the reach of posts flagged as inaccurate. She noted that such measures do not require a