In a new 44-slide pitch deck, obtained exclusively by ADWEEK, X is once again pitching itself as a highly safe environment for advertisers. The slides promote various tools for transparency, campaign measurement, and brand safety that debuted between 2022 and 2025. X presented the deck to brand- and agency-side advertisers last week, according to a source familiar with the matter.
These include keyword controls, blocklists, and partnerships with leading media quality firms DoubleVerify, Integral Ad Science (IAS), and the Trustworthy Accountability Group.
Notably, the deck positions Grok—the AI chatbot operated by X’s parent company xAI and integrated natively into X—as a cornerstone of the app’s brand safety and suitability mission. One slide claims Grok provides “enhanced contextual understanding of posts”; the ability to “identify sensitive topics based off of cultural trends on X”; and “superior text recognition across content formats including images with oddly-sized or distorted text.”
But over the past year, Grok has not only failed to ensure brand safety but in some instances disseminated the very content that brands deem most unsafe. In January, users prompted Grok en masse to flood X with sexually explicit, nonconsensual deepfake images of real users. Around 6,700 such images appeared hourly during one 24-hour observation period, according to a tally by deepfake researcher Genevieve Oh. That incident is now the subject of multiple regulatory investigations around the globe. Six months earlier, Grok had spewed violent and antisemitic content on the platform, producing rape fantasies and calling itself "MechaHitler."
The company says in the document that it is “deeply committed to safety for all.”
In the new pitch deck, X claims Grok "exceeds industry benchmarks" for brand safety, with an "average brand safety score" above 99.99% according to IAS and DoubleVerify, though details about these methodologies were not shared. In 2024, DoubleVerify publicly said that X's brand safety rate was 99.99%—a figure that referred not to the share of brand-safe content on X, but to the effectiveness of X's proprietary brand safety systems at the time. That rate was never presented as a measure of Grok's capabilities.
“This reflects all content on the platform, adjacent to content from any account,” a spokesperson for the platform said. The spokesperson added that the company “has not identified any brand safety incidents related to the Grok handle.”
DoubleVerify declined to comment. IAS declined to share specifics about its methodology but directed readers to learn more about its work with X in a blog post on its website. That post includes no mention of Grok.
The deck spotlights various other products and tools designed to give advertisers greater control over their placements and to assure them that content on X is being effectively moderated.