NewsGuard Taps Startup Pangram to Identify AI-Generated News and Misinformation

America Post Staff


Media rating and misinformation-tracking firm NewsGuard is trying to stop the spread of AI-generated misinformation and slop in the news ecosystem with a new project—that also relies on AI.

On Thursday, NewsGuard launched an AI content farm detection tool designed to identify when news and information sites host a significant portion of content that appears to be created by large language models like ChatGPT, Claude, or Gemini. The project was launched in collaboration with the AI content detection startup Pangram Labs.

The system uses Pangram’s proprietary AI models, which are specifically trained to identify AI-generated content, to evaluate not just individual webpages but broad swaths of entire domains. Once Pangram’s tech has identified a site that appears to be an AI content farm (a site using automation to pump out digital content en masse), it flags the site to NewsGuard, whose analysts then conduct manual reviews. These experts review Pangram’s findings to determine the pervasiveness of AI content across the site, look for explicit disclosures that content is AI-generated, seek indicators that human writers are involved, and reach out to site owners for additional information to ensure they don’t assign false positives.

NewsGuard categorizes websites as AI content farms according to three criteria: a “substantial” share of the content is created by AI, as determined by Pangram; the site does not disclose that its content is AI-generated (unlike many reliable news outlets that explicitly share when their content is produced with the help of AI); and the appearance of the site could easily mislead the average user into believing its content is created by humans. This content is, at best, unreliable; at worst, purposeful and potentially dangerous misinformation or propaganda. 

“If we can’t detect AI content, then every communication space is going to be flooded with inauthentic content that’s cheap to produce and difficult to impossible to differentiate [from] something authentic,” said Max Spero, Pangram’s CEO. 

The detection system, which has been in testing for over six months, has already helped NewsGuard flag some 3,000 AI content farm sites, more than double what the organization was able to identify last year using primarily manual techniques. Many of these are branded under generic, newsy names like Times Business News or Business Post, while consistently putting out misinformation-riddled articles about real brands, political leaders, celebrities, and public health.

In one instance, a site called Citizen Watch Report, which bills itself as “a fine selection of independent media sources,” ran a story last year asserting that two U.S. lawmakers, Senator Lindsey Graham (R-SC) and Senator Richard Blumenthal (D-CT), shelled out $814,000 on hotels in Ukraine. The false claim spread on social platforms and was further amplified by Russian state media before being debunked.

In another example, a site called News 24 falsely claimed that Coca-Cola threatened to cut its Super Bowl LIX sponsorship over the announcement that Puerto Rican rapper Bad Bunny would headline the game’s halftime show. Coca-Cola was not even a sponsor of the Super Bowl. The article’s webpage displayed ads from global brands including AT&T, YouTube, Expedia, Hotels.com, Skechers, and others. 
