NewsGuard Taps Startup Pangram to Identify AI-Generated News and Misinformation

America post Staff

Many of these sites can be categorized as made-for-advertising (MFA) sites, properties with low-quality content designed solely to generate ad revenues via arbitrage. 

NewsGuard first began monitoring AI-generated news and MFA sites a few years ago, when it was often easy to detect the use of LLMs in copy. “Sites would publish articles containing AI error messages or refusals, such as, ‘As of my cut-off date of November 2024, I can’t answer this question,’” said Matt Skibinski, NewsGuard’s chief operating officer.

But in the intervening years, these sites have spread like wildfire. Today, Pangram claims it’s seeing between 300 and 500 of these AI content farm sites emerge each month. “It’s a way to produce low-quality content for really low cost and generate some advertising revenue—and also [bad] actors who want to spread false information have figured out that they can sort of weaponize this technology and churn out, at a really high volume, false and misleading content and still make a quick buck because they run ads on those pages, too,” Skibinski said. 

In one two-month observational period, NewsGuard found 141 blue-chip brands advertising on MFA AI content farms. 

The new detection tool, NewsGuard hopes, will help both advertisers and consumers steer clear of AI-enabled misinformation and MFA sites. To help protect advertisers, NewsGuard will let them license its data stream on AI content farms directly or through their agencies. It also offers a direct integration with The Trade Desk, a popular demand-side platform, through which advertisers can block these sites using specific pre-bid segments.

NewsGuard is also considering integrating the tool into its browser extension so that everyday consumers can see when they’re consuming AI-generated info, according to Skibinski.

Pangram, founded in 2023 by a former Google engineer and an ex-Tesla scientist, has already gained acclaim for the effectiveness of its tech. A report in Nature from September found that Pangram proved highly capable of flagging research papers and peer reviews that included LLM-generated text. Leading academic institutions, including Wellesley and the University of Maryland, are using Pangram’s tech to combat undisclosed or unwanted AI-generated content in academia.

Pangram’s Spero expects demand for the company’s tech to spike in the coming months. “There’s just going to be so much spam and bots and slop online,” he said, “that it’s going to be pretty unusable without technology to help you wade through the slop.”
