It’s too soon to say how the spate of deals between AI companies and publishers will shake out. OpenAI has already scored one clear win, though: Its web crawlers aren’t getting blocked by top news outlets at the rate they once were.

The generative AI boom sparked a gold rush for data and, at most news websites anyway, a subsequent data-protection rush in which publishers sought to block AI crawlers and prevent their work from becoming training data without consent. When Apple debuted a new AI agent this summer, for example, a slew of top news outlets swiftly opted out of Apple’s web scraping using the Robots Exclusion Protocol, or robots.txt, the plain-text file that lets webmasters tell crawlers which parts of a site they may access. There are so many new AI bots on the scene that keeping up can feel like playing whack-a-mole.
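Opting out amounts to a few lines in that file. Here is a minimal sketch of what a blocking entry looks like, using the user-agent tokens OpenAI and Apple document for their training crawlers (GPTBot and Applebot-Extended); everything else about the file is illustrative:

```
# Refuse AI training crawlers site-wide
User-agent: GPTBot
Disallow: /

User-agent: Applebot-Extended
Disallow: /

# All other bots may crawl normally (an empty Disallow permits everything)
User-agent: *
Disallow:
```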

OpenAI’s GPTBot has the most name recognition of these crawlers and is also blocked more frequently than competitors such as Google’s Google-Extended. According to an analysis of 1,000 popular news outlets by Ontario-based AI detection startup Originality AI, the number of high-ranking media websites using robots.txt to “disallow” GPTBot rose dramatically from the bot’s August 2023 launch through that fall, then climbed more gradually from November 2023 to April 2024. At its peak, just over a third of the websites blocked the bot; the share has since dropped closer to a quarter. Within a smaller pool of the most prominent news outlets, the block rate is still above 50 percent, but it’s down from a high of almost 90 percent earlier this year.
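Originality AI hasn’t published its exact pipeline, but the core measurement is simple to reproduce: fetch each outlet’s robots.txt and check whether GPTBot is disallowed. A rough sketch using only Python’s standard library, with error handling omitted; the domain list here is a placeholder, not the startup’s actual sample:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical sample; the real analysis covered 1,000 popular news outlets.
DOMAINS = ["example-news-site.com", "another-outlet.com"]

def blocks_gptbot(domain: str) -> bool:
    """True if the site's robots.txt disallows GPTBot from the site root."""
    parser = RobotFileParser(f"https://{domain}/robots.txt")
    parser.read()  # fetch and parse the file
    return not parser.can_fetch("GPTBot", f"https://{domain}/")

blocked = [d for d in DOMAINS if blocks_gptbot(d)]
print(f"{len(blocked)}/{len(DOMAINS)} sites block GPTBot")
```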

But in May, after Dotdash Meredith announced a licensing deal with OpenAI, that number dipped significantly. It dipped again at the end of May, when Vox announced its own arrangement, and once more this August, when WIRED’s parent company, Condé Nast, struck a deal. The trend toward increased blocking appears to be over, at least for now.

These dips make obvious sense. When companies enter into partnerships and give permission for their data to be used, they’re no longer incentivized to barricade it, so it follows that they would update their robots.txt files to permit crawling; make enough deals, and the overall percentage of sites blocking crawlers will almost certainly fall. Some outlets, like The Atlantic, unblocked OpenAI’s crawlers the very day they announced a deal. Others took days or weeks: Vox announced its partnership at the end of May but didn’t unblock GPTBot on its properties until late June.
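Concretely, unblocking means deleting or loosening the relevant stanza. A hypothetical before-and-after for a site that has struck a deal (the two stanzas are shown together for comparison, not as one valid file):

```
# Before the deal: GPTBot is shut out entirely
User-agent: GPTBot
Disallow: /

# After the deal: the stanza is removed, or flipped so the
# empty Disallow explicitly permits everything
User-agent: GPTBot
Disallow:
```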

Robots.txt is not legally binding, but it has long functioned as the standard governing web crawler behavior. For most of the internet’s existence, people running webpages expected one another to abide by the file. When a WIRED investigation earlier this summer found that the AI startup Perplexity was likely choosing to ignore robots.txt directives, Amazon’s cloud division opened a probe into whether Perplexity had violated its rules. It’s not a good look to ignore robots.txt, which likely explains why so many prominent AI companies, including OpenAI, explicitly state that they use it to determine what to crawl. Originality AI CEO Jon Gillham believes that risk adds extra urgency to OpenAI’s push to make agreements. “It’s clear that OpenAI views being blocked as a threat to their future ambitions,” says Gillham.
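Compliance, in other words, is a courtesy the crawler implements itself: check the file, then skip any URL it disallows. A minimal sketch of that check in Python, with a hypothetical bot token and an illustrative URL (real crawlers also cache the file and honor crawl delays):

```python
from urllib.request import Request, urlopen
from urllib.robotparser import RobotFileParser

USER_AGENT = "ExampleBot/1.0"  # hypothetical crawler token

def polite_fetch(url: str) -> bytes | None:
    """Fetch url only if the site's robots.txt permits this user agent."""
    origin = "/".join(url.split("/")[:3])  # e.g. "https://example.com"
    rules = RobotFileParser(origin + "/robots.txt")
    rules.read()
    if not rules.can_fetch(USER_AGENT, url):
        return None  # robots.txt says no, so a compliant bot stops here
    request = Request(url, headers={"User-Agent": USER_AGENT})
    with urlopen(request) as response:
        return response.read()

page = polite_fetch("https://example.com/some-article")
print("fetched" if page is not None else "disallowed by robots.txt")
```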
