The digital publishing industry has been adapting for years to profound changes in how users discover, read, and share information. But the latest challenge may be one of the most delicate: the rise of AI-powered bots that crawl, copy, summarize, and reuse content without always translating into visits, subscriptions, or revenue for media outlets.
This is the main warning from Akamai’s new report on the publishing sector, which outlines a concerning scenario for newspapers, digital magazines, and information sites. According to the company, categorized AI bot activity grew by 300% in 2025, and the media sector—including video, social media, and publishing—already accounts for 13% of that traffic, ranking just behind e-commerce. Within this group, publishing outlets make up the largest share.
The problem is no longer just fewer reads
The report raises an uncomfortable idea for any digital publisher: content still holds value, but increasingly, users consume it without ever visiting the website that produces it. Instead of entering via Google, browsing headlines, and reading full articles, many users rely on AI-generated responses—whether in chatbots, conversational search engines, or real-time information assistants.
Akamai argues that this dynamic is eroding the economic foundations of digital publishing. It's not just a decline in clicks: it also hits subscription models, advertising revenue, brand visibility, and the ability to monetize original content. The report cites data from TollBit showing that AI chatbots generated approximately 96% less referral traffic in Q4 2024 than traditional Google search. Additionally, a 2025 Pew Research Center study concluded that users click through to the sources cited in AI-generated summaries only about 1% of the time.
This shift has serious implications. If readers get enough of a summary without visiting the original publisher’s site, the publisher loses ad impressions, registration opportunities, subscription conversions, and direct engagement with its audience. In other words, AI is not only changing how information is distributed—it’s also altering who captures the economic value of that information.
Publishing at the center of automated scraping
One of the most striking findings in the study is that, within the media sector, publishing outlets account for 40.1% of categorized AI bot activity. That makes publishers the primary targets of this type of automated traffic in the content industry. The logic is clear: news and editorial sites publish fresh, valuable, up-to-date information that feeds models, instant answers, and AI-powered search engines.
The report also examines which types of bots are targeting this ecosystem most aggressively. So-called AI training crawlers remain the most common: they harvest large volumes of data to train models. However, Akamai warns that AI fetchers may be even more damaging for publishers, since they collect content in real time to answer specific user questions. Because news, breaking stories, and urgent updates lose their value quickly, this immediate reuse can directly impact traffic and revenue.
In late 2025, training crawlers accounted for 63% of AI bots targeting media, while fetchers made up 24%. Notably, publishing accounted for 43% of that fetcher activity. In other words, the most immediate threat doesn't just come from model training, but from systems that query and summarize live content before the user ever reaches the original site.
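To make the distinction concrete, here is a minimal sketch of how a publisher might separate the two categories by User-Agent. The tokens listed are examples of publicly documented AI crawlers and fetchers, not a list from Akamai's report, and user-agent strings alone are easy to spoof:

```python
# Illustrative only: classify AI bot requests by User-Agent substring.
# The token lists are examples of publicly documented AI bots, not an
# authoritative inventory; real deployments should verify identities
# rather than trust the header.

TRAINING_CRAWLERS = {"gptbot", "claudebot", "bytespider", "meta-externalagent"}
AI_FETCHERS = {"chatgpt-user", "oai-searchbot", "perplexitybot"}

def classify_ai_bot(user_agent: str) -> str:
    """Return 'trainer', 'fetcher', or 'unknown' for a raw User-Agent."""
    ua = user_agent.lower()
    if any(token in ua for token in TRAINING_CRAWLERS):
        return "trainer"
    if any(token in ua for token in AI_FETCHERS):
        return "fetcher"
    return "unknown"

print(classify_ai_bot("Mozilla/5.0 (compatible; GPTBot/1.2)"))        # trainer
print(classify_ai_bot("Mozilla/5.0 (compatible; ChatGPT-User/1.0)"))  # fetcher
```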
Akamai also identifies OpenAI as the actor generating the most AI bot traffic toward media companies during the analyzed period, with 40% of OpenAI's requests in that segment specifically targeting publishers. Alongside OpenAI, Meta and ByteDance are among the operators of AI agents with the greatest impact on the sector.
Higher costs, less control, and brand damage
The report extends beyond traffic issues. It emphasizes something many media outlets are already noticing: these bots increase infrastructure costs. If thousands or millions of automated requests hit a website to extract content, that results in higher spending on servers, CDNs, cloud services, and site security. This means that news organizations not only lose potential income but also pay more to serve content to systems that may reuse it outside their control.
Reputation is also at stake. Akamai warns that articles, headlines, and reports may end up replicated on aggregation sites, dubious-looking pages, or platforms that obscure the original source. When attribution weakens, brand authority and the relationship with the audience can erode.
Between blocking everything and letting everything through
Notably, the report doesn't propose a simplistic solution. Akamai advocates against indiscriminately blocking all AI bots, for a practical reason: some may become part of licensing agreements or new monetization strategies. Blocking without understanding who's accessing, why, and what compensation is involved could be a strategic mistake.
Instead, the company recommends a more selective approach. Its clients typically choose among three technical responses: deny, tarpit, and delay. The first rejects requests outright; the second slows down or traps the bot; the third introduces deliberate delays before serving content. Akamai notes that deny is currently the most popular choice, followed by tarpit and delay, and that many clients begin by monitoring and classifying traffic rather than blocking it. One case cited in the report managed to control 97% of AI bot requests using tarpit rather than complete denial.
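As a rough illustration of what those three responses look like in practice, here is a minimal sketch written as a generic request handler. The helper names and timings are hypothetical; Akamai's actual bot management runs at the CDN/edge layer, not in application code:

```python
import time
from typing import Iterator, Tuple, Union

# Illustrative sketch of the three responses described above. All names
# and timings here are hypothetical, not Akamai's implementation.

def tarpit_body(size: int = 4096, chunk: int = 8) -> Iterator[bytes]:
    """Trickle out a meaningless payload a few bytes at a time so the
    bot wastes time and connection slots."""
    for _ in range(0, size, chunk):
        time.sleep(1.0)          # one tiny chunk per second
        yield b"x" * chunk

def handle_ai_bot(policy: str, page: bytes) -> Tuple[int, Union[bytes, Iterator[bytes]]]:
    if policy == "deny":         # reject the request outright
        return 403, b"Forbidden"
    if policy == "tarpit":       # slow junk response that traps the crawler
        return 200, tarpit_body()
    if policy == "delay":        # real content, but after a deliberate pause
        time.sleep(30)           # degrades the bot's real-time usefulness
        return 200, page
    return 200, page             # default: allow, and log for classification

status, body = handle_ai_bot("deny", b"<html>article</html>")
print(status)  # 403
```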
Furthermore, the report opens the door to paid, verified models of automated content access. Initiatives like Skyfire and TollBit are presented as partners that can turn some scraping into authenticated, traceable, and monetizable access. It also discusses frameworks like Really Simple Licensing (RSL), which aim to establish clearer protocols for the responsible use of digital content by AI systems.
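To show the general shape of such verified-access schemes, here is a hypothetical sketch in which a licensed bot signs each request with a key issued under contract. The key store and signing format are invented for illustration and are not the actual Skyfire or TollBit APIs:

```python
import hmac
import hashlib

# Hypothetical example: keys issued to AI operators under licensing deals.
LICENSE_KEYS = {"acme-ai": b"secret-issued-under-contract"}

def is_licensed(bot_id: str, path: str, signature: str) -> bool:
    """Verify that the bot signed the requested path with its licensed key,
    making the access authenticated, traceable, and billable per request."""
    key = LICENSE_KEYS.get(bot_id)
    if key is None:
        return False
    expected = hmac.new(key, path.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# The licensed bot signs the path it wants; the publisher verifies it and
# can meter and bill the access instead of blocking it.
sig = hmac.new(LICENSE_KEYS["acme-ai"], b"/news/story-123",
               hashlib.sha256).hexdigest()
print(is_licensed("acme-ai", "/news/story-123", sig))  # True
```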
The final message is clear: publishers still have time to defend themselves, but they must shift from complaining to active management. The battle is no longer just about producing great content but also about deciding who can use it, under what conditions, and with what compensation.
Frequently Asked Questions
Why do AI bots hurt digital publishers so much?
Because they can read, extract, and summarize content without sending users to the original website. This reduces traffic, ad revenue, subscription conversions, and brand visibility.
What’s the difference between a training crawler and an AI fetcher?
The training crawler collects large datasets to train models. The AI fetcher retrieves content in real time to answer specific questions in chatbots and conversational search engines.
What percentage of AI bot traffic in media is attributable to publishers?
According to Akamai, publishers accounted for 40.1% of AI bot activity in the media sector in the second half of 2025.
What can media outlets do to protect their content against AI bots?
Monitor and classify traffic, implement selective measures like deny, tarpit, or delay, verify bot identities, and consider licensing or monetization agreements instead of blind blocking.
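On the identity-verification point, one widely used technique for legitimate crawlers is forward-confirmed reverse DNS: resolve the client IP to a hostname, check that it belongs to the domain the bot claims, then resolve that hostname back and confirm it returns the same IP. A minimal sketch, using an example domain rather than an authoritative allow-list:

```python
import socket

def verify_bot_ip(ip: str, allowed_suffix: str = ".openai.com") -> bool:
    """Forward-confirmed reverse DNS check; the suffix is an example only."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)             # reverse lookup
        if not hostname.endswith(allowed_suffix):
            return False
        forward_ips = socket.gethostbyname_ex(hostname)[2]    # forward lookup
        return ip in forward_ips                              # must round-trip
    except (socket.herror, socket.gaierror):
        return False  # unresolvable: treat as unverified

# Only trust a bot's user-agent claim if its source IP round-trips
# through a hostname owned by the operator it claims to be.
```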
via: Akamai

