The accusation is unequivocal: Google is dismantling the open web piece by piece, not with a single blow, but through a sequence of technical, commercial, and governance decisions that, taken together, outline a coherent strategy. The essay “Google is killing the open web” — published on Oblomov’s blog — reassembles this chronology and connects it with a common thread: the gradual replacement of open, multi-provider standards with mechanisms and APIs that favor a more centralized ecosystem, instrumental for the advertising and data business of the Mountain View giant.
This report expands on that thesis, cross-checks the most significant milestones, adds technical context, and also includes objections and nuances from those who see much of these changes as a reasonable evolution of the platform. The result is not a summary verdict but a realistic snapshot of a process spanning more than a decade that now shapes how the web is developed, published, and consumed.
A necessary starting point: standards, browsers, and de facto power
The web was born as a realm of open standards (HTTP, HTML, CSS, DNS), governed by organizations like the W3C and the IETF. The first “browser wars” (1990s) taught that when a dominant player “extends” the web with proprietary technologies, the result is fragmentation and dependence. Chrome emerged in 2008 in a different context: the mobile explosion, the rise of centralized services, and the weakening of Internet Explorer. Riding this wave, Google promoted a cycle of rapid innovation (the V8 engine, fast release cycles, modern APIs), often channeled through WHATWG, a specification forum where browser vendors, with Google in the lead, steer the evolution of HTML and related standards. Critics argue that a “de facto power” emerges here that displaces the W3C and shrinks checks and balances. Supporters contend that it is a way to avoid paralysis and push the platform toward where users are.
2013, a turning point: RSS, XMPP, MathML, and the first challenge to XSLT/XML
The essay sets 2013 as the year of a “plot twist”:
- Closure of Google Reader. It wasn’t just about shutting down a product; it weakened content discovery via RSS/Atom, the feeds that fueled blogs, media, and later podcasts. The official reason was declining usage. The real effect: news consumption shifted even more toward opaque platform algorithms.
- End of XMPP federation in Google Chat (Facebook continued with Messenger in 2014). Interoperable messaging shrank, giving way to walled gardens.
- MathML exited Chrome. A decade later, it returned thanks to external efforts. For education and accessibility, representing mathematics without relying on images or JavaScript remained important.
- First serious attempt against XSLT/XML in the browser. XSLT allows transforming documents (e.g., an RSS feed) into HTML without JavaScript. Critics say discouraging XSLT increases reliance on JS and server-side processing.
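To make that last point concrete, here is a minimal sketch (the feed and stylesheet are illustrative, not taken from the essay) of the browser’s native XSLT engine turning an RSS-like document into HTML. When a feed is served with an xml-stylesheet processing instruction, the same transformation happens with no JavaScript at all; the TypeScript below only drives that same engine programmatically through the standard XSLTProcessor API.

```typescript
// Minimal sketch: the browser's built-in XSLT engine turning an RSS-like
// document into an HTML list. Feed contents and stylesheet are illustrative.
const feedXml = `<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example blog</title>
    <item><title>First post</title><link>https://example.org/1</link></item>
    <item><title>Second post</title><link>https://example.org/2</link></item>
  </channel>
</rss>`;

// Served as <?xml-stylesheet type="text/xsl" href="feed.xsl"?> on the feed
// itself, the browser applies this stylesheet with zero script.
const stylesheetXml = `<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns="http://www.w3.org/1999/xhtml">
  <xsl:template match="/rss/channel">
    <ul>
      <xsl:for-each select="item">
        <li><a href="{link}"><xsl:value-of select="title"/></a></li>
      </xsl:for-each>
    </ul>
  </xsl:template>
</xsl:stylesheet>`;

const parser = new DOMParser();
const processor = new XSLTProcessor();
processor.importStylesheet(parser.parseFromString(stylesheetXml, "application/xml"));

// Produces a <ul> with one linked <li> per feed item, ready to drop into the page.
const fragment = processor.transformToFragment(
  parser.parseFromString(feedXml, "application/xml"),
  document,
);
document.body.appendChild(fragment);
```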
A realistic reading. None of these events alone “kills” the open web. Together, they shift the center of gravity: fewer declarative standards for presenting data on the client, more logic in JS and on the backend, and more control in the layers where Google is strong.
2015, the year of AMP, keygen, and the start of “app mode”
- AMP (Accelerated Mobile Pages). It promised mobile speed. Critics argue that the “miracle” was mostly loading less junk and caching on third-party CDNs (with Google’s in the lead). It gained visibility in search results, prompting many media outlets to duplicate their work (maintaining AMP versions alongside their own pages) and cede control.
- Deprecation of keygen. An HTML element for generating key pairs to enable mutual authentication without third parties. Voices like Tim Berners-Lee lamented losing a mechanism that gave users more sovereignty.
- SMIL under scrutiny. Declarative animations in SVG were sidelined. Since then, almost everything relies on CSS+JS, strengthening client-side logic (and thus dependence on JS toolchains).
Balance. AMP no longer shines as it did in 2016, but it left behind an inertia: mobile performance came to be addressed primarily on Google’s terms. And the removal of keygen closed the door to more distributed identities.
2018–2020: RSS drops from browsers and URLs lose prominence
Firefox removed native RSS support (“Live Bookmarks”), pushing users toward extensions. Chrome never prioritized feeds. The average user stopped seeing RSS as core in the browser. Simultaneously, Chrome experimented with hiding URLs for “usability”. Critics saw it as another step to make the site more important than the address (and thus harder to evaluate origin and authenticity).
2019–2023: Manifest V3, Web Integrity, and the JPEG XL case
- Manifest V3 changed Chrome’s extension model in the name of security. For many observers and developers, it limited ad blockers by trimming their filtering capabilities (see the sketch after this list). Google denies any anti-adblock intent; public perception nonetheless saw a direct benefit for its advertising business.
- Web Environment Integrity (WEI). Presented as anti-cheating/anti-fraud, many saw it as “browser DRM”: the server would ask if the client is “trustworthy”. The community reacted, and the plan cooled, but the idea left a mark.
- JPEG XL. An open format with better compression (lossless and lossy), progressive rendering, transparency, and animation. Chrome removed it despite positive tests. Critics see it as a missed opportunity to reduce bandwidth costs for the entire web. Google claimed lack of adoption and insufficient benefits compared to AVIF/WebP.
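To ground the Manifest V3 point from the first item above (a minimal sketch under stated assumptions, not taken from any real ad blocker): under MV3, content filtering is expressed as declarative rules the extension hands to the browser, rather than code that inspects each request, and the number of such rules is capped. The rule below uses the documented declarativeNetRequest shape with a placeholder domain.

```typescript
// Minimal MV3 sketch: blocking moves from imperative request interception
// (the old blocking webRequest) to declarative rules registered with the browser.
// The domain is illustrative; real blockers ship tens of thousands of rules,
// subject to browser-imposed limits.
declare const chrome: any; // extension global; typed by @types/chrome in a real project

const blockRule = {
  id: 1,
  priority: 1,
  action: { type: "block" },
  condition: {
    urlFilter: "||ads.example.com^",       // placeholder tracker domain
    resourceTypes: ["script", "image"],
  },
};

// The extension can only add or remove rules; it never sees, rewrites,
// or decides on the requests themselves at runtime.
chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [blockRule.id],
  addRules: [blockRule],
});
```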
2024–2025: RSS removed from Google News and the renewed push against XSLT
In 2024, Google stopped accepting feeds for inclusion in Google News. Discovering media and articles now depends even more on internal engines.
In 2025, the debate to remove XSLT from the browser re-emerged—this time championed by WHATWG. Critics point out that since 2007 (XSLT 2.0) and 2017 (XSLT 3.0), modern versions exist, even supporting JSON as input data. The uncomfortable thesis: if you haven’t maintained a feature for years, its low adoption becomes a self-fulfilling prophecy.
Why XSLT/XML and RSS matter (even if you have never used them)
- Presentation without JS. XSLT is a declarative templating language: it transforms node trees (XML/HTML/SVG) into other node trees, with implicit structural validation. Less attack surface than string concatenation.
- Costs and simplicity. A site can serve XML+XSLT (feeds, sitemaps, catalogs, tabular data) and let the browser render. Less CPU on the server, fewer bytes if optimized well.
- Sovereignty and portability. RSS/Atom allow subscribing and migrating between clients without asking permission; they are the backbone of podcasts and many federated flows (see the sketch after this list).
- Accessibility and science. MathML and TEI/XML in humanities are examples where the client should be able to visualize without heavy toolchains.
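As a small sketch of the portability argument above (the feed URL is a placeholder, and real readers handle many more edge cases), this is roughly all a client needs in order to subscribe to any site that exposes RSS or Atom, with no account, API key, or recommendation algorithm in between:

```typescript
// Minimal sketch of the "subscribe" half of a feed reader.
// Handles plain RSS 2.0 (<item>) and Atom (<entry>); elements are matched
// loosely by local name via CSS selectors. Subject to CORS when run in a page.
async function listFeedItems(feedUrl: string): Promise<{ title: string; link: string }[]> {
  const response = await fetch(feedUrl);
  const xml = new DOMParser().parseFromString(await response.text(), "application/xml");

  return Array.from(xml.querySelectorAll("item, entry")).map((entry) => ({
    title: entry.querySelector("title")?.textContent?.trim() ?? "",
    link:
      entry.querySelector("link")?.getAttribute("href") ??      // Atom: <link href="...">
      entry.querySelector("link")?.textContent?.trim() ?? "",   // RSS: <link>...</link>
  }));
}

// Usage with a placeholder URL: choosing sources is entirely the reader's call.
listFeedItems("https://example.org/feed.xml").then((items) => {
  for (const item of items) console.log(item.title, item.link);
});
```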
Can all this be done with JS? Yes. But the real question is: why eliminate open, mature options that diversify the ways to build the web?
Arguments from Google (and others) and why they don’t persuade everyone
- “Security and maintenance costs.” Maintaining old XML/XSLT parsers is expensive and can introduce CVEs. Counterpoint: if the problem is an outdated implementation, update it (XSLT 3.0, modern libraries) or swap the dependency; don’t delete the feature from the specification.
- “Low adoption.” A circular metric: years without investment lead to less use. Besides, browsers never moved past XSLT 1.0 to 2.0/3.0, so many capabilities were simply missing.
- “Simplifies browser code.” Valid in theory, but it clashes with the reality of new JS APIs added every release cycle. For the community, the issue isn’t cutting, but what gets cut: mostly what grants autonomy to the user.
A broader vision: it’s not all Google, but it weighs more
It would be unfair to turn this into a story of “a single villain”. Apple restricts Web APIs on iOS; Mozilla has made controversial decisions (RSS, integrations); Microsoft pivoted to Chromium and prioritizes its platform. The difference is scale: with Chrome’s market share and search dominance, what Google decides becomes the norm for much of the web. That’s why even “small” changes drag the rest along.
Is Google “killing the open web”? A sober perspective
The realistic approach is to accept two truths simultaneously:
- The web platform today is more powerful than ever (graphics, multimedia, typography, WebGPU, PWAs, advanced privacy features in alternative browsers).
- Effective control over what gets priority and what gets dropped is concentrated. When pieces like XSLT, keygen, or JPEG XL fall away, diversity in publishing, authentication, and distribution shrinks.
This isn’t an apocalypse. It’s a constant erosion that narrows the paths that don’t rely on heavy JavaScript stacks, centralized services, or APIs controlled by a single vendor.
What users, media, and developers can do (without naivety)
- Keep feeds (RSS/Atom) visible. If you publish, expose them. If you read, use readers.
- Serve XML with XSLT in clear scenarios (e.g., “readable” sitemaps, listings, catalogs, documentation). If the user’s browser doesn’t support it, transform server-side or add a polyfill (SaxonJS) as a progressive fallback; a sketch follows this list.
- Privacy and plurality. Not everything needs to run on the client: evaluate islands (HTMX/Alpine), SSR, and static rendering. Less JS where it doesn’t add value.
- Extensions and alternative browsers. Diversification reduces agenda power.
- Participate in issue trackers. Your technical and polite pressure shapes priorities (many controversial APIs have been affected this way).
- Education and documentation. Teaching what does what (XSLT, MathML, client certificates, etc.) prevents decision-making by default without debate.
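For the “serve XML with XSLT” recommendation above, the progressive fallback might look like this minimal sketch (the file names and the server-rendered path are assumptions, and SaxonJS is only mentioned, not shown): use the native engine where it exists and degrade gracefully where it does not.

```typescript
// Sketch of a progressive fallback for client-side XSLT.
// "catalog.xml", "catalog.xsl" and "/catalog.html" are placeholder paths.
async function renderCatalog(): Promise<void> {
  const fetchXml = async (url: string) =>
    new DOMParser().parseFromString(await (await fetch(url)).text(), "application/xml");

  if (typeof XSLTProcessor !== "undefined") {
    // Native path: the browser's own XSLT engine does the work.
    const processor = new XSLTProcessor();
    processor.importStylesheet(await fetchXml("catalog.xsl"));
    document.body.appendChild(
      processor.transformToFragment(await fetchXml("catalog.xml"), document),
    );
  } else {
    // Fallback: a server-rendered HTML version, or load a polyfill
    // such as SaxonJS here instead of redirecting.
    window.location.replace("/catalog.html");
  }
}

renderCatalog();
```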
What lies ahead: scenarios in 12–24 months
- XSLT in the browser. If removed, expect more server transformations and targeted polyfills. Some communities (digital humanities, technical documentation) may shift to custom toolchains.
- Images. In the absence of JPEG XL, AVIF/WebP will continue growing. Industry will demand better tooling and stable profiles to avoid an eternal “codec war”.
- Extensions. Manifest V3 is already a reality; public pressure will preserve certain filtering capabilities, but network-level blocking will increasingly move outside the browser (DNS, routers).
- Environment integrity. The temptation to introduce new “verifiers” will persist. The key will be confining them to genuine fraud cases and avoiding shortcuts that turn the browser into a policeman of its own user.
Conclusion: the importance of preserving multiple pathways
A healthy web isn’t one that blindly adopts “the new” without looking back, but one that adds without excluding routes that provide autonomy and portability. RSS, XSLT, MathML, or client certificates are not relics; they are alternative paths that rebalance power among publishers, readers, and platforms.
When a systemic actor turns out the lights on these options, the map becomes poorer. It’s not the end of the open web, but it is less web and less open. And that warrants debate, informed pressure, and above all, building alternatives.
Frequently Asked Questions (FAQ)
1) Does XSLT still make sense in 2025?
Yes, in scenarios where separating data from presentation, reducing JS, and validating structure (feeds, sitemaps, catalogs, documentation) are priorities. XSLT 3.0 also works with JSON. If native support disappears, transformations can still run server-side or through polyfills.
2) Why such insistence on RSS if “nobody uses it anymore”?
RSS/Atom support podcasts, syndicate millions of sites (WordPress exposes feeds by default), and restore agency to the reader: choosing sources without algorithms. It’s a low-friction, open piece of plumbing; if it disappears from the product ecosystem, that is a matter of priorities, not uselessness.
3) Don’t Google’s security arguments hold weight?
Security matters. The disagreement lies in the approach: updating implementations and maintaining standards versus removing features that diminish diversity of approaches. Maintenance costs exist, but what gets retained is a political and technical decision.
4) What are alternatives to relying on big platforms?
There’s no silver bullet. Diversify (browsers, hosting, RSS clients), minimize JS where it doesn’t add value, stick to standards (Web, DNS, interoperable email), and document reproducible practices. Independence isn’t absolute, but it is gradual and cumulative.
via: wok.oblomov.eu and SEOcretos.