For years, choosing a technology was an engineering decision: costs, risks, team skills, and actual needs. But in parts of the software industry, Kubernetes has gone from being a powerful tool to becoming a status symbol. In some technical interviews, business proposals, and even hallway conversations, the question seems obligatory: “Are you using Kubernetes?” And far too often, the answer is read as a verdict: maturity if you are, obsolescence if you aren’t.
This is the underlying criticism that has resurfaced strongly within technical communities following a lengthy reflection published on LinkedIn by a cloud industry professional. Its thesis is provocative but recognizable to anyone who has worked with modern infrastructure: Kubernetes is extraordinary when it solves the right problem; applied to the wrong one, it becomes an unnecessary complexity tax.
From engineering to “cargo cult”: copying form without understanding the cost
The text revisits a classic idea from the tech world: the “cargo cult,” a metaphor about copying external rituals in hopes of achieving the same results. Applied to the cloud, it’s like seeing hyperscalers or global platforms use Kubernetes and assuming that replicating that architecture will automatically bring reliability, speed, or success.
In practice, many organizations discover too late that Kubernetes doesn’t just “orchestrate containers”: it brings an entire operational ecosystem with it. Ingress controllers, certificate management, automated DNS, network policies, observability, secrets management, frequent upgrades, version compatibility… and, above all, the specialized knowledge needed to operate all of it without turning every incident into a long night.
The author puts it with irony: the industry has confused “scalability” with “credibility,” as if every company were on the verge of a millions-of-users problem. Most are not. What most companies want is far more mundane: fast deployments, minimal human error, and reasonable high availability, without dedicating half the team to the platform.
The return of the sysadmin: fundamentals before liturgy
Another part of the message ties into a more cultural debate: the feeling that the traditional sysadmin profile—the person who understands processes, memory, networking, permissions, diagnostics—has been undervalued in favor of modern roles filled with acronyms and tools claiming to abstract everything away.
The critique isn’t aimed at the cloud or at DevOps as a discipline, but at the “show of complexity”: piling on layer after layer as a display of sophistication. The reminder is simple: when something breaks at 3 a.m., what saves the system isn’t slogans but mastery of fundamentals and troubleshooting skill.
Practical example: reducing deployment times from hours to minutes without Kubernetes
The most concrete part, and the one product teams care about most, is how the author’s team went from manual deployments taking about 2.5 hours to an automated flow lasting just a few minutes, all without Kubernetes.
The alternative described relies on a common AWS strategy: ECS with Fargate (serverless containers, in the sense that you don’t manage the nodes), plus automation via scripts (for example, in Python with boto3) to create and configure resources idempotently. The recipe, summarized, sounds unglamorous but is very effective (see the sketch after the list):
- Build and publish the container image,
- Deploy or update the service,
- Configure load balancer, certificates, and DNS programmatically,
- Let the platform handle rolling deployments and health checks.
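To make that recipe concrete, here is a minimal sketch of what such a script could look like with boto3. It is not the author’s actual code: the function names, the desired count, and the subnet and security group IDs are placeholders, and a real script would also cover the image build, the load balancer and ACM certificate setup, and error handling.

```python
import boto3

ecs = boto3.client("ecs")
route53 = boto3.client("route53")


def deploy(cluster: str, service: str, family: str, image: str) -> None:
    """Register a new task definition revision and roll the service onto it."""
    # Reuse the latest revision's settings, swapping in the freshly built image.
    current = ecs.describe_task_definition(taskDefinition=family)["taskDefinition"]
    containers = current["containerDefinitions"]
    containers[0]["image"] = image

    new_td_arn = ecs.register_task_definition(
        family=family,
        containerDefinitions=containers,
        requiresCompatibilities=["FARGATE"],
        networkMode="awsvpc",
        cpu=current["cpu"],
        memory=current["memory"],
        executionRoleArn=current["executionRoleArn"],
    )["taskDefinition"]["taskDefinitionArn"]

    # Idempotent: update the service if it exists, create it otherwise.
    found = ecs.describe_services(cluster=cluster, services=[service])["services"]
    if found and found[0]["status"] == "ACTIVE":
        ecs.update_service(cluster=cluster, service=service, taskDefinition=new_td_arn)
    else:
        ecs.create_service(
            cluster=cluster,
            serviceName=service,
            taskDefinition=new_td_arn,
            desiredCount=2,  # placeholder
            launchType="FARGATE",
            networkConfiguration={
                "awsvpcConfiguration": {
                    "subnets": ["subnet-0123456789abcdef0"],     # placeholder
                    "securityGroups": ["sg-0123456789abcdef0"],  # placeholder
                    "assignPublicIp": "ENABLED",
                }
            },
        )

    # ECS/Fargate drives the rolling deployment and health checks from here;
    # the script only waits for the service to settle.
    ecs.get_waiter("services_stable").wait(cluster=cluster, services=[service])


def upsert_dns(zone_id: str, record: str, alb_dns: str, alb_zone_id: str) -> None:
    """Point a DNS name at the load balancer; UPSERT makes this idempotent."""
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record,
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": alb_zone_id,  # the ALB's own hosted zone ID
                        "DNSName": alb_dns,
                        "EvaluateTargetHealth": True,
                    },
                },
            }]
        },
    )
```

Note how cheaply the idempotency the author emphasizes is achieved at this level: describe-then-update for the service, and Route 53’s UPSERT action for DNS.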
The point isn’t to sell Fargate as a universal solution, but to show that, for certain team sizes and workloads, the winning move is to reduce the operational surface, not expand it.
When does Kubernetes make sense, and when doesn’t it?
Despite its combative tone, the reflection doesn’t fall into anti-Kubernetes dogma. In fact, it offers a fairly reasonable criterion that many consulting firms repeat… but which isn’t always applied:
Kubernetes is usually appropriate when:
- there are dozens or hundreds of services with complex needs,
- there’s a dedicated platform team,
- portability (multi-cloud or true hybrid) is a requirement,
- advanced patterns are needed (operators, complex StatefulSets, massive jobs, service meshes, etc.).
Alternatives like ECS/Fargate (or other managed platforms) tend to fit better when:
- the number of services is moderate,
- the team is small or prefers to focus on product,
- delivery speed is prioritized over fine-grained control,
- minimizing maintenance of the orchestration layer is desired.
Ultimately, it’s a call to ask the right question again: “What is the real problem we are trying to solve?” If the answer is “deploy quickly and reliably,” the solution doesn’t always have to involve the same level of complexity as a platform operating at a global scale.
The trap of the “industry standard”
Kubernetes has become a standard, yes. But “standard” doesn’t mean “mandatory.” And this confusion has a cost: oversized architectures, burned-out teams, budgets spent on platforms instead of products, and, paradoxically, slower deliveries.
The debate isn’t going to end soon, because it mixes technology with professional identity. But the discussion is valuable: at a time when many companies are trying to gain efficiency, reduce friction, and improve time-to-market, questioning the “because it’s the standard” rationale can be the most profitable decision.
Frequently Asked Questions
Is Kubernetes necessary for a mid-sized company?
Not necessarily. It can be a great option if there are scale and complexity requirements and a team prepared to operate it, but many mid-sized companies achieve robust deployments with managed platforms and automation.
What does “cargo cult” mean when applied to Kubernetes?
It refers to adopting tools or architectures by imitation (because big companies use them) without assessing whether the operational cost and complexity fit the real problem and team size.
What alternatives exist to Kubernetes for deploying containers in production?
It depends on the provider and context: AWS ECS/Fargate, managed container services, PaaS platforms oriented toward continuous deployment, or even simpler approaches with automation and good practices on VMs when that fits.
How do I know if my company “needs” Kubernetes or is just adopting it due to market pressure?
A practical sign: if operating the platform requires a dedicated team (or diverts focus from product), if upgrades and debugging consume too much time, and if your actual needs don’t require advanced orchestration, then you’re likely paying for unnecessary complexity.

