I've worked with Kubernetes in production. I've set it up from scratch, debugged networking issues at 2am, fought with ingress controllers, watched junior devs get completely lost in YAML, and yes, genuinely appreciated what it does when you need it.

But I've also watched teams choose Kubernetes for a service that gets 500 requests per day, struggle with the operational overhead for six months, and eventually wonder why they made their life so complicated.

Kubernetes solves specific, real problems. Those problems are not the problems most applications have. Here's the honest version.

What Kubernetes Is Actually For

Kubernetes is excellent when you need: container orchestration across many nodes, sophisticated auto-scaling under variable load, zero-downtime deployments with complex rollback requirements, multi-tenant isolation, and an organization large enough that different teams own different services.

These are real needs. At significant scale, K8s addresses them well. The question is whether you have those needs.

What Most Small-to-Medium Projects Actually Have

One to four services. Maybe a handful of containers. Predictable load that doesn't vary by 10x within an hour. A team of two to fifteen engineers. In this environment, Kubernetes gives you: a steep learning curve, significant operational overhead, complex networking that's hard to debug, and new failure modes that didn't exist before.

The comparison isn't "Kubernetes vs nothing." It's "Kubernetes vs the alternatives."

The Alternatives That Actually Work Better at Smaller Scale

AWS ECS — Far simpler than EKS, tightly integrated with AWS services, good auto-scaling, no Kubernetes knowledge required. I've seen teams move from EKS to ECS and reduce their infrastructure maintenance time by 60%.
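To make the contrast concrete, here's roughly what deploying a container on ECS Fargate involves: one task definition instead of Deployment + Service + Ingress + ConfigMap YAML. This is an illustrative sketch, not a drop-in config — the family name, image URI, and sizes are hypothetical placeholders.

```json
{
  "family": "web-app",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",
      "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }],
      "essential": true
    }
  ]
}
```

Register it with `aws ecs register-task-definition`, point a service at it, and AWS handles scheduling and health checks. No control plane to run, no CNI plugin to debug.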

Railway / Render / Fly.io — For teams that want containers without managing infrastructure at all. Fly.io specifically has gotten good enough at scaling that it handles real production workloads. If your team is small, pay someone else to manage the infrastructure.
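On Fly.io, the entire deployment config for a typical web service fits in a few lines of `fly.toml`. This is a hedged sketch — the app name, region, and port are illustrative, and the exact keys may vary by platform version:

```toml
# fly.toml — illustrative sketch, not a drop-in config
app = "my-web-app"            # hypothetical app name
primary_region = "ewr"

[http_service]
  internal_port = 8080        # port your container listens on
  force_https = true
  min_machines_running = 1    # keep one machine warm
```

`fly deploy` builds the image and rolls it out. Compare that to the surface area of a Kubernetes manifest set doing the same job.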

Docker Compose + a simple reverse proxy — For internal tools, lower-traffic services, or anything where "runs reliably on one server" is sufficient, Docker Compose with a well-configured Caddy or nginx proxy is genuinely robust and dramatically simpler to reason about.
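A minimal sketch of that setup, assuming a single containerized app behind Caddy (service names, ports, and the image are illustrative placeholders):

```yaml
# docker-compose.yml — minimal sketch, not a production-hardened config
services:
  app:
    image: my-app:latest        # hypothetical application image
    expose:
      - "8080"
    restart: unless-stopped
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data        # persists TLS certificates across restarts
    restart: unless-stopped
volumes:
  caddy_data:
```

The corresponding Caddyfile is two lines, and Caddy provisions TLS certificates automatically:

```
example.com {
    reverse_proxy app:8080
}
```

That's the whole stack: one file to read when something breaks, one server to SSH into.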

When You Actually Do Need Kubernetes

Somewhere north of ~50 microservices, significant traffic variability (peak 20x+ above average), large dedicated platform teams, or specific requirements around workload isolation or multi-region complexity — these are the scenarios where Kubernetes complexity is justified because the problems it solves are real and significant.

If you're asking "should we use Kubernetes?" and your traffic fits in a spreadsheet, the answer is probably not yet. Build the product first, reach the scale where simpler solutions strain, then migrate. The knowledge will still be there.