The first time I built something significant with AWS Lambda I felt like I'd discovered a secret. No servers to manage. Infinite scaling built in. Pay for exactly what you use, down to the millisecond. It seemed almost too good to be true.
It wasn't too good to be true, exactly. But it had more rough edges than the blog posts I'd been reading had let on.
The 3am Incident
We had a Lambda function that processed customer data exports. It ran fine for months on small datasets. Then a customer tried to export two years of data — about 800MB of records — and the function hit the 15-minute maximum execution timeout with 60% of the data processed. The customer got a partial export with no error message. We found out about it the next morning from a support ticket.
The 15-minute Lambda timeout exists for a reason and it's a hard ceiling. Workloads that can run long need to be designed around it from the start — either by chunking the work, using Step Functions to coordinate multiple invocations, or using a different compute primitive entirely. We'd simply not thought about it.
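The chunked approach can be sketched as a handler that processes one bounded batch per invocation and hands back a cursor, with something like Step Functions driving the loop. This is a minimal local sketch, not our production code; `process_export_chunk`, `BATCH_SIZE`, and the driver loop are all hypothetical stand-ins:

```python
# Sketch: bound each invocation's work so no single run can approach the
# 15-minute ceiling. Names here are illustrative, not a real API.

BATCH_SIZE = 3  # in production, sized so one batch finishes well under 15 minutes

def process_export_chunk(records, cursor=0):
    """Process one bounded chunk; return (results, next_cursor).

    next_cursor is None when the export is complete. In a real deployment
    each call would be a separate Lambda invocation, chained by a Step
    Functions state machine (or by the function re-invoking itself with
    the cursor in the payload).
    """
    chunk = records[cursor:cursor + BATCH_SIZE]
    results = [r.upper() for r in chunk]  # stand-in for real export work
    next_cursor = cursor + len(chunk)
    done = next_cursor >= len(records)
    return results, (None if done else next_cursor)

# Driver loop standing in for the Step Functions state machine:
records = ["a", "b", "c", "d", "e", "f", "g"]
exported, cursor = [], 0
while cursor is not None:
    out, cursor = process_export_chunk(records, cursor)
    exported.extend(out)
```

The key property is that progress lives in the cursor, not in a single long-running process: a timeout or crash loses at most one batch, and the customer either gets a complete export or a retryable failure, never a silent partial one.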
What Serverless Is Actually Great At
Event-driven, short-duration, variable-volume workloads. Image resizing on upload, webhook processing, scheduled data sync jobs, API backends with unpredictable traffic patterns. For these use cases, Lambda is genuinely excellent. You pay nothing when it's idle, it scales to thousands of concurrent executions automatically, and you don't manage any infrastructure.
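The programming model for these workloads is small: one event in, one result out. A minimal sketch of a webhook-style handler, using the API Gateway proxy event shape (the `body`/`statusCode` fields are real API Gateway conventions, but the business logic is a placeholder):

```python
import json

def handler(event, context=None):
    """Entry point in the shape Lambda expects: one event in, one result out.

    The event mimics an API Gateway proxy integration event; the work done
    per event is deliberately small and focused.
    """
    body = json.loads(event.get("body") or "{}")
    order_id = body.get("order_id", "unknown")  # placeholder business logic
    return {
        "statusCode": 200,
        "body": json.dumps({"received": order_id}),
    }

# Local smoke test with a synthetic event:
resp = handler({"body": json.dumps({"order_id": "ord_123"})})
```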
The event-triggered model also naturally produces good architecture. Your functions stay focused (because they time out if they don't) and you think carefully about asynchronous design because you have to.
Where It Gets Complicated
Cold starts: when no warm execution environment is available, Lambda has to initialise the function from scratch before handling the invocation. For lightweight Node.js or Python functions this is usually 100-300ms. For Java or .NET functions, it can be 1-3 seconds. For latency-sensitive, user-facing APIs this shows up as noticeable tail latency. Provisioned Concurrency fixes it but costs money and partially defeats the "pay per use" value proposition.
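One mitigation that costs nothing: do expensive setup at module scope, where it runs once per cold start and is reused by every warm invocation in the same execution environment, rather than inside the handler. A sketch (the "expensive" init here is a stand-in for things like SDK clients or parsed config):

```python
import time

# Module-level code runs during the cold start only. SDK clients, config,
# connection pools, and parsed models initialised here are reused across
# warm invocations in the same execution environment.
_EXPENSIVE_CONFIG = {"loaded_at": time.time()}  # stand-in for e.g. boto3 clients

def handler(event, context=None):
    # The handler body stays cheap: it only reuses already-initialised state.
    return {"config_loaded_at": _EXPENSIVE_CONFIG["loaded_at"], "ok": True}

first = handler({})
second = handler({})
# Both invocations see the same init timestamp: setup ran exactly once.
```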
Observability: Debugging distributed serverless architectures is genuinely harder than debugging a monolith or even a well-instrumented microservice. CloudWatch logs across hundreds of function invocations are hard to correlate. You need to invest in tracing (AWS X-Ray or a third-party APM) early, not as an afterthought.
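Even before adopting a full tracing product, structured logs with a correlation ID threaded through every invocation make the CloudWatch correlation problem tractable. A hedged sketch, with hypothetical field names (`correlation_id`, `export.started`):

```python
import json
import uuid

def log(correlation_id, message, **fields):
    """Emit one structured JSON log line; Lambda ships stdout to CloudWatch,
    where CloudWatch Logs Insights can filter and join on correlation_id."""
    line = {"correlation_id": correlation_id, "message": message, **fields}
    print(json.dumps(line))
    return line

def handler(event, context=None):
    # Reuse an upstream ID if the caller sent one; mint one otherwise, and
    # pass it along in the response so downstream functions can continue it.
    cid = event.get("correlation_id") or str(uuid.uuid4())
    log(cid, "export.started", records=event.get("record_count", 0))
    # ... real work would happen here ...
    return {"correlation_id": cid}

resp = handler({"correlation_id": "abc-123", "record_count": 42})
```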
Vendor lock-in: Lambda uses AWS-specific event source structures, IAM, environment configuration, and has no portable local development story. Migrating away later is a real project, not a configuration change.
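The lock-in can't be eliminated, but it can be contained: keep business logic free of Lambda-shaped events so a later migration only rewrites a thin adapter. A minimal sketch of the split, with hypothetical function names (`export_records`, `lambda_handler`):

```python
import json

def export_records(customer_id: str, since: str) -> dict:
    """Pure business logic: no Lambda event shapes, no AWS SDK calls.
    This is the part that would move unchanged to another platform."""
    return {"customer_id": customer_id, "since": since, "status": "queued"}

def lambda_handler(event, context=None):
    """Thin adapter: translate the AWS-specific event, then delegate.
    Only this layer would need rewriting on a migration."""
    body = json.loads(event.get("body") or "{}")
    result = export_records(body["customer_id"], body["since"])
    return {"statusCode": 202, "body": json.dumps(result)}

resp = lambda_handler(
    {"body": json.dumps({"customer_id": "c1", "since": "2024-01-01"})}
)
```

The same `export_records` function can also be exercised directly in unit tests, which partially compensates for the weak local development story.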
My Current Position
Serverless is not a silver bullet and it's not a mistake. It's a tool. Use it for what it's good at — event-driven, stateless, short-duration workloads with variable traffic — and don't try to run your entire application on it. The teams I've seen struggle with serverless are almost always teams that reached for it as a default rather than because it fit the specific workload.