The Problem With Serverless: A Bad Bet for Most Businesses
Published: February 12, 2025, 6:40 a.m.

Before we can discuss the problem with serverless, we have to define the serverless paradigm of product development and operations.
The serverless paradigm, as opposed to other existing paradigms, states that its standard unit of deployment is a function. In the oldest of traditional paradigms, the paradigm used for languages like C, Go, and Rust, developers write their code, compile it, and the resulting executable becomes the standard unit of deployment. That is what is uploaded to a server and started using tools like tmux, systemd, or by running it in the background using '&', among other methods. In a server-oriented paradigm like PHP, developers write PHP files and deploy them under a PHP-enabled web server, making the PHP files themselves the standard unit of deployment. In the case of serverless, the function, regardless of the programming language it is written in, becomes the standard unit of deployment.
In the oldest traditional model used for C, Go, and Rust programs, the host operating system is responsible for picking up the program for execution. In the server-oriented operations paradigm, the server, along with its engine (such as PHP's Zend Engine), is responsible for picking up the PHP file for execution. In serverless, the cloud provider embeds the function within a custom framework and finds a way to execute it. Some providers have custom engines that invoke and directly run the functions, integrating them into their environment.
That, essentially, is what the serverless development and operations paradigm is about.
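To make the definition concrete, here is a minimal sketch of a function as the unit of deployment, written in the AWS-Lambda-style `(event, context)` convention. The handler name and event shape are illustrative, not any provider's required names:

```python
import json

# A serverless-style handler: the function itself is the unit of deployment.
# The platform invokes it with a request event and a runtime context;
# there is no server process for the developer to start or manage.
def handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The developer uploads this function; the provider, not the developer, decides when and where it runs.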
Of course, there are also container-oriented paradigms and many others, but this article isn’t about development and operations paradigms in general. Our focus here is on serverless, and now that we have established its definition, we can move forward.
2: The Promises of Serverless

The leading promises of serverless revolve around three key aspects. The first is the potential reduction in operational costs for customers. The second is the elimination of the need for customers to manage the underlying infrastructure of their product. The third is the promise of effortless and infinite auto-scaling.
2.1: Cost Reduction
When a developer deploys a Go binary in the traditional manner, especially within small teams that do not have a large user base, there is often a scenario where a substantial portion of the server's resources remains underutilized. For instance, if a team rents a quad-core 4GHz server from AWS, they might find that their service fully utilizes the available CPU power only between 11 a.m. and 5 p.m., while the rest of the time, CPU usage does not exceed 10%. Over a 24-hour period, this translates to an effective utilization rate of about 32%. If they are running an M7a instance in AWS's Northern Virginia region, which costs approximately $85 per month, they are essentially using only about $27 worth of resources while the remaining $58 or so is being wasted.
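The arithmetic behind those figures can be checked directly (the hours and prices are the illustrative numbers from the example, not measured data):

```python
# Recomputing the utilization figures from the example above.
PEAK_HOURS = 6                  # 11 a.m. to 5 p.m. at ~100% CPU
OFF_HOURS = 24 - PEAK_HOURS     # remaining 18 hours at ~10% CPU
MONTHLY_COST = 85.0             # illustrative M7a-class instance price

utilization = (PEAK_HOURS * 1.0 + OFF_HOURS * 0.10) / 24
used_dollars = MONTHLY_COST * utilization
wasted_dollars = MONTHLY_COST - used_dollars

print(f"Effective utilization: {utilization:.1%}")           # ~32.5%
print(f"Used: ${used_dollars:.2f}, wasted: ${wasted_dollars:.2f}")
# roughly $27 of the $85 is actually used; about $58 is idle capacity
```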
The promise of serverless is that by using it, developers can avoid paying for idle server resources, thereby eliminating wasted expenditure.
If this were the only promise of serverless, and it did not introduce other challenges, I and many others would be its biggest advocates. However, as we will see, this is not the case.
2.2: Infrastructure Management Relegation
Another major selling point of serverless is that it absolves development teams of the responsibility of managing the infrastructure that their product runs on. The appeal of this cannot be overstated. Development teams should ideally focus on product development rather than spending their time managing infrastructure.
However, while the goal of offloading infrastructure management is admirable, the way serverless attempts to achieve it introduces new problems that can, in some cases, outweigh the benefits. Instead of being a pure solution, it often results in trading one set of problems for another.
2.3: Easy and Infinite Auto-scaling
Anyone who has built a product that gained significant user traction understands that scaling can be a challenging and painful process. The ability to auto-scale effortlessly is an attractive promise, one that could be incredibly valuable to teams looking to grow their applications seamlessly.
But as with the previous promises, there are caveats to this claim. As we will explore further in this article, the supposed ease of scaling in serverless environments often comes with hidden drawbacks.
3: The Problems with Serverless

We have examined the biggest promises made about serverless. In an ideal world where serverless did not introduce new problems while attempting to solve existing ones, it would undoubtedly be the future of product development and operations. However, the reality is different, and we need to examine the issues that arise with serverless implementations.
3.1: Poor Investment for Engineers Who Use Serverless
One of the most critical concerns with serverless lies in the fact that it is not a good investment for engineers who choose to specialize in it. The technology industry is vast, and in 2025, no single individual can master every aspect of it. The most competent engineers are those who cultivate both breadth and depth of expertise. The fastest way to rise up the competence ladder is to explore various areas of technology, master established and standardized tools, and avoid proprietary technologies that have not yet gained industry-wide adoption or assurance of longevity.
The main issues with serverless in this regard are that it is an inferior alternative to well-established paradigms like container-based hosting, it lacks standardization across different vendors, and there is no guarantee that it will remain mainstream in the next 15 to 20 years. Engineers who dedicate time to mastering serverless may find that their investment does not yield long-term dividends, especially when compared to mastering foundational technologies like networking, system administration, and containerization.
Furthermore, serverless is a highly vendor-dependent solution. Each cloud provider implements serverless differently, forcing engineers to learn multiple variations if they wish to remain flexible. This vendor lock-in results in knowledge that is less transferable across different organizations and projects.
Instead of investing in a paradigm with uncertain longevity, engineers would be better served by deepening their understanding of more robust and established technologies such as container orchestration with Kubernetes, systemd, networking, and identity and access management (IAM) systems.
3.2: Growing Complexity
Serverless initially promised simplicity by limiting what was possible with it. This meant that deploying simple projects with serverless could be easier than using traditional methods. However, as serverless providers attempted to position serverless as the future of development, they were forced to expand its capabilities. Serverless functions, which were once limited in scope, now handle multiple responsibilities and support unrestricted imports of external libraries. As a result, the simplicity that once defined serverless is beginning to fade, leading to growing complexity.
3.3: Limited Use Cases
The restriction of what is possible within the serverless paradigm is not just a challenge but a fundamental flaw. If a paradigm does not offer a net benefit over existing paradigms and simultaneously cannot be applied as broadly as its alternatives, then it begs the question of what it truly offers. Serverless, in its current form, lacks the flexibility required to be a one-size-fits-all solution.

3.4: Cold Start: Performance Penalty
A recurring theme in technology is that in trying to solve one problem, we inadvertently create others. Serverless computing is no exception.
One of the ways serverless platforms save costs is by shutting down customer workloads (specifically, functions) once they have completed a request. By deactivating dormant functions and reusing the freed infrastructure for other customers, providers can reduce their operational costs. On the surface, this model sounds like a win-win.
However, for those with experience in these systems, a significant problem quickly becomes apparent: the "cold start." As serverless functions grow more complex, the need to initialize a function from scratch when a new request comes in can lead to significant latency, sometimes up to 15 seconds. This delay results in a poor user experience, which, in the context of a paid product, might be a case of being penny-wise but pound-foolish. (Apologies for the cliché, but it conveys the point well.)
3.5: Limited Observability
Serverless providers often have to reinvent many aspects of infrastructure management. However, their implementations are rarely as mature or reliable as those developed by the broader industry over decades.
This issue is particularly evident when it comes to observability. Serverless users frequently complain about the difficulty of diagnosing issues with their functions. As serverless products continue to grow in complexity, the lack of robust monitoring tools will only become a more pressing problem for developers trying to troubleshoot or optimize their systems.
3.6: Limited Architectural Freedom
Another significant drawback of serverless is the limited architectural flexibility it imposes. Serverless platforms are highly opinionated, providing a predefined architecture that you must work within. Essentially, using serverless means adopting a specific architectural framework, with little room for deviation.
For hobby projects or portfolio-building applications, this constraint may not pose much of an issue. But when you're developing a solution with real-world implications, the inability to make architectural decisions based on your specific needs can become a serious obstacle.
This is one of the reasons I believe serverless will never become the dominant paradigm for large enterprises. While it might be useful for specific, isolated tasks, I don't foresee it replacing traditional architecture models in large-scale development and operations.
3.7: Confinement Penalty
Serverless, by definition, requires you to outsource infrastructure management to the provider of your choice. However, in the absence of any standardized approach to serverless, the implementation on one platform (say, Provider A) will likely differ significantly from that of another (Provider B). This leads to a lack of portability between providers.
The lessons of the past few decades have shown us that customers who become tied to a single provider are often at the mercy of that provider’s pricing, terms, and practices. With serverless, the inability to easily switch providers without incurring significant cost and effort makes customers vulnerable to exploitation, especially in an industry where the focus is often "profit above all."
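One hedge against this lock-in, sketched below, is to keep business logic provider-neutral and confine each platform's calling convention to a thin adapter. The `(event, context)` shape reflects the commonly documented AWS Lambda convention; the HTTP-request-style entry point stands in for providers that pass a request object, and `FakeRequest`-style objects in tests are stand-ins:

```python
import json

# Provider-neutral core logic: knows nothing about any platform's
# calling convention, so it ports unchanged between providers.
def greet(name: str) -> dict:
    return {"message": f"Hello, {name}!"}

# Thin adapter for an AWS-Lambda-style (event, context) convention.
def lambda_handler(event, context):
    body = greet(event.get("name", "world"))
    return {"statusCode": 200, "body": json.dumps(body)}

# Thin adapter for an HTTP-request-style entry point, as used by some
# other providers; `request` is any object exposing a .json dict.
def http_handler(request):
    name = request.json.get("name", "world")
    return json.dumps(greet(name))
```

Switching providers then means rewriting a few lines of adapter, not the application itself, though this discipline is something the developer must impose; the platforms do not encourage it.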

3.8: Cost Penalty
As counter-intuitive as it may sound, serverless is often more expensive in practice than it is in theory.
When companies use traditional server models (like AWS EC2 instances), they pay a fixed fee, whether they utilize the resources or not. However, for well-established businesses with well-sized infrastructure, the cost of serverless is often not worth the savings. In these cases, the savings promised by serverless are so marginal that the overall cost of switching to it can become a penalty rather than a benefit.
There are instances where serverless offers significant savings, especially for solo developers or startups with minimal traffic. For them, serverless can be an ideal solution. However, as their traffic scales, the cost quickly rises, and the additional effort to migrate off serverless platforms and refactor code can become prohibitively expensive.
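The break-even dynamic can be sketched with simple arithmetic. The per-request and per-GB-second prices below are assumptions chosen for illustration, not any provider's current rate card:

```python
# Illustrative break-even: per-invocation pricing vs. a fixed-price server.
# These prices are assumptions for the sketch, not a real rate card.
PRICE_PER_MILLION_REQUESTS = 0.20   # $ per 1M invocations
PRICE_PER_GB_SECOND = 0.0000167     # $ per GB-second of compute
FIXED_SERVER_MONTHLY = 85.0         # $ per month for an always-on instance

def serverless_monthly_cost(requests: int, mem_gb: float, secs: float) -> float:
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = requests * mem_gb * secs * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# A low-traffic project: 100k requests/month, 0.5 GB, 200 ms each.
low = serverless_monthly_cost(100_000, 0.5, 0.2)
# A busy service: 50M requests/month with the same per-request profile.
high = serverless_monthly_cost(50_000_000, 0.5, 0.2)

print(f"low traffic:  ${low:.2f}/month")    # far below the fixed server
print(f"high traffic: ${high:.2f}/month")   # now above the fixed server
```

Under these assumed prices, the low-traffic project pays pennies while the busy service already costs more than the always-on instance, before accounting for the engineering cost of migrating off the platform.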
3.9: Forced Microservice
One of the core principles of serverless is that functions should be single-purpose. While this principle promotes simplicity, it often forces developers into an extreme form of the microservices architecture. Microservices can be beneficial in some cases, but they come with substantial overhead, particularly for medium- to large-scale applications.
The complexities introduced by microservices, such as networking challenges, increased latency, and the overhead of managing multiple services, often outweigh the benefits, especially for smaller teams or projects.
It's also important to clarify the common confusion between Service-Oriented Architecture (SOA) and microservices. Many developers say "microservices" when they actually mean SOA, and the differences are significant. If you're going to advocate for microservices, be sure that you're discussing them in the correct context, as the term is often misapplied.
In short, serverless enforces a microservices-based architecture that can be burdensome, particularly for projects that do not require such an intricate design.
3.10: Development Headache
Another issue with serverless is the lack of local testing environments. Since serverless platforms are proprietary and managed by third-party providers, developers cannot replicate the entire platform on their local machines. As a result, they must deploy their code to the cloud for full testing, which can be time-consuming and inefficient, especially for medium- to large-scale projects.
This challenge further undermines the claim that serverless simplifies development. In practice, for any project of significant scale, serverless can become a major inconvenience and a source of development headaches.
3.N: Looking At It All
After examining these issues, it’s clear that serverless is not the panacea it’s often claimed to be. That said, this does not mean that serverless has no place in the tech ecosystem. Like any technology, it serves specific use cases well, particularly in niche areas. I believe it will persist, but it will likely remain confined to particular domains rather than becoming a universal solution.
Serverless is not the cost-saving, hassle-free innovation it’s often portrayed as.
It’s not cheap.
While it does offer easy auto-scaling, this benefit often comes with a host of hidden challenges. And the developers who benefit most from it, indie hackers and solo engineers working on smaller projects, are unlikely to need the advanced autoscaling features that serverless provides in the first place.
Finally, serverless may simplify infrastructure management for small teams, but it does so in a way that introduces more problems than it solves. For those looking for a better solution to infrastructure management, consider alternatives like DOrch Starter and DOrch Pro (when it launches), which provide a more effective approach without the trade-offs serverless demands.
Why Are People Using Serverless?

1: Lack of Resources
One of the most understandable reasons people turn to serverless is due to limited financial resources. Many serverless providers offer free tiers that enable users to host sizable projects without incurring costs. For individuals or small teams with tight budgets, this makes serverless an attractive option, often offering deals they wouldn’t get from other providers.
2: Helpful for Specific Use Cases
Some users have niche problems where serverless has proven to be particularly effective. However, these users represent a small fraction of the overall serverless community. Even in these cases, serverless is rarely the primary development and operational paradigm; it serves a specific need rather than being a catch-all solution.
3: Shiny Object Syndrome
A significant portion of serverless users is driven by a kind of "shiny object syndrome." These are the people who gravitate toward serverless because it seems like the next big thing. Unfortunately, this group forms one of the largest segments of serverless adopters, often without fully understanding the long-term implications of their choice.
4: Herd Mentality
Then there are those who aren’t necessarily seeking out the next flashy technology, but rather feel compelled to adopt it simply because it’s seen as the new trend. This herd mentality often leads developers to jump on the serverless bandwagon, influenced by what they perceive to be the latest industry shift, even if it’s not the best fit for their specific needs.
5: Brainwashing
Another group of serverless users is the result of aggressive marketing campaigns by serverless providers. These users are sold a vision of serverless that doesn’t always align with reality. Lacking the tools or experience to critically evaluate these claims, they fall victim to marketing-driven hype and end up adopting serverless without fully understanding its limitations.
Final Words

Serverless is often touted as the "future of computing," but in reality, it's simply a paradigm that can be effective in specific problem domains. This isn't to say the challenges serverless aims to address aren't important; quite the opposite, in fact. These are issues worth solving, which is why we've dedicated an entire product brand, DOrch, to tackling them. While we're not perfect yet, we're proud of the progress we've made, and we encourage you to check out DOrch and see how we're addressing these challenges. We also believe that newer, container-focused hosting solutions from other providers are strong contenders for the future of product operations.

Brian is the Founder of DOrch, a Product Infrastructure Brand. Prior to founding DOrch, he was a Principal Architect. He was also a critical player in the exponential growth of many businesses around the world, with many success stories to draw from.