Today’s serverless offerings let developers run their applications without being concerned with infrastructure or administration. Also known as event-based computing or “Functions as a Service” (FaaS), this implementation option has gained popularity for several reasons. In this article, we will explore the role of serverless offerings, the value they bring, and how you can use serverless to elevate your business.
How Did We Get Here?
Over the years, we’ve progressed from monolithic applications to microservices, and then to containers, which created the need for orchestration. Now there is serverless, which minimizes developers’ dependence on host environments, increases flexibility, and lowers overhead costs. This evolutionary path is not rigid; many development teams skip the container movement and go straight to serverless. Many applications have been migrated to the cloud, while others are newly developed with a cloud-native approach. Teams often struggle with the complexities of Kubernetes for container orchestration, which drives the search for simpler and often cheaper options, depending on the application workload. Not to mention that many engineering team leaders continuously struggle to find and keep highly qualified Kubernetes experts.
Public cloud providers began making things more accessible years ago by offering an attractive option for specific workloads. The cloud provider manages the infrastructure for these workloads, allowing architects and developers to focus on building applications for their product or service – the very thing that should be their primary focus. This gives them the speed and agility needed to be competitive in the market and to distance themselves from their competition.
If you are considering an investment in serverless, here are the advantages that serverless architectures can offer:
- Development speed – for quick prototyping and early-stage development, serverless is an attractive option. I don’t know of any small start-ups that choose more complicated implementations unless their specific workload demands it (a topic discussed in more depth below).
- Simplicity – when the same code for an application CAN be run in multiple ways, usually the easiest one wins. And if the easiest is the cheapest, and possibly the most performant – it wins hands down.
- Hands-off admin – developers specify how many executions they expect and how much memory they need, and that is the extent of their scalability concerns.
- Event-based – once you are in a cloud provider’s ecosystem, you can trigger your code to execute on any number of events: an API request (the most common), a new item in a queue, a new row in a database, and so on. This gives developers a powerful way to create robust applications by simply stitching together the various services the cloud provider offers. While this is an attractive option, be aware that it tends to tie you more closely to a specific vendor, so tread carefully here if you prefer a cloud-agnostic approach.
- Cost – the “pay-for-what-you-use” model applies here through a per-execution charge. Cloud providers document the cost formula and provide tools that help predict the monthly bill – nobody likes surprises on monthly bills. The cost per execution is so low that most workloads benefit from this model, especially compared with an “always-on” alternative in which you are billed hourly for compute capacity regardless of workload activity.
- Pick a cloud, any cloud – once considered a limited offering through a select few cloud providers, serverless capabilities are now available across all providers. If you already have a significant investment in a particular cloud provider, they have you covered as you venture into the serverless world.
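To make the event-based model concrete, here is a minimal, provider-agnostic function sketch in Python. The handler signature and event shape are illustrative assumptions (loosely modeled on the convention AWS Lambda popularized); each provider defines its own event and context objects.

```python
import json

def handler(event, context=None):
    """A minimal FaaS-style handler sketch (hypothetical event shape).

    Stateless by design: everything the function needs arrives in the
    event, and nothing is kept between invocations. The same function
    could be wired to an API request, a queue message, or a database
    trigger -- only the event contents would differ.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

In practice, the platform invokes this function once per event and scales the number of concurrent invocations automatically, which is exactly the hands-off scalability described above.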
Candidate Workloads for Serverless
Not all workloads are right for the serverless approach. The following are five characteristics that make a workload a good candidate for serverless.
- Doesn’t execute excessively – the per-execution cost is low, but as the number of executions grows, you may be better off running on dedicated compute infrastructure, especially if cost reduction is a high priority.
- Finishes quickly – apps that respond within a few seconds are good candidates. Most web apps fall into this category: an API endpoint triggers a workload, which does something with the database and returns a response. Some cloud providers allow up to 15 minutes as the maximum execution time; others offer less. Be aware that the longer your executions take, the faster your monthly costs will rise.
- Is event-triggered – as described earlier, workloads triggered by events are great candidates for this approach.
- Is stateless – serverless executions are ephemeral, so workloads must be stateless to support this model. Statelessness is the foundation of high scalability whether you use a serverless implementation or not (e.g., containers/Kubernetes).
- Has a small footprint – some applications are poor candidates because of their architecture and footprint. This includes game servers that require a massive server-side footprint and compute capability and are not built on microservices.
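The cost trade-off in the first characteristic above can be sketched with a little arithmetic. The rates and the always-on VM price below are illustrative assumptions only, not any provider’s actual pricing; check your provider’s pricing page and calculator for real numbers.

```python
def serverless_monthly_cost(requests, avg_duration_s, memory_gb,
                            price_per_gb_s=0.0000166667,
                            price_per_request=0.0000002):
    """Estimate a monthly serverless bill: compute (GB-seconds) plus a
    small per-request charge. Default rates are assumptions for
    illustration, not quoted prices."""
    compute = requests * avg_duration_s * memory_gb * price_per_gb_s
    return compute + requests * price_per_request

# Hypothetical workload: 1M requests/month, 200 ms each, 512 MB memory.
faas = serverless_monthly_cost(1_000_000, 0.2, 0.5)

# Always-on alternative: one small VM billed for ~730 hours/month at an
# assumed $0.05/hour, regardless of how idle it sits.
vm = 730 * 0.05

print(f"serverless: ${faas:.2f}/month vs always-on VM: ${vm:.2f}/month")
```

For this modest, bursty workload the per-execution model comes out far cheaper; crank the request count or duration up by a couple of orders of magnitude and the comparison flips, which is exactly why execution volume belongs in the evaluation.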
What About Vendor Lock-In?
There are ways to mitigate vendor lock-in concerns, which we will explore in depth in our next serverless article, where we will demonstrate the exact same serverless code running on multiple cloud providers. There is a trade-off between total flexibility (no vendor lock-in) and development speed and agility, and where you run your code is just one design decision in the overall technical architecture. If your application needs to run on multiple cloud providers, and maybe even on-premises, then different solutions are in order. For instance, instead of using a cloud provider’s queueing service, you could implement an open-source equivalent and manage it yourself. While this gives you the portability to run anywhere, it creates more challenges around deployment, management, and administration, all of which will slow you down.
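One common mitigation short of self-hosting everything is to put a thin interface between your business logic and the provider’s service, so only one adapter changes if you move. Here is a minimal sketch in Python; the `Queue` interface and class names are hypothetical, not part of any provider SDK.

```python
from abc import ABC, abstractmethod
from typing import Optional

class Queue(ABC):
    """Provider-neutral queue interface (hypothetical).

    Application code depends only on this abstraction; a real
    deployment would add one adapter per backend (e.g., a managed
    cloud queue or a self-hosted open-source broker).
    """
    @abstractmethod
    def send(self, message: str) -> None: ...
    @abstractmethod
    def receive(self) -> Optional[str]: ...

class InMemoryQueue(Queue):
    """Test double; a real adapter would wrap a vendor SDK instead."""
    def __init__(self):
        self._items = []
    def send(self, message: str) -> None:
        self._items.append(message)
    def receive(self) -> Optional[str]:
        return self._items.pop(0) if self._items else None

def process_next_order(queue: Queue) -> str:
    # Business logic sees only the interface, never the vendor SDK.
    msg = queue.receive()
    return f"processed {msg}" if msg else "empty"
```

The trade-off is a small amount of extra code up front in exchange for keeping the lock-in confined to a single, replaceable layer.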
I recommend evaluating the lock-in factor for each major component in the architecture and asking the following two questions:
- How much does this lock us in?
- Do we care?
Answers to these questions will help you determine the best course of action.
The serverless option is not perfect. However, it is finally getting the attention it deserves, and it is improving daily. Instead of asking ourselves, “Which apps would we run on serverless?”, the question now becomes, “Which ones would we not run this way?” Developers can now focus on building game-changing applications in record time without worrying about the infrastructure and administration required to run them. That sure sounds like a win-win. If you have questions or comments about this serverless article, or would like to speak with one of our architects directly about a serverless implementation, please fill out the form and we will respond.
About Dave Moore
Dave Moore is GAP’s Chief Innovation Officer. He is a seasoned technology executive with more than 25 years of experience in conceptualization and crafting innovative solutions that provide scalability, widespread end-user adoption, and substantially increased revenue. Dave’s experience has given him unique insight into building diverse teams, and expert knowledge of microservices, Serverless, cloud optimization, CI/CD, security, big data and open-source technologies. You can connect with Dave on LinkedIn, or send him an email.