Presenting on Serverless Computing - your next step in Cloud

Senior Consultant Carl Douglas presenting on Serverless Computing at an Equinox IT Client Briefing

Cloud computing continues to evolve, with many of the major providers now offering Serverless Computing, also often referred to as Functions-as-a-Service. To help make sense of how Serverless Computing works and what it offers, I presented an overview to Equinox IT clients and team members. Based on my experience so far with Cloud-deployed applications, I'm excited by the potential use cases and the advantages that Serverless Computing has to offer.

What is Serverless Computing?

The term Serverless Computing does not entail the disappearance of server infrastructure altogether. Rather, the premise is that developers, operators and owners of Cloud-based applications no longer need to think about servers. Serverless Computing involves full automation of the provisioning and scaling of the underlying infrastructure: virtual machines, operating systems, containers and container orchestration. More than commoditisation of infrastructure, it is near-complete automation of infrastructure.

Another term often used when talking about Serverless Computing is Functions-as-a-Service (FaaS). FaaS can be viewed as a step in the evolution of Platform-as-a-Service, adding a language runtime (such as Python, Java, JavaScript, C# or Go) and event-driven invocation of code written in those languages. FaaS is a meaningful term in the sense that an application can be constructed out of a collection of functions, and the unit of deployment can be as small as a single function.
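
As a rough sketch, a cloud function can be a single handler that receives an event and returns a result. The example below assumes an AWS Lambda-style Python runtime and an HTTP event delivered via an API Gateway-style integration; the names are illustrative only.

```python
import json

def handler(event, context):
    """Entry point invoked by the platform for each event.

    'event' carries the triggering payload (here assumed to be an HTTP
    request) and 'context' carries runtime metadata from the provider.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello, " + name}),
    }
```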

One of the characteristics of Serverless Computing is that, as far as the developer is concerned, there are no long-lived processes. A Serverless or Cloud Function is invoked by an event, and the function handling that event returns without accumulating any local state. Statelessness in this context covers both memory and disk: although temporary disk space is typically available, it may not persist between invocations of a function. Of course, a function may use another service, such as a database, to update or retrieve application data.
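
To illustrate the point, here is a sketch of a stateless handler, assuming AWS Lambda with Python and a hypothetical DynamoDB table named "orders": any state worth keeping is written to an external service rather than held in memory or on the function's temporary disk.

```python
import os
import uuid

import boto3  # AWS SDK for Python

# "orders" is an illustrative table name; in practice it would come
# from deployment configuration.
table = boto3.resource("dynamodb").Table(os.environ.get("ORDERS_TABLE", "orders"))

def handler(event, context):
    """Handle one event; any state worth keeping goes to an external store."""
    order_id = str(uuid.uuid4())
    # Nothing held in memory or written to local temporary disk is
    # relied upon to survive beyond this invocation.
    table.put_item(Item={"order_id": order_id, "payload": str(event)})
    return {"order_id": order_id}
```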

Other characteristics include automated provisioning of infrastructure and automated scaling of that infrastructure based on demand. This means that as demand increases with the arrival of events, the underlying infrastructure is scaled out to service that load.

As hinted above, given that functions are consumers of events, another characteristic is that Serverless Computing naturally implies an Event-Driven Architecture. Functions may also be producers of events, and so communication between functions and other systems is achieved by patterns of event notification.
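
The sketch below, again assuming AWS Lambda with Python and a hypothetical SNS topic, shows a function acting as both a consumer and a producer of events: it handles an incoming event and publishes a notification for downstream consumers.

```python
import json
import os

import boto3  # AWS SDK for Python

sns = boto3.client("sns")
# Hypothetical topic that downstream functions or systems subscribe to.
TOPIC_ARN = os.environ["ORDER_EVENTS_TOPIC_ARN"]

def handler(event, context):
    """Consume an incoming event and emit a follow-on notification."""
    sns.publish(
        TopicArn=TOPIC_ARN,
        Message=json.dumps({"source": "order-processor", "detail": str(event)}),
    )
    return {"published": True}
```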

Benefits

A primary benefit of Serverless Computing is the ease of deploying applications, as there is no need to provision infrastructure or to configure applications for specific infrastructure requirements. Frameworks such as serverless.com simplify this further through tooling and templates for specific providers, so that a deployment can be as simple as running a single command.

Another significant benefit of Serverless Computing is that providers charge per invocation of a cloud function and for the time the function executes. An unused function has zero compute cost.

Having scalable, available infrastructure means never being under or over capacity. For applications where demand is difficult to forecast, this takes the guesswork out of capacity planning.

Use cases

A few use cases come to mind now that we have some understanding of Serverless Computing and its benefits. For example, it gives us the ability to rapidly build client-specific APIs, Back Ends for Front Ends (such as RESTful endpoints), callbacks for SaaS applications, and scheduled jobs triggered by timers.

Wider implications

Serverless Computing is coming of age in the era of Continuous Deployment, and its typical deployment model benefits from maturity with Continuous Deployment practices.

Serverless Computing assumes an Event-Driven Architecture. An existing application may not be compatible with this style of architecture, and developers may not be familiar with it.

Serverless represents a paradigm shift, requiring a mindset change much greater than that required by containerisation, for example.

Providers

There are many providers who offer Serverless Computing: Amazon Web Services Lambda, Microsoft Azure Functions, Google Cloud Functions, IBM OpenWhisk, Fn, Webtask, Spotinst, Kubeless, Alibaba Cloud Function Compute, and many more.

Note that the function signature used by the providers is not universally shared, and so code is not directly portable from one provider to another. However, the CloudEvents initiative aims to standardise the specification of an event data structure. There are also frameworks, such as serverless.com, that can generate boilerplate code for some of the providers.
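
As a rough illustration, the CloudEvents approach describes an event with a small set of standard attributes alongside the payload. The attribute names below follow the CloudEvents specification; the values are made up.

```python
# A CloudEvents-style event expressed as a plain Python dictionary.
# Attribute names follow the CloudEvents specification; values are
# illustrative only.
event = {
    "specversion": "1.0",
    "id": "a7f3c2e0-0001",
    "source": "/orders/service",
    "type": "com.example.order.created",
    "time": "2018-06-01T12:00:00Z",
    "datacontenttype": "application/json",
    "data": {"order_id": "12345", "total": 42.50},
}
```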

serverless.com also offers an Event Gateway that can translate and forward events to specific providers.

Limitations

There are typically default concurrency limits on the scale of Serverless Computing; AWS Lambda, for example, has a default limit of 1000 concurrent executions, which can be increased on request. Concurrency can also be reserved per function; for example, reserving 10 concurrent executions for a function causes throttling on the 11th simultaneous invocation. This is a partial mitigation for a Denial of Service attack and also offers some cost control.
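
As an illustrative sketch, reserved concurrency can be set per function through the provider's API; the example below uses the AWS SDK for Python (boto3) with a hypothetical function name.

```python
import boto3  # AWS SDK for Python

lambda_client = boto3.client("lambda")

# Reserve 10 concurrent executions for a hypothetical function;
# the 11th simultaneous invocation would then be throttled.
lambda_client.put_function_concurrency(
    FunctionName="order-processor",
    ReservedConcurrentExecutions=10,
)
```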

There is also a default maximum execution time, which on Lambda is 5 minutes, at which point an executing function may be terminated by the provider.

Key points

Serverless Computing is a milestone in the commoditisation of automated infrastructure. It is well suited to enterprises that have a degree of maturity with Continuous Delivery. It implies an Event-Driven Architecture, which means applications may need to be adapted and development teams familiarised with this style of architecture and the relevant integration patterns.

Summary

Ahead of transitioning to Serverless Computing, it would seem prudent to improve Continuous Delivery practices and to consider the impact of integrating legacy applications with other systems in an Event-Driven Architecture style, so that the benefits can be better realised.

Carl Douglas is a Senior Consultant specialising in mobile and Cloud architecture and applications, based in Wellington.
