Today, serverless architecture is one of the hottest trends in cloud computing. With benefits such as reduced costs, high scalability, and strong performance, serverless has become a perfect match for modern microservices applications. Its implementation, though, can be tricky. Keep reading to find out what a serverless application looks like and how to maintain and scale it properly.
To illustrate the serverless architecture, let’s take an application built with AWS Lambda. This AWS serverless application consists of a web server, a FaaS (Function as a Service) layer, a Security Token Service (STS), a user authentication service, and a database.
As for the FaaS layer, the app uses AWS Lambda, which executes functions in response to calls from the client or other services. These Lambda functions are the core of this AWS serverless app. The STS generates temporary keys that the client app uses to call the API. For user authentication, the app relies on AWS Cognito; with its support for email login and various social networks, users can sign up and log in easily. Finally, to manage and store data, the app uses DynamoDB, a non-relational database provided by Amazon.
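To make this concrete, here is a minimal sketch of what one such Lambda function might look like. The `tasks` table, the event shape, and the injected `table` parameter are illustrative assumptions, not part of the app described above; in production, `table` would be a boto3 DynamoDB `Table` resource.

```python
import json

def lambda_handler(event, context, table=None):
    """Handle an API Gateway request for a single task by id.

    `table` is injected to keep the handler testable; in a deployed
    function it would be a boto3 DynamoDB Table resource for a
    hypothetical "tasks" table.
    """
    task_id = (event.get("pathParameters") or {}).get("id")
    if not task_id:
        # API Gateway proxy responses carry a status code and a JSON body.
        return {"statusCode": 400, "body": json.dumps({"error": "missing id"})}

    item = table.get_item(Key={"id": task_id}).get("Item") if table else None
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(item)}
```

Each function stays small and single-purpose, which is what makes the FaaS model easy to scale but, as discussed below, harder to monitor as the number of functions grows.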
You can choose any serverless vendor: Google’s Cloud Functions, Microsoft’s Azure Functions, Apache OpenWhisk, Spring Cloud Functions, or Fn Project. But at Relevant, we prefer working with AWS Lambda. Let’s learn how to manage a serverless environment with this FaaS.
The hidden danger of using AWS Lambda is the difficulty of managing a serverless environment. With many small functions running concurrently, you end up with a very complex system. That's why you need to arm your development team with tools for monitoring functions and the system's performance. The metrics you should pay attention to include the number of invocations, errors, duration, throttles, and concurrent executions.
You can monitor each function with the help of Amazon CloudWatch. The service automatically stores these metrics and displays them on the monitoring page of the AWS Lambda console.
Apart from CloudWatch, you can use AWS Lambda with AWS X-Ray. The tool helps you trace requests to identify performance issues.
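One lightweight pattern that complements these tools is logging a structured duration metric from the handler itself, which CloudWatch Logs can then filter and graph. The sketch below is an assumption about how you might instrument a handler; CloudWatch already records duration automatically, so this is only for custom, application-level timing.

```python
import functools
import json
import time

def timed(handler):
    """Wrap a Lambda handler so each invocation logs its wall-clock
    duration as one structured JSON line (easy to query in
    CloudWatch Logs)."""
    @functools.wraps(handler)
    def wrapper(event, context):
        start = time.perf_counter()
        try:
            return handler(event, context)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            # print() goes to CloudWatch Logs when run inside Lambda.
            print(json.dumps({"metric": "handler_duration_ms",
                              "value": round(elapsed_ms, 2)}))
    return wrapper

@timed
def handler(event, context):
    # Placeholder business logic for the sketch.
    return {"statusCode": 200}
```

The decorator adds no behavior change to the handler itself, so it can be applied uniformly across all functions in the system.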
With AWS Lambda, you pay only for the resources you use. In other words, the service charges you only for the duration of code execution and the number of requests to your functions. AWS Lambda charges $0.20 per one million requests and $0.00001667 per GB-second used.
A request is each code execution triggered by the client or other AWS services, so the more requests your functions handle, the more you'll have to pay. Duration spans from the moment your code starts executing to the moment it returns or is terminated, measured in milliseconds and rounded up to the nearest 100 ms. Note that execution time depends on many factors, such as third-party dependencies and the language runtime.
In the end, the price also depends on the amount of memory you allocate to the function: the cost per 100 ms scales with the memory size.
The good news is that AWS Lambda has a free usage tier that doesn't expire with time. It includes 1M free requests and 400,000 GB-seconds of compute time per month.
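The pricing rules above can be sketched as a small cost estimator. It uses only the figures quoted in this article ($0.20 per million requests, $0.00001667 per GB-second, 100 ms rounding, and the free tier); actual AWS bills may differ, so treat this as an illustration of the formula, not a billing tool.

```python
import math

# Figures quoted above (USD); check current AWS pricing before relying on them.
PRICE_PER_MILLION_REQUESTS = 0.20
PRICE_PER_GB_SECOND = 0.00001667
FREE_REQUESTS = 1_000_000       # monthly free tier
FREE_GB_SECONDS = 400_000       # monthly free tier

def billed_gb_seconds(duration_ms: float, memory_mb: int) -> float:
    """GB-seconds for one invocation: duration rounded up to the
    nearest 100 ms, scaled by the memory allocated to the function."""
    billed_ms = math.ceil(duration_ms / 100) * 100
    return (billed_ms / 1000) * (memory_mb / 1024)

def monthly_cost(requests: int, avg_duration_ms: float, memory_mb: int) -> float:
    """Estimated monthly bill after subtracting the free tier."""
    gb_seconds = requests * billed_gb_seconds(avg_duration_ms, memory_mb)
    request_cost = (max(0, requests - FREE_REQUESTS) / 1_000_000
                    * PRICE_PER_MILLION_REQUESTS)
    compute_cost = max(0.0, gb_seconds - FREE_GB_SECONDS) * PRICE_PER_GB_SECOND
    return request_cost + compute_cost
```

For example, a 130 ms invocation is billed as 200 ms, and a function with 1024 MB of memory handling 3 million 100 ms requests per month stays inside the compute free tier, paying only for the 2 million requests beyond the request free tier.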
At Relevant, we help companies around the world build their products and scale engineering teams, including designing and building serverless applications.
Let’s look at our two projects where we successfully implemented serverless architecture on AWS.
24OnOff is a platform that minimizes paperwork for construction companies by helping them with time tracking and project management. We divided the system's features into separate modules and created a serverless app with AWS. We also set up a monitoring system that covers all endpoints to ensure the stability of HTTP requests. Additionally, it tracks server resources such as CPU usage, memory consumption, and network traffic to improve capacity planning and reliability.
FirstHomeCoach is a SaaS platform that helps UK citizens buy property. It connects users with advisors who help them secure a mortgage, get insurance, and handle all legal paperwork. Our team designed the app’s serverless architecture and built the system from scratch. We established microservices, which created the necessary isolation between the application server and business processes. To speed up the development, we enabled static typing. Also, we developed a Node.js-based cluster module that allows the app to use the full power of the CPU.
If you don't want system evolution and technical debt to cause you trouble, design your serverless architecture with maintainability in mind. Pay attention to the following characteristics of the system:
For a serverless app, scaling is usually automatic and managed by the cloud vendor, and it works both up and down. That means you don't have to worry about increased user traffic and server load, nor about over-provisioning if your serverless application handles only occasional requests. You always use exactly the resources you need. Say goodbye to idle servers!
Whenever the volume of traffic changes, your serverless app auto-scales almost instantly. The trick is that every vendor imposes limits on RAM, CPU, and I/O operations, so beware of those. AWS Lambda, for instance, offers up to 3 GB of RAM.
We all know serverless architecture's benefits: system stability, high app performance and code quality, reduced time to market, low cost, and great scalability. That is why more and more companies choose to move to serverless rather than maintain their own server infrastructure.
If you’re ready to reap serverless benefits, contact Relevant. We’ll be glad to help you scale your team and implement serverless architecture on AWS.