AWS Lambda Calculator
The AWS Lambda calculator is likely to become very relevant to you if your company moves to AWS. Here is a quick guide to what it is and how it works.
What is the AWS Lambda Calculator, Anyway?
In the context of AWS, Lambda is a service that allows you to run your code on demand without the need to provision servers. Lambda scales automatically, from a few requests per day to thousands per second. Given the popularity of the service, it’s entirely possible that it will be developed further so it can scale even higher in the future.
One of the nice features of AWS Lambda is that you just choose the amount of memory you want for any given function and are automatically allocated a proportionate level of CPU power. If you scale the memory up or down, then the CPU power increases or decreases in tandem with it.
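AWS doesn’t publish an exact allocation formula, but its documentation gives one anchor point: at 1,769 MB, a function gets the equivalent of one full vCPU. A minimal sketch of the proportionality, assuming simple linear scaling (the real allocation is an AWS implementation detail and may differ):

```python
def approximate_vcpus(memory_mb: int) -> float:
    """Estimate the vCPU share allocated to a Lambda function.

    Assumes linear scaling anchored at the documented point of
    1,769 MB ~= 1 full vCPU. Lambda memory is configurable between
    128 MB and 10,240 MB.
    """
    if not 128 <= memory_mb <= 10240:
        raise ValueError("Lambda memory must be between 128 MB and 10,240 MB")
    return memory_mb / 1769


# A 512 MB function gets roughly a third of a vCPU;
# doubling the memory roughly doubles the CPU share.
print(f"{approximate_vcpus(512):.2f} vCPU at 512 MB")
print(f"{approximate_vcpus(1769):.2f} vCPU at 1,769 MB")
```

This is only an estimate for capacity planning; the practical takeaway is the one in the text: memory and CPU move up and down together, so memory is the single knob you tune.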
Understanding the AWS Lambda calculator
The bad news is that the AWS Lambda calculator is basic, to put it mildly. In fact, you could reasonably question whether or not it deserves to be classed as a pricing calculator at all, given that it amounts to a drop-down menu of regions plus a list of what they charge per total number of requests and per time block. You then have to do the actual sums yourself, presumably on a proper calculator.
There are two pieces of good news for the mathematically challenged. Firstly, the calculations are so simple that they really can be done with just a basic calculator, even a cellphone calculator. Secondly, if you really want an online calculator, then there are plenty of third-party options available.
Whichever option you choose, you’ll probably find it helpful to understand the basics of AWS Lambda pricing.
Understanding AWS Lambda pricing
The whole point of AWS Lambda pricing is that you literally only pay for what you use. Admittedly, this is often highlighted as a benefit of cloud computing in general, but AWS Lambda takes it to a whole new level. With “traditional” cloud computing, you typically fire up a virtual server, add whatever resources you need, do whatever you need to do and then shut it all back down again.
At least, that’s the theory. As anyone involved in real-world cloud computing will know, in reality, what often happens is that someone spins up the servers and then forgets to shut them down again, or shuts down some of the resources and forgets about the others (like the storage). That is literally impossible with AWS Lambda. It’s either on (in use) or off (out of use). There is no “in-between” or idling and its pricing reflects this.
AWS Lambda pricing depends on region, requests and duration
AWS pricing, in general, is usually dependent on the region, so there’s nothing new there. Requests and duration are what you need to understand, along with the slight twist of “provisioned concurrency”.
For the sake of completeness, you should also be aware that standard AWS charges will also apply to your usage of AWS Lambda. For example, if you need to transfer data from another region, then the transfer will be charged at the standard EC2 data transfer rates. Likewise, if your functions access S3, then you will be charged the standard fees for the read/write requests (and the data stored in S3).
AWS Lambda requests
In the context of AWS Lambda, a request is simply an instruction to start executing code. This is typically made in the form of an event notification or an invoke call.
AWS Lambda duration
Duration is exactly what its name suggests. You start being billed from the moment your code starts executing and you continue being billed until the moment it terminates (or returns). Billing is now rounded up to the nearest 1ms (it was historically billed in 100ms blocks). The key point to note about duration in AWS Lambda is that the price per time block depends on the amount of memory you allocate to the function. It therefore pays to write efficient code.
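The request-plus-duration sums really are basic-calculator material. A sketch of the arithmetic, where the default rates are illustrative placeholders in the style of a US region (check the AWS pricing page for your region’s actual figures; the free tier is ignored here):

```python
def monthly_lambda_cost(
    invocations: int,
    avg_duration_ms: float,
    memory_mb: int,
    price_per_million_requests: float = 0.20,   # illustrative rate, not a quote
    price_per_gb_second: float = 0.0000166667,  # illustrative rate, not a quote
) -> float:
    """Estimate monthly Lambda cost: a per-request fee plus a
    duration fee measured in GB-seconds (memory x execution time)."""
    gb = memory_mb / 1024
    gb_seconds = invocations * (avg_duration_ms / 1000) * gb
    request_cost = (invocations / 1_000_000) * price_per_million_requests
    duration_cost = gb_seconds * price_per_gb_second
    return request_cost + duration_cost


# 3 million invocations a month, averaging 120 ms at 512 MB
print(round(monthly_lambda_cost(3_000_000, 120, 512), 2))  # → 3.6
```

Note how the duration term dominates: halving the average duration (or the memory, if the function allows it) cuts the bill almost in half, which is why the text says efficient code pays for itself.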
AWS Lambda provisioned concurrency
You can think of provisioned concurrency as a turbo-charge for your Lambda functions. Provisioned concurrency fees are similar to duration fees in that they are based on the amount of memory you allocate to the function and the amount of concurrency that you configure on it.
The big difference is that provisioned concurrency fees are rounded up to the nearest five minutes. If your function exceeds the configured concurrency, then the excess invocations revert to being billed at the standard rate.
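The five-minute rounding is the detail that catches people out, since even a briefly enabled configuration is billed for a full block. A minimal sketch, again with an illustrative placeholder rate rather than a real AWS price:

```python
import math


def provisioned_concurrency_cost(
    concurrency: int,
    memory_mb: int,
    enabled_seconds: float,
    price_per_gb_second: float = 0.0000041667,  # illustrative rate, not a quote
) -> float:
    """Estimate the provisioned-concurrency charge.

    Enabled time is rounded up to the nearest 5 minutes (300 s), and
    the charge scales with both memory and configured concurrency.
    """
    billed_seconds = math.ceil(enabled_seconds / 300) * 300
    gb = memory_mb / 1024
    return concurrency * gb * billed_seconds * price_per_gb_second


# 10 warm environments at 1,024 MB, enabled for 2 hours
print(round(provisioned_concurrency_cost(10, 1024, 7_200), 4))  # → 0.3
```

Standard duration and request fees still apply on top of this when the function actually runs; the figure above is only the cost of keeping the environments warm.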