What happens when you exceed the RAM allocation of an AWS Lambda?


TL;DR - it stops very abruptly, so make sure you monitor the Lambda `Errors` CloudWatch metric.


I'd heard a rumour that when a Lambda ran out of RAM, it didn't log a message or emit a metric. That didn't sound right, so I thought I'd try it out for myself.


I wrote a Lambda with a RAM limit of 128MB that purposefully exceeded it, and deployed it. [0]


[0]
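

The deployed code is behind the link above; a minimal sketch of that kind of handler, built on the `github.com/aws/aws-lambda-go/lambda` runtime, looks something like this (the 1MB chunks and the log messages match the output shown further down, the rest is illustrative):


package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/lambda"
)

// handler allocates memory 1MB at a time, holding on to every chunk so the
// garbage collector can't reclaim anything, until the 128MB limit is exceeded.
func handler(ctx context.Context) (string, error) {
	fmt.Println("I'm about to use up a lot of RAM...")
	var chunks [][]byte
	for i := 1; ; i++ {
		chunk := make([]byte, 1024*1024)
		// Touch every byte so the allocation is actually committed.
		for j := range chunk {
			chunk[j] = 1
		}
		chunks = append(chunks, chunk)
		fmt.Printf("%dMB consumed\n", i)
	}
}

func main() {
	lambda.Start(handler)
}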


Then, I executed it via the AWS CLI.


aws lambda invoke \
--invocation-type RequestResponse \
--function-name oom-dev-memory \
--region eu-west-2 \
--log-type Tail \
--payload "{}" \
output.txt

Next, I checked the Lambda dashboard, where I could clearly see the failed invocation, showing that a CloudWatch metric is recorded for the failure.


Lambda Dashboard Display


[screenshot]


CloudWatch Metrics


[screenshot]
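

The dashboard is reading the `Errors` metric from the `AWS/Lambda` namespace, so the same numbers can also be pulled programmatically. A rough sketch using the AWS SDK for Go v1 is below - the one-hour window and 5 minute period are arbitrary choices of mine:


package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatch"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("eu-west-2")}))
	svc := cloudwatch.New(sess)

	// Sum the Errors metric for the function over the last hour, in 5 minute buckets.
	out, err := svc.GetMetricStatistics(&cloudwatch.GetMetricStatisticsInput{
		Namespace:  aws.String("AWS/Lambda"),
		MetricName: aws.String("Errors"),
		Dimensions: []*cloudwatch.Dimension{
			{Name: aws.String("FunctionName"), Value: aws.String("oom-dev-memory")},
		},
		StartTime:  aws.Time(time.Now().Add(-1 * time.Hour)),
		EndTime:    aws.Time(time.Now()),
		Period:     aws.Int64(300),
		Statistics: []*string{aws.String("Sum")},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, dp := range out.Datapoints {
		fmt.Printf("%v: %.0f errors\n", dp.Timestamp, *dp.Sum)
	}
}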


Execution Logs


The first time I ran the Lambda, I'd forgotten to increase the maximum execution time from the Serverless Framework's default of 6 seconds. Even though it had used up a little more than the 128MB of RAM I'd allocated, the failure was clearly logged as a timeout.
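

For reference, both of those limits are set in the Serverless Framework's `serverless.yml`. A sketch of the relevant parts is below - the service and function names are inferred from `oom-dev-memory`, while the handler path and the new timeout value are my own assumptions:


service: oom

provider:
  name: aws
  runtime: go1.x
  region: eu-west-2
  memorySize: 128  # keep the low memory limit

functions:
  memory:
    handler: bin/memory  # assumed build output path
    timeout: 30          # raised from the 6 second default so the memory limit is hit first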


Once I increased the timeout, the log entries made it clear that `Process exited before completing request` is what gets written when the function runs out of RAM, while `Task timed out after 6.00 seconds` is written after a timeout.


START RequestId: 16f8d3a2-26fc-11e8-bca6-73dbc61c39e7 Version: $LATEST
I'm about to use up a lot of RAM...
1MB consumed
2MB consumed
3MB consumed
4MB consumed
5MB consumed
6MB consumed
7MB consumed
8MB consumed
9MB consumed
10MB consumed
11MB consumed
12MB consumed
13MB consumed
14MB consumed
15MB consumed
16MB consumed
17MB consumed
18MB consumed
19MB consumed
20MB consumed
21MB consumed
22MB consumed
23MB consumed
24MB consumed
25MB consumed
26MB consumed
27MB consumed
28MB consumed
29MB consumed
30MB consumed
31MB consumed
32MB consumed
33MB consumed
34MB consumed
35MB consumed
2018/03/13 20:21:44 unexpected EOF
2018/03/13 20:21:44 unexpected EOF
END RequestId: 16f8d3a2-26fc-11e8-bca6-73dbc61c39e7
REPORT RequestId: 16f8d3a2-26fc-11e8-bca6-73dbc61c39e7	Duration: 24792.85 ms	Billed Duration: 24800 ms Memory Size: 128 MB	Max Memory Used: 129 MB
RequestId: 16f8d3a2-26fc-11e8-bca6-73dbc61c39e7 Process exited before completing request

If the difference between the failure reasons is important, it would be easy enough to write a CloudWatch log extractor that tells out-of-memory failures apart from timeouts.
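

A rough sketch of that kind of extractor, again using the AWS SDK for Go v1, counts how often each failure message appears in the function's log group - the `/aws/lambda/oom-dev-memory` log group name follows the standard Lambda convention, and the 24 hour window is arbitrary:


package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatchlogs"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("eu-west-2")}))
	svc := cloudwatchlogs.New(sess)

	// CloudWatch Logs expects times in milliseconds since the epoch.
	start := time.Now().Add(-24*time.Hour).UnixNano() / int64(time.Millisecond)

	patterns := map[string]string{
		"out of memory": `"Process exited before completing request"`,
		"timeout":       `"Task timed out"`,
	}
	for name, pattern := range patterns {
		count := 0
		err := svc.FilterLogEventsPages(&cloudwatchlogs.FilterLogEventsInput{
			LogGroupName:  aws.String("/aws/lambda/oom-dev-memory"),
			FilterPattern: aws.String(pattern),
			StartTime:     aws.Int64(start),
		}, func(page *cloudwatchlogs.FilterLogEventsOutput, lastPage bool) bool {
			count += len(page.Events)
			return true // keep paging
		})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: %d\n", name, count)
	}
}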


The `unexpected EOF` lines weren't written by my Lambda; they look like they come from the Go Lambda runtime, since they use the default console log format of Go's `log` package.
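

For comparison, this is the output format that Go's `log` package produces with its default flags:


package main

import "log"

func main() {
	// With the default flags (log.LstdFlags), each line is prefixed with the
	// local date and time, e.g. "2018/03/13 20:21:44 unexpected EOF".
	log.Println("unexpected EOF")
}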


