Azure Functions V2 Is Released, How Performant Is It?

Azure Functions major version 2.0 was released into GA a few days back during Microsoft Ignite. The runtime is now based on .NET Core and thus is cross-platform and more interoperable. It has a nice extensibility story too.

In theory, the .NET Core runtime is leaner and more performant. But when I last checked back in April, the preview version of Azure Functions V2 had some serious issues with cold start durations.

I decided to give the new and shiny version another try and ran several benchmarks. All tests were conducted on Consumption plan.

TL;DR: it's not perfect just yet.

Cold Starts

Cold starts happen when a new instance handles its first request; see my other posts: one, two, three.

Hello World

The following chart gives a comparison of V1 vs V2 cold starts for the two most popular runtimes: .NET and Javascript. The dark bar shows the most probable range of values, while the light ones are possible but less frequent:

Cold Starts V1 vs V2: .NET and Javascript

Apparently, V2 is slower to start for both runtimes. V2 on .NET is slower by 10% on average and seems to have higher variance. V2 on Javascript is massively slower: about 2x on average, and the slowest startup goes above 10 seconds.

Dependencies On Board

The values on the previous chart were calculated for Hello-World-type functions with no extra dependencies.

The chart below shows two more Javascript functions, this time with a decent number of dependencies:

  • Referencing 3 NPM packages - 5 MB zipped
  • Referencing 38 NPM packages - 35 MB zipped

Cold Starts V1 vs V2: Javascript with NPM dependencies

V2 clearly loses on both samples, but the V2-V1 difference seems to stay within 2.5-3 seconds regardless of the number of dependencies.

All the functions were deployed with the Run-from-Package method which promises faster startup times.


Java

Functions V2 comes with a preview of a new runtime: Java / JVM. It utilizes the same extensibility model as Javascript, and thus Java seems to be a first-class citizen now.

Cold starts are not first-class though:

Cold Starts Java

If you are a Java developer, be prepared for 20-25 seconds of initial startup time. That will probably improve by the time the Java runtime becomes generally available.

Queue Processor

Cold starts are most problematic for synchronous triggers like HTTP requests. They are less relevant for queue-based workloads, where scale-out is of higher importance.

Last year I ran some tests around the ability of Functions to keep up with variable queue load: one, two.

Today I ran two simple tests to compare the scalability of V1 vs. V2 runtimes.


In my first test, a lightweight Javascript Function processed messages from an Azure Storage Queue. For each message, it just paused for 500 ms and then completed, simulating an I/O-bound Function.

I've sent 100,000 messages to the queue and measured how fast they went away. Batch size (degree of parallelism on each instance) was set to 16.
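The worker itself can be sketched in a few lines. This is an assumed shape (the actual benchmark code isn't shown here), in the Azure Functions v1 Javascript programming model:

```javascript
// Assumed sketch of the queue-triggered worker: simulate 500 ms of
// I/O latency for every message, then signal completion.
function handler(context, queueItem) {
    setTimeout(function () {
        context.done();
    }, 500);
}

module.exports = handler;
```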

Processing Queue Messages with Lightweight I/O Workload

Two lines show the queue backlogs of two runtimes, while the bars indicate the number of instances working in parallel at a given minute.

We see that V2 was a bit faster to complete, probably due to more instances provisioned to it at any moment. The difference is not big though and might be statistically insignificant.

CPU at Work

The Functions in my second experiment are CPU-bound. Each message invokes the calculation of a 10-stage Bcrypt hash. On an otherwise idle instance, one such call takes about 300-400 ms to complete, consuming 100% CPU on a single core.

Both Functions are precompiled .NET and both are using Bcrypt.NET.

Batch size (degree of parallelism on each instance) was set to 2 to avoid too much contention for the same CPU. Yet the average call duration is about 1.5 seconds (roughly 3x slower than the single-call baseline).

Processing Queue Messages with CPU-bound Workload

The first thing to notice: it's the same number of messages with comparable "sequential" execution time, but the total time to complete the job increased 3-fold. That's because the workload is much more demanding on the resources of the application instances, which struggle to parallelize the work aggressively enough.

V1 and V2 are again close to each other. Once more, V2 had more instances allocated to it most of the time. And yet it was consistently slower, losing about 2.5 minutes over a 25-minute interval (~10%).

HTTP Scalability

I ran two similar Functions — I/O-bound "Pause" (~100 ms) and CPU-bound Bcrypt (9 stages, ~150 ms) — under a stress test, but this time they were triggered by HTTP requests. Then I compared the results for V1 and V2.


The grey bars on the following charts represent the rate of requests sent and processed within a given minute.

The lines are percentiles of response time: green lines for V2 and orange lines for V1.

Processing HTTP Requests with Lightweight I/O Workload

Yes, you saw it right: my Azure Functions were processing 100,000 requests per minute at peak. That's a lot of requests.

Apart from the initial spike at minutes 2 and 3, both versions performed pretty close to each other.

The 50th percentile is flat and close to the theoretical minimum of 100 ms, while the 95th percentile fluctuates a bit but mostly stays quite low.

Note that the response time is measured from the client perspective, not by looking at the statistics provided by Azure.

CPU Fanout

How did CPU-heavy workload perform?

To cut to the chase: the response time increased much more significantly, so my test clients were not able to generate a rate of 100k requests per minute. They "only" managed about 48k per minute at peak, which still seems massive to me.

I ran the same test twice: once for Bcrypt implemented in .NET, and once in Javascript.

Processing HTTP Requests with .NET CPU-bound Workload

V2 had a real struggle during the first minute, with response times spiking up to 9 seconds.

Looking at the bold green 50th percentile, we can see that it's consistently higher than the orange one throughout the load-growth period of the first 10 minutes. V2 seemed to have a harder time adjusting.

This might be explained by the slower growth of the instance count:

Instance Count Growth while Processing HTTP Requests with .NET CPU-bound Workload

This difference could be totally random, so let's look at a similar test with a Javascript worker. Here is the percentile chart:

Processing HTTP Requests with Javascript CPU-bound Workload

The initial slowness of the first 3 minutes is still there, but after that V2 and V1 are on par.

On par doesn't sound that great, though, considering the significant edge in the number of allocated instances, this time in favor of V2:

Instance Count Growth while Processing HTTP Requests with Javascript CPU-bound Workload

A massive 147 instances were crunching Bcrypt hashes on Javascript V2, and that made it a bit faster to respond than V1.


As always, be reluctant to draw definite conclusions from simplistic benchmarks. But I see some trends which might hold true as of today:

  • Performance of .NET Functions is comparable across two versions of Functions runtimes;
  • V1 still has a clear edge in the cold start time of Javascript Functions;
  • V2 is the only option for Java developers, but be prepared for very slow cold starts;
  • Scale-out characteristics seem to be independent of the runtime version, although there are blurry signs of V2 being a bit slower to ramp up or slightly more resource hungry.

I hope this helps in your serverless journey!

Serverless: Cold Start War

Serverless cloud services are hot. Except when they are not :)

AWS Lambda, Azure Functions, Google Cloud Functions are all similar in their attempt to enable rapid development of cloud-native serverless applications.

Auto-provisioning and auto-scalability are the killer features of these Function-as-a-Service cloud offerings. No management is required: cloud providers provision the infrastructure based on the actual incoming load.

One drawback of such dynamic provisioning is a phenomenon called "cold start". Basically, applications that haven't been used for a while take longer to start up and to handle the first request.

Cloud providers keep a bunch of generic unspecialized workers in stock. Whenever a serverless application needs to scale up, be it from 0 to 1 instances or from N to N+1, the runtime picks one of the spare workers and configures it to serve the given application:

Cold Start

This procedure takes time, so the latency of event handling increases. To avoid repeating it for every event, the specialized worker is kept alive for some period of time. When another event comes in, this worker is available to process it as soon as possible. This is a "warm start":

Warm Start

The problem of cold start latency has been described multiple times before.

The goal of my article today is to explore cold starts in more detail:

  • How they compare across the Big-3 cloud providers (Amazon, Microsoft, Google)
  • How they differ between languages and runtimes
  • How they depend on application size (including dependencies)
  • How often cold starts happen
  • What can be done to optimize them

Let's see how I did that and what the outcome was.

DISCLAIMER. Performance testing is hard. I might be missing some important factors and parameters that influence the outcome. My interpretation might be wrong. The results might change over time. If you happen to know a way to improve my tests, please let me know and I will re-run them and re-publish the results.


All tests were run against HTTP Functions because that's where cold start matters the most.

All the functions returned a simple JSON reporting their current instance ID, language, etc. Some functions were also loading extra dependencies, see below.

I did not rely on the execution time reported by the cloud provider. Instead, I measured the end-to-end duration from the client perspective. This means that the duration of the HTTP gateway (e.g. API Gateway in the case of AWS) is included in the total. However, all calls were made from within the same region, so network latency should have minimal impact:

Test Setup
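The measurement itself boils down to timing the whole round-trip on the client. A minimal Node.js sketch, with the actual HTTP call passed in as a function:

```javascript
// Time an arbitrary async operation with the high-resolution clock
// and return the elapsed milliseconds, as seen by the client.
async function timeIt(asyncFn) {
    const start = process.hrtime.bigint();
    await asyncFn();
    return Number(process.hrtime.bigint() - start) / 1e6;
}

// usage: const ms = await timeIt(() => callFunctionOverHttp(url));
```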

Important note: I ran all my tests on GA (generally available) versions of services/languages, so e.g. Azure tests were done with version 1 of Functions runtime (.NET Framework), and GCP tests were only made for Javascript runtime.

When Does Cold Start Happen?

Obviously, cold start happens when the very first request comes in. After that request is processed, the instance is kept alive in case subsequent requests arrive. But for how long?

The answer differs between cloud providers.

To help you read the charts in this section, I've marked cold starts with blue color dots, and warm starts with orange color dots.


Azure

Here is the chart for Azure. It shows normalized request durations across different languages and runtime versions (Y-axis) depending on the time since the previous request in minutes (X-axis):

Azure Cold Start Threshold

Clearly, an idle instance lives for 20 minutes and then gets recycled. All requests arriving after the 20-minute threshold hit another cold start.


AWS

AWS is trickier. Here is the same kind of chart, relative durations vs. time since the last request, measured for AWS Lambda:

AWS Cold Start vs Warm Start

There's no clear threshold here... For this sample, no cold starts happened within 28 minutes of the previous invocation. Afterward, the frequency of cold starts slowly rises. But even after 1 hour of inactivity, there's still a good chance that your instance is alive and ready to take requests.

This doesn't match the official information that AWS Lambdas stay alive for just 5 minutes after the last invocation. I reached out to Chris Munns, and he confirmed:

A couple of learning points here:

  • AWS is working on improving the cold start experience (and probably Azure and GCP are too)
  • My results might not be reliably reproducible in your application, since the behavior is affected by ongoing adjustments


GCP

Google Cloud Functions left me completely puzzled. Here is the same chart for GCP cold starts (again, orange dots are warm and blue ones are cold):

GCP Cold Start vs Warm Start

This looks totally random to me. A cold start can happen within 3 minutes of the previous request, or an instance can be kept alive for a whole hour. The probability of a cold start doesn't seem to depend on the interval, at least judging by this chart.

Any ideas about what's going on are welcome!

Parallel requests

Cold starts happen not only when the first instance of an application is provisioned. The same issue occurs whenever all the provisioned instances are busy handling incoming events and yet another event comes in (i.e. at scale-out).

As far as I'm aware, this behavior is common to all 3 providers, so I haven't prepared any comparison charts for N+1 cold starts. Yet, be aware of them!

Reading Candle Charts

In the following sections, you will see charts that represent the statistical distribution of cold start time as measured during my experiments. I repeated the experiments multiple times and then grouped the metric values, e.g. by cloud provider or by language.

Each group will be represented by a "candle" on the chart. This is how you should read each candle:

How to Read Cold Start Charts

Memory Allocation

AWS Lambda and Google Cloud Functions have a setting to define the memory size that gets allocated to a single instance of a function. A user selects a value from 128 MB to 2 GB and above at creation time.

More importantly, the virtual CPU cycles get allocated proportionally to this provisioned memory size. This means that an instance of 512 MB will have twice as much CPU speed as an instance of 256MB.
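If that proportionality holds, the expected duration of a purely CPU-bound task scales inversely with the allocated memory. A quick back-of-the-envelope helper:

```javascript
// CPU share grows linearly with provisioned memory, so doubling the
// memory should halve the duration of a purely CPU-bound task.
function expectedDuration(baselineMs, baselineMb, allocatedMb) {
    return baselineMs * baselineMb / allocatedMb;
}
```

For example, a task measured at 800 ms on a 128 MB instance should take roughly 100 ms at 1024 MB, as long as the task is CPU-bound rather than I/O-bound.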

Does this affect the cold start time?

I've run a series of tests to compare cold start latency across the board of memory/CPU sizes. The results are somewhat mixed.

AWS Lambda Javascript doesn't show significant differences. This probably means that not much CPU is required to start a Node.js "Hello World" application:

AWS Javascript Cold Start by Memory

AWS Lambda .NET Core runtime does depend on memory size though. Cold start time drops dramatically with every increase in allocated memory and CPU:

AWS C# Cold Start by Memory

GCP Cloud Functions show a similar effect, even for the Javascript runtime:

GCP Javascript Cold Start by Memory

In contrast to Amazon and Google, Microsoft doesn't ask you to select a memory limit. Azure charges Functions based on actual memory usage. More importantly, it always dedicates a full vCore to a given Function execution.

It's not exactly apples-to-apples, but I chose to fix the memory allocation of AWS Lambda and GCF at 1024 MB. This feels closest to Azure's vCore capacity, although I haven't attempted a formal CPU performance comparison.

Given that, let's see how the 3 cloud providers compare in cold start time.

Javascript Baseline

Node.js is the only runtime supported in production by Google Cloud Functions right now. Javascript is also probably by far the most popular language for serverless applications across the board.

Thus, it makes sense to compare the 3 cloud providers on how they perform in Javascript. The base test measures the cold starts of "Hello World" functions. They have no dependencies, so the deployment package is really small.

Here are the numbers for cold starts:

Cold Start for Basic Javascript Functions

AWS is clearly doing the best job here. GCP takes second place, and Azure is the slowest. The rivals are reasonably close though, seemingly playing in the same league, so the exact ranking might change over time.

How Do Languages Compare?

I've written a Hello World HTTP function in every supported language of each cloud platform:

  • AWS: Javascript, Python, Java, Go and C# (.NET Core)
  • Azure: Javascript and C# (precompiled .NET assembly)
  • GCP: Javascript

Azure technically supports many more languages, including Python and Java, but they are still considered experimental / preview, so their cold starts are not fully optimized. See my previous article for exact numbers.

The same applies to Python on GCP.

The following chart gives some intuition about the cold start duration per language. The languages are ordered by mean response time, from lowest to highest:

Cold Start per Language per Cloud

AWS provides the richest selection of runtimes, and 4 out of 5 are faster than the other two cloud providers. C# / .NET seems to be the least optimized (Amazon, why is that?).

Does Size Matter?

OK, enough of Hello World. A real-life function might be heavier, mainly because it would depend on third-party libraries.

To simulate such a scenario, I've measured cold starts for functions with extra dependencies:

  • Javascript referencing 3 NPM packages - 5 MB zipped
  • Javascript referencing 38 NPM packages - 35 MB zipped
  • C# function referencing 5 NuGet packages - 2 MB zipped
  • Java function referencing 5 Maven packages - 15 MB zipped

Here are the results:

Cold Start Dependencies

As expected, the dependencies slow loading down. Keep your Functions lean; otherwise you will pay in seconds for every cold start.

However, the increase in cold start time seems quite modest, especially for the precompiled languages.

A very cool feature of GCP Cloud Functions is that you don't have to include NPM packages in the deployment archive. You just add a package.json file and the runtime restores the packages for you. This makes the deployment artifact ridiculously small, yet doesn't seem to slow down cold starts. Obviously, Google pre-restores the packages before the actual request comes in.

Avoiding Cold Starts

The overall impression is that cold start delays aren't that high, so most applications can tolerate them just fine.

If that's not the case, some tricks can be implemented to keep function instances warm. The approach is universal across all 3 providers: once every X minutes, make an artificial call to the function to prevent it from expiring.

Implementation details will differ since the expiration policies are different, as we explored above.

For applications with a higher load profile, you might want to fire several parallel "warming" requests to make sure that enough instances are kept in warm stock.
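A minimal sketch of the trick in Javascript, with the actual ping injected as a function. In practice the ping would be an HTTP GET to the function's endpoint, and the interval would be chosen below the provider's expiration threshold:

```javascript
// Fire an artificial "warming" call every few minutes so that the
// instance never sits idle long enough to get recycled.
function startWarmer(ping, intervalMinutes) {
    const timer = setInterval(ping, intervalMinutes * 60 * 1000);
    return function stop() { clearInterval(timer); };
}

// e.g. for Azure, where idle instances expire after ~20 minutes:
// const stop = startWarmer(() => https.get(functionUrl), 15);
```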

For further reading, have a look at my Cold Starts Beyond First Request in Azure Functions and AWS Lambda Warmer as Pulumi Component.


Here are some lessons learned from all the experiments above:

  • Be prepared for cold starts of 1-3 seconds even for the smallest Functions
  • Different languages and runtimes have roughly comparable cold start time within the same platform
  • Minimize the number of dependencies, only bring what's needed
  • AWS keeps cold starts below 1 second most of the time, which is pretty amazing
  • All cloud providers are aware of the problem and are actively optimizing the cold start experience
  • It's likely that in the middle term these optimizations will make cold starts a non-issue for the vast majority of applications

Do you see anything weird or unexpected in my results? Do you need me to dig deeper into other aspects? Please leave a comment below or ping me on Twitter, and let's sort it all out.

Stay tuned for more serverless perf goodness!

AWS Lambda Warmer as Pulumi Component

Out of curiosity, I'm currently investigating cold starts of Function-as-a-Service platforms of major cloud providers. Basically, if a function is not called for several minutes, the cloud instance behind it might be recycled, and then the next request will take longer because a new instance will need to be provisioned.

Recently, Jeremy Daly posted a nice article about the proper way to keep AWS Lambda instances "warm" to (mostly) prevent cold starts with minimal overhead. Chris Munns endorsed the article, so we know it's the right way.

The number of actions to be taken is quite significant:

  • Define a CloudWatch event which would fire every 5 minutes
  • Bind this event as another trigger for your Lambda
  • Inside the Lambda, detect whether the current invocation is triggered by our CloudWatch event
  • If so, short-circuit the execution and return immediately; otherwise, run the normal workload
  • (Bonus point) If you want to keep multiple instances alive, do some extra dancing to have the function call itself N times in parallel, enabled by an extra permission to do so.
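The detection and short-circuiting steps above can be sketched as a plain wrapper function (the names are mine; Jeremy's NPM package encapsulates this logic for you):

```javascript
// Wrap a real handler: if the event comes from the warming CloudWatch
// rule, return immediately; otherwise run the normal workload.
function withWarmup(realHandler, warmingRuleName) {
    return function (event, context, callback) {
        const isWarming = event.resources &&
            event.resources.some(function (r) { return r.includes(warmingRuleName); });
        if (isWarming) {
            callback(null, "warmed!"); // short-circuit
        } else {
            realHandler(event, context, callback);
        }
    };
}
```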

Pursuing Reusability

To simplify this for his readers, Jeremy was kind enough to:

  • Create an NPM package which you can install and then call from the function-to-be-warmed
  • Provide SAM and Serverless Framework templates to automate the CloudWatch integration

Those are still two distinct steps: writing the code (JS + NPM) and provisioning the cloud resources (YAML + CLI). There are some drawbacks to that:

  • You need to change two parts, which don't look like each other
  • They have to work in sync, e.g. the CloudWatch event must provide the right payload for the handler
  • There's still some boilerplate for every new Lambda

Pulumi Components

Pulumi takes a different approach. You can blend the application code and infrastructure management code into one cohesive cloud application.

Related resources can be combined together into reusable components, which hide repetitive stuff behind code abstractions.

One way to define an AWS Lambda with Typescript in Pulumi is the following:

const handler = (event: any, context: any, callback: (error: any, result: any) => void) => {
    const response = {
        statusCode: 200,
        body: "Cheers, how are things?"
    };

    callback(null, response);
};

const lambda = new aws.serverless.Function("my-function", { /* options */ }, handler);

The processing code handler is just passed to infrastructure code as a parameter.

So, if I wanted to make a reusable API for an "always warm" function, what would it look like?

From the client code perspective, I just want to be able to do the same thing:

const lambda = new mylibrary.WarmLambda("my-warm-function", { /* options */ }, handler);

CloudWatch? Event subscription? Short-circuiting? They are implementation details!

Warm Lambda

Here is how to implement such a component. The declaration starts with a Typescript class:

export class WarmLambda extends pulumi.ComponentResource {
    public lambda: aws.lambda.Function;

    // Implementation goes here...
}

We expose the raw Lambda Function object so that it can be used for further bindings and for retrieving outputs.

The constructor accepts the same parameters as aws.serverless.Function provided by Pulumi:

constructor(name: string,
        options: aws.serverless.FunctionOptions,
        handler: aws.serverless.Handler,
        opts?: pulumi.ResourceOptions) {

    // Subresources are created here...
}

We start resource provisioning by creating the CloudWatch rule to be triggered every 5 minutes:

const eventRule = new aws.cloudwatch.EventRule(`${name}-warming-rule`, 
    { scheduleExpression: "rate(5 minutes)" },
    { parent: this, ...opts }
);

Then comes the cool trick. We substitute the user-provided handler with our own "outer" handler. This handler closes over eventRule, so it can use the rule to identify the warm-up event coming from CloudWatch. If one is identified, the handler short-circuits to the callback. Otherwise, it passes the event on to the original handler:

const outerHandler = (event: any, context: aws.serverless.Context, callback: (error: any, result: any) => void) => {
    if (event.resources && event.resources[0] && event.resources[0].includes(eventRule.name.get())) {
        callback(null, "warmed!");
    } else {
        console.log('Running the real handler...');
        handler(event, context, callback);
    }
};
That's a great example of synergy enabled by doing both application code and application infrastructure in a single program. I'm free to mix and match objects from both worlds.

It's time to bind both eventRule and outerHandler to a new serverless function:

const func = new aws.serverless.Function(
    `${name}-warmed`,
    options,
    outerHandler,
    { parent: this, ...opts });
this.lambda = func.lambda;

Finally, I create an event subscription from CloudWatch schedule to Lambda:

this.subscription = new serverless.cloudwatch.CloudwatchEventSubscription(
    `${name}-warming-subscription`,
    eventRule,
    this.lambda,
    { },
    { parent: this, ...opts });

And that's all we need for now! See the full code here.

Here is the output of pulumi update command for my sample "warm" lambda application:

     Type                                                      Name                            Plan
 +   pulumi:pulumi:Stack                                       WarmLambda-WarmLambda-dev       create
 +    samples:WarmLambda                                       i-am-warm                       create
 +      aws-serverless:cloudwatch:CloudwatchEventSubscription  i-am-warm-warming-subscription  create
 +        aws:lambda:Permission                                i-am-warm-warming-subscription  create
 +        aws:cloudwatch:EventTarget                           i-am-warm-warming-subscription  create
 +      aws:cloudwatch:EventRule                               i-am-warm-warming-rule          create
 +      aws:serverless:Function                                i-am-warm-warmed                create
 +         aws:lambda:Function                                 i-am-warm-warmed                create

7 Pulumi components and 4 AWS cloud resources are provisioned by one new WarmLambda() line.

Multi-Instance Warming

Jeremy's library supports warming several instances of Lambda by issuing parallel self-calls.

Reproducing the same with Pulumi component should be fairly straightforward:

  • Add an extra constructor option to accept the number of instances to keep warm
  • Add a permission to call Lambda from itself
  • Fire N calls when warming event is triggered
  • Short-circuit those calls in each instance

Note that only the first item would be visible to the client code. That's the power of componentization and code reuse.

I didn't need multi-instance warming, so I'll leave the implementation as an exercise for the reader.
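If you do need it, the parallel self-call step could be sketched like this. The Lambda client is injected to keep the sketch testable; in a real function it would come from the aws-sdk package:

```javascript
// Fire N parallel self-invocations; each carries a marker payload so
// that the short-circuit logic can recognize and discard it.
function warmConcurrently(lambdaClient, functionName, instanceCount) {
    const calls = [];
    for (let i = 0; i < instanceCount; i++) {
        calls.push(lambdaClient.invoke({
            FunctionName: functionName,
            Payload: JSON.stringify({ warmer: true, instanceId: i })
        }));
    }
    return Promise.all(calls);
}
```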


Obligatory note: most probably, you don't need to add warming to your AWS Lambdas.

But whatever advanced scenario you might have, it's likely easier to express it as a general-purpose reusable component than as a set of guidelines or templates.

Happy hacking!

Getting Started with AWS Lambda in Pulumi

For a small research project of mine, I needed to create HTTP-triggered AWS Lambdas in all supported programming languages.

I'm not a power AWS user, so I get easily confused about the configuration of things like IAM roles or API Gateway. Moreover, I wanted my environment to be reproducible, so manual AWS Console wasn't a good option.

I decided it was a good job for Pulumi. They pay a lot of attention to serverless and especially AWS Lambda, and I love the power of configuration as code.

I created a Pulumi program which provisions Lambdas running Javascript, .NET, Python, Java and Go. The Pulumi program itself is written in Javascript.

I'm describing the resulting code below in case folks need to do the same thing. The code itself is on my GitHub.


Javascript

Probably, the vast majority of Pulumi + AWS Lambda users will use Javascript as the programming language for their serverless functions.

No wonder this scenario is the easiest to start with. There is a high-level package @pulumi/cloud-aws which hides all the AWS machinery from the developer.

The simplest function consists of just a few lines:

const cloud = require("@pulumi/cloud-aws");

const api = new cloud.API("aws-hellolambda-js");
api.get("/js", (req, res) => {
    res.status(200).json("Hi from Javascript lambda");
});

exports.endpointJs = api.publish().url;

Configure your Pulumi stack, run pulumi update, and the Lambda is up, running, and accessible via HTTP.

.NET Core

.NET is my default development environment and AWS Lambda supports .NET Core as execution runtime.

The Pulumi program is still Javascript, so it can't mix C# code in. Thus, the setup looks like this:

  • There is a .NET Core 2.0 application written in C# and utilizing Amazon.Lambda.* NuGet packages
  • I build and publish this application with dotnet CLI
  • Pulumi then utilizes the published binaries to create deployment artifacts

The C# function looks like this:

public class Functions
{
    public async Task<APIGatewayProxyResponse> GetAsync(APIGatewayProxyRequest request, ILambdaContext context)
    {
        return new APIGatewayProxyResponse
        {
            StatusCode = (int)HttpStatusCode.OK,
            Body = "\"Hi from C# Lambda\"",
            Headers = new Dictionary<string, string> { { "Content-Type", "application/json" } }
        };
    }
}

For non-Javascript lambdas, I utilize the @pulumi/aws package. It's lower level than @pulumi/cloud-aws, so I had to set up IAM first:

const aws = require("@pulumi/aws");

const policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "sts:AssumeRole",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Effect": "Allow",
            "Sid": ""
        }
    ]
};

const role = new aws.iam.Role("precompiled-lambda-role", {
    assumeRolePolicy: JSON.stringify(policy),
});

And then I did a raw definition of AWS Lambda:

const pulumi = require("@pulumi/pulumi");

const csharpLambda = new aws.lambda.Function("aws-hellolambda-csharp", {
    runtime: aws.lambda.DotnetCore2d0Runtime,
    code: new pulumi.asset.AssetArchive({
        ".": new pulumi.asset.FileArchive("./csharp/bin/Debug/netcoreapp2.0/publish"),
    }),
    timeout: 5,
    handler: "app::app.Functions::GetAsync",
    role: role.arn
});

Note the path to the publish folder, which should match the path created by dotnet publish, and the handler name, which matches the C# assembly, class, and method.

Finally, I used @pulumi/aws-serverless to define API Gateway endpoint for the lambda:

const serverless = require("@pulumi/aws-serverless");

const precompiledApi = new serverless.apigateway.API("aws-hellolambda-precompiledapi", {
    routes: [
        { method: "GET", path: "/csharp", handler: csharpLambda },
    ],
});

That's definitely more ceremony compared to the Javascript version. But hey, it's code: if you find yourself repeating it, go ahead and make a higher-order component out of it, encapsulating the repetitive logic.


Python

Pulumi supports Python as a scripting language, but I'm sticking to Javascript for a uniform experience.

In this case, the flow is similar to .NET but simpler: no compilation step is required. Just define a handler:

def handler(event, context): 
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': '"Hi from Python lambda"'
    }

and package it into a zip in the AWS Lambda definition:

const pythonLambda = new aws.lambda.Function("aws-hellolambda-python", {
    runtime: aws.lambda.Python3d6Runtime,
    code: new pulumi.asset.AssetArchive({
        ".": new pulumi.asset.FileArchive("./python"),
    }),
    timeout: 5,
    handler: "handler.handler",
    role: role.arn
});

I'm reusing the role definition from above. The API definition will also be the same as for .NET.


Go

Golang is a compiled language, so the approach is similar to .NET: write code, build it, reference the built artifact from Pulumi.

My Go function looks like this:

func Handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	return events.APIGatewayProxyResponse{
		Body:       "\"Hi from Golang lambda\"",
		StatusCode: 200,
	}, nil
}


Because I'm on Windows but AWS Lambda runs on Linux, I had to use build-lambda-zip tool to make the package compatible. Here is the PowerShell build script:

$env:GOOS = "linux"
$env:GOARCH = "amd64"
go build -o main main.go
~\Go\bin\build-lambda-zip.exe -o main.zip main

and Pulumi function definition:

const golangLambda = new aws.lambda.Function("aws-hellolambda-golang", {
    runtime: aws.lambda.Go1dxRuntime,
    code: new pulumi.asset.FileArchive("./go/"),
    timeout: 5,
    handler: "main",
    role: role.arn
});


Java

The Java class implements an interface from the AWS SDK:

public class Hello implements RequestStreamHandler {

    public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context) throws IOException {

        JSONObject responseJson = new JSONObject();

        responseJson.put("isBase64Encoded", false);
        responseJson.put("statusCode", "200");
        responseJson.put("body", "\"Hi from Java lambda\"");

        OutputStreamWriter writer = new OutputStreamWriter(outputStream, "UTF-8");
        writer.write(responseJson.toJSONString());
        writer.close();
    }
}

I compiled this code with Maven (mvn package), which produced a jar file. AWS Lambda accepts a jar directly, but Pulumi's FileArchive unfortunately crashes when trying to read it.

As a workaround, I had to define a zip file with the jar placed inside a lib folder:

const javaLambda = new aws.lambda.Function("aws-coldstart-java", {
    code: new pulumi.asset.AssetArchive({
        "lib/lambda-java-example-1.0-SNAPSHOT.jar": new pulumi.asset.FileAsset("./java/target/lambda-java-example-1.0-SNAPSHOT.jar"),
    }),
    runtime: aws.lambda.Java8Runtime,
    timeout: 5,
    handler: "example.Hello",
    role: role.arn
});


The complete code for 5 lambda functions in 5 different programming languages can be found in my GitHub repository.

Running pulumi update provisions 25 AWS resources in about a minute, so I can start playing with my test lambdas in no time.

And the best part: when I don't need them anymore, I run pulumi destroy and my AWS Console is clean again!

Happy serverless moments!

Monads explained in C# (again)

I love functional programming for the simplicity that it brings.

But at the same time, I realize that learning functional programming is a challenging process. FP comes with a baggage of unfamiliar vocabulary that can be daunting for somebody coming from an object-oriented language like C#.

Functional Programming Word Cloud

Some of functional lingo

"Monad" is probably the most infamous term on the list above. Monads have a reputation for being something very abstract and very confusing.

The Fallacy of Monad Tutorials

Numerous attempts have been made to explain monads in simple terms, and monad tutorials have become a genre of their own. And yet, time and time again, they fail to enlighten their readers.

The shortest explanation of monads looks like this:

A Monad is just a monoid in the category of endofunctors

It's both mathematically correct and totally useless to anybody learning functional programming. To understand this statement, one has to know the terms "monoid", "category" and "endofunctors" and be able to mentally compose them into something meaningful.

The same problem is apparent in most monad tutorials. They assume some pre-existing knowledge in the heads of their readers, and if that assumption fails, the tutorial doesn't click.

Focusing too much on mechanics of monads instead of explaining why they are important is another common problem.

Douglas Crockford grasped this fallacy very well:

The monadic curse is that once someone learns what monads are and how to use them, they lose the ability to explain them to other people

The problem is likely the following: every person who understands monads had their own path to this knowledge. It didn't come all at once; instead, there was a series of steps, each giving an insight, until the last step made the puzzle complete.

But they don't remember the whole path anymore. They go online and blog about that very last step as the key to understanding, joining the club of flawed explanations.

There is an actual academic paper from Tomas Petricek that studies monad tutorials.

I've read that paper and a dozen monad tutorials online. And of course, now I've come up with my own.

I'm probably doomed to fail too, at least for some readers. Yet, I know that many people found the previous version of this article useful.

I based my explanation on examples from C# - the object-oriented language familiar to .NET developers.

Story of Composition

The basic element of every functional program is a function. In typed languages, each function is just a mapping between the type of its input parameter and the type of its output. Such a type can be annotated as func: TypeA -> TypeB.

C# is an object-oriented language, so we use methods to declare functions. There are two ways to define a method comparable to the func function above. I can use a static method:

static class Mapper 
{
    static ClassB func(ClassA a) { ... }
}

... or instance method:

class ClassA 
{
    // Instance method
    ClassB func() { ... }
}

The static form looks closer to the function annotation, but both ways are equivalent for the purpose of our discussion. I will use instance methods in my examples, but all of them could be written as static extension methods too.

How do we compose more complex workflows, programs and applications out of such simple building blocks? A lot of patterns both in OOP and FP worlds revolve around this question. And monads are one of the answers.

My sample code is going to be about conferences and speakers. The method implementations aren't really important, just watch the types carefully. There are 4 classes (types) and 3 methods (functions):

class Speaker 
{
    Talk NextTalk() { ... }
}

class Talk 
{
    Conference GetConference() { ... }
}

class Conference 
{
    City GetCity() { ... }
}

class City { ... }

These methods are currently very easy to compose into a workflow:

static City NextTalkCity(Speaker speaker) 
{
    Talk talk = speaker.NextTalk();
    Conference conf = talk.GetConference();
    City city = conf.GetCity();
    return city;
}

Because the return type of the previous step always matches the input type of the next step, we can write it even shorter:

static City NextTalkCity(Speaker speaker) 
{
    return 
        speaker
        .NextTalk()
        .GetConference()
        .GetCity();
}

This code looks quite readable. It's concise and flows from top to bottom and from left to right, similar to how we are used to reading any text. There is not much noise either.

That's not what real codebases look like though, because there are multiple complications along the happy composition path. Let's look at some of them.


NULLs

Any class instance in C# can be null. In the example above, I might get runtime errors if one of the methods ever returns null.

Typed functional programming always tries to be explicit about types, so I'll re-write the signatures of my methods to annotate the return types as nullables:

class Speaker 
{
    Nullable<Talk> NextTalk() { ... }
}

class Talk 
{
    Nullable<Conference> GetConference() { ... }
}

class Conference 
{
    Nullable<City> GetCity() { ... }
}

class City { ... }

This is actually invalid syntax in the current C# version, because Nullable<T> and its short form T? are not applicable to reference types. This might change in C# 8 though, so bear with me.

Now, when composing our workflow, we need to take care of null results:

static Nullable<City> NextTalkCity(Speaker speaker) 
{
    Nullable<Talk> talk = speaker.NextTalk();
    if (talk == null) return null;

    Nullable<Conference> conf = talk.GetConference();
    if (conf == null) return null;

    Nullable<City> city = conf.GetCity();
    return city;
}

It's still the same method, but it has more noise now. Even though I used short-circuit returns and one-liners, it got harder to read.

To fight that problem, smart language designers came up with the Null Propagation Operator:

static Nullable<City> NextTalkCity(Speaker speaker) 
{
    return 
        speaker
        ?.NextTalk()
        ?.GetConference()
        ?.GetCity();
}

Now we are almost back to our original workflow code: it's clean and concise, we just got 3 extra ? symbols around.
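Javascript later gained the same operator (optional chaining), so the idea translates directly. Here is a small sketch with made-up method names:

```javascript
// A speaker whose nextTalk() finds nothing returns null.
const speaker = {
    nextTalk: () => null,
};

// Each ?. short-circuits the whole chain to undefined on null/undefined,
// exactly like the C# null propagation operator above.
const city = speaker.nextTalk()?.getConference()?.getCity();
// city is undefined, and no TypeError is thrown
```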

Let's take another leap.


Collections

Quite often a function returns a collection of items, not just a single item. To some extent, that's a generalization of the null case: with Nullable<T> we might get 0 or 1 results back, while with a collection we can get anywhere from 0 to n results.

Our sample API could look like this:

class Speaker 
{
    List<Talk> GetTalks() { ... }
}

class Talk 
{
    List<Conference> GetConferences() { ... }
}

class Conference 
{
    List<City> GetCities() { ... }
}

I used List<T>, but it could be any collection class or the plain IEnumerable<T> interface.

How would we combine the methods into one workflow? The traditional version would look like this:

static List<City> AllCitiesToVisit(Speaker speaker) 
{
    var result = new List<City>();

    foreach (Talk talk in speaker.GetTalks())
        foreach (Conference conf in talk.GetConferences())
            foreach (City city in conf.GetCities())
                result.Add(city);

    return result;
}

It reads ok-ish. But the combination of nested loops and mutation, with some conditionals sprinkled on top, can get unreadable pretty quickly. The exact workflow might get lost in the mechanics.

As an alternative, C# language designers invented LINQ extension methods. We can write code like this:

static List<City> AllCitiesToVisit(Speaker speaker) 
{
    return 
        speaker
        .GetTalks()
        .SelectMany(talk => talk.GetConferences())
        .SelectMany(conf => conf.GetCities())
        .ToList();
}

Let me do one further trick and format the same code in an unusual way:

static List<City> AllCitiesToVisit(Speaker speaker) 
{
    return 
        speaker
        .GetTalks()           .SelectMany(x => x
        .GetConferences()    ).SelectMany(x => x
        .GetCities()         ).ToList();
}

Now you can see the same original code on the left, combined with just a bit of technical repeatable clutter on the right. Hold on, I'll show you where I'm going.

Let's discuss another possible complication.

Asynchronous Calls

What if our methods need to access a remote database or service to produce their results? This should show up in the type signature, and C# has Task<T> for that:

class Speaker 
{
    Task<Talk> NextTalk() { ... }
}

class Talk 
{
    Task<Conference> GetConference() { ... }
}

class Conference 
{
    Task<City> GetCity() { ... }
}

This change breaks our nice workflow composition again.

We'll get back to async-await later, but the original way to combine Task-based methods was to use the ContinueWith and Unwrap APIs:

static Task<City> NextTalkCity(Speaker speaker) 
{
    return 
        speaker
        .NextTalk()
        .ContinueWith(talk => talk.Result.GetConference())
        .Unwrap()
        .ContinueWith(conf => conf.Result.GetCity())
        .Unwrap();
}

Hard to read, but let me apply my formatting trick again:

static Task<City> NextTalkCity(Speaker speaker) 
{
    return 
        speaker
        .NextTalk()         .ContinueWith(x => x.Result
        .GetConference()   ).Unwrap().ContinueWith(x => x.Result
        .GetCity()         ).Unwrap();
}

You can see that, once again, it's our nice readable workflow on the left + some mechanical repeatable junction code on the right.


Pattern

Can you see a pattern yet?

I'll repeat the Nullable-, List- and Task-based workflows again:

static Nullable<City> NextTalkCity(Speaker speaker) 
{
    return 
        speaker               ?
        .NextTalk()           ?
        .GetConference()      ?
        .GetCity();
}

static List<City> AllCitiesToVisit(Speaker speaker) 
{
    return 
        speaker
        .GetTalks()            .SelectMany(x => x
        .GetConferences()     ).SelectMany(x => x
        .GetCities()          ).ToList();
}

static Task<City> NextTalkCity(Speaker speaker) 
{
    return 
        speaker
        .NextTalk()            .ContinueWith(x => x.Result
        .GetConference()      ).Unwrap().ContinueWith(x => x.Result
        .GetCity()            ).Unwrap();
}

In all 3 cases there was a complication which prevented us from sequencing method calls fluently. In all 3 cases we found the gluing code to get back to fluent composition.

Let's try to generalize this approach. Given some generic container type WorkflowThatReturns<T>, we have a method to combine an instance of such workflow with a function which accepts the result of that workflow and returns another workflow back:

class WorkflowThatReturns<T> 
{
    WorkflowThatReturns<U> AddStep(Func<T, WorkflowThatReturns<U>> step);
}

In case this is hard to grasp, have a look at the picture of what is going on:

Monad Bind Internals

  1. An instance of type T sits in a generic container.

  2. We call AddStep with a function, which maps T to U sitting inside yet another container.

  3. We get an instance of U but inside two containers.

  4. Two containers are automatically unwrapped into a single container to get back to the original shape.

  5. Now we are ready to add another step!
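To make the five steps concrete, here is a toy container sketched in Javascript (the names are mine, and the "two containers become one" unwrapping is folded into the step call):

```javascript
// Minimal workflow container: addStep applies a function that itself
// returns a Box, so the nested Box-in-Box collapses back to one Box.
class Box {
    constructor(value) {
        this.value = value;
    }
    addStep(step) {
        // step: value -> Box; returning its result directly performs
        // the unwrapping from step 4 above
        return step(this.value);
    }
}

const result = new Box(2)
    .addStep(x => new Box(x + 1))
    .addStep(x => new Box(x * 10));
// result.value is 30
```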

In the following code, NextTalk returns the first instance inside the container:

WorkflowThatReturns<City> Workflow(Speaker speaker) 
{
    return 
        speaker
        .NextTalk()
        .AddStep(x => x.GetConference())
        .AddStep(x => x.GetCity()); 
}

Subsequently, AddStep is called two times to move to Conference and then City inside the same container:

Monad Bind Chaining

Finally, Monads

The name of this pattern is Monad.

In C# terms, a Monad is a generic class with two operations: constructor and bind.

class Monad<T> {
    Monad(T instance);
    Monad<U> Bind(Func<T, Monad<U>> f);
}

The constructor is used to put an object into a container; Bind is used to replace one contained object with another contained object.

It's important that Bind's argument returns Monad<U> and not just U. We can think of Bind as a combination of Map and Unwrap, as defined by the following signatures:

class Monad<T> {
    Monad(T instance);
    Monad<U> Map(Func<T, U> f);
    static Monad<U> Unwrap(Monad<Monad<U>> nested);
}
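As a concrete check of this decomposition, Javascript arrays exhibit the same relationship: flatMap (Bind) gives the same result as map followed by flat (Map, then Unwrap):

```javascript
// f maps a value to a container (an array) of results
const f = x => [x, x + 1];

const viaBind = [1, 2].flatMap(f);         // Bind in one go
const viaMapUnwrap = [1, 2].map(f).flat(); // Map, then Unwrap

// Both produce [1, 2, 2, 3]
```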

Even though I spent quite some time with examples, I expect you to be slightly confused at this point. That's ok.

Keep going and let's have a look at several sample implementations of Monad pattern.

Maybe (Option)

My first motivational example used Nullable<T> and the ?. operator. The general pattern of a container holding either 0 or 1 instances of some type is called Maybe (it maybe has a value, or maybe not).

Maybe is another approach to dealing with the 'no value' case, an alternative to the concept of null.

The functional-first language F# typically doesn't allow null for its types. Instead, F# has a Maybe implementation built into the language: it's called the option type.

Here is a sample implementation in C#:

public class Maybe<T> where T : class
{
    private readonly T value;

    public Maybe(T someValue)
    {
        if (someValue == null)
            throw new ArgumentNullException(nameof(someValue));
        this.value = someValue;
    }

    private Maybe()
    {
    }

    public Maybe<U> Bind<U>(Func<T, Maybe<U>> func) where U : class
    {
        return value != null ? func(value) : Maybe<U>.None();
    }

    public static Maybe<T> None() => new Maybe<T>();
}

When null is not allowed, any API contract gets more explicit: either you return type T, and it's always going to be filled, or you return Maybe<T>. The client will see that the Maybe type is used, so it will be forced to handle the case of an absent value.

Given an imaginary repository contract (which does something with customers and orders):

public interface IMaybeAwareRepository
{
    Maybe<Customer> GetCustomer(int id);
    Maybe<Address> GetAddress(int id);
    Maybe<Order> GetOrder(int id);
}

The client can be written with Bind method composition, without branching, in fluent style:

Maybe<Shipper> shipperOfLastOrderOnCurrentAddress =
    repo.GetCustomer(customerId)
        .Bind(c => c.Address)
        .Bind(a => repo.GetAddress(a.Id))
        .Bind(a => a.LastOrder)
        .Bind(lo => repo.GetOrder(lo.Id))
        .Bind(o => o.Shipper);

As we saw above, this syntax looks very much like a LINQ query with a bunch of SelectMany statements. One of the common implementations of Maybe implements IEnumerable interface to enable a more C#-idiomatic binding composition. Actually:

Enumerable + SelectMany is a Monad

IEnumerable is an interface for enumerable containers.

Enumerable containers can be created - thus the constructor monadic operation.

The Bind operation is defined by the standard LINQ extension method; here is its signature:

public static IEnumerable<U> SelectMany<T, U>(
    this IEnumerable<T> first, 
    Func<T, IEnumerable<U>> selector)

A direct implementation is quite straightforward:

static class Enumerable 
{
    public static IEnumerable<U> SelectMany<T, U>(
        this IEnumerable<T> values, 
        Func<T, IEnumerable<U>> func) 
    {
        foreach (var item in values)
            foreach (var subItem in func(item))
                yield return subItem;
    }
}

And here is an example of composition:

IEnumerable<Shipper> shippers =
    customers
        .SelectMany(c => c.Addresses)
        .SelectMany(a => a.Orders)
        .SelectMany(o => o.Shippers);

The query has no idea how the collections are stored (they are encapsulated in containers). We use functions of type T -> IEnumerable<U> to produce new enumerables (the Bind operation).
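The same monad exists in Javascript, where Array.prototype.flatMap plays the role of SelectMany. A small sketch with made-up data:

```javascript
// Nested containers: customers hold addresses, which hold orders,
// which hold shippers.
const customers = [
    { addresses: [
        { orders: [ { shippers: ["DHL"] }, { shippers: ["UPS"] } ] },
    ] },
];

// Each flatMap binds a T -> Array<U> function, flattening as it goes.
const shippers = customers
    .flatMap(c => c.addresses)
    .flatMap(a => a.orders)
    .flatMap(o => o.shippers);
// shippers is ["DHL", "UPS"]
```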

Task (Future)

In C#, the Task<T> type is used to denote an asynchronous computation which will eventually return an instance of T. Other languages name similar concepts Promise or Future.

While the typical usage of Task in C# is different from the Monad pattern we discussed, I can still come up with a Future class with the familiar structure:

public class Future<T>
{
    private readonly Task<T> instance;

    public Future(T instance)
    {
        this.instance = Task.FromResult(instance);
    }

    private Future(Task<T> instance)
    {
        this.instance = instance;
    }

    public Future<U> Bind<U>(Func<T, Future<U>> func)
    {
        var a = this.instance.ContinueWith(t => func(t.Result).instance).Unwrap();
        return new Future<U>(a);
    }

    public void OnComplete(Action<T> action)
    {
        this.instance.ContinueWith(t => action(t.Result));
    }
}

Effectively, it's just a wrapper around the Task, which doesn't add much value, but it's a useful illustration because now we can do:

repository
    .LoadSpeaker()
    .Bind(speaker => speaker.NextTalk())
    .Bind(talk => talk.GetConference())
    .Bind(conference => conference.GetCity())
    .OnComplete(city => reservations.BookFlight(city));

We are back to the familiar structure. Time for some more complications.

Non-Sequential Workflows

Up until now, all the composed workflows had a very linear, sequential structure: the output of a previous step was always the input for the next step. That piece of data could be discarded after the first use because it was never needed by later steps:

Linear Workflow

Quite often though, this might not be the case. A workflow step might need data from two or more previous steps combined.

In the example above, BookFlight method might actually need both Speaker and City objects:

Non Linear Workflow

In this case, we would have to use a closure to keep the speaker object around until we get the city too:

repository
    .LoadSpeaker()
    .OnComplete(speaker =>
        speaker
            .NextTalk()
            .Bind(talk => talk.GetConference())
            .Bind(conference => conference.GetCity())
            .OnComplete(city => reservations.BookFlight(speaker, city))
    );

Obviously, this gets ugly very soon.

To solve this structural problem, the C# language got its async-await feature, which has since been adopted by more languages, including Javascript.

If we move back to using Task instead of our custom Future, we are able to write

var speaker = await repository.LoadSpeaker();
var talk = await speaker.NextTalk();
var conference = await talk.GetConference();
var city = await conference.GetCity();
await reservations.BookFlight(speaker, city);

Even though we lost the fluent syntax, at least the block has just one level, which makes it easier to navigate.
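For comparison, here is the same workflow in Javascript, with Promise as the monad and await as the unwrapping syntax (the step functions are stand-ins I made up for the example):

```javascript
// Stand-in async steps; each returns a Promise, i.e. a value inside
// the Task/Future-style container.
const loadSpeaker = () => Promise.resolve({ name: "Ada" });
const nextTalk = speaker => Promise.resolve({ title: "Monads" });
const getConference = talk => Promise.resolve({ name: "NDC" });
const getCity = conference => Promise.resolve("Oslo");

// await unwraps one Promise per step, just like let! / await above,
// and `speaker` stays in scope for later steps for free.
async function nextTalkCity() {
    const speaker = await loadSpeaker();
    const talk = await nextTalk(speaker);
    const conference = await getConference(talk);
    return await getCity(conference);
}
```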

Monads in Functional Languages

So far we learned that

  • Monad is a workflow composition pattern
  • This pattern is used in functional programming
  • Special syntax helps simplify the usage

It should come as no surprise that functional languages support monads at the syntax level.

F# is a functional-first language running on .NET. F# had its own way of doing workflows, comparable to async-await, before C# got it. In F#, the above code would look like this:

let sendReservation () = async {
    let! speaker = repository.LoadSpeaker()
    let! talk = speaker.nextTalk()
    let! conf = talk.getConference()
    let! city = conf.getCity()
    do! bookFlight(speaker, city)
}

Apart from the syntax (! instead of await), the major difference from C# is that async is just one possible monad type to be used this way. There are many other monads in the F# standard library (they are called Computation Expressions).

The best part is that any developer can create their own monads, and then use all the power of language features.

Say we want a hand-made Maybe computation expression in F#:

let nextTalkCity (speaker: Speaker) = maybe {
    let! talk = speaker.nextTalk()
    let! conf = talk.getConference()
    let! city = conf.getCity(talk)
    return city
}

To make this code runnable, we need to define Maybe computation expression builder:

type MaybeBuilder() =

    member this.Bind(x, f) = 
        match x with
        | None -> None
        | Some a -> f a

    member this.Return(x) = 
        Some x

let maybe = new MaybeBuilder()

I won't explain the details of what happens here, but you can see that the code is quite trivial. Note the presence of Bind operation (and Return operation being the monad constructor).

The feature is widely used by third-party F# libraries. Here is an actor definition in Akka.NET F# API:

let loop () = actor {
    let! message = mailbox.Receive()
    match message with
    | Greet(name) -> printfn "Hello %s" name
    | Hi -> printfn "Hello from F#!"
    return! loop ()
}

Monad Laws

There are a couple of laws that the constructor and Bind need to adhere to in order to produce a proper monad.

A typical monad tutorial will put a lot of emphasis on the laws, but I find them less important to explain to a beginner. Nonetheless, here they are for the sake of completeness.

Left Identity law says that Monad constructor is a neutral operation: you can safely run it before Bind, and it won't change the result of the function call:

// Given
T value;
Func<T, Monad<U>> f;

// Then (== means both parts are equivalent)
new Monad<T>(value).Bind(f) == f(value) 

Right Identity law says that given a monadic value, wrapping its contained data into another monad of same type and then Binding it, doesn't change the original value:

// Given
Monad<T> monadicValue;

// Then (== means both parts are equivalent)
monadicValue.Bind(x => new Monad<T>(x)) == monadicValue

Associativity law means that the order in which Bind operations are composed does not matter:

// Given
Monad<T> m;
Func<T, Monad<U>> f;
Func<U, Monad<V>> g;

// Then (== means both parts are equivalent)
m.Bind(f).Bind(g) == m.Bind(a => f(a).Bind(g))

The laws may look complicated, but in fact they are very natural expectations that any developer has when working with monads, so don't spend too much mental effort on memorizing them.
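As a sanity check, the three laws can be verified concretely for Javascript arrays, where the constructor is x => [x] and Bind is flatMap:

```javascript
const unit = x => [x];          // monad constructor
const f = x => [x, x + 1];      // T -> Monad<U>
const g = x => [x * 10];        // U -> Monad<V>
const m = [1, 2];

// Structural equality helper for the comparisons below
const eq = (a, b) => JSON.stringify(a) === JSON.stringify(b);

// Left identity:  unit(v).flatMap(f) == f(v)
eq(unit(5).flatMap(f), f(5));                                  // true

// Right identity: m.flatMap(unit) == m
eq(m.flatMap(unit), m);                                        // true

// Associativity:  m.flatMap(f).flatMap(g) == m.flatMap(a => f(a).flatMap(g))
eq(m.flatMap(f).flatMap(g), m.flatMap(a => f(a).flatMap(g)));  // true
```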


Conclusion

You should not be afraid of the "M-word" just because you are a C# programmer.

C# does not have a notion of monads as predefined language constructs, but that doesn't mean we can't borrow some ideas from the functional world. Having said that, it's also true that C# is lacking some powerful ways to combine and generalize monads that are available in functional programming languages.

Go learn some more Functional Programming!

Mikhail Shilkov I'm Mikhail Shilkov, a software developer and architect, a Microsoft Azure MVP, Russian expat living in the Netherlands. I am passionate about cloud technologies, functional programming and the intersection of the two.
