Precompiled Azure Functions in F#

This post kicks off the F# Advent Calendar in English 2017. Please follow the calendar for all the great posts to come.

Azure Functions is a "serverless" cloud offering from Microsoft. It allows you to run your custom code in response to events in the cloud. Functions are very easy to get started with, and you only pay per execution - with a free allowance sufficient for any proof of concept, hobby project or even low-usage production load. And when you need more, Azure will scale your app automatically.

F# is one of the officially supported languages for Azure Functions. Originally, F# support started with F# script files (authored directly in the Azure portal or copied from a local editor), so you can find many articles online to get started, e.g. Creating an Azure Function in F# from the ground up and Part 2 by Mathias Brandewinder.

However, I find the script-based model a bit limiting. In today's article I will focus on creating Azure Functions as precompiled .NET libraries. Along the way, I'll use cross-platform tools like .NET Core and VS Code, and I'll show how to integrate Functions with some popular tools like Suave and Paket.

Create a Project

You can follow this walkthrough on Windows or Mac, just make sure that you have .NET Core 2 and Node.js 8.x with npm installed. My editor of choice is Visual Studio Code with Ionide plugin.

I'll show you how to create a new F# Function App from scratch. If you want to jump straight to a runnable project, you can get it from my GitHub.

We start by creating a new F# library project targeting .NET Standard 2.0. Run this in your command line:

dotnet new classlib --language F# --name HelloFunctions

This command creates a folder with two files: HelloFunctions.fsproj project file and Library.fs source code file.

Now, add a reference to the Azure Functions NuGet package:

dotnet add package Microsoft.NET.Sdk.Functions

Define a Function

Open the Library.fs file and replace its contents with the following code:

namespace HelloFunctions

open System
open Microsoft.Azure.WebJobs
open Microsoft.Azure.WebJobs.Host

module Say =
  let private daysUntil (d: DateTime) =
    (d - DateTime.Now).TotalDays |> int

  let hello (timer: TimerInfo, log: TraceWriter) =
    let christmas = new DateTime(2017, 12, 25)

    daysUntil christmas
    |> sprintf "%d days until Christmas"
    |> log.Info

We defined a function hello which will be triggered by the Functions runtime on a schedule. Every time the function is called, we log how many days are left until Christmas 2017.

To convert this simple F# function into an Azure Function, create a folder called Hello (or choose any other name) next to the project file and add a function.json file there:

{
  "bindings": [
    {
      "name": "timer",
      "type": "timerTrigger",
      "schedule": "0 * * * * *"
    }
  ],
  "scriptFile": "../bin/HelloFunctions.dll",
  "entryPoint": "HelloFunctions.Say.hello"
}

We defined that:

  • Our function is triggered by a timer
  • It runs every minute at 0 seconds
  • The entry point is our hello function in the compiled assembly

Prepare Local Runtime

A couple more configuration files are needed to run the Function App locally. host.json defines hosting parameters; an empty file will do for now:

{
}

Most triggers need to connect to a Storage Account. For example, the timer trigger uses it to hold leases that determine which running instance will actually execute the action every minute. Copy the connection string of your Storage Account (the local Storage Emulator is fine too) and put it into a local.settings.json file:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "...your connection string..."
  }
}

Note that this file is only used for local development and is not published to Azure by default.

Finally, we need to modify the fsproj file so that the build copies those files into the bin folder. Add the following section to it:

<ItemGroup>
  <Content Include="Hello\function.json">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </Content>
  <Content Include="host.json">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </Content>
  <Content Include="local.settings.json">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </Content>
</ItemGroup>

Run App Locally

The first step is to build and publish our Function App with dotnet commands:

dotnet build
dotnet publish

The first command produces the dll file and the second copies it, together with all of its dependencies, to the publish folder.

The nice thing about Azure Functions is that you can easily run them locally on a development machine. Execute the following command to install the runtime and all the required libraries:

npm install -g azure-functions-core-tools@core

This adds the func CLI to your system, which is the tool to use for all Functions-related operations.

Navigate to bin\Debug\netstandard2.0\publish folder and run func start from there. You should see that your app is now running, and your timer function is scheduled for execution:

Function App Start

Once the next minute comes, the timer will trigger and you will see messages in the log:

Timer Trigger Working

Integrate into VS Code

You are free to use full Visual Studio or any other editor to develop Function Apps in F#. I've been mostly using VS Code for this purpose, and I believe it's quite popular in the F# community.

If you use VS Code, be sure to set up tasks that you can run from within the editor. I usually have at least 3 tasks: "build" (dotnet build), "publish" (dotnet publish) and "run" (func start --script-root bin\\debug\\netstandard2.0\\publish), with shortcuts configured for all of them.

You can find an example of tasks.json file here.
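
For reference, a minimal sketch of such a tasks.json could look like this (task format 2.0.0; the labels and the publish path are assumptions, adjust them to your project layout):

{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "build",
      "type": "shell",
      "command": "dotnet",
      "args": ["build"]
    },
    {
      "label": "publish",
      "type": "shell",
      "command": "dotnet",
      "args": ["publish"]
    },
    {
      "label": "run",
      "type": "shell",
      "command": "func",
      "args": ["start", "--script-root", "bin/Debug/netstandard2.0/publish"]
    }
  ]
}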

Also, check out Azure Functions Extension.

Deploy to Azure

You can deploy the exact same application binaries to Azure. Start by creating an empty Function App in the portal, or via Azure CLI (func CLI does not support that).

Then run the following command to deploy your precompiled function to this app:

func azure functionapp publish <FunctionAppName>

On the first run, it will verify your Azure credentials.

In real-life production scenarios your workflow is probably going to be similar to this:

  • Change Function App code
  • Run it locally to test the change
  • Push the code changes to the source control repository
  • Have your CI/CD pipeline build it, run the tests and then push the binaries to the Azure Functions environment (the deployment steps are sketched below)
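
The deployment part of such a pipeline can boil down to the same commands we used locally; a minimal sketch, assuming the default Debug configuration and a placeholder app name:

dotnet build
dotnet publish
cd bin/Debug/netstandard2.0/publish
func azure functionapp publish <FunctionAppName>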

HTTP Trigger

Timer-triggered functions are useful, but that's just one limited use case. Several other event types can trigger Azure Functions, and for all of them you can create precompiled functions and run them locally.

The most ubiquitous trigger for any serverless app is probably HTTP. So, for the rest of the article I will focus on several approaches to implementing HTTP functions. Nonetheless, the same technique can be applied to other triggers too.

F# code for the simplest HTTP Function can look like this:

namespace PrecompiledApp

open Microsoft.AspNetCore.Mvc
open Microsoft.AspNetCore.Http
open Microsoft.Azure.WebJobs.Host

module PrecompiledHttp =

  let run(req: HttpRequest, log: TraceWriter) =
    log.Info("F# HTTP trigger function processed a request.")
    ContentResult(Content = "HO HO HO Merry Christmas", ContentType = "text/html")

You can find a full example of HTTP Function App here.

This code uses ASP.NET Core classes for the request and response. It's still just an F# function, so we need to bind it to a trigger in function.json:

{
  "bindings": [
    {
      "type": "httpTrigger",
      "methods": ["get"],
      "authLevel": "anonymous",
      "name": "req",
      "route": "hellosanta"
    }
  ],
  "scriptFile": "../bin/PrecompiledApp.dll",
  "entryPoint": "PrecompiledApp.PrecompiledHttp.run"
}

If you run the app, the function will be hosted at localhost:

HTTP Trigger Working

And a request to http://localhost:7071/api/hellosanta will be answered with our "HO HO HO" message.

This function is of "Hello World" level, but the fact that it's inside a normal F# library gives you lots of power.

Let's look at some examples of how to use it.

Suave Function

What can we do to enhance developer experience? We can use our favourite F# libraries.

Suave is one of the most popular F# libraries for implementing web APIs. And we can use it in Azure Functions too!

Let's first make a small tweak to the HTTP trigger definition in function.json:

"bindings": [
  {
    "type": "httpTrigger",
    "methods": ["get"],
    "authLevel": "anonymous",
    "name": "req",
    "route": "{*anything}"
  }
],

The binding now defines a wildcard route which directs all requests to this function. That's because we want Suave to take care of routing for us.

The definition of such routing will look familiar to all Suave users:

module App =
  open Suave
  open Suave.Successful
  open Suave.Operators
  open Suave.Filters

  let app = 
    GET >=> choose
      [ path "/api/what" >=> OK "Every time we love, every time we give, it's Christmas."
        path "/api/when" >=> OK "Christmas isn't a season. It's a feeling."
        path "/api/how" >=> OK "For it is in giving that we receive." ]

The Azure Function itself is just a one-liner wiring the Suave app into the pipeline:

module Http =
  open Suave.Azure.Functions.Context

  let run req =
    req |> runWebPart App.app |> Async.StartAsTask

The heavy lifting is done by the runWebPart function, a utility defined in the same application. You can see the full code of this wiring in my repo.

Run the application and request the URL http://localhost:7071/api/what to see the function in action.

This example is very simple, but you can do lots of powerful stuff with Suave! You probably shouldn't go overboard and try to fit a whole multi-resource REST API into a single Azure Function. But it might still make sense to keep related HTTP calls together, and Suave can help keep them cleaner.

Managing Dependencies with Paket

Once your Function App grows bigger and you start using multiple F# projects, it makes sense to switch to the Paket package manager.

It is totally possible to use Paket with Azure Functions; there isn't much specific to Functions about it, really. Here is an example of a paket.dependencies file

source https://www.nuget.org/api/v2

framework: >= netstandard2.0
nuget FSharp.Core
nuget Microsoft.NET.Sdk.Functions
nuget Microsoft.AspNetCore.Mvc.Core

that I used in an example which demonstrates the Paket + Functions combination.
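
Next to each fsproj, Paket also expects a paket.references file listing the packages that the project consumes; for the app above it could simply be:

FSharp.Core
Microsoft.NET.Sdk.Functions
Microsoft.AspNetCore.Mvc.Core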

Attribute-Based Functions

Up until now, we have been writing function.json files manually for each function. This is not terribly tedious, but it is error prone. Microsoft offers an alternative programming model where these files are auto-generated by the Functions SDK.

This programming model is based on attributes, which are similar to WebJobs SDK attributes. With this approach, there's no function.json file in the project. Instead, the function declaration is decorated with attributes:

[<FunctionName("AttributeBased")>]
let run([<HttpTrigger>] req: HttpRequest, log: TraceWriter) =
  log.Info("F# HTTP trigger function processed a request.")
  ContentResult(Content = "HO HO HO Merry Christmas", ContentType = "text/html")

The same development flow still works. Once you run dotnet build, a new function.json file will be generated and placed into the bin folder. The Functions runtime will use it to run the function as usual.

Note that the generated file looks a bit different from the manual equivalent:

  1. It manifests itself with

     "generatedBy": "Microsoft.NET.Sdk.Functions.Generator-1.0.6",
     "configurationSource": "attributes",
    
  2. If you use input and output bindings, you won't see them in the generated file; only the trigger will be visible in the JSON. Don't worry, input and output bindings will still work.

You can find an example of HTTP function with attributes here.

There are pros and cons to this model. Obviously, not having to write JSON files manually is beneficial. Some people find the binding attributes really ugly though, especially when you have 3 or 4 bindings and each has multiple parameters.

My preference is to use attributes, but not to mix attribute decoration with real code: keep the function body a simple one-liner, and delegate the call to a properly defined F# function containing the actual domain logic.
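
As an illustration of that split, a hypothetical function could look like the sketch below (the Greet name, the query parameter and the Greetings module are made up for the example):

namespace AttributedApp

open Microsoft.AspNetCore.Http
open Microsoft.AspNetCore.Mvc
open Microsoft.Azure.WebJobs
open Microsoft.Azure.WebJobs.Host

// Domain logic: a plain F# module with no Functions-specific types,
// easy to unit test in isolation
module Greetings =
  let messageFor (name: string) =
    if System.String.IsNullOrWhiteSpace(name) then "HO HO HO Merry Christmas"
    else sprintf "HO HO HO, %s" name

// The Azure Function is a thin adapter: attributes, bindings and logging only
module HttpAdapter =
  [<FunctionName("Greet")>]
  let run ([<HttpTrigger>] req: HttpRequest, log: TraceWriter) =
    log.Info("Greet function processed a request.")
    let name = req.Query.["name"].ToString()
    ContentResult(Content = Greetings.messageFor name, ContentType = "text/plain")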

Wrapping Up

Many F# users value the language for how quickly one can become productive with it, thanks to its concise syntax, powerful libraries and tools like FSI.

In my opinion, Azure Functions fits nicely into that picture. It takes just a few minutes to run your first Function App on a developer machine, and then seamlessly transfer it to the cloud.

I've prepared a GitHub repository where you can find more examples of Azure Functions implemented in F#.

Merry Serverless Functional Christmas!

Azure F#unctions Talk at FSharping Meetup in Prague

On November 8th 2017 I gave a talk about developing Azure Functions in F# at FSharping meetup in Prague.

I really enjoyed giving this talk: the audience was great and asked awesome questions. One more proof that the F# community is welcoming and energizing!

All the demos of that session can be found in my github repository.

The slides were only a small portion of my talk, but you can see them below anyway.

Link to full-screen HTML slides: Azure F#unctions

Thanks for attending my talk! Feel free to post any feedback in the comments.

Azure Function Triggered by Azure Event Grid

Update: I missed the elephant in the room. There actually is a specialized trigger binding for Event Grid. In the portal, just select Experimental in the Scenario drop-down while creating the function. In precompiled functions, reference the Microsoft.Azure.WebJobs.Extensions.EventGrid NuGet package.

The rest of the article describes my original approach to triggering an Azure Function from Azure Event Grid with a generic webhook trigger.

Here are the steps to follow:

Create a Function with Webhook Trigger

I'm not aware of a specialized trigger type for Event Grid, so I decided to use the Generic Webhook trigger (which is essentially an HTTP trigger).

I used the Azure Portal to generate a function, so here is the function.json that I got:

{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "webHookType": "genericJson",
      "name": "req"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ],
  "disabled": false
}

For precompiled functions, just decorate the function with HttpTriggerAttribute, allowing the POST method:

public static Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req)

Parse the Payload

Events from Event Grid will arrive in a specific predefined JSON format. Here is an example of events to expect:

[{
  "id": "0001",
  "eventType": "MyHelloWorld",
  "subject": "Hello World!",
  "eventTime": "2017-10-05T08:53:07",
  "data": {
    "hello": "world"
  },
  "topic": "/SUBSCRIPTIONS/GUID/RESOURCEGROUPS/NAME/PROVIDERS/MICROSOFT.EVENTGRID/TOPICS/MY-EVENTGRID-TOPIC1"
}]

To parse this data more easily, I defined a C# class to deserialize the JSON into:

public class GridEvent
{
    public string Id { get; set; }
    public string EventType { get; set; }
    public string Subject { get; set; }
    public DateTime EventTime { get; set; }
    public Dictionary<string, string> Data { get; set; }
    public string Topic { get; set; }
}

Now the function can read the events (note that they are sent in arrays) from the body of the POST request:

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    string jsonContent = await req.Content.ReadAsStringAsync();
    var events = JsonConvert.DeserializeObject<GridEvent[]>(jsonContent);

    // do something with events

    return req.CreateResponse(HttpStatusCode.OK);
}

Validate the Endpoint

To prevent events from being sent to endpoints that the subscriber doesn't own, Event Grid requires each subscriber to validate itself. For this purpose, Event Grid sends events of the special type SubscriptionValidation.

The validation request will contain a code, which we need to echo back in a 200 OK HTTP response.

Here is a small piece of code to do just that:

if (req.Headers.GetValues("Aeg-Event-Type").FirstOrDefault() == "SubscriptionValidation")
{
    var code = events[0].Data["validationCode"];
    return req.CreateResponse(HttpStatusCode.OK,
        new { validationResponse = code });
}

The function is ready!

Create a Custom Event Grid Topic

To test it out, go to the portal and create a custom Event Grid topic. Then click the Add Event Subscription button, give it a name and copy-paste the function URL (including the key) into the Subscriber Endpoint field:

Azure Function URL

Event Grid Subscription

Creating a subscription will immediately trigger a validation request to your function, so you should see one invocation in the logs.

Send Custom Events

Now, go to your favorite HTTP client (curl, Postman, etc) and send a sample event to check how the whole setup works:

POST /api/events HTTP/1.1
Host: <your-eventgrid-topic>.westus2-1.eventgrid.azure.net
aeg-sas-key: <key>
Content-Type: application/json

[{
  "id": "001",
  "eventType": "MyHelloWorld",
  "subject": "Hello World!",
  "eventTime": "2017-10-05T08:53:07",
  "data": {
    "hello": "world"
  }
}]

Obviously, adjust the endpoint and key based on the data from the portal.

You should get a 200 OK back and then see your event in the Azure Function invocation logs.

Have fun!

Wanted: Effectively-Once Processing in Azure

This experimental post is a question. The question is too broad for StackOverflow, so I'm posting it here. Please engage in the comments section, or forward the link to subject experts.

TL;DR: Are there any known patterns / tools / frameworks to provide scalable, stateful, effectively-once, end-to-end processing of messages, to be hosted in Azure, preferably on PaaS-level of service?

Motivational Example

Let's say we are making a TODO app. There is a constant flow of requests to create a TODO in the system. Each request contains just two fields: a title and the ID of the project which the TODO should belong to. Here is the definition:

type TodoRequest = {
  ProjectId: int
  Title: string
}

Now, we want to process each request and assign the TODO an identifier, which should be an auto-incremented integer. Numbering is unique per project, so each TODO must have its own combination of ProjectId and Id:

type Todo = {
  ProjectId: int
  Id: int
  Title: string
}

Now, instead of relying on database sequences, I want to describe this transformation as a function. The function has the type (TodoRequest, int) -> (Todo, int), i.e. it transforms a tuple of a request and the current per-project state (the last generated ID) into a tuple of a TODO and the new state:

let create (request: TodoRequest, state: int) =
  let nextId = state + 1
  let todo = {
    ProjectId = request.ProjectId
    Id = nextId
    Title = request.Title
  }
  todo, nextId

This is an extremely simple function, and I can use it with great success to process local, non-durable data.
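
For example, processing a small in-memory batch is just a fold over the requests and the counter (the sample titles below are made up):

// Start from state 0 (no TODO's created yet) and thread the counter
// through a batch of requests belonging to a single project
let requests: TodoRequest list = [
  { ProjectId = 1; Title = "Buy a tree" }
  { ProjectId = 1; Title = "Wrap the presents" }
]

let todos, finalState =
  requests
  |> List.fold (fun (created, state) request ->
       let todo, nextState = create (request, state)
       created @ [ todo ], nextState) ([], 0)

// todos get Id = 1 and Id = 2; finalState = 2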

But if I need to make a reliable distributed application out of it, I need to take care of lots of things:

  1. No request should be lost. I need to persist all the requests into a durable storage in case of processor crash.

  2. Similarly, I need to persist TODO's too. Presumably, some downstream logic will use the persisted data later on in TODO's lifecycle.

  3. The state (the counter) must be durable too. If the processing function crashes, I want to be able to restart processing after recovery.

  4. Processing of the requests should be sequential per project ID. Otherwise I might get a clash of ID's in case two requests belonging to the same project are processed concurrently.

  5. I still want requests to different projects to be processed in parallel, to make sure the system scales up with the growth of project count.

  6. There must be no holes or duplicates in TODO numbering per project, even in the face of system failures. In the worst case, I agree to tolerate a duplicated entry in the output log, but it must be exactly the same entry (i.e. two entries with the same project ID, ID and title).

  7. The system should tolerate a permanent failure of any single hardware dependency and automatically fail-over within reasonable time.

It's not feasible to meet all of those requirements without relying on some battle-tested distributed services or frameworks.

Which options do I know of?

Transactions

Traditionally, this kind of requirement was solved with transactions in something like SQL Server. If I store the requests, the TODO's and the current ID per project in the same relational database, I can make each processing step a single atomic transaction.

This addresses all the concerns, as long as we can stay inside a single database. That's probably a viable option for the TODO app, but less so if I translate my toy example to real applications like IoT data processing.

Can we do the same for distributed systems at scale?

Azure Event Hubs

Since I touched on the IoT space, the logical choice would be to store our entries in Azure Event Hubs. That satisfies many of the criteria, but I don't see any available approach that makes such processing consistent in the face of failures.

When processing is done, we need to store 3 pieces of data: the generated TODO event, the current processing offset and the current ID. The event goes to another event hub, the processing offset is stored in Blob Storage and the ID can be saved to something like Table Storage.

But there's no way to store those 3 pieces atomically. Whichever order we choose, we are bound to get anomalies in some specific failure modes.

Azure Functions

Azure Functions doesn't solve these problems. But I want to mention this Function-as-a-Service offering because it provides an ideal programming model for my use case.

I would need to take just one step from my domain function to an Azure Function: define bindings for e.g. Event Hubs and Table Storage.
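
Just to illustrate the shape of it, the bindings of such a function could look roughly like this (the names and paths are made up, and this wiring does nothing to solve the consistency problem):

{
  "bindings": [
    {
      "type": "eventHubTrigger",
      "name": "request",
      "path": "todo-requests",
      "connection": "EventHubConnection",
      "consumerGroup": "$Default"
    },
    {
      "type": "eventHub",
      "direction": "out",
      "name": "todo",
      "path": "todos",
      "connection": "EventHubConnection"
    },
    {
      "type": "table",
      "direction": "out",
      "name": "counterState",
      "tableName": "todoCounters",
      "connection": "AzureWebJobsStorage"
    }
  ]
}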

However, the reliability guarantees stay poor: I get neither sequential processing per Event Hub partition key nor atomic state commits.

Azure Service Fabric

Service Fabric sounds like a good candidate service for reliable processing. Unfortunately, I don't have enough experience with it to judge.

Please leave a comment if you do.

JVM World

There are products in JVM world which claim to solve my problem perfectly.

Apache Kafka was the inspiration for Event Hubs' log-based messaging. The recent Kafka release provides effectively-once processing semantics as long as the data stays inside Kafka. Kafka does that with atomic publishing to multiple topics and state storage based on compacted topics.

Apache Flink has similar guarantees for its stream processing APIs.

Great, but how do I get such awesomeness in .NET code, and without installing expensive ZooKeeper-managed clusters?

Call for Feedback

Do you know a solution, product or service?

Have you developed effectively-once processing on .NET / Azure stack?

Are you in touch with somebody who works on such framework?

Please leave a comment, or ping me on Twitter.

Azure Functions: Are They Really Infinitely Scalable and Elastic?

Automatic elastic scaling is a built-in feature of the serverless computing paradigm. You don't have to provision servers anymore; you just write code, and it gets provisioned onto as many servers as needed based on the actual load. That's the theory.

In particular, Azure Functions can be hosted on the Consumption plan:

The Consumption plan automatically allocates compute power when your code is running, scales out as necessary to handle load, and then scales down when code is not running.

In this post I will run a simple stress test to get a feel for how this automatic allocation works in practice and what kind of characteristics we can rely on.

Setup

Here are the parameters that I chose for today's test:

  • Azure Function written in C# and hosted on Consumption plan
  • Triggered by Azure Storage Queue binding
  • Workload is strictly CPU-bound, no I/O is executed

Specifically, each queue item represents one password that I need to hash. Each function call performs 12-round Bcrypt hashing. Bcrypt is a deliberately slow algorithm recommended for password hashing: it makes brute-force attacks against stolen password hashes really hard and costly.

My function is based on Bcrypt.Net implementation, and it's extremely simple:

public static void Run([QueueTrigger("bcrypt-password")] string password)
{
    BCrypt.Net.BCrypt.HashPassword(password, 12);
}

It turns out that a single execution of this function takes approximately 1 second on an instance of Consumption plan, and consumes 100% CPU during that second.

Now, the challenge is simple. I send 100,000 passwords to the queue and see how long it takes to hash them, and also how the autoscaling behaves. I will run the test twice, with a different pace of sending messages to the queue each time.
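
For reference, a producer along these lines can fill the queue - a minimal F# sketch, assuming the WindowsAzure.Storage client library (the message contents are arbitrary strings to hash):

open Microsoft.WindowsAzure.Storage
open Microsoft.WindowsAzure.Storage.Queue

// Push 'count' password messages into the queue that triggers the function
let sendPasswords (connectionString: string) (count: int) =
  let account = CloudStorageAccount.Parse(connectionString)
  let queue = account.CreateCloudQueueClient().GetQueueReference("bcrypt-password")
  queue.CreateIfNotExistsAsync() |> Async.AwaitTask |> Async.RunSynchronously |> ignore
  for i in 1 .. count do
    queue.AddMessageAsync(CloudQueueMessage(sprintf "password-%d" i))
    |> Async.AwaitTask
    |> Async.RunSynchronously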

That sounds like a perfect job for a Function App on Consumption plan:

  • Needs to scale based on load
  • CPU intensive - easy to see how busy each server is
  • Queue-based - easy to see the incoming vs outgoing rate

Let's see how it went.

Experiment 1: Steady Load

In my first run, I was sending messages at a constant rate. 100,000 messages were sent within 2 hours, without spikes or drops in the pace.

Sounds like an easy job for autoscaling facilities. But here is the actual chart of data processing:

Function App Scaling

The horizontal axis is time in minutes since the first message came in.

The orange line shows the queue backlog - the amount of messages sitting in the queue at a given moment.

The blue area represents the number of instances (virtual servers) allocated to the function by the Azure runtime (see the numbers on the right side).

We can divide the whole process into 3 logical segments, approximately 40 minutes each:

Lagging behind. The runtime starts with 0 instances, and immediately switches to 1 when the first message comes in. However, it's reluctant to add any more servers for the next 20 (!) minutes. The scaling heuristic is probably based on the past history of this queue/function, and it wasn't busy at all during the preceding hours.

After 20 minutes, the runtime starts adding more instances: it goes up to 2, then jumps to 4, then reaches 5 at minute 40. The CPU is constantly at 100% and the queue backlog grows linearly.

Rapid scale-up. After minute 40, it looks like the runtime realizes that it needs more power. Much more power! The growth speeds up quickly, and by minute 54 the backlog stops growing, even though messages are still coming in. There are now 21 instances working, which is enough to finally match and beat the rate of incoming messages.

The runtime doesn't stop growing though. CPUs are still at 100% and the backlog is still very high, so the scaling goes up and up. The number of instances reaches an astonishing 55, at which point the entire backlog is processed and there are no messages left in the queue.

Searching for balance. When the queue is almost empty and CPU drops below 100% for the first time, the runtime decides to scale down. It does that quickly and aggressively, switching from 55 to 21 instances in just 2 minutes.

From there it keeps slowly reducing the number of instances until the backlog starts growing again. The runtime allows the backlog to grow a bit, but then figures out a balanced number of servers (17) to keep the backlog flat at around 2,000 messages.

It stays at 17 until the producer stops sending new messages. The backlog goes down to 0, and the number of instances gradually drops to 0 within 10 minutes.

The second chart from the same experiment looks very similar, but it shows different metrics:

Function App Delay

The gray line is the delay in minutes since the currently processed message got enqueued (message "age", in-queue latency). The blue line is the total processing rate, measured in messages per minute.

Due to the perfect scalability and stability of my function, both charts are almost exactly the same. I've put this one here so that you can see that the slowest message spent more than 40 minutes sitting in the queue.

Experiment 2: Spiky Load

For the second run, I tried to emulate a spiky load profile. I sent my 100,000 messages over 6 hours, at a lower pace than during the first run. But from time to time the producer switched to fast mode and sent a bigger batch of messages within just a few minutes. Here is the actual chart of the incoming message rate:

Spiky Load

It's easy to imagine a service with a usage pattern like that, where spikes of events happen from time to time or during rush hours.

This is how the Function App managed to process the messages:

Spiky Load Processing Result

The green line still shows the number of incoming messages per minute. The blue line denotes how many messages were actually processed in that minute. And the orange bars are the queue backlog - the number of messages pending.

Here are several observations:

  • Obviously, processing latency is far from real time. There is almost always a significant backlog in the queue, and the processing delay reaches 20 minutes at peak.

  • It took the runtime 2 hours to clear the backlog for the first time. Even without any spikes during the first hour, the autoscaling algorithm needs time to get up to speed.

  • The Function App runtime is able to scale up quite fast (look at the reaction to the fourth spike), but it's not really willing to do so most of the time.

  • The growth of the backlog after minute 280 is purely caused by a wrong decision of the runtime. While the load was completely steady, the runtime decided to shut down most workers after 20 minutes of an empty backlog and could not recover for the next hour.

Conclusions

I tried to get a feel for the ability of Azure Functions to scale on demand, adapting to the workload. The function under test was purely CPU-bound, and for that case I can draw two main conclusions:

  • Function Apps are able to scale to a high number of instances running at the same time, and to eventually process large parallel jobs (at least up to 55 instances in my test).

  • Significant processing delays are to be expected for heavy loads. The Function App runtime has quite some inertia, and the resulting processing latency can easily go up to tens of minutes.

If you know how these results can be improved, or why they are less than optimal, please leave a comment or contact me directly.

I look forward to conducting more tests in the future!
