Programmable Cloud: Provisioning Azure App Service with Pulumi

Modern cloud providers offer a wide variety of services at different levels of abstraction. A modern cloud application leverages multiple services to be efficient in terms of developer experience, price, operations and so on.

For instance, a very simple web application deployed to Azure PaaS services could use:

  • App Service - to host the application
  • App Service Plan - to define the instance size, price, scaling and other hosting parameters
  • Azure SQL Database - to store relational data
  • Application Insights - to collect telemetry and logs
  • Storage Account - to store the binaries and leverage the Run-from-Zip feature

Provisioning such an environment becomes a task of its own:

  • How do we create the initial setup?
  • How do we make changes?
  • What if we need multiple environments?
  • How do we apply settings?
  • How do we recycle resources which aren't needed anymore?

Well, there are several options.

Manually in Azure Portal

We all start by doing this in the Azure Portal. The user interface is great for discovering new services and features, and it's a quick way to make a single change.

Creating an App Service in Azure Portal

Clicking buttons manually doesn't scale though. After the initial setup is complete, maintaining the environment over time poses significant challenges:

  • Every change requires going back to the portal, finding the right resource and making the right change
  • People make mistakes, so if you have multiple environments, they are likely to differ in subtle ways
  • Naming gets messy over time
  • There is no easily accessible history of environment changes
  • Cleaning up is hard: usually some leftovers remain unnoticed
  • Everybody involved in provisioning has to learn their way around the portal

So, how do we streamline this process?

Azure PowerShell, CLI and Management SDKs

Azure comes with a powerful set of tools to manage resources with code.

You can use PowerShell, CLI scripts or custom code (e.g. C#) to do programmatically whatever is possible to do via the portal.

var webApp = azure.WebApps.Define(appName)
    .WithRegion(Region.WestEurope)
    .WithNewResourceGroup(rgName)
    .WithNewFreeAppServicePlan()
    .Create();

Fluent C# code creating an App Service

However, those commands are usually expressed in an imperative style of CRUD operations. You can run the commands once, but it's hard to evolve existing resources from an arbitrary current state to the desired end state.

Azure Resource Manager Templates

All services in Azure are managed by Azure Resource Manager (ARM). ARM has a special JSON-based format for templates.

Once a template is defined, it's relatively straightforward to deploy it to an Azure environment: with the resources described in JSON, a single PowerShell or CLI command creates them all.

It is also possible to deploy templates in incremental mode, where the tooling compares the existing environment with the desired configuration and deploys only the difference.

Templates can be parametrized, which enables multi-environment deployments.

There's a problem with templates though: they are JSON files. They get very large very fast, they are hard to reuse, and it's easy to make a typo.

A fragment of an auto-generated ARM Template for an App Service (note the line numbers)

Terraform is another templating tool to provision cloud resources, but it uses HCL (HashiCorp Configuration Language) instead of JSON. I don't have much experience with it, but the problems seem to be very similar.

Can we combine the power of SDKs with the power of declarative, template-based desired state configuration tools?

Pulumi

One potential solution has just arrived: a startup called Pulumi recently came out of private beta and went open source.

Pulumi wants to be much more than a better version of ARM templates, aiming to become the tool for building cloud-first distributed systems. But today I'll focus on the lower-level task of resource provisioning.

With Pulumi, cloud infrastructure is defined in code using full-blown general-purpose programming languages.

The workflow goes like this:

  • Define a Stack, which is a container for a group of related resources
  • Write a program in one of the supported languages (I'll use TypeScript) which references the pulumi libraries and constructs all the resources as objects
  • Establish a connection with your Azure account
  • Call the pulumi CLI to create, update or destroy Azure resources based on the program
  • Pulumi will first show a preview of the changes, and then apply them as requested

Pulumi Program

I'm using TypeScript to define my Azure resources in Pulumi. So, the program is a normal Node.js application with an index.ts file, package references in package.json, and one extra file, Pulumi.yaml, which defines the program:

name: azure-appservice
runtime: nodejs

Our index.ts is just a bunch of import statements followed by the creation of a TypeScript object per desired resource. The simplest program can look like this:

import * as pulumi from "@pulumi/pulumi";
import * as azure from "@pulumi/azure";

const resourceGroup = new azure.core.ResourceGroup("myrg", {
    location: "West Europe"
});

When executed by the pulumi update command, this program will create a new Resource Group in your Azure subscription.

Chaining Resources

When multiple resources are created, the properties of one resource will often depend on the properties of others. E.g., I've defined the Resource Group above, and now I want to create an App Service Plan under this Group:

const resourceGroupArgs = {
    resourceGroupName: resourceGroup.name,
    location: resourceGroup.location
};

const appServicePlan = new azure.appservice.Plan("myplan", {
    ...resourceGroupArgs,

    kind: "App",

    sku: {
        tier: "Basic",
        size: "B1",
    },
});

I've assigned the resourceGroupName and location of the App Service Plan to values from the Resource Group. It looks like a simple assignment of strings, but in fact it's more involved.

The property resourceGroup.name has the type pulumi.Output<string>, while the constructor argument resourceGroupName of Plan has the type pulumi.Input<string>.

We assigned the value "myrg" to the Resource Group name, but during the actual deployment it will change: Pulumi appends a unique identifier to the name, so the provisioned group will actually be named something like "myrg65fb103e".

This value will materialize inside the Output only at deployment time, and Pulumi will then propagate it to the Input.
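
To illustrate the relationship, here is a hypothetical snippet (not part of the original program): the name can't be read as a plain string while the program runs, but an Output can be passed anywhere an Input is expected:

// resourceGroup.name is an Output<string>, not a string
const rgName: pulumi.Output<string> = resourceGroup.name;

// const plain: string = resourceGroup.name;  // does not compile: Output<string> is not a string

// An Output<string> is accepted wherever an Input<string> is expected,
// which is exactly what the resourceGroupArgs assignment above relies on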

There is also a nice way to return the final values of Outputs from the Pulumi program. Let's say we define an App Service:

const app = new azure.appservice.AppService("mywebsite", {
    ...resourceGroupArgs,

    appServicePlanId: appServicePlan.id
});

First, notice how we used the TypeScript spread operator to reuse the properties from resourceGroupArgs.

Second, the Output-to-Input assignment is used again to propagate the App Service Plan ID.

Lastly, we can now export the App Service host name from our program, e.g. so that the user can go to the web site immediately after deployment:

exports.hostname = app.defaultSiteHostname;

An Output can also be transformed with the apply function. Here is the code to format the endpoint URL:

exports.endpoint = app.defaultSiteHostname.apply(n => `https://${n}`);

Running pulumi update from CLI will then print the endpoint for us:

---outputs:---
endpoint: "https://mywebsiteb76260b5.azurewebsites.net"

Multiple outputs can be combined with pulumi.all, e.g. given a SQL Server and a Database, we could build a connection string:

// Both values are Outputs: apply runs once they are known at deployment time
const connectionString =
    pulumi.all([sqlServer.name, database.name]).apply(([server, db]) =>
        `Server=tcp:${server}.database.windows.net;initial catalog=${db};user ID=${username};password=${pwd};Min Pool Size=0;Max Pool Size=30;Persist Security Info=true;`);

Using the Power of NPM

Since our program is just a TypeScript application, we are free to use any third-party package that exists out there on NPM.

For instance, we can install the Azure Storage SDK. Just run

npm install azure-storage

and then we can write a function to produce a SAS token for a Blob in Azure Storage:

import * as azurestorage from "azure-storage";

// Given an Azure blob, create a SAS URL that can read it.
export function signedBlobReadUrl(
    blob: azure.storage.Blob | azure.storage.ZipBlob,
    account: azure.storage.Account,
    container: azure.storage.Container,
): pulumi.Output<string> {
    const signatureExpiration = new Date(2100, 1);

    return pulumi.all([
        account.primaryConnectionString,
        container.name,
        blob.name,
    ]).apply(([connectionString, containerName, blobName]) => {
        let blobService = new azurestorage.BlobService(connectionString);
        let signature = blobService.generateSharedAccessSignature(
            containerName,
            blobName,
            {
                AccessPolicy: {
                    Expiry: signatureExpiration,
                    Permissions: azurestorage.BlobUtilities.SharedAccessPermissions.READ,
                },
            }
        );

        return blobService.getUrl(containerName, blobName, signature);
    });
}

I took this function from the Azure Functions example, and it will probably move into the Pulumi libraries at some point, but until then you are free to leverage the package ecosystem.

Deploying Application Files

So far we have provisioned the Azure App Service, but we can also deploy the application files as part of the same workflow.

The code below uses the Run-from-Zip feature of App Service:

  1. Define Storage Account and Container

     const storageAccount = new azure.storage.Account("mystorage", {
         ...resourceGroupArgs,
    
         accountKind: "StorageV2",
         accountTier: "Standard",
         accountReplicationType: "LRS",
     });
    
     const storageContainer = new azure.storage.Container("mycontainer", {
         resourceGroupName: resourceGroup.name,
         storageAccountName: storageAccount.name,
         containerAccessType: "private",
     });
    
  2. Create a folder with application files, e.g. wwwroot. It may contain some test HTML, ASP.NET application, or anything supported by App Service.

  3. Produce a zip file from that folder in Pulumi program:

     const blob = new azure.storage.ZipBlob("myzip", {
         resourceGroupName: resourceGroup.name,
         storageAccountName: storageAccount.name,
         storageContainerName: storageContainer.name,
         type: "block",
    
         content: new pulumi.asset.FileArchive("wwwroot")
     });
    
  4. Produce the SAS Blob URL and assign it to the App Service Run-from-Zip setting:

     const codeBlobUrl = signedBlobReadUrl(blob, storageAccount, storageContainer);
    
     const app = new azure.appservice.AppService("mywebsite", {
         ...resourceGroupArgs,
    
         appServicePlanId: appServicePlan.id,
    
         appSettings: {
             "WEBSITE_RUN_FROM_ZIP": codeBlobUrl
         }
     });
    

Run the program, and your application will start as soon as pulumi update is complete.

Determinism

Pulumi programs should strive to be deterministic. That means you should avoid using things like current date/time or random numbers.

The reason is incremental updates. Every time you run pulumi update, it executes the program from scratch. If your resources depend on random values, they will not match the existing resources, so a false delta will be detected and deployed.

In the SAS generation example above we used a fixed date in the future instead of a "today + 1 year" kind of calculation.
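
As a hypothetical illustration of the difference (not code from the original example):

// Non-deterministic: recalculated on every run, so each pulumi update
// would see a different value and detect a false change
// const signatureExpiration = new Date(Date.now() + 365 * 24 * 60 * 60 * 1000);

// Deterministic: a fixed date far in the future, identical on every run
const signatureExpiration = new Date(2100, 1);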

Should Pulumi provide some workaround for this?

Conclusion

My code was kindly merged into the Pulumi examples repository; go there for the complete runnable program that provisions an App Service with an Azure SQL Database and Application Insights.

I really see high potential in the cloud-as-code approach suggested by Pulumi. Today we just scratched the surface of the possibilities: we were working with cloud services at the raw level, provisioning specific services with given parameters.

Pulumi's vision includes providing higher-level components to blur the line between infrastructure and code, and to enable everybody to create such components on their own.

Exciting future ahead!

Cold Starts Beyond First Request in Azure Functions

In my previous article, I explored the topic of cold starts in Azure Functions. In particular, I measured the cold start delays per language and runtime version.

I received some follow-up questions that I'd like to explore in today's post:

  • Can we avoid cold starts, except for the very first one, by keeping the instance warm?
  • Given one warm instance, if two requests come in at the same time, will one of them hit a cold start because the existing instance is busy with the other?
  • In general, does a cold start happen at scale-out, when a new extra instance is provisioned?

Again, we are only talking about the Consumption Plan here.

Theory

Azure Functions run on instances provided by Azure App Service. Each instance is able to process several requests concurrently, which is different from AWS Lambda.

Thus, the following could be true:

  • If we issue at least 1 request every 20 minutes, the first instance should stay warm for a long time
  • Simultaneous requests don't cause a cold start unless the existing instance gets too busy
  • When the runtime decides to scale out and spin up a new instance, it could do so in the background, still forwarding incoming requests to the existing warm instance(s). Once the new instance is ready, it could be added to the pool without causing cold starts
  • If so, cold starts are mitigated beyond the very first execution

Let's put this theory under test!

Keeping Always Warm

I've tested a Function App which consists of two Functions:

  • HTTP Function under test
  • Timer Function which runs every 10 minutes and does nothing but log one line of text (a sketch follows below)
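
For reference, such a "warmer" could look like this in JavaScript (a minimal sketch; the actual test code isn't shown here, and the 10-minute schedule lives in function.json as a timerTrigger with schedule "0 */10 * * * *"):

module.exports = function (context, warmerTimer) {
    // Do nothing useful: the only goal is to keep the instance alive
    context.log("Keep-warm ping");
    context.done();
};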

I then measured the cold start statistics, similarly to the tests from my previous article.

Over 2 days I was issuing infrequent requests to the same app; most of them would normally have led to a cold start. Interestingly, even though the timer was firing regularly, Azure switched the instance serving my application 2 times during the test period:

Infrequent Requests to Azure Functions with "Keep It Warm" Timer

I can see that most responses are fast, so the timer "warmer" definitely helps.

The first request(s) to a new instance are slower than subsequent ones. Still, they are faster than a normal full cold start, so the extra delay could be related to loading the HTTP stack.

Anyway, keeping Functions warm seems to be a viable strategy.

Parallel Requests

What happens when there is a warm instance, but it's already busy processing another request? Will the parallel request be delayed, or will it be processed by the same warm instance?

I tested with a very lightweight function, which nevertheless takes some time to complete:

public static async Task<HttpResponseMessage> Delay500([HttpTrigger] HttpRequestMessage req)
{
    await Task.Delay(500);
    return req.CreateResponse(HttpStatusCode.OK, "Done");
}

I believe it's an OK approximation for an IO-bound function.

The test client then issued 2 to 10 parallel requests to this function and measured the response time for all requests.
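
For illustration, a test client along these lines could be sketched in Node.js (hypothetical code; the actual client used for the measurements is not shown here):

const https = require("https");

// Issue one request and return its end-to-end duration in milliseconds
function timedRequest(url) {
    const start = Date.now();
    return new Promise((resolve, reject) => {
        https.get(url, res => {
            res.on("data", () => {});
            res.on("end", () => resolve(Date.now() - start));
        }).on("error", reject);
    });
}

// Fire n parallel requests and report the individual response times
async function batch(url, n) {
    const durations = await Promise.all(
        Array.from({ length: n }, () => timedRequest(url)));
    console.log(`Batch of ${n}: ${durations.map(d => d + " ms").join(", ")}`);
}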

It's not the easiest chart to understand in full, but note the following:

  • Each group of bars is for requests sent at the same time. There is then a pause of about 20 seconds before the next group of requests gets sent

  • The bars are colored by the instance which processed that request: same instance - same color

Azure Functions Response Time to Batches of Simultaneous Requests

Here are some observations from this experiment:

  • Out of 64 requests, there were 11 cold starts

  • The same instance can process multiple simultaneous requests, e.g. one instance processed 7 out of 10 requests in the last batch

  • Nonetheless, Azure is eager to spin up new instances for multiple requests. In total 12 instances were created, which is even more than the maximum number of requests in any single batch

  • Some of those instances were actually never reused (gray-ish bars in batches x2 and x3, the brown bar in x10)

  • The first request to each new instance pays the full cold start price. The runtime doesn't provision them in the background while reusing existing instances for incoming requests

  • If an instance handles more than one request at a time, response time invariably suffers, even though the function is super lightweight (Task.Delay)

Conclusion

Getting back to the experiment goals, there are several things that we learned.

For low-traffic apps with sporadic requests it makes sense to set up a "warmer" timer function firing every 10 minutes or so to prevent the only instance from being recycled.

However, scale-out cold starts are real and I don't see any way to prevent them from happening.

When multiple requests come in at the same time, we might expect some of them to hit a new instance and get slowed down. The exact algorithm of instance reuse is not entirely clear.

The same instance is capable of processing multiple requests in parallel, so there is room for optimization in routing requests to warm instances while cold ones are being provisioned.

If such optimizations happen, I'll be glad to re-run my tests and report any noticeable improvements.

Stay tuned for more serverless perf goodness!

Azure Functions: Cold Starts in Numbers

Auto-provisioning and auto-scalability are the killer features of Function-as-a-Service cloud offerings, and Azure Functions in particular.

One drawback of such dynamic provisioning is a phenomenon called "cold start". Basically, applications that haven't been used for a while take longer to start up and to handle the first request.

The problem is nicely described in Understanding Serverless Cold Start, so I won't repeat it here. I'll just copy a picture from that article:

Cold Start

Based on the 4 actions which happen during a cold start, we may guess that the following factors might affect the cold start duration:

  • Language / execution runtime
  • Azure Functions runtime version
  • Application size including dependencies

I ran several sample functions and tried to analyze the impact of these factors on cold start time.

Methodology

All tests were run against HTTP Functions, because that's where cold start matters the most.

All the functions simply returned "Hello, World", taking the "World" value from the query string. Some functions were also loading extra dependencies; see below.
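
For reference, the Javascript flavour of such a function could look roughly like this (a sketch; the exact test functions are not reproduced here):

module.exports = function (context, req) {
    // Take the value from the query string, falling back to "World"
    const name = (req.query && req.query.name) || "World";
    context.res = { body: "Hello, " + name };
    context.done();
};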

I did not rely on the execution time reported by Azure. Instead, I measured the end-to-end duration from the client's perspective. All calls were made from within the same Azure region, so network latency should have minimal impact:

Test Setup

When Does Cold Start Happen?

Obviously, cold start happens when the very first request comes in. After that request is processed, the instance is kept alive in case subsequent requests arrive. But for how long?

The following chart gives the answer. It shows values of normalized request durations across different languages and runtime versions (Y axis) depending on the time since the previous request in minutes (X axis):

Cold Start Threshold

Clearly, an idle instance lives for 20 minutes and then gets recycled. Any request arriving after the 20-minute threshold hits another cold start.

How Do Languages Compare?

I'll start with version 1 of the Functions runtime, which is the production-ready GA version as of today.

I've written a Hello World HTTP function in all GA languages: C#, F# and Javascript, and I added Python for comparison. C# and F# were executed both as scripts and as precompiled .NET assemblies.

The following chart gives some intuition about the cold start duration per language. The languages are ordered by mean response time, from lowest to highest. 65% of request durations fall inside the vertical bar (1-sigma interval) and 95% fall inside the vertical line (2-sigma):

Cold Start V1 per Language

Somewhat surprisingly, precompiled .NET is exactly on par with Javascript. Javascript "Hello World" is really lightweight, so I expected it to win, but I was wrong.

C# Script is slower but somewhat comparable. F# Script presented a really negative surprise though: it's much slower. It's even slower than the experimental Python support, where no performance optimization would be expected at all!

Functions Runtime: V1 vs V2

Version 2 of the Functions runtime is currently in preview and not suitable for production loads. That probably means not much performance optimization has been done yet, especially from the cold start standpoint.

Can we see this on the chart? We sure can:

Cold Start V1 vs V2

V2 is massively slower. The fastest cold starts are around 6 seconds, but the slowest can take up to 40-50 seconds.

Javascript is again on par with precompiled .NET.

Java is noticeably slower, even though the deployment package is just 33 kB, so I assume I didn't bloat it.

Does Size Matter?

OK, enough of Hello World. A real-life function is likely to be heavier, mainly because it depends on third-party libraries.

To simulate such a scenario, I've measured cold starts for a .NET function with references to Entity Framework, Automapper, Polly and Serilog.

For Javascript I did the same, but referenced Bluebird, lodash and the AWS SDK.
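
The Javascript variant was along these lines (a hypothetical sketch: the handler stays trivial, the point is purely the required dependency tree):

const Promise = require("bluebird");
const _ = require("lodash");
const AWS = require("aws-sdk");

module.exports = function (context, req) {
    // The dependencies above are loaded during cold start even if barely used
    context.res = { body: "Hello, " + ((req.query && req.query.name) || "World") };
    context.done();
};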

Here are the results:

Cold Start Dependencies

As expected, the dependencies slow loading down. You should keep your Functions lean, otherwise you will pay in seconds for every cold start.

An important note for Javascript developers: the above numbers are for Functions deployed after running the Funcpack preprocessor, so the package contained a single js file with the Webpack-ed dependency tree. Without that, the mean cold start time of the same function is 20 seconds!

Conclusions

Here are some lessons learned from all the experiments above:

  • Be prepared for 1-3 second cold starts even for the smallest Functions
  • Stay on V1 of the runtime until V2 goes GA, unless you don't care about perf
  • Precompiled .NET and Javascript Functions have roughly the same cold start time
  • Minimize the number of dependencies; only bring what's needed

Do you see anything weird or unexpected in my results? Do you need me to dig deeper into other aspects? Please leave a comment below or ping me on Twitter, and let's sort it all out.

Awesome F# Exchange 2018

I'm writing this post on the train to London Stansted, on my way back from the F# Exchange 2018 conference.

F# Exchange is a yearly conference taking place in London, and the 2018 edition was the first one for me personally. I also had the honour of speaking there about creating Azure Functions with F#.

Impression

F# is still a relatively niche language, so the conference is not overcrowded, but that gives it the special feeling of a family gathering. There were 162 participants this year, and my impression is that every one of them is extremely friendly, enthusiastic and just plain awesome.

The conference itself had 2 tracks of 45-minute talks and 60-minute keynotes. Most talks were of high quality, with topics ranging from compiler internals to fun applications like music generation, car racing and map drawing.

Both Don Syme, the creator of F#, and Philip Carter, the F# program manager, were there and gave keynotes, but they were careful not to draw too much attention to Microsoft and to let the community speak loud.

Corridor Track

But the talks were just a part of the story. For me, the conference started on the evening before the first day at the speakers' drinks party, and only finished at 1 a.m. after the second day (the pubs in London are lovely).

I spoke to so many great people, I learnt a lot, and had fun too. I've never seen so many F# folks in the same place, and I guess there must be something about F# which attracts the right kind of people.

And of course it's so much fun to meet face-to-face all those Twitter, Slack, GitHub and Channel 9 personas and to see that they are actually real people :)

My Talk

The talk I gave was called "Azure F#unctions". It was not a hard-core F# talk, but people seemed to be genuinely interested in the topic.

A decent number of attendees are already familiar with Azure Functions, and many either run them in production or plan to do so.

The reference version conflict problem is very well known and raises a lot of questions and concerns. It even leads to workarounds like transpiling F# Functions to Javascript with Fable. Yikes.

Durable Functions seem to be sparking a lot of initial interest. I'll definitely be spending more time playing with them, and maybe making the F# story smoother.

Functions were mentioned in Philip's keynote as one of the important application areas for F#, which is cool. We should spend some extra effort to make the documentation and onboarding story as smooth as possible.

Call to Action

Skills Matter is the company behind the conference. Carla, Nicole and others did a great job preparing the event; everything was smooth, informal and fun.

The videos are already online at Skillscasts (requires free signup).

F# Exchange 2019 super early bird tickets are on sale now until Monday, April 9. Go get one and join F# Exchange in London next year!

I'm already missing you all.

Azure Durable Functions in F#

Azure Functions are designed for stateless, fast-to-execute, simple actions. Typically, they are triggered by an HTTP call or a queue message; they read something from storage or a database and return the result to the caller or send it to another queue. All within several seconds at most.

However, there exists a preview of Durable Functions, an extension that lets you write stateful functions for long-running workflows. Here is a picture of one possible workflow from the docs:

Fan-out Fan-in Workflow

Such workflows might take an arbitrarily long time to complete. Instead of blocking and waiting for that whole period, Durable Functions use a combination of Storage Queues and Tables to do all the work asynchronously.

The code still feels like one continuous flow because it's written as a single orchestrator function. So, it's easier for a human to reason about the functionality without the complexities of low-level communication.

I won't describe Durable Functions any further; just go read the documentation, it's nice and clean.

Language Support

As of February 2018, Durable Functions are still in preview. That also means that language support is limited:

Currently C# is the only supported language for Durable Functions. This includes orchestrator functions and activity functions. In the future, we will add support for all languages that Azure Functions supports.

I was a bit disappointed that F# is not an option. But actually, since Durable Functions support the precompiled .NET assembly model, pretty much anything doable in C# can be done in F# too.

The goal of this post is to show that you can write Durable Functions in F#. I used a precompiled .NET Standard 2.0 F# Function App running on the 2.0 preview runtime.

Orchestration Functions

The stateful workflows are Azure Functions with a special OrchestrationTrigger. Since they are asynchronous, the C# code is always based on Task and async-await. Here is a simple example of an orchestrator in C#:

public static async Task<List<string>> Run([OrchestrationTrigger] DurableOrchestrationContext context)
{
    var outputs = new List<string>();

    outputs.Add(await context.CallActivityAsync<string>("E1_SayHello", "Tokyo"));
    outputs.Add(await context.CallActivityAsync<string>("E1_SayHello", "Seattle"));
    outputs.Add(await context.CallActivityAsync<string>("E1_SayHello", "London"));

    // returns ["Hello Tokyo!", "Hello Seattle!", "Hello London!"]
    return outputs;
}

F# has its own preferred way of writing asynchronous code, based on the async computation expression. A direct port could look something like this:

let Run([<OrchestrationTrigger>] context: DurableOrchestrationContext) = async {
  let! hello1 = context.CallActivityAsync<string>("E1_SayHello", "Tokyo")   |> Async.AwaitTask
  let! hello2 = context.CallActivityAsync<string>("E1_SayHello", "Seattle") |> Async.AwaitTask
  let! hello3 = context.CallActivityAsync<string>("E1_SayHello", "London")  |> Async.AwaitTask
  return [hello1; hello2; hello3]
} |> Async.StartAsTask

That would work for a normal HTTP trigger, but it blows up for the Orchestrator trigger because multi-threading operations are not allowed:

Orchestrator code must never initiate any async operation except by using the DurableOrchestrationContext API. The Durable Task Framework executes orchestrator code on a single thread and cannot interact with any other threads that could be scheduled by other async APIs.

To solve this issue, we need to keep working with Task directly. This is not very handy with the standard F# libraries, so I pulled in an extra NuGet package, TaskBuilder.fs, which provides a task computation expression.

The above function now looks very simple:

let Run([<OrchestrationTrigger>] context: DurableOrchestrationContext) = task {
  let! hello1 = context.CallActivityAsync<string>("E1_SayHello", "Tokyo")
  let! hello2 = context.CallActivityAsync<string>("E1_SayHello", "Seattle")
  let! hello3 = context.CallActivityAsync<string>("E1_SayHello", "London")
  return [hello1; hello2; hello3]
}

And the best part is that it works just fine.

The SayHello function is based on the ActivityTrigger, and no special effort is required to implement it in F#:

[<FunctionName("E1_SayHello")>]
let SayHello([<ActivityTrigger>] name) =
  sprintf "Hello %s!" name

More Examples

The Durable Functions repository comes with a set of 4 samples implemented in C#. I took all of those samples and ported them over to F#.

You've already seen the first Hello Sequence sample above: the orchestrator calls the activity function 3 times and combines the results. As simple as it looks, the orchestrator will actually run 3 times for each execution, saving its state before each subsequent call.

The second Backup Site Content sample is using this persistence mechanism to run a potentially slow workflow of copying all files from a given directory to a backup location. It shows how multiple activities can be executed in parallel:

let tasks = Array.map (fun f -> backupContext.CallActivityAsync<int64>("E2_CopyFileToBlob", f)) files
let! results = Task.WhenAll tasks

The third Counter example demos a potentially infinite actor-like workflow, where state can exist and evolve for an indefinite period of time. The key API calls are based on the orchestration context:

let counterState = counterContext.GetInput<int>()
let! command = counterContext.WaitForExternalEvent<string>("operation")

The final, more elaborate Phone Verification workflow has several twists: an output binding for an activity (ICollector is required instead of C#'s out parameter), third-party integration (Twilio to send SMS messages), a recursive sub-function to loop through several attempts, and context-based timers for a reliable timeout implementation.

So, if you happen to be an F# fan, you can still give Durable Functions a try. Be sure to leave your feedback, so that the library can get even better before going GA.

I'm Mikhail Shilkov, a software developer. I enjoy F#, C#, Javascript and SQL development, reasoning about distributed systems, data processing pipelines, cloud and web apps. I blog about my experience on this website.
