Getting started with Azure Application Insights in Aurelia

Azure Application Insights is an analytics service for monitoring live web applications, diagnosing performance issues, and understanding what users actually do with the app. Aurelia is a modern and slick single-page application framework. Unfortunately, there's not much guidance on the web about how to use AppInsights and Aurelia together properly. The task gets even more challenging if you are using TypeScript and want to stay in type-safe land. This post will get you up and running in no time.

Get Your AppInsights Instrumentation Key

If you haven't done so yet, register in the Azure Application Insights portal. To start sending telemetry data from your application, you need a unique identifier of your web application, called an Instrumentation Key (it's just a GUID). See the Application Insights for web pages walk-through.

Install a JSPM Package

I'm using JSPM as a front-end package manager for Aurelia applications. If you use it as well, run the following command to install the AppInsights package:

jspm install github:Microsoft/ApplicationInsights-js

It will add a line to the config.js file:

map: {
  "Microsoft/ApplicationInsights-js": "github:Microsoft/ApplicationInsights-js@1.0.0",
...

To keep the names simple, change the line to

  "ApplicationInsights": "github:Microsoft/ApplicationInsights-js@1.0.0",

Make exactly the same change in the package.json file, in the jspm -> dependencies section.
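The relevant section of package.json will then look similar to this (the version tag may differ):

"jspm": {
  "dependencies": {
    "ApplicationInsights": "github:Microsoft/ApplicationInsights-js@1.0.0",
    ...
  }
}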

Create an Aurelia Plugin

In order to track Aurelia page views, we are going to plug into the routing pipeline with a custom plugin. Here is what my plugin looks like in JavaScript (see the TypeScript version below):

// app-insights.js
export class AppInsights {
  client;

  constructor() {
    let snippet = {
      config: {
        instrumentationKey: 'YOUR INSTRUMENTATION KEY GUID'
      }
    };
    let init = new Microsoft.ApplicationInsights.Initialization(snippet);
    this.client = init.loadAppInsights();
  }

  run(routingContext, next) {
    this.client.trackPageView(routingContext.fragment, window.location.href);
    return next();
  }
}

The constructor instantiates an AppInsights client. It is used inside the run method, which will be called by the Aurelia pipeline during page navigation.

Add the Plugin to Aurelia Pipeline

Go to the App class of your Aurelia application. Import the new plugin

// app.js
import {AppInsights} from './app-insights';

and change the configureRouter method to register a new pipeline step:

configureRouter(config, router) {
  config.addPipelineStep('modelbind', AppInsights);
  config.map(/*routes are initialized here*/);
}

After re-building the application, you should be all set. Navigate through several pages and wait for the events to appear in the Application Insights portal.

TypeScript: Obtain the Definition File

If you are using TypeScript, you are not done yet. In order to compile the AppInsights plugin, you need the type definitions for the ApplicationInsights package. Unfortunately, at the time of writing there is no canonical definition in the typings registry, so you will have to provide a custom .d.ts file. You can download mine from my GitHub. I created it based on a file from this NuGet repository.

I've put it into the custom_typings folder and then made the following adjustment to the build/paths.js file of the Aurelia setup:

  dtsSrc: [
    'typings/**/*.d.ts',
    'custom_typings/**/*.d.ts'
  ],
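To give a sense of the shape such a definition file needs, here is a rough sketch, trimmed down to just the members used by the plugin below (the real file is more complete):

// custom_typings/applicationinsights.d.ts (sketch)
declare module 'ApplicationInsights' {
  namespace Microsoft.ApplicationInsights {
    class AppInsights {
      trackPageView(name?: string, url?: string): void;
    }
    class Initialization {
      constructor(snippet: { config: { instrumentationKey: string }; queue?: any[] });
      loadAppInsights(): AppInsights;
    }
  }
}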

For reference, here is my TypeScript version of the AppInsights plugin:

import {NavigationInstruction, Next} from 'aurelia-router';
import {Microsoft} from 'ApplicationInsights';

export class AppInsights {
  private client: Microsoft.ApplicationInsights.AppInsights;

  constructor() {
    let snippet = {
      config: {
        instrumentationKey: 'YOUR INSTRUMENTATION KEY GUID'
      },
      queue: []
    };
    let init = new Microsoft.ApplicationInsights.Initialization(snippet);
    this.client = init.loadAppInsights();
  }

  run(routingContext: NavigationInstruction, next: Next): Promise<any> {
    this.client.trackPageView(routingContext.fragment, window.location.href);
    return next();
  }
}

Conclusion

This walk-through should get you started with Azure Application Insights in your Aurelia application. Once you have page view metrics coming into the dashboard, spend some time discovering all the exciting ways to improve your application with Application Insights.

Comparing Scala to F#

F# and Scala are quite similar languages from a 10,000-foot view. Both are functional-first languages developed for virtual machines where imperative languages dominate. C# for .NET and Java for the JVM are still the lingua franca, but the alternatives are getting stronger.

My background is in the .NET ecosystem, so F# was the first of the two languages that I started learning. At the same time, Scala seems to have more traction, largely due to successful products and frameworks like Spark, Akka and Play. That's why I decided to broaden my skill set and pick up some Scala knowledge. I've started with the Functional Programming in Scala Specialization on Coursera. While following the course, I've been taking notes about which language features in Scala I find interesting, or, vice versa, missing compared to F#.

In no particular order, I want to share my notes on Scala vs F# in this blog post.

Post updated based on comments by Mark Lewis and Giacomo Citi.

Implicit Parameters

A parameter of a function can be marked as implicit

def work(implicit i:Int) = print(i)

and that means you can call the function without specifying the value for this parameter: the compiler will try to figure out that value for you (according to an extensive set of rules), e.g.

implicit val v = 2;
// ... somewhere below
work // prints '2'

I am not aware of a similar feature in any other language that I know, so I'm pretty sure I don't understand it well enough yet :) At the same time, I think implicits are very characteristic of Scala: they are a powerful tool, which can be used in many valid scenarios, or abused to shoot oneself in the foot.
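For a taste of a valid scenario, here is a sketch using the standard library's Ordering, which is commonly supplied implicitly:

// min delegates the comparison to an implicitly provided Ordering
def smallest[T](xs: List[T])(implicit ord: Ordering[T]): T = xs.min(ord)

smallest(List(3, 1, 2))               // 1: Ordering[Int] is found by the compiler

implicit val byLength: Ordering[String] = Ordering.by(_.length)
smallest(List("scala", "f#", "java")) // "f#": our implicit is picked up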

Underscore In Lambdas

Underscores _ can be used to represent parameters in lambda expressions without explicitly naming them:

employees.sortBy(_.dateOfBirth)

I think that's brilliant - very short and readable. Tuple values are represented by _1 and _2, so we can sort an array of tuples like

profitByYear.sortBy(_._1)

This looks a bit hairy and should probably be used only when the meaning is obvious. (In the example above I'm not sure if we sort by year or by profit...)

In F# the underscore is used in a different sense - as "something to ignore". That makes sense, but I would love to have a shorter way of writing the lambda in

employees |> List.sortBy (fun e -> e.dateOfBirth)

Any hint how?

Tail-Recursion Mark

Any recursive function in Scala can be marked with the @tailrec annotation, which results in a compilation error if the function is not tail-recursive. This guarantees that you won't get a nasty stack overflow exception.

@tailrec 
def boom(x: Int): Int = {
  if (x == 0) 0
  else boom(x-1) + 1
}

The code above won't compile, as the recursion can't be optimized by the compiler.
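A common fix is to thread an accumulator through the function, so that the recursive call becomes the last operation:

@tailrec
def boom(x: Int, acc: Int = 0): Int = {
  if (x == 0) acc
  else boom(x - 1, acc + 1) // tail call: nothing is left to do after it returns
}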

The feature sounds very reasonable, although I must admit that I have never needed it in my F# code yet.

Call By Name

When you call a function in F#, the parameter values are evaluated before the function body. This substitution model is known as Call by Value.

The same is the default in Scala. But there is an alternative: you can defer the evaluation of parameters by marking them with the => symbol:

def callByName(x: => Int) = {
  println("x is " + x)
}

This style is known as Call by Name, and the evaluation is deferred until the parameter is actually used. So, if a parameter is never used, its value will never be evaluated. This code:

val a:Option[Int] = Some(1)
val b = a getOrElse (2/0)

will set b to 1, and no error will be thrown, even though we are dividing by zero in the function parameter. This is because the parameter of getOrElse is passed by name.

The F# alternative defaultArg doesn't work this way, so the following code will blow up:

let a = Some(1)
let b = defaultArg a (2/0) // boom

You can get deferred evaluation by passing a function:

let defaultArgFunc o (f: unit -> 'a) = 
  match o with | Some v -> v | None -> f()

let b2 = defaultArgFunc a (fun () -> 2 / 0)

That's essentially what happens in Scala too, but the Scala syntax is arguably cleaner.
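(As a side note, later versions of FSharp.Core ship Option.defaultWith, which wraps exactly this pattern:)

let b3 = a |> Option.defaultWith (fun () -> 2 / 0) // b3 = 1, no exception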

Lack of Type Inference

Slowly moving towards language design flavours, I'll start with Type Inference. In Scala, type inference seems to be quite limited. Yes, you don't have to explicitly define the types of local values or (most of the time) function return types, but that's about it.

def max (a: Int, b:Int) = if (a > b) a else b

You have to specify the types of all input parameters, and that's quite a bummer for people who are used to the terse, annotation-free code of F# (or Haskell, OCaml and others, for that matter).

Type inference in F# plays another significant role: automatic type generalization. The F# compiler makes types as generic as possible, based on the implementation.

let max a b = if a > b then a else b

The type of the function above is 'a -> 'a -> 'a. Most people wouldn't make it generic from the get-go, but the compiler helps in this case.

Functional vs Object-Oriented Style

Both F# and Scala run on top of managed object-oriented virtual machines, and at the same time both languages enable developers to write functional code. Functional programming means operating on immutable data structures with pure, side-effect-free operations. Without questioning any of this, I find pure functional Scala code to be written in a much more object-oriented style compared to F#.

Classes and objects are ubiquitous in Scala: they are in each example given in Martin Odersky's courses. Most F# examples refrain from classes unless needed. The official F# guidance is to never expose non-abstract classes from an F# API!

Scala leans heavily on inheritance. It even introduces quasi-multiple inheritance: traits. The collection hierarchy is deep (List and Stream both descend from LinearSeq), and Nothing is a subtype of every other type, to be used for some covariance tricks.

Operations are usually defined as class methods instead of separate functions. For example, the following Scala code

word filter (c => c.isLetter)

would filter a string to letters only. Why is isLetter defined as a method of Char? I don't think it's essential for the type itself...
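For contrast, the F# counterpart would typically combine a free-standing function from the String module with a static method on System.Char:

word |> String.filter (fun c -> System.Char.IsLetter c)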

Usage of Operators

It looks like Scala culture leans more towards the use of operators, not only for arithmetic operations but also for classes from the standard library and for domain-specific code. The basic ones are nice, e.g. list concatenation:

List(1, 2) ++ List(3, 4)

but others look awkward to me, e.g. stream concatenation:

Stream(1) #::: Stream(2)

Akka streams sweetness:

in ~> f1 ~> bcast ~> f2 ~> merge ~> f3 ~> out
            bcast ~> f4 ~> merge

This can be taken to quite an extreme, similar to what the scalaz library does.

My default would be not to use operators unless you are sure that every reader will instantly understand what they mean.

Partial Application

Not a huge difference, but F# functions are curried by default, while Scala functions aren't. Thus, in F# partial application just works, all the time

let add a b = a + b
let add3 = add 3
let sum = add3 5 // 8

Scala function

def add (a: Int, b: Int) = a + b

is not curried, but the underscore comes to the rescue:

val add3: (Int) => Int = add(3, _)
val sum = add3(5) // 8

Note how I miss the type inference again.

The parameter order is very important in F#: the short syntax partially applies parameters from left to right. In Scala, you can put _ at any position, which gives you some flexibility.
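For example, the same add function partially applied at the first position instead:

val addTo3: (Int) => Int = add(_, 3)
val sum2 = addTo3(5) // 8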

Single-Direction Dependency

The F# compiler doesn't allow circular dependencies. You can't use a function before you've defined it. Here is what the Expert F# book has to say about that:

Managing dependencies and circularity is one of the most difficult and fundamental problems in good software design. The files in an F# project are presented to the F# compiler in a compilation order: constructs in the earlier files can't refer to declarations in the later files. This is a mechanism to enforce layered design, where software is carefully organized into layers, and where one layer doesn't refer to other layers in a cyclic way (...) to help you write code that is reusable and organized into components that are, where possible, independent and not combined into a "tangle" of "spaghetti code".

I think this is huge. F# forces you to structure your code in a way that avoids mutual dependencies between different functions, types and modules. This reduces complexity and coupling, and helps developers avoid some of the design pitfalls.

There's nothing like that in Scala. You are on your own.

Conclusion

Of course, I did not cover all the distinctions; for instance, active patterns, type providers and computation expressions in F#, or type classes, higher-kinded types and macros in Scala.

Obviously, both Scala and F# are very capable languages, and I am still picking up the basics of both. While similar in many aspects, they make several different choices along the language design trade-offs.

P.S. Overheard on Twitter:

F# isn't a bad language, it's just attached to a bad platform... The opposite of Scala actually.

UPDATE: Thanks everyone for the great comments; please check out this reddit and lobste.rs to see more of them.

Mocking API calls in Aurelia

Aurelia is a modern and slick single-page application framework. The "single-page application" aspect means that the app is loaded into the browser once; the navigation then happens on the client side, and all the data is loaded from a REST API endpoint.

Let's say that our front-end Aurelia app is hosted at myaureliaapp.com, while the REST API is hosted at myaureliaapp.com/api. The REST API is a server-side application, which can be implemented in .NET, Java, Node.js etc., and it talks to a database of some kind.

For front-end development purposes, it's often useful to mock the connection to the API with some static, manually generated data. This cuts the hard dependency between the client code, the backend code and the database. It's much easier to mock the exact data set needed for the current development task.

Fortunately, it can be easily done, and here is how.

Identify your requests

Create a list of the requests that you need to mock. For our example, let's say the application makes the following requests:

GET /api/products
GET /api/products/{id}
POST /api/products
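The application code itself doesn't need to change for mocking. For reference, it might issue these requests with aurelia-fetch-client along these lines (a sketch; the ProductService name is made up):

import {HttpClient} from 'aurelia-fetch-client';

export class ProductService {
  http = new HttpClient().configure(config => config.withBaseUrl('/api/'));

  getProducts() {
    // Issues GET /api/products; the mock set up below will answer it
    return this.http.fetch('products').then(response => response.json());
  }
}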

Put your mock data into files

Go to the root folder of your Aurelia app and create an /api folder.

Create an /api/products subfolder and add a new file called GET.json. This file should contain the JSON of the product list, e.g.

[ { "id": 1, "name": "Keyboard", "price": "60$" },
  { "id": 2, "name": "Mouse", "price": "20$" },
  { "id": 3, "name": "Headphones", "price": "80$" }
]

Create a new file called POST.json in the same folder. The POST response won't return any data, so the file can be as simple as

{}

Create subfolders 1, 2 and 3 under products and create a GET.json file in each of them. Each file contains the data for a specific product, e.g.

{ "id": 1, 
  "name": "Keyboard", 
  "price": "60$",
  "category": "Computer Accessories",
  "brand": "Mousytech"
}

Configure BrowserSync to mock your API calls

For the purpose of this post, I assume you are using the Aurelia Skeleton Navigation starter kit, specifically the version with Gulp-based tasks and BrowserSync. If so, you should be familiar with the gulp serve command, which serves your application at http://localhost:9000. We will extend this command to host your API mock too.

Navigate to the /build/tasks folder and edit the serve.js file. Change the definition of the serve task to the following code:

gulp.task('serve', ['build'], function(done) {
  browserSync({
    online: false,
    open: false,
    port: 9000,
    server: {
      baseDir: ['.'],
      middleware: function(req, res, next) {
        res.setHeader('Access-Control-Allow-Origin', '*');

        // Mock API calls
        if (req.url.indexOf('/api/') > -1) {
          console.log('[serve] responding ' + req.method + ' ' + req.originalUrl);

          var jsonResponseUri = req._parsedUrl.pathname + '/' + req.method + '.json';

          // Require the file to validate that it exists; if not found,
          // require will throw and the middleware will cancel the request
          var jsonResponse = require('../..' + jsonResponseUri);

          // Replace the original call with retrieving json file as reply
          req.url = jsonResponseUri;
          req.method = 'GET';
        }

        next();
      }
    }
  }, done);
});

Run it

Now just run gulp serve (or gulp watch, which runs serve and then watches files for changes). Every time your app makes an API call, you will see a line in the gulp console:

[serve] responding GET /api/products

If you happen to make a request with no mock defined, you will get an error:

[serve] responding GET /api/notproducts
Error: Cannot find module '../../api/notproducts/GET.json'

A complete example can be found in my GitHub repository.

Building a Poker Bot: Functional Fold as Decision Tree Pattern

This is the fifth part of the Building a Poker Bot series, where I describe my experience developing bot software to play in online poker rooms. I'm building the bot with the .NET framework and the F# language, which makes the task relatively easy and very enjoyable. Here are the previous parts:

In this post I describe a simple pattern for structuring complex decision-making code using partial function application and a fold over a list of functions.

Context

Poker decisions are complex and depend on a multitude of parameters and attributes. We can visualize the decision-making process as a Decision Tree, where leaf nodes are decisions being made and branches are different conditions. Here is a simplistic example of such a poker decision tree:

Simplistic Poker Decision Tree

Now, if we need to implement a similar tree in code, the most straightforward way to do that is to translate each condition to an if statement. This way, the nested conditions will guide the application through the branches right to the point where an appropriate decision can be returned.

This approach works for small cases, but in reality it does not scale particularly well in terms of the tree size. Namely, the two problems are:

Tree depth. In many cases, you might need to pass ten or more conditions before you find your way to the leaf. Obviously, ten levels of if statements are neither readable nor maintainable. We can try to split the sub-trees into sub-functions, but that gives only limited relief.

Subtree correlation. Some tree branches deep down the hierarchy might be correlated to each other. Say, you pass 10 levels of conditions and make a bet on the flop. Now, on the turn, you would probably take quite a different decision path, but the logic would be based on similar 'thinking' in human terms. Ideally, we want to keep these kinds of related decisions together, while isolating them from the other, unrelated decision paths.

In fact, the decision tree should be generalized to a Decision Graph to allow different decision branches to merge back at some point, e.g.

If there is one Ace on flop, or an overcard came on turn or river

and stacks pre-flop were 20+ BB, or 12+ BB in limped pot

then bet 75% of the pot

There are multiple paths to the same decisions.

Solution

Break the decision graph down vertically into smaller chunks. Each chunk should represent multiple layers of conditions and lead to eventual decisions. All conditions in a sub-graph should be related to each other (high cohesion) and as isolated from other sub-graphs as possible (low coupling).

Here are two examples of such sub-graphs:

Isolated Decision Sub-graphs

Each sub-graph is focused on a few specific paths and ignores all the branches which do not belong to its decision process. The idea is that those branches will be handled by other sub-graphs.

Represent each sub-graph as a function with an arbitrary signature which accepts all the parameters that are required for this sub-graph. Do not accept any parameters which are not related.

The last parameter of each function should be a Maybe of Decision, and so should be the function's return type.

Produce a flat list of all the sub-graph functions. Partially apply the parameters to those functions to unify their signatures.

Now, when making a decision, left-fold the list of functions with the data of the current poker hand. If a function returns Some decision, return it as the decision produced from the graph.
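To make this concrete, here is a sketch of what a single sub-graph function might look like (the types and the condition are simplified for illustration):

type Decision = Fold | Check | Bet of int

// Simplified stand-in for the real hand snapshot
type Snapshot = { PotSize: int; HeroStack: int }

// One sub-graph: it decides only in its narrow case, otherwise passes through
let betBigPots (snapshot: Snapshot) (decision: Decision option): Decision option =
  match decision with
  | Some _ -> decision // an earlier rule has already decided
  | None when snapshot.PotSize > 40 -> Some (Bet (snapshot.PotSize * 3 / 4))
  | None -> None       // not our case, leave it to the rules further down the list

Partially applying the snapshot, betBigPots snapshot has the unified type Decision option -> Decision option.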

Code sample

We define a number of functions, each of which represents one piece of decision logic. Then we put them all into a list:

let rules = [
  overtakeLimpedPot overtakyHand snapshot value history;
  increaseTurnBetEQvsAI snapshot;
  allInTurnAfterCheckRaiseInLimpedPot snapshot history;
  checkCallPairedTurnAfterCallWithSecondPairOnFlop snapshot value.Made history;
  bluffyCheckRaiseFlopInLimpedPotFlop bluffyCheckRaiseFlopsLimp snapshot value history;
  bluffyOvertakingRiver bluffyOvertaking snapshot history
]

The type of this list is (Decision option -> Decision option) list.

Note how each individual function accepts a different set of parameters. The current hand's snapshot is used by all of them, while the calculated hand value and the previous action history are used only by some of the functions.

Now, here is the definition of the facade decision-making function:

rules |> List.fold (fun opt rule -> rule opt) None

It calculates the decision by folding the list of rules and passing the current decision between them. None is the initial seed of the fold.

Conclusion

Vertical slices are an efficient way to break complex decision making down into smaller, cohesive, manageable parts. Once you get the parts right, it's easy to compose them by folding a flat list of partially applied functions into a Maybe of decision result.

Dependency Inversion Implies Interfaces Are Owned by High-level Modules

Dependency Inversion is one of the five principles of the widely known and acknowledged S.O.L.I.D. design guidelines. This principle is very powerful and useful when applied consistently. But in my experience, it's actually quite easy to misunderstand the idea, or at least to mentally simplify it to the somewhat less profound technique of Dependency Injection.

In this post I will try to give my understanding of the principle, and the difference between Inversion and Injection.

Let's start with the definition of the Dependency Inversion principle. It was given by Uncle Bob Martin and consists of two parts.

Part 1: Abstractions

High-level modules should not depend on low-level modules. Both should depend on abstractions.

Ok, this is easy to understand. High-level modules are also high-importance modules: they are about the business domain and are not specific about technical details. Low-level modules are about wiring those high-level functions to the execution environment, tools and third parties.

Thus, the implementation of the high-level policy should not depend on the implementation of low-level code, but rather on interfaces (or other abstractions).

Let's take a look at an example. Our high-level business domain is about planning and executing trips from geographical point A to point B. Our low-level code talks to a service which knows how to calculate the time required for a vehicle to go from A to B:

UML: dependency inversion violated

So the following code violates the first part of the Dependency Inversion:

namespace Mapping
{
    public class RouteCalculator
    {
        public TimeSpan CalculateDuration(
            double fromLat, double fromLng, double toLat, double toLng)
        {
            // Call a 3rd party web service
        }
    }
}

namespace Planning
{
    public class TripPlanner
    {
        public DateTime ExpectedArrival(Trip trip)
        {
            var calculator = new RouteCalculator();
            var duration = calculator.CalculateDuration(
                trip.Origin.Latitude, 
                trip.Origin.Longitude, 
                trip.Destination.Latitude, 
                trip.Destination.Longitude);
            return trip.Start.Add(duration);
        }
    }
}

It's not compliant with the principle because the high-level code (TripPlanner) explicitly depends on the low-level service (RouteCalculator). Note that I've put them in distinct namespaces to emphasize the required separation.

To improve on that, we might introduce an interface to decouple the implementations:

UML: dependency inversion with dependency injection

In the TripPlanner we accept the interface as a constructor parameter, and we'll get the specific implementation at run time:

namespace Mapping
{
    public interface IRouteCalculator
    {
        TimeSpan CalculateDuration(
            double fromLat, double fromLng, double toLat, double toLng);
    }

    public class RouteCalculator : IRouteCalculator
    {
        // Same implementation as before...
    }
}

namespace Planning
{
    public class TripPlanner
    {
        private IRouteCalculator calculator;

        public TripPlanner(IRouteCalculator calculator)
        {
            this.calculator = calculator;
        }

        public DateTime ExpectedArrival(Trip trip)
        {
            var duration = this.calculator.CalculateDuration(
                trip.Origin.Latitude, 
                trip.Origin.Longitude, 
                trip.Destination.Latitude, 
                trip.Destination.Longitude);
            return trip.Start.Add(duration);
        }
    }
}

This technique is called dependency injection or, more specifically, constructor injection. This way we can easily substitute the implementation later, or inject a test double while unit testing.
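For illustration, here is how a hand-rolled test double could be injected (the class name is mine):

// A test double: always reports a one-hour route
public class FixedRouteCalculator : IRouteCalculator
{
    public TimeSpan CalculateDuration(
        double fromLat, double fromLng, double toLat, double toLng)
    {
        return TimeSpan.FromHours(1);
    }
}

// In a unit test:
var planner = new TripPlanner(new FixedRouteCalculator());
var arrival = planner.ExpectedArrival(trip); // no web service involved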

But that's just one part of the principle. Let's move on to part 2.

Part 2: Details

The second part of the principle says

Abstractions should not depend upon details. Details should depend upon abstractions.

I find this wording unfortunate because it might be confusing. There are some valid examples which explain it with base and derived classes. But in our example we solved part 1 with an interface. So now we are told that the abstraction (interface) should not depend upon details (implementation).

That probably means that the interface should not leak any entities which are specific to the given implementation, to make other implementations equally possible.

While this is true, this second part of the principle may seem subordinate to part one, reducing it to the idea of "design your interfaces well". So many people tend to leave part 2 out (example 1, example 2), focusing solely on part 1 - the Dependency Injection.

Interface Ownership

But Dependency Inversion is not just Dependency Injection. So, to revive part 2, I would add the following statement to make it clearer:

Abstractions should be owned by higher-level modules and implemented by lower-level modules.

This rule is violated in our last example. The interface is defined together with the implementation, and is basically just extracted from it. It's owned by the Mapping namespace.

To improve the design, we can transfer the interface ownership to the domain level:

UML: dependency inversion

As you can see, I also renamed the interface. The name should reflect the way domain experts would think of this abstraction. Here is the result:

namespace Planning
{
    public interface IDurationCalculator
    {
        TimeSpan CalculateDuration(Hub origin, Hub destination);
    }

    public class TripPlanner
    {
        private IDurationCalculator calculator;

        public TripPlanner(IDurationCalculator calculator)
        {
            this.calculator = calculator;
        }

        public DateTime ExpectedArrival(Trip trip)
        {
            var duration = this.calculator.CalculateDuration(
                trip.Origin, trip.Destination);
            return trip.Start.Add(duration);
        }
    }
}

namespace Mapping
{
    public class RouteCalculator : IDurationCalculator
    {
        public TimeSpan CalculateDuration(Hub origin, Hub destination)
        {
            // Extract latitude and longitude from Hubs
            // Call a 3rd party web service
        }
    }
}

Now, the interface is defined in the Planning namespace, close to its Client, not its Implementation. That's the dependency inversion in action. Even more importantly, it's defined in terms of our domain - notice the use of Hub in the interface instead of low-level doubles.

Why High Level Code Should Own Interfaces

There are multiple benefits to this approach; here are the most important advantages:

Concise, readable high-level code

The high-level domain code has the highest value, so the ultimate goal is to keep it as clean as possible. Interface ownership enables us to design the most concise interfaces to achieve this goal. We avoid any kind of adaptation of domain entities to lower-level details.

Better abstractions

The interfaces themselves get better as well. They are closer to the business, so the abstractions become more ubiquitous and better understood by everyone.

They tend to live longer, just because they are born from the domain side, not the infrastructure side.

Dependencies in outer layers

Code organization tends to improve too. If an interface is defined in the same module as the implementation, the domain module now has to reference the infrastructure module just to use the interface.

With domain-level interface, the reference goes in the other direction, so dependencies are pushed up to the outer layers of application.

This principle is the foundation of domain-centric architectures: Clean Architecture, Ports and Adapters, and the like.

Less cross-domain dependencies

In large systems, the business domains should be split into smaller sub-domains, or bounded contexts. Still, sub-domains are not totally isolated and must cooperate to achieve the ultimate business goal.

It might be tempting to reference the interfaces of one sub-domain from another sub-domain and then claim that the dependency is minimal because it's hidden behind abstractions.

But coupling with abstractions is still coupling. Instead, each domain should operate on its own abstractions at the high level, and the different abstractions should be wired together at a lower level with techniques like adapters, facades, context mapping, etc.
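As a sketch of such wiring (the Shipping sub-domain and all the names here are made up for illustration), a lower-level adapter might look like this:

namespace Shipping
{
    // Shipping owns its own abstraction, phrased in its own terms
    public interface ITransitTimeProvider
    {
        TimeSpan TransitTime(Warehouse origin, Warehouse destination);
    }
}

namespace Integration
{
    // Low-level adapter: implements Shipping's abstraction on top of
    // Planning's, so neither domain references the other's abstractions
    public class TransitTimeAdapter : Shipping.ITransitTimeProvider
    {
        private Planning.IDurationCalculator calculator;

        public TransitTimeAdapter(Planning.IDurationCalculator calculator)
        {
            this.calculator = calculator;
        }

        public TimeSpan TransitTime(
            Shipping.Warehouse origin, Shipping.Warehouse destination)
        {
            return this.calculator.CalculateDuration(ToHub(origin), ToHub(destination));
        }

        private static Planning.Hub ToHub(Shipping.Warehouse warehouse)
        {
            // Context mapping between the two sub-domains' models goes here
            return new Planning.Hub(warehouse.Location);
        }
    }
}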

Conclusion

Here is my working definition of Dependency Inversion principle:

High-level modules should not depend on low-level modules. Both should depend on abstractions.

Abstractions should not depend upon details. Details should depend upon abstractions.

Abstractions should be owned by higher-level modules and implemented by lower-level modules.
