End of the Functional Road (for now)

After a three-month hiatus from ‘work’, I’m back focusing on a specific project. While I tried to make it work with F# and Suave, I found that my current (lack of) knowledge of F# meant I was spending more time trying to understand the framework than building with it. So, out of a sense of pragmatism, I’ve returned to C# and am now expanding my horizons through the use of .NET Core, Aurelia, Docker, developing in VS Code, and targeting Linux.

Sad as it is, this is the end of the functional domain project for now. I really would like to use F# in a commercial sense at some stage, but for now the key limitation has been my desire to use .NET Core combined with the F# tooling not quite being ready for it.

The source code remains available here: https://github.com/winterlimelight/FunctionalDomainProject


While SQL Server is well supported in F# through the built-in type provider FSharp.Data.SqlClient, SQL Server itself has the drawback of cost, which can be difficult to justify in a small business or an environment where its richer features are not required. The type provider I’ve used thus far, SQLProvider, supports a wider array of database engines, and this post explores using it with MySQL.

Data Migration

MySQL Workbench includes tooling to migrate from another database. To access this, open the Database menu and select Migration Wizard…. To migrate, follow the Migration Task List, noting the following:

  1. In Source Selection use Database System = Microsoft SQL Server, Connection Method = ODBC (native)
  2. In Manual Editing select Column Mappings from the View dropdown (top-right):
    • change the target types to suit MySQL. For instance, where Source Type is UNIQUEIDENTIFIER, set Target Type to CHAR(36).
    • remove the UNIQUE target flag from columns which are not primary keys.

Code Changes

After completing the migration and verifying the creation of the tables and the copying of data, the calling code needs to change to support MySQL:

// the resolution path must point at the folder containing MySql.Data.dll
let [<Literal>] resPath = __SOURCE_DIRECTORY__ + @"/../../packages/MySql.Data/lib/net45"

type Sql = SqlDataProvider<
            DatabaseVendor = Common.DatabaseProviderTypes.MYSQL,
            ConnectionString = connString, // literal defined elsewhere
            ResolutionPath = resPath>

Unfortunately when I did this I kept getting an error “Unable to resolve assemblies. One of MySql.Data.dll … must exist in the paths …”. Debugging errors with type providers is difficult as they run at build time. In this case I directly created a MySqlCommand by typing let x = MySql.Data.MySqlClient.MySqlCommand, which gave more useful errors: “The type X is required here and unavailable. You must add a reference to System.Y…”, so adding those missing references helped. At some stage I also needed to restart Visual Studio after correctly setting the reference path for the MySQL dll.

Changing the database engine changes the objects generated by the type provider so some find-and-replace is required to correct these. There are also some type changes, such as changing bytes to signed bytes.
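As a hypothetical illustration of the byte change: a TINYINT discriminator column that the SQL Server provider surfaced as byte comes back from MySql.Data as sbyte, so match literals become signed (the names here echo the field-mapping code later in this post, but the helper itself is illustrative):

```fsharp
// With SQL Server the provider exposed ValueType as byte (1uy, 2uy, ...);
// with MySQL the same column arrives as sbyte, so the literals become signed:
let toStringField (valueType: sbyte) (stringValue: string) =
    match valueType with
    | 1y -> Some (StringField stringValue)  // was 1uy under SQL Server
    | _  -> None
```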


The first, and frankly show-stopping, problem is with Guids. MySql.Data interprets CHAR(36) as a System.Guid (it can instead use BINARY(16) via a connection string property: OldGuids). However, SQLProvider created a string property, and the end result is a runtime conversion error between the two within SQLProvider. My impression, although questions to that effect have not been answered, is that this is a limitation of SQLProvider. Likely the best solution is to use CHAR(37) instead of CHAR(36) and then add a lot of manual string-to/from-Guid conversion.
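If the CHAR(37) workaround were taken, the conversions themselves would be trivial but pervasive; a minimal sketch (the helper names are mine, not from the project):

```fsharp
// Convert between the domain's System.Guid and the padded CHAR(37) column value
let guidOfDb (s: string) : System.Guid = System.Guid.Parse(s.Trim())
let dbOfGuid (g: System.Guid) : string = g.ToString("D")
```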

A question I’ve not yet answered is whether this works on Mac and Linux. In theory I believe it should as the MySQL client dll can run against Mono as described here. However given the poor GUID support my inclination is to try a different database engine at this stage.

Running on Linux

As the F# Foundation states: “F# is a mature, open source, cross-platform, functional-first programming language”. Today I decided to try out the cross-platform part of that statement by trying to get my project running on Linux.

I’m running Ubuntu 16.10 64-bit with 3GB of RAM in VirtualBox. The steps below come from various sources, which are referenced by links.

F# Setup on Linux

Steps taken from http://fsharp.org/use/linux/ and sites it references. All steps were run in a Terminal in Ubuntu.

Step 1. Add mono to apt sources

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list
sudo apt-get update

Step 2. Install Mono

sudo apt-get install mono-devel
sudo apt-get install mono-complete 

At this point I was able to create and run the hello.cs program as described here, meaning Mono, and therefore .NET, was functioning on the machine.

Step 3. Finally, install fsharp

sudo apt-get install fsharp

Setting up the project

To get the project running we first needed the source code:

git clone -b Suave https://github.com/winterlimelight/FunctionalDomainProject.git FunctionalDomainProject
cd FunctionalDomainProject

and then to restore the libraries using paket. This came with a slight hiccup, as Linux needed to be told that the bootstrapper and paket were executables by using chmod.

chmod a+x .paket/paket.bootstrapper.exe
chmod a+x .paket/paket.exe
.paket/paket.exe update

At this point I tried my first compile but got a series of errors rooted at the Store.fs. This is the file containing the SqlDataProvider connection string, and in order for F# to compile it needed to be able to connect to that database. This required the connection string to change to reference the IP address of the VirtualBox host machine, and to replace Trusted_Connection=true with User Id=...;Password=.... The host machine needed the above SQL login created and given dbo rights to the AssetManager database. It also needed a firewall exclusion added for SQL Server.
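Roughly, the connection string change looked like this (the host IP is whatever VirtualBox assigns to the host; credentials elided):

```
Before: Data Source=localhost;Initial Catalog=AssetManager;Trusted_Connection=true
After:  Data Source=<host IP>;Initial Catalog=AssetManager;User Id=...;Password=...
```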

With those changes in place, the following command performed a successful compile:

xbuild AmApi/AmApi.fsproj

To run it, the executable needed execute rights, then could be called:

cd AmApi/bin/Debug/
chmod a+x AmApi.exe
./AmApi.exe

The integration tests have been hosted in Postman, a Chrome extension, so it was a simple matter to install that extension in Chrome in Ubuntu and open the test collection and run it. The results: 22/22 passed.

Playing in the environment

Beyond this I also tried to get debugging going using VS Code with Ionide. I found some information about possible configuration steps necessary for F# debugging, but couldn’t get it working myself.

I also decided to create a FAKE build by creating an empty project using the Ionide FAKE plugin in VS Code. This created the necessary build.* files which were copied into the project repository, and a paket dependency for FAKE created. The outcome of that can be seen here.

Dependency Injection

My project architecture has been set up to allow dependency injection. For instance, the commands take repository instances as arguments. But the approach I’ve been preparing is very object-oriented, and in my notes I had mused on how partial application would be a more functional way of doing this. However, exactly how to structure this has eluded me. Thankfully the ever-understandable F# for Fun and Profit has just published a post, Functional approaches to dependency injection, that bridges this gap, so now I’m going to walk through my conversion from interfaces to function types.

Interfaces to Function Types

The repositories are the simplest place to start. At present the interface for the template repositories is:

type ITemplateReadRepository =
    abstract member FindById: System.Guid -> Template option

type ITemplateWriteRepository =
    abstract member FindById: System.Guid -> Template option
    abstract member Save: Template -> unit

Changing these to function types means replacing each member with a function type.

I firmly believe in read-write separation so it’s important that there is a distinction made between finds made by the write system for the purpose of identity and validation, and finds made by a read system for querying. So despite having identical signatures, I like the concept of different types for FindById.

type FindTemplateById = System.Guid -> Template option
type FindTemplateByIdForValidation = System.Guid -> Template option
type SaveTemplate = Template -> unit

In a large project these would likely be separated into different modules purely for the purpose of code organization.
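For example, the read and write types might live in separate modules (the module names here are illustrative only):

```fsharp
module TemplateReads =
    type FindTemplateById = System.Guid -> Template option

module TemplateWrites =
    type FindTemplateByIdForValidation = System.Guid -> Template option
    type SaveTemplate = Template -> unit
```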

Passing partials

The current implementations of these methods directly instantiate a data context, meaning they create the very dependency we’re trying to avoid.

member this.FindById (id:Guid) = 
    let dc = Sql.GetDataContext()
    // use dc to find template

In object-oriented dependency injection the domain class would have a dependency on some IRepository and the IoC container would create a repository instance passing in the connection information. In functional programming this option is not available, so the dependencies have to be passed as function arguments meaning we need a method with this signature:

DbContext -> Guid -> Template option
// for example, the persistence method:
let findTemplateById (dc:DbContext) (id:Guid) = ...

However this means the caller has to know how to create the DbContext dependency. That is likely not the responsibility of the caller, so we need another abstraction that manages the dependencies and only requires the caller to provide variables that it is responsible for. We can do this by providing a function which can convert between the signature understood by the caller and the signature of the target method.

// Persistence method: DbContext -> Guid -> Template option
let FindTemplateById dc id = ...  

// Domain method: (Guid -> Template option) -> Guid -> Template option 
let GetTemplate findById id = 
    findById id

// Caller
let dc = Sql.GetDataContext()
let findTemplateByIdWithDc = FindTemplateById dc // Signature Converter
let res = GetTemplate findTemplateByIdWithDc id

The converting function, findTemplateByIdWithDc, is a partially applied function of FindTemplateById because we have not specified all of the arguments, leaving the second (id) to be set when findById is called.

In my project the DbContext instance is created in the program.fs which is a layer higher than the caller function (my Api level) above. This same pattern can be applied so that the DbContext is passed transparently through the Api level as well as the Domain. For the sake of organization, all of these ‘signature converters’ are placed into a file given the name CompositionRoot. That file is defined immediately before the first file that uses it, in this case before program.fs. The end result looks something like the following, which is a snapshot of the full stack used for the GET template request.

type FindTemplateById = System.Guid -> Template option // domain/persistence 
type IGetTemplate = System.Guid -> Template option // api/domain

// Persistence.fs
module Persistence.TemplateReadRepo =
    let findById dc id = 
        // use dc ...

// (Domain)Operations/Template.fs
module Operations.Template =
    let GetTemplate (findById:FindTemplateById) id = 
        findById id

// Api/TemplateController.fs
module Api.Template =
    let getTemplate (getTemplateById:IGetTemplate) (id:Guid) : WebPart =
        match (getTemplateById id) with ...

// CompositionRoot.fs
module Operations =
    let getTemplate dc = AmApi.Operations.Template.GetTemplate (Persistence.TemplateReadRepo.findById dc)

module ApiMethods =
    let getTemplate dc = Api.Template.getTemplate (Operations.getTemplate dc)

// Program.fs
let route dc =
    choose [
        pathScan ... (ApiMethods.getTemplate dc)

The composition root creates partial functions like Operations.getTemplate dc which mean that the argument given to Api.Template.getTemplate still conforms to the signature it requires while the information about the context travels to the domain, and in a similar fashion to the persistence call where it is finally used.


Having spent some time with an in-memory mutable dictionary serving as a data store, the time seemed right to introduce a permanent data store.

Selecting a data store

Choosing a store for a solution is a key architectural decision and requires evaluating a huge set of factors including requirements, domain-store match, cost, scale, read vs write counts, eventual consistency vs availability, transactional integrity, etc.

There are a number of different stores that are reasonable for this project. Ideally I’d try implementing them all, however I don’t expect that to happen in the short-term. These are some options:

  • Document store e.g. MongoDB. This may be appropriate given our aggregates (template and asset) are quite complex objects.
  • Search index e.g. ElasticSearch. If there are many orders of magnitude more reads than writes then optimizing for querying using a search index may be appropriate.
  • Event store. An event store, typically implemented manually on flat files, records all the business events triggered by API or system actions. It is often used in a CQRS system in concert with a read-specific store. Having read/write separation in our stack allows for this option.
  • Relational database e.g. SQL Server. Provides transactions, is the most mature, and the best supported by libraries.

Choosing between these relies on criteria that don’t exist because this isn’t a commercial project, so because I’m learning and trying to reduce the number of learning variables I chose the tried-and-true, a relational database.

As an aside, as architects we also want to allow for future contingencies where it is practical. By providing a clear persistence layer in the code and comprehensive integration tests, we allow the choice of store to be re-evaluated in the future (provided the organization is prepared for the cost).


F# has an excellent feature, Type Providers, that generate strongly typed code at compile time for interacting with data sources. This page has a nice comparison of some type providers for F#. For no particular reason I selected SQLProvider.

As mentioned earlier, the domain aggregates are relatively deep, so they map to multiple tables, which is why in a production solution I’d lean towards a non-relational store. Here is a comparison of the domain type template, and the tables that store it in SQL (constraints excluded):

type FieldValue = 
    | StringField of string
    | DateField of System.DateTime
    | NumericField of float

type FieldDefinition = {
    Id: System.Guid
    Name: string
    Field: FieldValue
}

type Template = {
    Id: System.Guid
    Name: string
    Fields: FieldDefinition list
    MaintenanceProgramId: System.Guid option
}

CREATE TABLE [dbo].[FieldValue](
    [FieldValueId] [uniqueidentifier] NOT NULL,
    [FieldDefinitionId] [uniqueidentifier] NOT NULL,
    [AssetId] [uniqueidentifier] NULL,
    [ValueType] [tinyint] NOT NULL,
    [StringValue] [nvarchar](max) NULL,
    [DateValue] [datetime] NULL,
    [NumericValue] [real] NULL)

CREATE TABLE [dbo].[FieldDefinition](
    [FieldDefinitionId] [uniqueidentifier] NOT NULL,
    [Name] [nvarchar](255) NOT NULL,
    [TemplateId] [uniqueidentifier] NOT NULL)

CREATE TABLE [dbo].[Template](
    [TemplateId] [uniqueidentifier] NOT NULL,
    [Name] [nvarchar](255) NOT NULL,
    [MaintenanceProgramId] [uniqueidentifier] NULL)

The end result is that we end up with some fairly complex queries and mapping logic. One of the sources of complexity is that the FieldValue table is used both for default template field values (when AssetId is null) and asset field values.

The query contains two left outer joins because a template may exist without any fields. This is shown using the rather cryptic (!!) operator. It would be nice if a more pleasant name could be used for this operator.

The mapping between discriminated unions and the underlying store can be handled by joining to a different table for each case, or by the approach used here which is to have a different column for each case. Using different columns makes the query easier, but results in a sparse table. Given we only have three cases the sparseness of the table shouldn’t be a big space penalty.
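For completeness, the write side of the sparse-column approach is the mirror image of the read mapping: set the discriminator and exactly one value column. A sketch (the row type and its properties follow the tables above, but the helper itself is illustrative):

```fsharp
// Populate the ValueType discriminator plus exactly one sparse value column
let private setFieldValueColumns (row: FieldValueRow) (field: FieldValue) =
    match field with
    | StringField s  -> row.ValueType <- 1uy; row.StringValue <- Some s
    | DateField d    -> row.ValueType <- 2uy; row.DateValue <- Some d
    | NumericField n -> row.ValueType <- 3uy; row.NumericValue <- Some (float32 n)
```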

The read mapping is shown below. The full code is available here.

let private mapSingleTemplate (rows:TemplateQueryResultSet list) : Template =
    let fields = [ for row in rows do
                    let (_, defn, value) = row
                    if defn.FieldDefinitionId <> System.Guid.Empty then // empty guid means template has no fields
                        yield {
                            Id = defn.FieldDefinitionId
                            Name = defn.Name
                            Field = match value.ValueType with
                                    | 1uy -> StringField(value.StringValue.Value)
                                    | 2uy -> DateField(value.DateValue.Value)
                                    | 3uy -> NumericField(float value.NumericValue.Value)
                                    | _ -> failwith "Unknown field type"
                        } ]
    let (templateCols, _, _) = rows.Head
    {
        Id = templateCols.TemplateId
        Name = templateCols.Name
        Fields = fields
        MaintenanceProgramId = templateCols.MaintenanceProgramId
    }

let private templateByIdQuery (dc:DbContext) id =
    query { 
        for template in dc.Dbo.Template do
        // (!!) means left outer join
        for fieldDef in (!!) template.``dbo.FieldDefinition by TemplateId`` do
        for fieldVal in (!!) fieldDef.``dbo.FieldValue by FieldDefinitionId`` do
        where (template.TemplateId = id && fieldVal.AssetId.IsNone)
        select (template, fieldDef, fieldVal)
    }

let private templateById id : Template option =
    let dc:DbContext = Sql.GetDataContext()
    let rows = templateByIdQuery dc id |> Seq.toList
    if [] = rows then None else Some (mapSingleTemplate rows)


Moving to Suave meant a new logger was required, as the original logger used the .NET Core LoggerFactory. Generally for logging I’ve used log4net or flat files. In this case I decided to try the System.Diagnostics.Trace tools – a little old-fashioned perhaps – but supported in .NET 4.6.1 as well as .NET Core (future-proofing!).

Creating a trace involves two parts:
1. Creating a TraceSource instance and calling methods on it;
2. Adding listeners to the config file
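The first part is just a named TraceSource that methods are called on; a minimal sketch:

```fsharp
// "Log" must match the <source name="Log"> element in the config file
let log = new System.Diagnostics.TraceSource("Log")
log.TraceEvent(System.Diagnostics.TraceEventType.Warning, 0, "something noteworthy")
```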

In this case it is configured to write warnings and higher to the console, and everything to a file log (AmApi.log).

<system.diagnostics>
  <trace autoflush="true" />
  <sources>
    <source name="Log" switchValue="All" switchType="System.Diagnostics.SourceSwitch">
      <listeners>
        <add name="console" type="System.Diagnostics.ConsoleTraceListener">
          <filter type="System.Diagnostics.EventTypeFilter" initializeData="Warning"/>
        </add>
        <add name="logToFileListener"/>
        <remove name="Default"/>
      </listeners>
    </source>
  </sources>
  <sharedListeners>
    <add name="logToFileListener" type="System.Diagnostics.TextWriterTraceListener" initializeData="AmApi.log" />
  </sharedListeners>
</system.diagnostics>

Suave has built-in logging capabilities, such as the colored text displayed in the console. It allows these logs to be accessed by creating a logging adapter and configuring it to be used. This is described here, although the interface definition is more advanced than the one given on that page, as illustrated by the adapter shown below. This implementation calls a method, SuaveLog, on the main logger class that understands and converts Suave log levels to Diagnostics.TraceEventType.

type SuaveLoggerAdapter() =
    let _log (msg:Suave.Logging.Message) = 
        use strWriter = new System.IO.StringWriter()
        let txt = Suave.Logging.TextWriterTarget(Suave.Logging.LogLevel.Verbose, strWriter) :> Suave.Logging.Logger
        txt.logSimple msg
        Logger.SuaveLog (strWriter.ToString()) msg.level

    interface Suave.Logging.Logger with
        member __.logSimple msg = _log msg
        member __.log level msgFactory = _log (msgFactory level)
        member __.logWithAck level msgFactory = async { do _log (msgFactory level) }

Configuration is done in the startWebServer method. I wanted to preserve the existing logging capabilities, particularly the console, so a CombiningTarget was used. CombiningTarget sends the log messages to multiple loggers.

let defaultLog = Suave.Logging.Targets.create Suave.Logging.LogLevel.Info
let logger = Suave.Logging.CombiningTarget([ defaultLog; Util.SuaveLoggerAdapter() ])
let config = { defaultConfig with logger = logger }
startWebServer config handleRequest

To complete the picture the logging class and instance are shown here. I’ve been lazy and used the underscore here to denote the intention to keep the class private. Alternatively an interface could have been created and a private class defined to implement it. A singleton was also considered but one never knows when it might be useful to split logs so I try to avoid that approach.

type _Logger() = 
    let log = new System.Diagnostics.TraceSource("Log")

    let _log (eventType:Diagnostics.TraceEventType) (msg:string) =
        log.TraceEvent(eventType, 0, (sprintf "%s: %s" (DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fff K")) msg))

    override this.Finalize() = 
        log.Flush()
        log.Close()

    member this.Info msg = _log Diagnostics.TraceEventType.Information msg
    member this.Warn msg = _log Diagnostics.TraceEventType.Warning msg
    member this.Error msg = _log Diagnostics.TraceEventType.Error msg

    member this.SuaveLog msg (level:Suave.Logging.LogLevel) = 
        let traceEventType = match level with
                                | Suave.Logging.LogLevel.Verbose -> Diagnostics.TraceEventType.Verbose
                                | Suave.Logging.LogLevel.Debug   -> Diagnostics.TraceEventType.Verbose
                                | Suave.Logging.LogLevel.Info    -> Diagnostics.TraceEventType.Information
                                | Suave.Logging.LogLevel.Warn    -> Diagnostics.TraceEventType.Warning
                                | Suave.Logging.LogLevel.Error   -> Diagnostics.TraceEventType.Error
                                | Suave.Logging.LogLevel.Fatal   -> Diagnostics.TraceEventType.Critical
        _log traceEventType msg

let Logger = _Logger()


A Change of Direction

I’ve decided to step away from ASP.NET Core for the moment. I’d like to learn more about F# and good functional practice, and from what I’ve learned so far I feel this would be better achieved via tools and frameworks that were designed for F#. Stepping back also allows use of the .NET Framework and therefore my very much missed debugger.

Choosing a Web Framework

In looking at the options for a web framework for F#, two stood out as being mature and well-supported: WebSharper and Suave.IO. My choice is to use Suave.IO because I’m building an API and so am not in need of all the client capabilities that WebSharper seems to be strong in. To that end I’ve been through (i.e. typed out) the Suave Music Store tutorial and read through the documentation.

Picking up a new framework is never easy, and less so when the paradigm being used is also new. To aid with that I’ve got the Suave source code from GitHub so I can debug into functions to understand what is going on. There were a few problems referencing the linked files (e.g. Utils/Facade.fs) when building it, which seemed to mysteriously evaporate when the project was reopened. So I’m finally at the point where I can ‘port’ my existing code over to Suave.IO and see what happens. This is where it is at so far, but I hope to report more progress in a day or two:

let webPart =
    choose [
        path Path.Assets.template >=> choose [
            PUT >=> OK "Template PUT" ]
        pathScan Path.Assets.templateById (fun guid -> OK (sprintf "Get Template by Id: %s" guid))

        path Path.Assets.asset >=> choose [
            PUT >=> OK "Asset PUT" ]
        pathScan Path.Assets.assetById (fun guid -> OK (sprintf "Get Asset by Id: %s" guid))
        NOT_FOUND "No handler found"
    ]

[<EntryPoint>]
let main argv = 
    printfn "%A" argv

    let config = { defaultConfig with bindings = [ { scheme = HTTP; socketBinding = { ip = Net.IPAddress.Parse(""); port = 5000us }} ]}

    startWebServer config webPart
    0 // return an integer exit code


Using xUnit

A key goal of the architecture of the functional domain project is that it is unit testable. xUnit.net seems to be the default test tool of choice for .NET Core, so I decided to try it out.

xUnit seems to be an evolution of earlier testing frameworks (see this for a comparison), in particular adding data driven testing.

Trial by xUnit

Unfortunately the documentation for xUnit is poor – while the homepage links to basic attribute use, there is no obvious link to documentation on theories and the various forms of data they take – so using data driven testing requires assembling information from widespread Google searching (which this post may add to) or downloading and figuring out the source code (which I just didn’t feel like this time!). Furthermore, at one point I received a ‘NotSupportedException’ without a line number or anything else indicating what wasn’t supported, eventually discovering that the exception referred to the MemberData attribute.

I’m going to engage in a small rant now. This lack of attention to documentation is a disappointing but fairly expected consequence of being open-source. It’s something that was repeatedly infuriating about working with Node.js, and frankly the opposite of what I expect from something that exists in the eco-bubble of Microsoft, an enterprise framework provider. I don’t know the whole governance structure of the .NET Foundation, but to me .NET and Microsoft go together, so perhaps Microsoft could lend a few technical writers to the cause. And to be clear, no disrespect to the developers – their greatest value is in progressing the platform not spending weeks documenting it. End rant.

Using ClassData

Finally I was able to get the xUnit ClassData attribute working in F#, and from this construct a nice base class so data can be easily created as sub-classes.

open System
open System.Collections
open System.Collections.Generic
open Xunit

[<AbstractClass>]
type BaseTestData() =
    abstract member data: seq<obj[]>
    interface IEnumerable<obj[]> with 
        member this.GetEnumerator() : IEnumerator<obj[]> = this.data.GetEnumerator()
        member this.GetEnumerator() : IEnumerator = this.data.GetEnumerator() :> IEnumerator

type MyTestData() =
    inherit Util.BaseTestData()
    override this.data = Seq.ofList [[| box 4 |]; [| box 5 |]]

type SomeTestClass() =
    [<Theory>]
    [<ClassData(typeof<MyTestData>)>]
    member this.``IsOdd`` value = Assert.True(value % 2 = 1)

To complete the first test, a little additional type coercion was needed to avoid errors like “Object of type ‘…Railway+Result`2+Failure[System.Object,AssetManagementApi.Commands.Template+TemplateCommandError]’ cannot be converted to type ‘…Railway+Result`2[Microsoft.FSharp.Core.Unit,AssetManagementApi.Commands.Template+TemplateCommandError]’”. Note the extra (pointless) type annotation on the expected result in the test data, which is the same type as the argument to the test method.

type TemplateValidationTestData() =
    inherit Util.BaseTestData()
    override this.data = 
        Seq.ofList [
            [|
                ({ Id = System.Guid.NewGuid(); Name = "abc"; Fields = []; MaintenanceProgramId = Some System.Guid.Empty } : DomainTypes.Template) :> obj
                (Railway.Failure (InvalidTemplate "Template may not have an empty list") : Railway.Result<unit, TemplateCommandError>) :> obj
            |]
        ]

type TemplateDomainOperationsTests() =
    [<Theory>]
    [<ClassData(typeof<TemplateValidationTestData>)>]
    member this.``Template validation`` (input: DomainTypes.Template, expected: Railway.Result<unit, TemplateCommandError>) =
        let cmd = TemplateCommand.Create(input)
        let result : Railway.Result<unit, TemplateCommandError> = TemplateCommandHandler.Execute cmd mockRepo
        Assert.Equal(result, expected)

Overall this is pretty ugly. It’s incredibly far from ideal that the test data essentially gets its type destroyed because ClassData deals in object arrays. An alternative would be to use InlineData with basic types to populate data but that is very limiting as we can’t use an empty list or discriminated union type as data.
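For contrast, InlineData is far terser when the inputs really are primitives; a hypothetical example (this test class is not from the project):

```fsharp
type ArithmeticTests() =
    [<Theory>]
    [<InlineData(1, 2, 3)>]
    [<InlineData(2, 3, 5)>]
    member this.``Addition works`` (a: int, b: int, expected: int) =
        Assert.Equal(expected, a + b)
```

But as soon as a case needs an empty list or a discriminated union, InlineData is off the table and we are back to ClassData and object arrays.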

The end result is that I’ll live with theories and ClassData for now, but certainly keep my eye on something better, and something more suited to F#. FsUnit may be better, but currently doesn’t support CoreCLR (although there is a PR open since Jun 1 with changes for that).


One weird behavior I ran into was that a let binding kept returning null during a test, so I put a couple of logging statements in and discovered the root of the module is never run. This makes sense given the test runner is instantiating specific classes – but it is a tricky little trap. Solving the problem is as simple as turning the let binding into a function.

let basicTemplate = { ... }

Logger.info (sprintf "1: %A" basicTemplate) // never called!

type TemplateDomainOperationsTests() =
    let ``Test method description`` =
        Logger.info (sprintf "2: %A" basicTemplate) // logs null

Tidier Controllers with Request Filters

All of my ASP.NET Core controllers are starting to follow a certain pattern (as follows) and in the spirit of good DRY code, I’d like to define this in a single place.

member this.Get(id: System.Guid) : IActionResult =
    try
        Logger.info ("TemplateController.Get(" + id.ToString() + ")")
        // Actual logic
    with
    | _ as ex -> Logger.error (sprintf "%O" ex); this.BadRequest() :> IActionResult

Furthermore, I’ve also noted that ASP.NET Core will happily provide me with an invalid model, which means I then have to do an extra null check. It’d be nice to handle all these scenarios transparently for all controllers.

ASP.NET Core provides a solution for this via filters, which exposes interfaces that can be implemented to intercept requests during the processing pipeline.

Exception Filter

An exception filter can be used to handle exceptions. An exception filter class must implement IExceptionFilter which includes a single method, OnException (context: ExceptionContext).

Unfortunately the setup for an exception filter goes in the Startup.ConfigureServices while the global logger factory is available from Startup.Configure. So using the application logger factory requires an un-F# hack as follows:

type Startup(env: IHostingEnvironment) =
    let mutable _loggerFactory : ILoggerFactory option = None

    member this.ConfigureServices(services: IServiceCollection) =
        let mvc = services.AddMvcCore()
        mvc.AddMvcOptions(fun mvcOptions -> mvcOptions.Filters.Add(new GlobalExceptionFilter(_loggerFactory.Value))) |> ignore

    member this.Configure (app: IApplicationBuilder, loggerFactory: ILoggerFactory) =
        _loggerFactory <- Some(loggerFactory)

In my case I’m happy using my global logger, so my GlobalExceptionFilter won’t take any arguments. And here it is:

type GlobalExceptionFilter() =
    interface IExceptionFilter with
        member this.OnException (context: ExceptionContext) =
            Logger.error (sprintf "%O" context.Exception)

Action Filter

The action filter needs to do two things:
1. Log calls
2. Prevent invalid model state from reaching the actions.

Action Filters implement either the IActionFilter or IAsyncActionFilter interface and their execution surrounds the execution of action methods. Action filters are ideal for any logic that needs to see the results of model binding, or modify the controller or inputs to an action method. Additionally, action filters can view and directly modify the result of an action method.

The aim here is to check and log the state before the action method is called so only the before-action method, OnActionExecuting, needs to be implemented. This implementation checks if the model state is valid, and if not logs the error and terminates the request with a 400 error without the actual action being executed. Where the model is valid, it logs the parameters.

type GeneralActionFilter() = 
    interface IActionFilter with

        member this.OnActionExecuting (context: ActionExecutingContext) =
            if not context.ModelState.IsValid then
                let errors =
                    context.ModelState.Values
                    |> Seq.collect (fun (value: ModelStateEntry) -> value.Errors)
                    |> Seq.map (fun (modelError: ModelError) -> modelError.Exception.Message)
                    |> String.concat "\n\t  "

                Logger.error (sprintf "Called %s. Error: Invalid model state\n\tException messages: \n\t  %s" context.ActionDescriptor.DisplayName errors)
                context.Result <- new BadRequestObjectResult(context.ModelState)
            else
                let args = [ for kvp in context.ActionArguments -> sprintf "%s %A" kvp.Key kvp.Value ] |> String.concat "\n\t"
                Logger.info (sprintf "Called %s with: \n\t%s" context.ActionDescriptor.DisplayName args)

        member this.OnActionExecuted (context: ActionExecutedContext) = ()

The end result is much cleaner controller methods:

    member this.Get(id: System.Guid) : IActionResult =
        match GetTemplate id (new TemplateRepository()) with
        | Some template -> this.Json(template) :> IActionResult
        | None -> this.NotFound() :> IActionResult

    member this.Create([<FromBody>]template: Template) : IActionResult =       
        CreateTemplate template (new TemplateCommandHandler())
        let url = new UrlActionContext (Controller = "Template", Action = "Get", Values = new RouteValueDictionary(dict [("id", box template.Id)]))
        this.Created((this.Url.Action url), "") :> IActionResult

Filters are added to the MVC pipeline in the Startup.ConfigureServices method:

member this.ConfigureServices(services: IServiceCollection) =
        let mvc = services.AddMvcCore()
        mvc.AddMvcOptions(fun mvcOptions -> mvcOptions.Filters.Add(new Api.Filters.GlobalExceptionFilter())) |> ignore
        mvc.AddMvcOptions(fun mvcOptions -> mvcOptions.Filters.Add(new Api.Filters.GeneralActionFilter())) |> ignore
        mvc.AddJsonFormatters() |> ignore


As I noted in an earlier post, I don't have a functioning debugger or IntelliSense, and I've come to appreciate just how much time a debugger and a navigable watch window save: being able to scan through fields to find something appropriate is much faster and easier than trawling through documentation. An additional challenge is that the docs don't list inherited members, so each base class has to be opened separately to see them all.

Web Services Round Trip

My objective today was to have a web client save some data and retrieve it again. The data was to be stored in an F# record, representing a domain object, and be serialized and deserialized at the web service boundary.

The first issue was with JSON serialization. While a number of blogs indicated a need to change the serializerSettings ContractResolver, this does not appear to be necessary in ASP.NET Core.

This may be because of the change from services.AddMvc() to services.AddMvcCore(), a lighter-weight method that lets the developer decide which MVC features are required. In my case this meant adding mvc.AddJsonFormatters() so the services would serialize JSON.
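For anyone who does need to change the resolver, the MvcCore route would look roughly like this. This is a sketch only: the AddJsonFormatters overload taking a settings callback is my assumption based on the 1.x API surface, and I haven't needed it myself.

```fsharp
open Microsoft.Extensions.DependencyInjection
open Newtonsoft.Json.Serialization

// Sketch only: set the JSON formatters' ContractResolver while still
// using the lightweight AddMvcCore() path.
member this.ConfigureServices(services: IServiceCollection) =
    services.AddMvcCore()
        .AddJsonFormatters(fun settings ->
            settings.ContractResolver <- CamelCasePropertyNamesContractResolver())
    |> ignore
```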
One issue I keep running into is not knowing which namespace or module the methods in internet samples come from. To that end I've included the whole Program.fs here.

open System
open System.IO
open Microsoft.AspNetCore.Hosting
open Microsoft.AspNetCore.Builder
open Microsoft.AspNetCore.Http
open Microsoft.AspNetCore.Mvc.Formatters.Json
open Microsoft.AspNetCore.Diagnostics
open Microsoft.Extensions.DependencyInjection

type Startup(env: IHostingEnvironment) =

    member this.ConfigureServices(services: IServiceCollection) =
        let mvc = services.AddMvcCore()
        mvc.AddJsonFormatters() |> ignore

    member this.Configure (app: IApplicationBuilder) =
        app.UseDeveloperExceptionPage() |> ignore
        app.UseMvc() |> ignore

[<EntryPoint>]
let main argv = 
    printfn "Starting"
    Logger.info "Startup"
    let host = WebHostBuilder().UseKestrel().UseContentRoot(Directory.GetCurrentDirectory()).UseStartup<Startup>().Build()
    host.Run()
    0 //exit code

In contrast, the CLIMutable attribute is required on an F# record in order for it to be serialized and deserialized. Without it the process fails because MVC requires a default (empty) constructor and settable public properties, and CLIMutable causes the record to be compiled with both.

[<CLIMutable>]
type Template = {
    Id: System.Guid
    Name: string
    Fields: FieldDefinition list
    MaintenanceProgramId: System.Guid
}
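A quick way to see what CLIMutable actually does (my own check, not part of the service code): the attribute makes the compiler emit the parameterless constructor that a plain record lacks.

```fsharp
// CLIMutable emits a parameterless constructor and property setters;
// a plain record only gets the all-fields constructor.
[<CLIMutable>]
type MutableSample = { Id: System.Guid; Name: string }

type PlainSample = { Value: int }

// Reflection check for a public default (parameterless) constructor.
let hasDefaultCtor (t: System.Type) =
    t.GetConstructor(System.Type.EmptyTypes) <> null

// hasDefaultCtor typeof<MutableSample> -> true
// hasDefaultCtor typeof<PlainSample>   -> false
```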

The template service (still called ‘First’ here) exposes a GET and a PUT, and passes each request on to domain-layer operations (I will put together a post on the full architectural implementation when I have all the layers) to process and store it in-memory.

type FirstController() =
    inherit Controller()

    member this.Get(id: System.Guid) =
        DomainOperations.GetTemplate id (new TemplateRepository())

    member this.Create([<FromBody>]template: Template) =
        DomainOperations.CreateTemplate template (new TemplateCommandHandler())

Using Postman I was able to PUT the following to http://localhost:5000/first/api


and was returned a similar object from GET http://localhost:5000/first/api?id={{id}}.
The full source as it was at this stage is available on my GitHub.