Hangfire

As part of my application I wanted to run a background service. In some fantasy future this might run as a separate process on another machine, scaling independently of the API server, so the service would naturally be isolated in its own class. For now I just needed something that would run scheduled jobs and be initialized during the Startup methods. The most popular solution for this problem seems to be a library called Hangfire which has had ASP.NET Core support since v1.6.0 (v1.6.16 at the time of writing).

Hangfire is backed by a database, so part of the setup involves selecting a database connector. There are two options for MySql, but the link for Hangfire.MySql returns a 404, so I opted for Hangfire.MySqlStorage. I was able to get the basics of Hangfire working with this connector, although I did encounter some problems, notably that the Recurring Jobs dashboard page causes MySql exceptions and doesn't load. One factor in this may be that, with Hangfire.MySqlStorage as well as Pomelo.EntityFrameworkCore.MySql in the project, I have references to different definitions of various MySql.Data.* classes in multiple assemblies. But as it currently works for my purposes, I haven't pursued those errors further.

The other decision around the database is whether to share the application database or use a separate schema. I opted for the latter to avoid any complications with my migration and test data code.

With that, we present the code. First, the package references in the .csproj file:

<PackageReference Include="Hangfire" Version="1.6.*" />
<PackageReference Include="Hangfire.Autofac" Version="2.3.*" />
<PackageReference Include="Hangfire.MySqlStorage" Version="1.1.0-alpha" />

And then the startup functions. The first is called from ConfigureServices:

protected virtual void AddHangfireService(IServiceCollection services)
{
    services.AddHangfire(options =>
    {
        options.UseStorage(new Hangfire.MySql.MySqlStorage(
            Configuration["ConnectionStrings:HangfireMySql"],
            new Hangfire.MySql.MySqlStorageOptions
            {
                TransactionIsolationLevel = System.Data.IsolationLevel.ReadCommitted,
                QueuePollInterval = TimeSpan.FromSeconds(60),
                JobExpirationCheckInterval = TimeSpan.FromHours(1),
                CountersAggregateInterval = TimeSpan.FromMinutes(5),
                PrepareSchemaIfNecessary = true,
                DashboardJobListLimit = 50000,
                TransactionTimeout = TimeSpan.FromMinutes(1),
            }));
        options.UseAutofacActivator(this.IocContainer);
    });
}

and the second from Configure:

protected virtual void ConfigureHangfire(IApplicationBuilder app)
{
    app.UseHangfireDashboard();
    app.UseHangfireServer();

    RecurringJob.AddOrUpdate<Domain.Notification.INotifier>(
        "cbe-api-notification",
        notifier => notifier.Rollup(DateTime.UtcNow.AddDays(-1)),
        Cron.Daily(15) // 15:00 UTC - i.e. 3am NZST, 1am AEST
    );
}

This runs my job daily at 1500 UTC, which is the middle of the night from my perspective.

One aspect that Hangfire does very well is integrate with dependency injection frameworks. I have used Autofac, and you can see in the code above that nowhere have I had to construct a concrete class for the notifier parameter; the interface type INotifier suffices. The integration with Autofac is established by the call to options.UseAutofacActivator(this.IocContainer) in the first code block. At the time UseAutofacActivator is called this.IocContainer is still null, but it doesn't appear to be used until after Autofac is set up, which happens very soon thereafter.
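
For context, the Autofac registration that lets Hangfire resolve INotifier might look something like the following. This is a rough sketch only: the concrete Notifier class and the IocContainer field are assumptions, not the exact code from my Startup.

// Rough sketch (using Autofac): assumes a concrete Notifier class and an IocContainer field on Startup.
var builder = new ContainerBuilder();
builder.RegisterType<Domain.Notification.Notifier>()
       .As<Domain.Notification.INotifier>()
       .InstancePerLifetimeScope();
this.IocContainer = builder.Build();
// Hangfire's Autofac activator resolves INotifier from this container each time the recurring job runs.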

Mocking MySqlException

As pleased as I am with Entity Framework Core's InMemoryDatabase for testing, some failures cannot be easily simulated because InMemory doesn't enforce database integrity, such as constraint validation.

One of my operations uses foreign key constraints to validate data on insert and I wanted to mock this functionality to ensure a correct return value. This proved a little challenging because the type which triggers the failure, MySqlException, has no public constructors. Therefore a little reflection magic was required:

using System;
using System.Linq;
using System.Reflection;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class MockSaveChangesAsyncSqlContext : SqlContext
{
    public MockSaveChangesAsyncSqlContext() { }
    public MockSaveChangesAsyncSqlContext(DbContextOptions<SqlContext> options) : base(options) { }

    public override Task<int> SaveChangesAsync(CancellationToken cancellationToken = default(CancellationToken))
    {
        var mySqlExceptionType = typeof(MySql.Data.MySqlClient.MySqlException).GetTypeInfo();
        var internalConstructor = (from consInfo in mySqlExceptionType.DeclaredConstructors
                                    let paramInfos = consInfo.GetParameters()
                                    where paramInfos.Length == 1 && paramInfos[0].ParameterType == typeof(string)
                                    select consInfo).Single();

        var innerException = internalConstructor.Invoke(new[] { "foreign key constraint fails" }) as Exception;
        throw new Exception("", innerException);
    }
}
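
A test can then swap this context in and assert on the failure path. The sketch below assumes xUnit and the EF Core InMemory provider; the test class, method and database names are arbitrary.

using System;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class MockContextTests
{
    [Fact]
    public async Task SaveChangesAsync_SurfacesMySqlException()
    {
        var options = new DbContextOptionsBuilder<SqlContext>()
            .UseInMemoryDatabase("constraint-failure-test") // name is arbitrary
            .Options;

        using (var context = new MockSaveChangesAsyncSqlContext(options))
        {
            // The overridden SaveChangesAsync always throws, wrapping the reflected MySqlException.
            var ex = await Assert.ThrowsAsync<Exception>(() => context.SaveChangesAsync());
            Assert.IsType<MySql.Data.MySqlClient.MySqlException>(ex.InnerException);
        }
    }
}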

It is important to note this is testing code. I would absolutely advise against using reflection to dig out hidden constructors in application code for many reasons, not the least of which is that a change to the library could suddenly break your application!

.NET Core Serializing Files and Objects

For one of my API methods I wanted to send a file as well as object data. This is straightforward enough when the object data consists of value types: the front end adds key-value pairs to a FormData object, including the File object as one of the values, and the .NET Core back-end model class includes an IFormFile property, e.g.

// JavaScript client
let data = new FormData();       
data.append("file", file);
data.append("id", "44b6...");
return this.httpClient.fetch(`...`, { method: 'post', body: data });
// C# Model
public class MyObj {
    public Microsoft.AspNetCore.Http.IFormFile File { get; set; }
    public Guid Id { get; set; }
}
// C# Controller Method
[HttpPost]
public async Task<IActionResult> Post(MyObj request) { ... }

However, this approach fails if the model includes non-scalar properties, as in the following case, where Numbers will be null.

public class MyObj {
    public Microsoft.AspNetCore.Http.IFormFile File { get; set; }
    public Guid Id { get; set; }
    public List<int> Numbers { get; set; }
}

At this point the model binding done by .NET Core and the serialization done in JavaScript don't match. However, I found the commonly suggested techniques somewhat over-complicated. My impression is that the 'right' approach is to use a custom model binder. This seemed nice enough, but it quickly got into the details of creating and configuring value binders, when I really just wanted to use the built-in ones for handling lists.

In the end I went with a different, perhaps less flexible or DRY, but vastly simpler approach: creating a class that shadows the real object's properties and whose getters and setters do the serialization.

public class ControllerMyObj : MyObj {
    public new string Numbers {
        get {
            return base.Numbers == null ? null : Newtonsoft.Json.JsonConvert.SerializeObject(base.Numbers);
        }
        set {
            base.Numbers = value == null ? null : Newtonsoft.Json.JsonConvert.DeserializeObject<List<int>>(value);
        }
    }
}

// Controller Method
[HttpPost]
public async Task<IActionResult> Post(ControllerMyObj request) { 
   MyObj myObj = request;
   ...
}

And now the front end needs to be changed to send JSON-serialized objects. That can be done per key, or with a more generic approach as follows.

let body = new FormData();
Object.keys(data).forEach(key => {
    let value = data[key];
    if (typeof (value) === 'object')
        body.append(key, JSON.stringify(value));
    else
        body.append(key, value);
});
body.append("file", file);
// fetch ...

.NET Core Secrets

Securing sensitive configuration information is one of those things that we as developers know is important, but it so often gets deferred for more pressing commercial concerns, usually because of our confidence in the security infrastructure of our environments, e.g. firewalls and VPNs. However, in the field I'm heading into, security of personal information is important and expected to be audited, so with that commercial consideration in mind today I decided to tackle the challenge of securing configuration.

Secret Manager

There is a lot of chatter around the .NET Core Secret Manager, but there appear to be two problems with it. Firstly, it is not a trusted store: “The Secret Manager tool does not encrypt the stored secrets and should not be treated as a trusted store. It is for development purposes only. The keys and values are stored in a JSON configuration file in the user profile directory.” Secondly, and more significantly I believe, it is user-specific. That means that each user has their own credentials.

When I set up a development environment for a team I want it to be as uniform as possible. A uniform environment makes it easier for team members to help each other, and makes it easier to script tools for automation. And many development resources, such as an AWS test instance, will be shared.

Finally, this doesn't help with production. For production the documentation quoted above suggests using environment variables. Such variables are almost certainly stored in plain text somewhere – in my case in Elastic Beanstalk configurations. Storing secrets in plain text is insecure and, if nothing else, is going to be a black mark on a security audit.

Extending .NET Core Configuration

What I want is for sensitive information to be stored in an encrypted file, with the encrypted file and the key stored separately, i.e. at least one of them is not checked into the source repository. I also still want different configurations available for different environments, and it is important that the file is relatively easy to modify.

What I propose is a custom configuration source, inserted into the ConfigurationBuilder, which processes the other settings files when it is instantiated. The concept is outlined in this extension method:

public static IConfigurationBuilder AddEncryptedAndJsonFiles(this IConfigurationBuilder builder, string fileName, string basePath, bool optional, bool reloadOnChange = false)
{
    string jsonFilePath = builder.GetFileProvider().GetFileInfo(fileName).PhysicalPath;
    var encryptedConfiguration = new EncryptedConfigurationSource(jsonFilePath, basePath);
    encryptedConfiguration.UpdateStoredSettings();

    return builder
        .AddJsonFile(fileName, optional, reloadOnChange)
        .Add(encryptedConfiguration);
}

UpdateStoredSettings() will look through the appsettings file for keys of the form SENSITIVE_name. It will then add name and the corresponding value to the encrypted file and remove the entry from the appsettings file. The ConfigurationProvider returned by the IConfigurationSource.Build method will read the encrypted file and return a data dictionary of keys and values. The location of the key file is set in the appsettings and read by both the source method and the provider.
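
The shape of that source and provider is sketched below. The member names and stubbed bodies are illustrative only; the full working implementation is in the gist linked at the end of this section.

using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

// Sketch only: names and stubs are illustrative, not the complete implementation.
public class EncryptedConfigurationSource : IConfigurationSource
{
    private readonly string _jsonFilePath;
    private readonly string _basePath;

    public EncryptedConfigurationSource(string jsonFilePath, string basePath)
    {
        _jsonFilePath = jsonFilePath;
        _basePath = basePath;
    }

    // Moves SENSITIVE_* keys out of the JSON file and into the encrypted store.
    public void UpdateStoredSettings()
    {
        // read the JSON file, encrypt matching entries, rewrite both files
    }

    public IConfigurationProvider Build(IConfigurationBuilder builder)
    {
        return new EncryptedConfigurationProvider(_basePath);
    }
}

public class EncryptedConfigurationProvider : ConfigurationProvider
{
    private readonly string _basePath;

    public EncryptedConfigurationProvider(string basePath)
    {
        _basePath = basePath;
    }

    public override void Load()
    {
        // Decrypt the settings file (the key lives outside the repository) and expose
        // the flat key/value pairs to the configuration system.
        foreach (var pair in DecryptSettingsFile(_basePath))
            Data[pair.Key] = pair.Value;
    }

    private IDictionary<string, string> DecryptSettingsFile(string basePath)
    {
        // placeholder: the real implementation reads and decrypts the file
        return new Dictionary<string, string>();
    }
}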

The extension method above will allow a simple replacement of AddJsonFile with AddEncryptedAndJsonFiles leaving Startup like this:

var builder = new ConfigurationBuilder()
    .SetBasePath(configBasePath)
    .AddEncryptedAndJsonFiles("appsettings.json", configBasePath, optional: true, reloadOnChange: true)
    .AddEncryptedAndJsonFiles($"appsettings.{env.EnvironmentName}.json", configBasePath, optional: true)
    .AddEnvironmentVariables();
Configuration = builder.Build();

Implementation

The implementation requires three parts, as is standard for configuration providers:

  • a ConfigurationProvider which writes the properties into the dictionary used by consumers of configuration;
  • an IConfigurationSource which is the factory for ConfigurationProvider and where I opted to put the pre-processing method; and
  • an extension method for convenience.

The implementation uses AES for encryption. I considered deliberately using a slower algorithm, but had trouble finding documentation and examples specific to .NET Core for symmetric encryption (as opposed to password hashing, which is where those algorithms tend to be used).
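
For reference, symmetric encryption with AES in .NET Core looks roughly like the following. This is a minimal sketch under my own naming: key management (generating the key, storing it outside the repository and locating it at runtime) is deliberately left out.

using System;
using System.IO;
using System.Security.Cryptography;

public static class SettingsCrypto
{
    public static byte[] Encrypt(string plainText, byte[] key)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = key;
            aes.GenerateIV();
            using (var encryptor = aes.CreateEncryptor())
            using (var ms = new MemoryStream())
            {
                // Prepend the IV so Decrypt can recover it.
                ms.Write(aes.IV, 0, aes.IV.Length);
                using (var cs = new CryptoStream(ms, encryptor, CryptoStreamMode.Write))
                using (var writer = new StreamWriter(cs))
                {
                    writer.Write(plainText);
                }
                return ms.ToArray();
            }
        }
    }

    public static string Decrypt(byte[] cipherText, byte[] key)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = key;
            // Recover the IV from the front of the payload.
            var iv = new byte[aes.BlockSize / 8];
            Array.Copy(cipherText, iv, iv.Length);
            aes.IV = iv;
            using (var decryptor = aes.CreateDecryptor())
            using (var ms = new MemoryStream(cipherText, iv.Length, cipherText.Length - iv.Length))
            using (var cs = new CryptoStream(ms, decryptor, CryptoStreamMode.Read))
            using (var reader = new StreamReader(cs))
            {
                return reader.ReadToEnd();
            }
        }
    }
}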

Unlike appsettings.json, the encrypted settings are stored in a single flat object, with each key being the one used for configuration lookups, e.g. configuration["parent:child"]. If a matching setting is found it overwrites the old one, allowing settings to be updated repeatedly.

One delightful problem I had was that the default IFileProvider refused to resolve paths above the base path, making it impossible to use for a relative path pointing outside the repository. As a result I had to pass in the base path, which feels like something of a hack.

A gist with the full source code can be found here.

Perils of AddDbContext

The following snippet is the suggested approach to injecting an Entity Framework context in ASP.NET Core, taken from the docs page on Dependency Injection in ASP.NET Core:

services.AddDbContext<SqlContext>(options => 
    options.UseMySql(Configuration.GetConnectionString("DefaultConnection")));

This means that whenever you have a constructor that takes a SqlContext parameter, the runtime will provide an instance of SqlContext without the developer having to write new anywhere.
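
For example, a consuming controller might simply declare the dependency (UsersController here is a hypothetical name):

// Hypothetical controller (using Microsoft.AspNetCore.Mvc): SqlContext arrives via constructor injection.
public class UsersController : Controller
{
    private readonly SqlContext _context;

    public UsersController(SqlContext context)
    {
        _context = context;
    }

    // actions use _context; no 'new SqlContext(...)' anywhere
}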

Dependency injection in ASP.NET Core comes in three flavours: Transient, meaning a new object (e.g. SqlContext) is created each time a constructor requires the dependency; Scoped, meaning the same object is used for all dependencies of that type during the current web request; and Singleton, where only one instance of the object is ever created. The default for AddDbContext is Scoped, meaning that during a web request all classes will be accessing the same SqlContext.
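
For reference, the three lifetimes correspond to the following registrations (IMyService and MyService are placeholder names):

// Placeholder registrations illustrating the three lifetimes.
services.AddTransient<IMyService, MyService>(); // new instance every time it is resolved
services.AddScoped<IMyService, MyService>();    // one instance per web request
services.AddSingleton<IMyService, MyService>(); // one instance for the application lifetime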

When Entity Framework fetches a record from the database it caches it, so if the application requests it again the result comes from the cache. If someone else updated the database during the request, their change won't be seen by the first caller. For short, stateless web requests this seems like a reasonable optimization: while technically the data you're seeing is inconsistent, it was valid a few milliseconds ago, so any permissions that might have since been revoked were valid then, and if you try to write anything then optimistic concurrency will alert you to a problem.

Problems

However, care must be taken, as it is very easy to end up getting cached values when that wasn't intended.

If the dependent object is a singleton, then it will receive a context instance when the object is first created and hold onto that context until the server recycles. During that time anything it reads will be cached in the context and any subsequent changes to the database ignored. This is particularly problematic behind load-balanced web servers, where the singletons on two different servers will be unaware of the changes the other has made.
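
A minimal sketch of that failure mode, with hypothetical UserService, User and Users names:

// Anti-pattern sketch: a singleton capturing what was meant to be a per-request SqlContext.
public class UserService
{
    private readonly SqlContext _context; // captured once; lives until the server recycles

    public UserService(SqlContext context)
    {
        _context = context;
    }

    public User GetUser(Guid id)
    {
        // Find() returns the already-tracked instance once the context has seen this key,
        // so an update made in the database after the first call is never observed here.
        return _context.Users.Find(id);
    }
}

// services.AddSingleton<UserService>(); // this registration is what creates the problem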

This approach is also quite opaque. Compared to, say, a using statement, the lifetime of the context is not obvious from the code, and neither are the side effects that might occur among the many classes using that context within a given web request. This is true of any object with scoped lifetime, and is why I tend to avoid scoped registrations.

While not really a problem with its usage, injecting a concrete class rather than an interface doesn't really follow the basic pattern of dependency injection. It is not terribly difficult to create an interface for the context, but as noted earlier, that is not what the documentation shows. I assume the reason for this is that the testability problem with injecting a concrete context has been removed by the introduction of DbContextOptionsBuilder.UseInMemoryDatabase().

A Solution

Frankly, I prefer more control and transparency, so I've returned to the time-honored tradition of having a data factory so that my classes can create a database context when they want to. The factory uses the IoC container to create contexts on demand.

public interface ISqlContextFactory { SqlContext NewContext { get; } }

public class SqlContextFactory : ISqlContextFactory
{
    private Func<SqlContext> _getSqlContext;
    public SqlContextFactory(Func<SqlContext> getSqlContext)
    {
        _getSqlContext = getSqlContext;
    }
    public SqlContext NewContext { get { return _getSqlContext(); } }
}
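
Consuming classes then take the factory as a dependency and control the context lifetime explicitly. The following is a hypothetical consumer (ReportService and the Users DbSet are my own illustrative names):

using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Hypothetical consumer: the context lifetime is visible at the call site.
public class ReportService
{
    private readonly ISqlContextFactory _contextFactory;

    public ReportService(ISqlContextFactory contextFactory)
    {
        _contextFactory = contextFactory;
    }

    public async Task<int> CountUsersAsync()
    {
        using (var context = _contextFactory.NewContext)
        {
            return await context.Users.CountAsync();
        }
    }
}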

This could be done by passing the IoC container into the factory, but I chose to pass the factory a function which generates the contexts, leaving options configuration in Startup.

Edit 16-March-2017: The original code caused problems with the MySQL connector, similar to those discussed under Connection reuse here. The following code has been updated so the options are set for every request rather than just once at startup, and this appears to have fixed the issue.

services.AddSingleton<ISqlContextFactory, SqlContextFactory>(provider => {

    string mysqlConnStr = Configuration.GetConnectionString("Mysql");

    return new Data.SqlContextFactory(() => {
        var options = new DbContextOptionsBuilder<Data.SqlContext>();
        options.UseMySql(mysqlConnStr);
        return new Data.SqlContext(options.Options);
    });
});

By removing AddDbContext we have broken migrations, and fixing this requires a slightly unsavory use of statics. The dotnet ef command-line tool seems to run the Startup class but does not use the dependency injector, so instead I created a small class just for the migrations to use, and set its static connection string in Startup.ConfigureServices() so that the migration class doesn't have to repeat the configuration-loading code.

using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Infrastructure;

/// This is ONLY for use by dotnet command line tool
public class SqlContextMigrationsTarget : IDbContextFactory<SqlContext>
{
    public static string ConnectionString;
    private DbContextOptionsBuilder<SqlContext> _builder = new DbContextOptionsBuilder<SqlContext>();

    public SqlContextMigrationsTarget() {}

    public SqlContext Create(DbContextFactoryOptions options)
    {
        _builder.UseMySql(ConnectionString);
        return new SqlContext(_builder.Options);
    }
}

// and in Startup.ConfigureServices()
SqlContextMigrationsTarget.ConnectionString = Configuration.GetConnectionString("Mysql");