Auth0 Mock

Auth0 is a well-known authentication-as-a-service provider. Its custom database connection option allows organizations to reference their own database, which is very useful if you want to store your user information alongside your business data and maintain referential integrity between them using foreign key constraints. You can do this in Auth0 by setting up a connection that accesses your hosted database (with appropriate firewall restrictions!) to add, update, and remove users.

A challenge with this is that each new environment requires a new database and Auth0 setup. This is particularly difficult if that environment is a developer’s machine that isn’t reachable from the internet (due to firewalls/NAT). One option is for each developer to have their own cloud database, but that gets expensive quickly, and adds unrealistic latency to database calls from their machine, making development more difficult.

I was faced with this problem while building integration tests using Auth0 and .NET Core, and opted to create a mock object.

Implementation

The top level interface for Auth0 in C# is IManagementApiClient. This consists of a number of client interface properties, and it’s these that I found most appropriate to mock using Moq. This leads to a basic structure as follows:

using System;
using System.Threading.Tasks;
using Auth0.Core;
using Auth0.Core.Collections;
using Auth0.Core.Http;
using Auth0.ManagementApi;
using Auth0.ManagementApi.Clients;
using Auth0.ManagementApi.Models;
using Moq;

public class Auth0Mock : IManagementApiClient
{
  Mock<IUsersClient> _usersClient = new Mock<IUsersClient>();
  Mock<ITicketsClient> _ticketsClient = new Mock<ITicketsClient>();

  public Auth0Mock()
  {
    // setup for _usersClient and _ticketsClient methods
  }

  public IUsersClient Users => _usersClient.Object;
  public ITicketsClient Tickets => _ticketsClient.Object;

  public IBlacklistedTokensClient BlacklistedTokens => throw new NotImplementedException();
  // etc. for ClientGrants, Clients, Connections, DeviceCredentials, EmailProvider, Jobs, Logs, ResourceServers, Rules, Stats, TenantSettings, UserBlocks
  public ApiInfo GetLastApiInfo()
  {
    throw new NotImplementedException();
  }
}

In this project only a small number of Auth0 methods were used (something I expect would be true for most projects), so only a few Auth0 client methods actually needed to be mocked. However it is quite important, for integration testing, that these methods replicate the key behaviours of Auth0, including writing to a database, and storing user metadata (which isn’t always in the database). To support these, the mock class includes some custom SQL, and a small cache, which are used by the mocked methods. The following code illustrates this using two methods. They are set up in the constructor, and implemented in separate methods.

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using System.Text.RegularExpressions;
using Dapper;

private string _sql;

// local cache storing information that our sql table doesn't
private Dictionary<string, User> _users = new Dictionary<string, User>();

public Auth0Mock(/* injection for _sql connection string */)
{
  _usersClient.Setup(s => s.CreateAsync(It.IsAny<UserCreateRequest>())).Returns<UserCreateRequest>((req) => CreateAsync(req));
  _usersClient.Setup(s => s.DeleteAsync(It.IsAny<string>())).Returns<string>((id) => DeleteAsync(id));
}

private async Task<User> CreateAsync(UserCreateRequest request)
{
  int userId = 0;
  using (var conn = new SqlConnection(_sql))
  {
    var rows = await conn.QueryAsync(@"INSERT INTO [MyUserTable] ...", new { ... });
    userId = (int)rows.Single().userId;
  }

  var user = new Auth0.Core.User
  {
    AppMetadata = request.AppMetadata,
    Email = request.Email,
    FirstName = request.FirstName,
    LastName = request.LastName,
    UserId = "auth0|" + userId
  };
  _users[user.UserId] = user;
  return user;
}

private async Task DeleteAsync(string id)
{
  var match = Regex.Match(id, @"auth0\|(.+)");
  string userId = match.Groups[1].Value;

  using (var conn = new SqlConnection(_sql))
    await conn.ExecuteAsync(@"DELETE FROM [MyUserTable] ...", new { userId });

  if(_users.ContainsKey(id))
    _users.Remove(id);
}

Being a mock object, it has limitations. For instance, in this example the cache only includes users added via CreateAsync, not all the users in the test database. However where these limitations lie depends entirely on your testing priorities, as the sophistication of the mock is up to you.

One downside to this approach is that Moq doesn’t support optional parameters, so the signatures for some methods can get quite onerous:

_usersClient.Setup(s => s.GetAllAsync(0, 100, null, null, null, null, null, It.IsAny<string>(), "v2"))
  .Returns<int?, int?, bool?, string, string, string, bool?, string, string>((i1, i2, b3, s4, s5, s6, b7, q, s9) => GetAllAsync(i1, i2, b3, s4, s5, s6, b7, q, s9));

private Task<IPagedList<User>> GetAllAsync(int? page, int? perPage, bool? includeTotals, string sort, string connection, string fields, bool? includeFields, string query, string searchEngine)
{
  // regex to match query and fetch from SQL and/or _users cache
}

Authorization

The Auth0 mock class provides authentication, but not authorization, and it would be nice if any integration tests could also check authorization policies. The run-time system is expecting to process a cookie or token on each request and turn that into a UserPrincipal with a set of claims. Therefore our tests must also populate the UserPrincipal, and do so before authorization is checked.

For this we need a piece of middleware that goes into the pipeline before authorization (which is part of UseMvc()). My approach was to place the call to UseAuthentication() into a virtual method in Startup and override that method in the test’s Startup:

public class TestStartup : Startup
{
  protected override void SetAuthenticationMiddleware(IApplicationBuilder app)
  {
    app.UseMiddleware<TestAuthentication>();
  }
  
  protected override void SetAuthenticationService(IServiceCollection services)
  {
    // This is here to get expected responses on Authorize failures.
    // Authentication outcomes (user /claims) will be set via TestAuthentication middleware,
    // hence there are no token settings.
    services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme).AddJwtBearer();
  }
}

The middleware, TestAuthentication, remembers the last user that was set. It must be registered as a singleton with the dependency-injection framework so that the user is remembered between service calls. Testing code can set the user at any time by calling SetUser().
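As a sketch of that wiring (the service-resolution calls here are assumptions about how your test harness exposes the server and client):

```csharp
// In TestStartup's service configuration: register the middleware as a
// singleton so the user set via SetUser() persists between requests.
services.AddSingleton<TestAuthentication>();

// In a test: resolve the same singleton instance, set the current user,
// then exercise the API with the TestServer's client.
var auth = _server.Host.Services.GetRequiredService<TestAuthentication>();
auth.SetUser("42", "Admin");
var response = await _client.GetAsync("/api/things");
```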

When a request is made, TestAuthentication's InvokeAsync method applies claims based on that user. These claims will be processed against policies in the normal way so that Authorize attributes work as intended.

public class TestAuthentication : IMiddleware
{
  private string _userId;
  private string _roleName;

  public async Task InvokeAsync(HttpContext context, RequestDelegate next)
  {
    if (!string.IsNullOrEmpty(_userId))
    {
      // the authentication type makes the identity count as authenticated
      var identity = new ClaimsIdentity(new List<Claim>
      {
        new Claim("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier", "auth0|" + _userId),
        new Claim("http://myuri/", $"Role:{_roleName}")
      }, "TestAuthentication");

      var principal = new ClaimsPrincipal(identity);
      context.User = principal;
    }
    await next(context);
  }

  public void SetUser(string userId, string roleName)
  {
    _userId = userId;
    _roleName = roleName;
  }
}

With this combination we are able to successfully mock Auth0 while retaining our ability to work with our database, test non-Auth0 functionality, and test authorization.

Sharing Test Dependencies with Startup

An issue I’ve had while developing integration tests in .NET Core is sharing information between my TestContext and the Startup class.

The documented approach looks something like this:

var hostBuilder = new WebHostBuilder().UseStartup<Startup>();
_server = new TestServer(hostBuilder);

The problem is that Startup is called from deep within new TestServer, making it impossible to pass a reference in from the calling context. This is particularly a problem with integration tests on an API, where we need an HttpClient made from the TestServer instance in order to call API methods.

_client = _server.CreateClient();

Dependency Injection into Startup

What I hadn’t originally appreciated is that the Startup class accepts dependencies defined by the host. Therefore anything already configured in services, which is the container for ASP.NET’s dependency injection system, is available for injection into Startup.

For instance, to pass a reference to the current TestContext we register the current instance as a singleton before calling UseStartup:

var hostBuilder = new WebHostBuilder()
  .ConfigureServices(s => { s.AddSingleton(this); })
  .UseStartup<Startup>();

Now the TestContext parameter in the following Startup class will be populated:

public class Startup {
  private TestContext _ctx;
  public Startup(IConfiguration config, TestContext ctx) {
     _ctx = ctx;
  }
...

Passing a Shared Object

A more cohesive approach is to place mutual dependencies in another class and make it available via much the same approach. The following is an example allowing any class access to the TestServer’s client.

public interface ITestDependencies {
  public TestContext Context {get;}
  // also various Mock objects...
}

public class TestDependencies : ITestDependencies {
  public TestContext Context {get; private set;}

  public TestDependencies(TestContext ctx) {
    Context = ctx;
  }
}

public class Startup {
  private readonly ITestDependencies _testDependencies;
  public Startup(IConfiguration configuration, ITestDependencies testDependencies) {
    _testDependencies = testDependencies;
  }
  // other methods - use _testDependencies.Context.Client
}

public class TestContext {
  public HttpClient Client {get; private set;}
  private readonly TestServer _server;

  public TestContext() {
    var builder = new WebHostBuilder()
      .ConfigureServices((IServiceCollection services) => {
        services.AddSingleton(typeof(ITestDependencies), new TestDependencies(this));
      })
      .UseStartup<Startup>();
    _server = new TestServer(builder);
    Client = _server.CreateClient();
  }
}

Hangfire

As part of my application I wanted to run a background service. In some fantasy future this might run as a separate process on another machine, scaling independently of the API server, so the service would naturally be isolated in its own class. For now I just needed something that would run scheduled jobs and be initialized during the Startup methods. The most popular solution for this problem seems to be a library called Hangfire which has had ASP.NET Core support since v1.6.0 (v1.6.16 at the time of writing).

Hangfire is backed by a database, so part of the setup involves selecting a database connector. There are two options for MySql, but the link for Hangfire.MySql returns a 404, so I opted for Hangfire.MySqlStorage. I was able to get the basics of Hangfire working with this connector, although I did encounter some problems, notably that the Recurring Jobs dashboard page causes MySql exceptions and doesn’t load. One factor in this may be that, with Hangfire.MySqlStorage as well as Pomelo.EntityFrameworkCore.MySql, I have references to different definitions of various MySql.Data.* classes in multiple assemblies. But as it currently works for my purposes, I haven’t pursued those errors further.

The other decision around the database is whether to share with the application database or use a separate schema. I opted for the latter to avoid any complications with my migration and test data code.
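In practice the separate schema amounts to nothing more than a second connection string. A hypothetical appsettings.json (names chosen to match the Configuration["ConnectionStrings:HangfireMySql"] lookup used later) might look like:

```json
{
  "ConnectionStrings": {
    "DefaultMySql": "Server=localhost;Database=myapp;Uid=app;Pwd=<secret>;",
    "HangfireMySql": "Server=localhost;Database=myapp_hangfire;Uid=app;Pwd=<secret>;"
  }
}
```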

With that, we present the code. Firstly the .proj file:

<PackageReference Include="Hangfire" Version="1.6.*" />
<PackageReference Include="Hangfire.Autofac" Version="2.3.*" />
<PackageReference Include="Hangfire.MySqlStorage" Version="1.1.0-alpha" />

And then the startup functions. The first is called from ConfigureServices:

protected virtual void AddHangfireService(IServiceCollection services)
{
    services.AddHangfire(options =>
    {
        options.UseStorage(new Hangfire.MySql.MySqlStorage(
            Configuration["ConnectionStrings:HangfireMySql"],
            new Hangfire.MySql.MySqlStorageOptions
            {
                TransactionIsolationLevel = System.Data.IsolationLevel.ReadCommitted,
                QueuePollInterval = TimeSpan.FromSeconds(60),
                JobExpirationCheckInterval = TimeSpan.FromHours(1),
                CountersAggregateInterval = TimeSpan.FromMinutes(5),
                PrepareSchemaIfNecessary = true,
                DashboardJobListLimit = 50000,
                TransactionTimeout = TimeSpan.FromMinutes(1),
            }));
        options.UseAutofacActivator(this.IocContainer);
    });
}

and the second from Configure:

protected virtual void ConfigureHangfire(IApplicationBuilder app)
{
    app.UseHangfireDashboard();
    app.UseHangfireServer();

    RecurringJob.AddOrUpdate<Domain.Notification.INotifier>(
        "cbe-api-notification",
        notifier => notifier.Rollup(DateTime.UtcNow.AddDays(-1)),
        Cron.Daily(15) // 15:00 UTC - i.e. 3am NZST, 1am AEST
    );
}

This runs my job daily at 1500 UTC, which is the middle of the night from my perspective.

One aspect that Hangfire does very well is integrate with dependency injection frameworks. I have used Autofac, and you can see in the code above that nowhere have I had to construct the class for the notifier variable; the interface parameter INotifier suffices. The integration with Autofac is established by options.UseAutofacActivator(this.IocContainer); in the first code block. At the time UseAutofacActivator is called this.IocContainer is still null, but it doesn’t appear to be used until after Autofac is set up, which happens very soon thereafter.

Profiling .NET Core

Some of my application requests are running slowly and I need an overview of what is taking so long, so I turned to the internet to find a profiler. With .NET Core still relatively new I expected that finding mature profilers would be challenging. In addition .NET (in general) has thrown a curve-ball in the direction of profilers in the last few years with the use of async.

Using await in a method causes the compiler to generate a state machine that splits the function into many parts and fills stack traces with MoveNext() calls. To be useful, a profiler needs to link these pieces of state machine, which I believe could be running on different threads, back together so the developer can understand what it is waiting for.

The Field

The only profiler that seemed to handle async was ANTS Performance 9.6. I initially found its results quite counter-intuitive until I changed the timing options drop-down to wall-clock time. Then it became much clearer from the call tree where the delays were. However it didn’t seem to load the source code despite PDB files being in place, and it was also the most expensive tool I evaluated.

The best free tool, in my opinion, was CodeTrack which provides a reasonable timeline view to enable navigation of the calls, but doesn’t have any in-built async handling.

A similar function was provided by dotTrace 2017.2 (EAP3). dotTrace also seems to be able to handle a few async cases, combining calls from the same source with async or cont, but for most cases it didn’t link them together.

There are also light profilers, intended more for monitoring. MiniProfiler seems tailored for full MVC apps, and I couldn’t get it to produce output in my view-less API project. Prefix didn’t seem to work at all, as noted by other commenters on their website, which may be related to my using Core 1.1.

Finally, I should note that I do not have Visual Studio 2017, so I don’t know what its profiler is like.

.Net Core Serializing File and Objects

For one of my API methods I wanted to send a file as well as object data. This is straightforward enough when the object data consists of value types: the front end adds key-value pairs to a FormData object, including the File object as one of the values; and the .NET Core back-end model object includes an IFormFile. e.g.

// JavaScript client
let data = new FormData();       
data.append("file", file);
data.append("id", "44b6...");
return this.httpClient.fetch(`...`, { method: 'post', body: data });
// C# Model
public class MyObj {
    public Microsoft.AspNetCore.Http.IFormFile File { get; set; }
    public Guid Id { get; set; }
}
// C# Controller Method
[HttpPost]
public async Task<IActionResult> Post(MyObj request) { ... }

However this approach fails if the model includes objects as in the following case where Numbers will be null.

public class MyObj {
    public Microsoft.AspNetCore.Http.IFormFile File { get; set; }
    public Guid Id { get; set; }
    public List<int> Numbers { get; set; }
}

At this point the model deserialization in .NET Core and the serialization done in JavaScript don’t match. However I found the suggested techniques somewhat over-complicated. My impression is the ‘right’ approach is to use a custom Model Binder. This seemed nice enough, but then got into the details of needing to create and configure value binders, when I really just wanted to use some built-in ones for handling lists.

In the end I went with a different, perhaps less flexible or DRY, but vastly simpler approach: creating objects that shadowed the real object and whose get/set did the serialization.

public class ControllerMyObj : MyObj {
    public new string Numbers {
        get {
            return base.Numbers == null ? null : Newtonsoft.Json.JsonConvert.SerializeObject(base.Numbers);
        }
        set {
            base.Numbers = Newtonsoft.Json.JsonConvert.DeserializeObject<List<int>>(value);
        }
    }
}

// Controller Method
[HttpPost]
public async Task<IActionResult> Post(ControllerMyObj request) { 
   MyObj myObj = request;
   ...
}

And now the front-end needs to be changed to send JSON serialized objects. That can be done specifically by key or using a more generic approach as follows.

let body = new FormData();
Object.keys(data).forEach(key => {
    let value = data[key];
    if (typeof (value) === 'object')
        body.append(key, JSON.stringify(value));
    else
        body.append(key, value);
});
body.append("file", file);
// fetch ...

.NET Core Code Coverage

I haven’t written anything for a while because, frankly, I’m past the platform R&D stages of my current application and just churning out features, and so far I haven’t found much to inspire me to write about. After digging through my code looking at things I’d done, one area I thought may be interesting to readers is getting (free) code coverage in .NET Core.

OpenCover

The open source tool of choice for code coverage seems to be OpenCover. This comes as a nice zip file which can be extracted anywhere. Getting set up for .NET Core was mostly a case of following various instructions online, and there was just one gotcha: the MSBuild DebugType must be full, which is typically not the case for .NET Core where the goal is deployment to multiple operating systems. To get around this my coverage script overwrites the .proj file before running and puts portable back when it is done.
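The property in question is DebugType; the script simply toggles it between the two values, e.g.:

```xml
<PropertyGroup>
  <!-- 'portable' is the .NET Core default; OpenCover needs 'full' -->
  <DebugType>full</DebugType>
</PropertyGroup>
```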

The script runs the dotnet executable from the assembly folder, meaning the assemblies aren’t directly specified in the script. The graphical output of the coverage is put together using ReportGenerator, which I have deployed inside my report output folder.

Here is a cut-down version of my Powershell script:

Push-Location

# change portable to full for projects
$csprojFiles = gci [Repository-Path] -Recurse -Include *.csproj
$csprojFiles | %{
     (Get-Content $_ | ForEach  { $_ -replace 'portable', 'full' }) | 
     Set-Content $_
}

# Setup filter to exclude classes with no methods
$domainNsToInclude = @("MyNamespace.Auth.*", "MyNamespace.Data.*")
# Combine [assembly] with namespaces
$domainFilter = '+[AssemblyPrefix.Domain]' + ($domainNsToInclude -join ' +[AssemblyPrefix.Domain]')
$filter = "+[AssemblyPrefix.Api]* $domainFilter"

# Integration Test Project
$integrationOutput = "[output-path]\Integration.xml"
cd "D:\Code\RepositoryRoot\test\integration"
dotnet build
[open-cover-path]\OpenCover.Console.exe `
    -register:user `
    -oldStyle `
    "-target:C:\Program Files\dotnet\dotnet.exe" `
    "-targetargs:test" `
    "-filter:$filter" `
    "-output:$integrationOutput" `
    -skipautoprops

# Generate Report
$reportFolder = "[output-path]\ReportFolder"
[report-generator-path]\ReportGenerator.exe `
    "-reports:$integrationOutput" `
    "-targetdir:$reportFolder"

# restore portable in projects
$csprojFiles | %{
     (Get-Content $_ | ForEach  { $_ -replace 'full', 'portable' }) |
     Set-Content $_
}

Pop-Location

The end result after opening the index.html is something like this (looks like I need to work on that branch coverage!):
[screenshot: coverage report]

ASP.NET Core Authentication

One of the challenges I had early in developing my current project was getting authentication set up nicely. My back-end is an API running in .NET Core, and my general impression is that ASP.NET Core’s support for API use cases is somewhat weaker than for MVC applications.

ASP.NET Core’s default transport for authentication context still seems to be via cookies. This was quite surprising as my impression of the industry is that, between their complexity (from which it is easy to make security mistakes) and recent EU rules, cookies were on their way out. ASP.NET Core also introduced Identity for authentication, but the use of ViewModel in examples indicates that is targeted towards an MVC application.

My preference was to use JSON Web Tokens (JWTs) sent as bearer tokens in the authorization header of an HTTP request. I also wanted to use authorization attributes, like [Authorize("PolicyName")], to enforce security policy on the API controllers.

Validation and Authorization

.NET Core has support for validating JWTs via the System.IdentityModel.Tokens.Jwt package. Applying this requires something like the following in the Startup.Configure method:

JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
app.UseJwtBearerAuthentication(new JwtBearerOptions()
{
    Authority = Configuration["AuthorityUrl"],
    TokenValidationParameters = new TokenValidationParameters() { ValidateAudience = false },
    RequireHttpsMetadata = true,
    AutomaticAuthenticate = true,
    Events = new JwtBearerEvents { OnTokenValidated = IocContainer.Resolve<Auth.IValidatedTokenHandling>().AddUserClaimsToContext },
});

The recommended approach to authorization in ASP.NET Core is to use claims and policies. To that end the code above responds to the OnTokenValidated event and sends it to a method that queries the user and adds claims based on information about the user.

public async Task AddUserClaimsToContext(TokenValidatedContext context) 
{
    var claims = new List<Claim>();

    // JWT subject is the userid
    var sub = context.Ticket.Principal.FindFirst("sub")?.Value;
    if(sub != null)
    {
        var user = await _users.FindById(Guid.Parse(sub));
        if(user != null)
        {
            if(user.UserVerification > 0)
                claims.Add(new Claim("MustBeValidatedUser", "true", ClaimValueTypes.Boolean));
        }
    }
    var claimsIdentity = context.Ticket.Principal.Identity as ClaimsIdentity;
    claimsIdentity.AddClaims(claims);
}

Finally the policies themselves must be defined, typically in the Startup.ConfigureServices method:

mvc.AddAuthorization(options => {
    options.AddPolicy("MustBeValidatedUser", policy => policy.RequireClaim(Auth.ClaimDefinitions.MustBeValidatedUser, "true"));                   
});

Generating Tokens

.NET Core does not have support for generating JWTs. For this it recommends IdentityServer4.

IdentityServer4 is intended to be a fully fledged authentication server supporting the many flows of OAuth2 and Open ID Connect. For my purposes I only required username and password validation, so in many respects IdentityServer4 was overkill, but given lack of alternatives for generating JWTs, I forged ahead with it anyway.

It is worth noting my solution deviates from the norm. IdentityServer seems predicated on the idea that the authentication service is a standalone server, microservice style. Given the early stage of development I was at, having another server seemed like an annoyance, so I opted to have the authentication service as part of the API server. Really the only problem with this was it obscured the distinction between the ‘client’ (the JWT validation and authorization) and the ‘server’ (IdentityServer4) meaning it perhaps took a little longer than I’d have preferred to understand my authentication and authorization solution.

Using identity server is trivial: one line in Startup.Configure, app.UseIdentityServer();. Set up, even for a basic solution, is a little more complex, and I will admit that to this day I do not fully understand scopes and their implications.

Supporting the server involves defining various resources in Startup. The scopes referenced in the Configure method end up in the scopes field in the JWT payload.

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using System.IdentityModel.Tokens.Jwt;
using Microsoft​.AspNetCore​.Authentication​.JwtBearer;
using Microsoft.IdentityModel.Tokens;

public virtual IServiceProvider ConfigureServices(IServiceCollection services)
{
    services.AddIdentityServer()
        .AddInMemoryIdentityResources(Auth.IdentityServerConfig.GetIdentityResources())
        .AddInMemoryApiResources(Auth.IdentityServerConfig.GetApiResources())
        .AddInMemoryClients(Auth.IdentityServerConfig.GetClients())
        .AddTemporarySigningCredential();
    // remaining registrations elided; the method builds and returns the service provider
}
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory, IApplicationLifetime appLifetime)
{
    app.UseIdentityServer();
    // Configure authorization in the API to parse and validate JWT bearer tokens
    JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
    app.UseJwtBearerAuthentication(GetJwtBearerOptions());
    app.AllowScopes(new[] {
        IdentityServer4.IdentityServerConstants.StandardScopes.OpenId,
        IdentityServer4.IdentityServerConstants.StandardScopes.Profile,
        Auth.IdentityServerConfig.MY_API_SCOPE
    });
}

The configurations referenced in ConfigureServices link to a static class with a similar structure to that from the quick starts.
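A sketch of what that static class might look like, loosely following the IdentityServer4 quick starts; the client id, secret, and scope values here are assumptions, not the real configuration:

```csharp
using System.Collections.Generic;
using IdentityServer4.Models;

public static class IdentityServerConfig
{
    public const string MY_API_SCOPE = "myapi";

    public static IEnumerable<IdentityResource> GetIdentityResources() =>
        new List<IdentityResource>
        {
            new IdentityResources.OpenId(),
            new IdentityResources.Profile()
        };

    public static IEnumerable<ApiResource> GetApiResources() =>
        new List<ApiResource> { new ApiResource(MY_API_SCOPE, "My API") };

    public static IEnumerable<Client> GetClients() =>
        new List<Client>
        {
            new Client
            {
                ClientId = "my-client",
                // username/password validation only, per the text above
                AllowedGrantTypes = GrantTypes.ResourceOwnerPassword,
                ClientSecrets = { new Secret("secret".Sha256()) },
                AllowedScopes = { "openid", "profile", MY_API_SCOPE }
            }
        };
}
```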

Testing

The final challenge with this setup was running integration tests with ASP.NET Core’s TestServer. The difficulty was that the authentication process would try to make a web request to the authentication server URL (e.g. http://localhost:5000). But because TestServer is not a real server listening on a port, no authentication response would be received.

To resolve this an additional option was added to the JwtBearerOptions during Startup only for the integration tests. This class intercepts the authentication request and copies it to the TestServer’s client instance (using a static, which I’m not proud of). This is all illustrated below.

options.BackchannelHttpHandler = new RedirectToTestServerHandler();

public class RedirectToTestServerHandler : System.Net.Http.HttpClientHandler
{
    ///<summary>Change URL requests made to the server to use the TestServer.HttpClient rather than a custom one</summary>
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        HttpRequestMessage copy = new HttpRequestMessage(request.Method, request.RequestUri);
        foreach (var header in request.Headers)
            copy.Headers.Add(header.Key, header.Value);
        copy.Content = request.Content;

        Serilog.Log.Information("Intercepted request to {uri}", request.RequestUri);
        HttpResponseMessage result = TestContext.Instance.Client.SendAsync(copy, cancellationToken).GetAwaiter().GetResult();
        return Task.FromResult(result);
    }
}