Authenticating Azure B2C in ASP.NET Core

Generally I prefer the holistic approach of .NET, as opposed to the small-core plus ‘lots of libraries that haven’t been tested together’ approach of other ecosystems, as it tends to provide a more predictable platform. However, one area where I struggle with Microsoft’s approach is authentication. The .NET Core documentation makes it clear they want you to use Identity, and everything else is a second-class citizen. But Identity, with its database-backed roles, seems like an all-or-nothing proposition, and overkill for a basic solution that simply asks ‘who are you?’.

What I want is a signed token with identifying information. I’m using Razor Pages, so this is a postback environment, and I’d like it to be stateless, which means the user-agent needs to hold the credentials, which is usually done using cookies. Cookies make me a little nervous since the GDPR rules came in; however, consent isn’t required for strictly necessary cookies such as these.

Strictly necessary cookies — These cookies are essential for you to browse the website and use its features, such as accessing secure areas of the site

To comply with the regulations governing cookies under the GDPR and the ePrivacy Directive you must: Receive users’ consent before you use any cookies except strictly necessary cookies.

Choosing an Authentication Grant

Azure B2C is an authorization server supporting OAuth2 as defined in RFC 6749. RFC 6749 defines four roles. In this case two are obvious: the resource-owner is the end-user and the authorization-server is Azure B2C. The distinction between the other two roles is more subtle.

This is a Razor Pages application so the logic for requesting resources resides on the web-server making the web-server the client. The web-server is also the resource server, as it is where the protected resources reside. Assuming a classic 3-tier architecture, we could say the presentation layer is the client, while the domain and store layers are the resource. In practice, the authorization will be checked at the presentation layer which will return a different presentation if authorization fails.

Regardless, we have a client that can keep secrets. This allows us to use the default (and more secure) OAuth2 grant, Authorization Code.

Authorization Code Grant with AzureB2C

Azure B2C needs its own Active Directory instance. Azure calls this a tenant, and it’s known by two identifiers: a domain and a GUID. Following the steps in the Create B2C Tenant tutorial will create that instance, and the domain name and GUID will be displayed in the Azure Directory + subscription filter.

Authorization Code grant requires a client id and secret. The client id tells the authorization server which client is requesting access (on behalf of the user). The client secret is used as a password when the client directly communicates with the authorization server. Azure B2C calls these the Application ID and App Key respectively, and these are set in the Applications area of the Azure B2C blade in Azure Portal.

If you want an access token (as opposed to just an ID token), it is also important to add API Access. This is done in the Azure Portal under B2C by setting the App ID URL (typically to api), then going to API Access, pressing Add, and selecting the application from the top drop-down and everything from the second. This will add a scope of https://tenant-name.onmicrosoft.com/api/user_impersonation

ASP.NET Core

ASP.NET Core’s documentation for authentication would benefit from looking beyond Identity: explaining how authentication works (i.e. the different schemes and providers), and providing information on using OpenIdConnect or JwtBearer, two very common approaches. The best resource I can find at present is the AspNetCore source code, which includes a lot of samples under the /src/Security path. In this case, I’ve worked from the OpenIdConnectSample project.

The second challenge is configuration. Some documentation suggests you get application information from App Registrations, however Azure Portal currently indicates this isn’t fully supported, and it’s the same information that comes from the Azure AD B2C – Applications blade. The terminology in that blade is a little confusing as it refers to the client ID as Application ID, and the tenant ID varies depending on which Active Directory you allow your application users to come from. The most common case is to use the directory you created earlier, so the tenant value will be your domain.
The following configuration, with values from Azure, goes into the root level of the appsettings.json.

"AzureAdB2C": {
  "ClientId": "ApplicationID from Azure AD B2C - Applications"
  "ClientSecret": "Key from Azure AD B2C - Applications"
  "Domain": "xxx.onmicrosoft.com (from Directory + subscription filter)",
  "SignUpSignInPolicyId": "Policy name from Azure AD B2C - User flows (policies)",
  "Tenant": "Tenant Name (first part of URL from Directory + subscription filter)",
  "TenantId": "TenantID Guid (from Directory + subscription filter)"
}

This configuration is loaded by the following class

public class AzureAdB2COptions
{
  public string Authority => $"https://{Tenant}.b2clogin.com/tfp/{TenantId}/{SignUpSignInPolicyId}/v2.0/";
  public string ClientId { get; set; }
  public string ClientSecret { get; set; }
  public string Scope => $"https://{Tenant}.onmicrosoft.com/api/user_impersonation";
  public string SignUpSignInPolicyId { get; set; }
  public string Tenant { get; set; }
  public string TenantId { get; set; }
}

Finally, to include this in your ASP.NET Core application, it needs to be configured in Startup.

// in ConfigureServices(IServiceCollection services)
services.AddAuthentication(sharedOptions =>
{
  sharedOptions.DefaultAuthenticateScheme = CookieAuthenticationDefaults.AuthenticationScheme;
  sharedOptions.DefaultSignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
  sharedOptions.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
})
.AddCookie()
.AddOpenIdConnect(options =>
{
  var b2cOptions = new AzureAdB2COptions();
  Configuration.Bind("AzureAdB2C", b2cOptions);

  options.Authority = b2cOptions.Authority;
  options.ClientId = b2cOptions.ClientId;
  options.ClientSecret = b2cOptions.ClientSecret;
  options.ResponseType = OpenIdConnectResponseType.Code;
  options.Scope.Add(b2cOptions.Scope);
});

// in Configure(IApplicationBuilder app, IWebHostEnvironment env) before app.UseEndpoints()
app.UseAuthentication();
app.UseAuthorization();

ASP.NET Core Inject All Dependencies in the Assembly

ASP.NET Core would very much like you to use the built-in dependency injection system rather than a substitute like Autofac:

The built-in service container is meant to serve the needs of the framework and most consumer apps. We recommend using the built-in container unless you need a specific feature that it doesn’t support.

While I think I’d struggle without “Func support for lazy initialization”, the feature I want most is assembly-wide service registration. This means that instead of adding a new services.AddTransient() call for every class, I can add a new interface and class pair and be confident it will be added to the dependency container.
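Func support, for what it’s worth, can be approximated manually in the built-in container by registering the factory yourself. A sketch, using a hypothetical service pair:

```csharp
// IReportBuilder/ReportBuilder are hypothetical, for illustration only.
services.AddTransient<IReportBuilder, ReportBuilder>();

// Register Func<IReportBuilder> explicitly so consumers can take the
// factory as a dependency and defer (lazily trigger) resolution.
services.AddTransient<Func<IReportBuilder>>(
    sp => () => sp.GetRequiredService<IReportBuilder>());
```

The downside is that this needs one extra registration per service, which is exactly the kind of boilerplate the assembly-wide approach below is trying to avoid.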

My solution to this is to find all the classes in my assembly which have an interface name that is exactly the class name preceded by a capital I, and register them as transients.

One further twist is that some services need to be registered manually, and thus may need to be removed from the collection of automatically added services. They may also need a different lifetime. To support this the AddService method removes any existing registrations for the interface type before creating the new one. It also takes the implementation type as a factory allowing transient, scoped, and singleton implementations in the one method.

public virtual void RegisterServices(IServiceCollection services)
{
  var assy = Assembly.GetCallingAssembly();
  var types = assy.GetTypes().Where(t => t.IsClass && !t.IsAbstract && !t.IsGenericType && !t.IsNested);
  foreach(var type in types)
  {
    var iface = type.GetInterface("I" + type.Name);
    if (iface != null && iface.Assembly.FullName == assy.FullName)
      AddService(services, iface, (_) => Activator.CreateInstance(type), ServiceLifetime.Transient);
  }

  // register services that take configuration data
  var fileStorage = new FileStorage(Configuration.GetConnectionString("..."));
  AddService(services, typeof(IFileStorage), (_) => fileStorage, ServiceLifetime.Singleton);
}

public void AddService(IServiceCollection services, Type tInterface, Func<IServiceProvider, object> factory, ServiceLifetime lifetime)
{
  var existing = services.SingleOrDefault(s => s.ServiceType.FullName == tInterface.FullName);
  if (existing != null)
    services.Remove(existing);

  services.Add(new ServiceDescriptor(tInterface, factory, lifetime));
}

Creating a .NET Core PDF Library

I’ve been working towards hosting a website for my musical compositions and one thing I wanted to do is to add text and images to my PDFs to indicate the music is a preview, i.e. a watermark.

There are a great many existing PDF libraries out there, but I opted to build something myself. This would be a poor economic decision smacking of severe NIH-syndrome if it were done in a business setting. However, this was for a personal project, meaning cost was a factor, and the solutions out there for .NET either come at considerable cost (which I can understand, having now spent time with the spec), have hard-to-judge quality, or are ports from other languages that don’t take advantage of .NET features. Finally, it has been quite some time since I wrote a lexer and parser, so it was a nice exercise.

The library, in the state it is in, is available at GitHub. There is no NuGet package thus far, so using it requires cloning the repo and then following one of the examples from it. The classes created focus on loading and saving PDFs and working with the objects found directly in the document. Once it comes to manipulating the contents of the page, any user must (at present) understand the format being used (i.e. sections 8 and 9 of the PDF spec 1.7).

Taking a first look at the PDF format was quite interesting. Its syntax is based on PostScript, so for instance dictionaries are surrounded by double-angle-brackets. It structures items as objects, which can be referenced-from or embedded-in objects that use them. Binary objects, like images, are typically stored within compressed streams.
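For a flavour of the syntax, here is a small hand-written example (illustrative, not taken from a real file): object 3 is a page-tree node whose dictionary references object 4.

```
3 0 obj                  % object number 3, generation 0
<< /Type /Pages          % a dictionary, delimited by double-angle-brackets
   /Kids [4 0 R]         % an array holding a reference to object 4
   /Count 1
>>
endobj
```

Elsewhere in the file, `4 0 R` would resolve to the object declared as `4 0 obj ... endobj`.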

I look forward to putting this library into practice, and maybe it will find some uses for other people too.

Rate Limited Async Loop

A recent project included some modest load testing. For this we created a small console application to hit our API over HTTPS. A key metric in load testing is the number of requests an endpoint can handle per second, so it’s useful to be able to control and configure the rate at which requests are made.

This in itself is not difficult: a basic sleep wait of duration 1/requests-per-sec will achieve this. However we had an additional constraint that called for a slightly more complex solution.

The application uses Auth0, an authentication-as-a-service provider, and it rate limits use of its API. Exceeding the rate results in failed HTTP requests, and if frequent enough, can result in users being blocked. Furthermore, it is a remote and relatively slow API, with round-trip times in the order of 3 seconds (i.e. fetching 100 users serially would take 5 minutes), so it’s important that we access it concurrently, up to our limit. Additionally, the token received from calling it is cachable until its expiry, and if we can get the token from our cache then we want to skip any sleep-wait in order to minimize running time.

This leads to the goal: to maximize the number of concurrent requests made to an API up to a fixed number of requests per second; and to use cached data (and therefore not use a request) where possible. To solve this I want a rate-limited concurrent loop.

Implementation

A little searching on the internet turned up either extensive libraries that implemented a different paradigm, like Reactive, or things that didn’t quite meet my requirements. I therefore – having taken the appropriate remedies to treat potential Not-Invented-Here Syndrome – went ahead and put something together myself.

public class RateLimitedTaskProperties
{
    public bool IgnoreRateLimit { get; set; }
}

public static async Task RateLimitedLoop<T>(int perSec, IEnumerable<T> enumerable, Func<T, Task<RateLimitedTaskProperties>> action)
{
    int periodMs = 1000 / perSec;
    var tasks = new List<Task<RateLimitedTaskProperties>>();
    foreach(T item in enumerable)
    {
        T capture = item;
        Task<RateLimitedTaskProperties> task = action(capture);
        tasks.Add(task);

        if (task.IsCompleted && task.Result.IgnoreRateLimit)
            continue;

        System.Threading.Thread.Sleep(periodMs);
    }

    await Task.WhenAll(tasks);
}

The loop starts a new task every periodMs. Concurrency is achieved by using tasks, which are non-blocking, and waiting for their completion outside the loop with await Task.WhenAll(tasks). The case where something has been retrieved from a cache is handled by the task returning synchronously and setting the IgnoreRateLimit flag. This combination causes the loop to skip the sleep and move straight onto triggering the next task.

The following is an example of its use, where MyOperation() is a method that returns a flag indicating whether or not it performed a fetch from the rate-limited API.

const int tokenReqsPerSec = 5;
await RateLimitedLoop(tokenReqsPerSec, items, async(item) =>
{
    bool requiredFetch = await item.MyOperation();
    // don't rate limit if I got it from the cache (fetch wasn't required)
    return new RateLimitedTaskProperties { IgnoreRateLimit = !requiredFetch };
});

Auth0 Mock

Auth0 is a well-known authentication-as-a-service provider. Its database connection storage option allows organizations to reference a custom database, which is very useful if you want to store your user information with your business data and maintain integrity between those using foreign key constraints. You can do this in Auth0 by setting up a connection that accesses your hosted database (with appropriate firewall restrictions!) to add, update, and remove users.

A challenge with this is that each new environment requires a new database and Auth0 setup. This is particularly difficult if that environment is a developer’s machine and isn’t accessible to a connection string from the internet (due to Firewalls/NAT). One option is for each developer to have their own cloud database, but that gets expensive quickly, and adds unrealistic latency to database calls from their machine, making development more difficult.

I was faced with this problem while building integration tests using Auth0 and .NET Core, and opted to create a mock object.

Implementation

The top level interface for Auth0 in C# is IManagementApiClient. This consists of a number of client interface properties, and it’s these that I found most appropriate to mock using Moq. This leads to a basic structure as follows:

using Auth0.Core;
using Auth0.Core.Collections;
using Auth0.Core.Http;
using Auth0.ManagementApi;
using Auth0.ManagementApi.Clients;
using Auth0.ManagementApi.Models;
using Moq;

public class Auth0Mock : IManagementApiClient
{
  Mock<IUsersClient> _usersClient = new Mock<IUsersClient>();
  Mock<ITicketsClient> _ticketsClient = new Mock<ITicketsClient>();

  public Auth0Mock()
  {
    // setup for _usersClient and _ticketsClient methods
  }

  public IUsersClient Users => _usersClient.Object;
  public ITicketsClient Tickets => _ticketsClient.Object;

  public IBlacklistedTokensClient BlacklistedTokens => throw new NotImplementedException();
  // etc. for ClientGrants, Clients, Connections, DeviceCredentials,  EmailProvider, Jobs, Logs, ResourceServers, Rules, Stats, TenantSettings, UserBlocks
  public ApiInfo GetLastApiInfo()
  {
    throw new NotImplementedException();
  }
}

In this project only a small number of Auth0 methods were used (something I expect would be true for most projects), so only a few Auth0 client methods actually needed to be mocked. However it is quite important, for integration testing, that these methods replicate the key behaviours of Auth0, including writing to a database, and storing user metadata (which isn’t always in the database). To support these, the mock class includes some custom SQL, and a small cache, which are used by the mocked methods. The following code illustrates this using two methods. They are set up in the constructor, and implemented in separate methods.

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Text.RegularExpressions;
using Dapper;

private string _sql;

// local cache storing information that our sql table doesn't
private Dictionary<string, User> _users = new Dictionary<string, User>();

public Auth0Mock(/* injection for _sql connection string */)
{
  _usersClient.Setup(s => s.CreateAsync(It.IsAny<UserCreateRequest>())).Returns((UserCreateRequest req) => CreateAsync(req));
  _usersClient.Setup(s => s.DeleteAsync(It.IsAny<string>())).Returns((string id) => DeleteAsync(id));
}

private async Task<User> CreateAsync(UserCreateRequest request)
{
  int userId = 0;
  using (var conn = new SqlConnection(_sql))
  {
    var rows = await conn.QueryAsync(@"INSERT INTO [MyUserTable] ...", new { ... });
    userId = (int)rows.Single().userId;
  }

  var user = new Auth0.Core.User
  {
    AppMetadata = request.AppMetadata,
    Email = request.Email,
    FirstName = request.FirstName,
    LastName = request.LastName,
    UserId = "auth0|" + userId
  };
  _users[user.UserId] = user;
  return user;
}

private async Task DeleteAsync(string id)
{
  var match = Regex.Match(id, @"auth0\|(.+)");
  string userId = match.Groups[1].Value;

  using (var conn = new SqlConnection(_sql))
    await conn.ExecuteAsync(@"DELETE FROM [MyUserTable] ...", new { userId });

  if(_users.ContainsKey(id))
    _users.Remove(id);
}

Being a mock object there are limitations. For instance, in this example the cache only includes users added via CreateAsync, not all the users in the test database. However, where these limitations lie depends entirely on your testing priorities, as the sophistication of the mock is up to you.

One downside to this approach is that Moq doesn’t support optional parameters, so the signatures for some methods can get quite onerous:

_usersClient.Setup(s => s.GetAllAsync(0, 100, null, null, null, null, null, It.IsAny<string>(), "v2"))
  .Returns((int? i1, int? i2, bool? b3, string s4, string s5, string s6, bool? b7, string q, string s9) => GetAllAsync(i1, i2, b3, s4, s5, s6, b7, q, s9));

private Task<IPagedList<User>> GetAllAsync(int? page, int? perPage, bool? includeTotals, string sort, string connection, string fields, bool? includeFields, string query, string searchEngine)
{
  // regex to match query and fetch from SQL and/or _users cache
}

Authorization

The Auth0 mock class provides authentication, but not authorization, and it would be nice if any integration tests could also check authorization policies. The run-time system is expecting to process a cookie or token on each request and turn that into a UserPrincipal with a set of claims. Therefore our tests must also populate the UserPrincipal, and do so before authorization is checked.

For this we need a piece of middleware that goes into the pipeline before authorization (which is part of UseMvc()). My approach was to place the call to UseAuthentication() into a virtual method in Startup and override that method in the test’s Startup:

public class TestStartup : Startup
{
  protected override void SetAuthenticationMiddleware(IApplicationBuilder app)
  {
    app.UseMiddleware<TestAuthentication>();
  }
  
  protected override void SetAuthenticationService(IServiceCollection services)
  {
    // This is here to get expected responses on Authorize failures.
    // Authentication outcomes (user/claims) will be set via TestAuthentication middleware,
    // hence there are no token settings.
    services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme).AddJwtBearer();
  }
}
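For context, the production Startup needs matching virtual methods for TestStartup to override. A minimal sketch (the hook names follow the overrides above; the base bodies are assumptions):

```csharp
public class Startup
{
  public void ConfigureServices(IServiceCollection services)
  {
    SetAuthenticationService(services);
    services.AddMvc();
    // ... other registrations
  }

  public void Configure(IApplicationBuilder app)
  {
    SetAuthenticationMiddleware(app);   // before UseMvc so claims exist when authorization runs
    app.UseMvc();
  }

  protected virtual void SetAuthenticationMiddleware(IApplicationBuilder app)
  {
    app.UseAuthentication();
  }

  protected virtual void SetAuthenticationService(IServiceCollection services)
  {
    // production token validation setup goes here
  }
}
```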

The middleware, TestAuthentication, remembers the last user that was set. It must be registered as a singleton with the dependency-injection framework so that the user is remembered between service calls. Testing code can set the user at any time by calling SetUser().

When a request is made, TestAuthentication’s InvokeAsync method applies claims based on that user. These claims will be processed as policies in the normal way so that Authorize attributes work as intended.

public class TestAuthentication : IMiddleware
{
  private string _userId;
  private string _roleName;

  public async Task InvokeAsync(HttpContext context, RequestDelegate next)
  {
    if (!string.IsNullOrEmpty(_userId))
    {
      var identity = new ClaimsIdentity(new List<Claim>
      {
        new Claim("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier", "auth0|" + _userId),
        new Claim("http://myuri/", $"Role:{_roleName}")
      });

      var principal = new ClaimsPrincipal(identity);
      context.User = principal;
    }
    await next(context);
  }

  public void SetUser(string userId, string roleName)
  {
    _userId = userId;
    _roleName = roleName;
  }
}
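A sketch of the singleton registration and of driving it from a test (the endpoint and values are illustrative):

```csharp
// in TestStartup.ConfigureServices - IMiddleware implementations are resolved
// from the container, and a singleton keeps one shared instance between
// the test code and the request pipeline
services.AddSingleton<TestAuthentication>();

// in a test
var auth = _server.Host.Services.GetRequiredService<TestAuthentication>();
auth.SetUser("1234", "Admin");
var response = await _client.GetAsync("/api/secure");  // request now carries those claims
```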

With this combination we are able to successfully mock Auth0 while retaining our ability to work with our database, test non-Auth0 functionality, and test authorization.

Sharing Test Dependencies with Startup

An issue I’ve had while developing integration tests in .NET Core is sharing information between my TestContext and the Startup class.

The documented approach looks something like this:

var hostBuilder = new WebHostBuilder().UseStartup<Startup>();
_server = new TestServer(hostBuilder);

The problem is that Startup is called from deep within new TestServer, making it impossible to pass a reference from the calling context. This is particularly a problem with integration tests on an API, where we need an HttpClient made from the TestServer instance in order to call API methods.

_client = _server.CreateClient();

Dependency Injection into Startup

What I hadn’t originally appreciated is that the Startup class accepts dependencies defined by the host. Therefore anything already configured in the services, which is the container for ASP.NET’s dependency injection system, is available for injection into Startup.

For instance, to pass a reference to the current TestContext we register the current instance as a singleton before calling UseStartup:

var hostBuilder = new WebHostBuilder()
  .ConfigureServices(s => { s.AddSingleton(this); })
  .UseStartup<Startup>();

Now the TestContext in the following Startup class will be populated:

public class Startup {
  private TestContext _ctx;
  public Startup(IConfiguration config, TestContext ctx) {
     _ctx = ctx;
  }
...

Passing a Shared Object

A more cohesive approach is to place mutual dependencies in another class and make it available via much the same approach. The following is an example allowing any class access to the TestServer’s client.

public interface ITestDependencies {
  public TestContext Context {get;}
  // also various Mock objects...
}

public class TestDependencies : ITestDependencies {
  public TestContext Context {get; private set;}

  public TestDependencies(TestContext ctx) {
    Context = ctx;
  }
}

public class Startup {
  private readonly ITestDependencies _testDependencies;
  public Startup(IConfiguration configuration, ITestDependencies testDependencies) {
    _testDependencies = testDependencies;
  }
  // other methods - use _testDependencies.Context.Client
}

public class TestContext {
  public HttpClient Client {get; private set;}
  private readonly TestServer _server;

  public TestContext() {
    var builder = new WebHostBuilder()
      .ConfigureServices((IServiceCollection services) => {
        services.AddSingleton(typeof(ITestDependencies), new TestDependencies(this));
      })
      .UseStartup<Startup>();
    _server = new TestServer(builder);
    Client = _server.CreateClient();
  }
}

Hangfire

As part of my application I wanted to run a background service. In some fantasy future this might run as a separate process on another machine, scaling independently of the API server, so the service would naturally be isolated in its own class. For now I just needed something that would run scheduled jobs and be initialized during the Startup methods. The most popular solution for this problem seems to be a library called Hangfire which has had ASP.NET Core support since v1.6.0 (v1.6.16 at the time of writing).

Hangfire is backed by a database, so part of the setup involves selecting a database connector. There are two options for MySql, but the link for Hangfire.MySql goes 404, so I opted for Hangfire.MySqlStorage. I was able to get the basics of Hangfire working with this connector, although I did encounter some problems, notably that the Recurring Jobs dashboard page causes MySql exceptions and doesn’t load. One factor in this may be that, with Hangfire.MySqlStorage as well as Pomelo.EntityFrameworkCore.MySql, I have references to different definitions of various MySql.Data.* classes in multiple assemblies. But as it currently works for my purposes, I haven’t pursued those errors further.

The other decision around the database is whether to share with the application database or use a separate schema. I opted for the latter to avoid any complications with my migration and test data code.
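In appsettings.json the separate schema just means a second connection string; the key matches the lookup in the code below (server and database names are illustrative placeholders):

```
"ConnectionStrings": {
  "Default": "server=...;database=myapp;...",
  "HangfireMySql": "server=...;database=hangfire;..."
}
```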

With that, we present the code. Firstly the .proj file:

<PackageReference Include="Hangfire" Version="1.6.*" />
<PackageReference Include="Hangfire.Autofac" Version="2.3.*" />
<PackageReference Include="Hangfire.MySqlStorage" Version="1.1.0-alpha" />

And then the startup functions. The first is called from ConfigureServices:

protected virtual void AddHangfireService(IServiceCollection services)
{
    services.AddHangfire(options =>
    {
        options.UseStorage(new Hangfire.MySql.MySqlStorage(
            Configuration["ConnectionStrings:HangfireMySql"],
            new Hangfire.MySql.MySqlStorageOptions
            {
                TransactionIsolationLevel = System.Data.IsolationLevel.ReadCommitted,
                QueuePollInterval = TimeSpan.FromSeconds(60),
                JobExpirationCheckInterval = TimeSpan.FromHours(1),
                CountersAggregateInterval = TimeSpan.FromMinutes(5),
                PrepareSchemaIfNecessary = true,
                DashboardJobListLimit = 50000,
                TransactionTimeout = TimeSpan.FromMinutes(1),
            }));
        options.UseAutofacActivator(this.IocContainer);
    });
}

and the second from Configure:

protected virtual void ConfigureHangfire(IApplicationBuilder app)
{
    app.UseHangfireDashboard();
    app.UseHangfireServer();

    RecurringJob.AddOrUpdate<Domain.Notification.INotifier>(
        "cbe-api-notification",
        notifier => notifier.Rollup(DateTime.UtcNow.AddDays(-1)),
        Cron.Daily(15) // 15:00 UTC - i.e. 3am NZST, 1am AEST
    );
}

This runs my job daily at 1500 UTC, which is the middle of the night from my perspective.

One aspect that Hangfire does very well is integrate with dependency injection frameworks. I have used Autofac, and you can see in the code above that nowhere have I had to construct the class for the notifier variable, instead the interface parameter INotifier suffices. The integration with Autofac is established in options.UseAutofacActivator(this.IocContainer); in the first code block. At the time UseAutofacActivator is called this.IocContainer is still null, but it doesn’t appear to be used until after Autofac is setup, which happens very soon thereafter.