Angular, Jest, and async NgOnInit

It is common to fetch data in Angular’s ngOnInit method with the expectation that the page will update when the data is returned and bound. In TypeScript it is nice to use async methods to improve readability, which leads to components with async ngOnInit() methods.

This presents a challenge when working with a testing framework, as we need to wait for promises to be resolved before our component is set up and tests can continue. To handle this, Angular’s TestBed includes async support, in particular the whenStable() method for waiting for promises to resolve. This is useful, but it means having a whenStable()/detectChanges() pair in every test method. Instead, I found that whenStable() runs fine in the beforeEach() method where the fixture is created (via TestBed.createComponent()). I’m not clear on why this works when it’s not in an Angular async() (as opposed to a TypeScript async) block; my best guess is that TestBed.createComponent() automatically creates the NgZone that whenStable() needs.

As the beforeEach is calling whenStable(), it now needs to be (TypeScript) async, and Jest will wait for the resulting promise to finish before it considers beforeEach to be done. With this approach the unit tests no longer need to be wrapped in an (Angular test) async.

The following code illustrates the full pattern. It also uses ts-jest’s mocked() helper for type-safe Jest mocks. The code is all in TypeScript and uses (TypeScript) async for handling promises. It is organized so each inner describe block (e.g. ‘with specificMockDataset’) covers a specific test data set.

// --- MyComponent.Component.ts ---
// ... usual imports ...
export class MyComponent implements OnInit {
	thing: any;
	constructor(private myService: MyService) {}
	async ngOnInit() {
		this.thing = await this.myService.getFromServer();
	}
}

// --- MyComponent.Component.test.ts ---
import { ComponentFixture, TestBed } from '@angular/core/testing';
import { By } from '@angular/platform-browser';
import { mocked } from 'ts-jest/utils';
import { MyComponent } from 'app/components/my.component';
import { MyService } from 'app/services/my.service';
import { AppModule } from 'app/app.module';

const mockService = mocked(MyService, true);

describe('MyComponent component', () => {
    let fixture: ComponentFixture<MyComponent>;

    beforeEach(async () => {
        await TestBed.configureTestingModule({
            declarations: [MyComponent],
            providers: [mockService],
            imports: [AppModule]
        }).compileComponents();
    });

    describe('with specificMockDataset', () => {
        beforeEach(async () => {
            // Mock the async service method getFromServer(). In theory this is
            // the same as .mockResolvedValue() - but that fails and this works
            mockService.prototype.getFromServer.mockImplementation(() => { return Promise.resolve({ field: "value" }); });

            fixture = TestBed.createComponent(MyComponent);
            fixture.detectChanges(); // calls ngOnInit()
            await fixture.whenStable(); // waits for promises to complete
            fixture.detectChanges(); // detect changes made to fields during ngOnInit
        });

        it('should have options', () => {
            const selectClient: HTMLSelectElement = fixture.debugElement.query(By.css('#someEle')).nativeElement;
            expect(selectClient.options.length).toBeGreaterThan(0);
        });
    });
});

REST and Actions

I’ve always felt there is a tension when creating RESTful APIs. REST focuses on managing the present contents of a resource, supporting the basic CRUD (create/read/update/delete) operations via HTTP verbs. However when a user is modifying something they are not simply updating state, they are executing an action. This is reflected in some architectures where we issue a command that expresses the action we want to take. Often that action is simply to update an entity, which is nicely modeled in REST via a PUT to a resource, but in other cases it is not.

One example where an action doesn’t correspond to a CRUD operation is in implementing workflow. Take a simple bug reporting system with the following state model:
In this workflow it is not possible to change between some pairs of states, for instance I can’t go from Closed to Testing, or Open to/from Testing.
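The state diagram itself isn’t reproduced here, but the workflow can be sketched as an explicit transition table. Note this is an illustrative sketch: only Open, Testing, and Closed are named above, so the ‘Assigned’ state and the exact set of legal transitions are assumptions.

```typescript
// Hypothetical transition table for the bug workflow described above.
// "Assigned" is an assumed intermediate state; the post only names
// Open, Testing, and Closed.
type BugState = "Open" | "Assigned" | "Testing" | "Closed";

const allowedTransitions: Record<BugState, BugState[]> = {
  Open: ["Assigned"],              // cannot go straight to Testing
  Assigned: ["Open", "Testing"],
  Testing: ["Assigned", "Closed"], // cannot go back to Open directly
  Closed: ["Open"],                // reopening is allowed; Closed -> Testing is not
};

function canTransition(from: BugState, to: BugState): boolean {
  return allowedTransitions[from].includes(to);
}
```

A table like this makes the workflow rules data rather than scattered conditionals, which keeps the validation in one place.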

In a purely RESTful implementation, I would PUT my bug to the API and it would check whether the state has changed and then check the legality of that workflow transition. I would also PUT my bug to the API if a few fields needed changing. This means we have two different actions being represented by the same endpoint, and the intent of the user has been lost. Having different intentions normally tells us that each intent should be considered separately: they are likely to be triggered by different elements in the UI; their validation will be different; and from a code-design perspective we usually seek cohesive designs where each code element has a single responsibility. Furthermore, a single endpoint allows both actions to be performed in one call, which makes the handling logic considerably more complex. So handling multiple actions behind a single call flies in the face of normal design principles and creates complexity (which is software-developer speak for ‘creates bugs’).

I believe the normal solution to this problem is to introduce sub-paths on the resource. e.g. PUT resource/:id/statechange*. At this point the purity of our RESTful implementation has been violated: specifically we cannot update all the fields in our resource with the underlying PUT. Thus we have introduced a method that’s not a verb-resource combination, and we need to know which fields can be updated via the main end-point and which ones can only be updated via sub-paths.
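As a sketch of how the split keeps each intent cohesive, the handler behind the plain PUT only touches editable fields, while the sub-path handler owns the workflow check. The names here (Bug, updateBugFields, changeBugState) and the transition rules are illustrative, not from any real framework:

```typescript
type BugState = "Open" | "Assigned" | "Testing" | "Closed";

interface Bug {
  id: number;
  title: string;
  state: BugState;
}

// Handler logic behind PUT /bugs/:id - updates editable fields, never the state.
function updateBugFields(bug: Bug, update: Partial<Omit<Bug, "id" | "state">>): Bug {
  return { ...bug, ...update };
}

// Handler logic behind PUT /bugs/:id/statechange - only concerned with workflow.
function changeBugState(bug: Bug, target: BugState): Bug {
  const legal: Record<BugState, BugState[]> = {
    Open: ["Assigned"],
    Assigned: ["Open", "Testing"],
    Testing: ["Assigned", "Closed"],
    Closed: ["Open"],
  };
  if (!legal[bug.state].includes(target)) {
    throw new Error(`Illegal transition ${bug.state} -> ${target}`);
  }
  return { ...bug, state: target };
}
```

Each handler now has one responsibility and one set of validation rules, matching the intent-per-endpoint argument above.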

This resolves the tension I’ve found with RESTful APIs and is a solution I’m happy with. My experience has been that being ‘purist’ about any technology usually results in more effort than it’s worth, and this mix of resource-based verbs for the standard CRUD operations and sub-paths for other actions on the resource is simple and pragmatic. Documentation will tell us what those actions are and which fields on the resource can’t be updated via the PUT.

* Some might argue POST is more correct here. I was always told PUT is idempotent while POST is not, i.e. doing a PUT multiple times will have the same outcome. As we’re always targeting the same state, I’d argue this is idempotent, although if the action has side-effects like sending notification emails then maybe POST is more accurate.

Authenticating Azure B2C in ASP.NET Core

Generally I prefer the holistic approach of .NET, as opposed to the small-core plus ‘lots of libraries that haven’t been tested together’ approach in other ecosystems as it tends to provide a more predictable platform. However one area where I struggle with Microsoft’s approach is authentication. The .NET Core documentation makes it clear they want you to use Identity, and everything else is a second-class citizen. But Identity, with database backed roles, seems like an all-or-nothing proposition, and overkill for a basic solution simply asking ‘who are you?’.

What I want is a signed token with identifying information. I’m using Razor Pages, so this is a postback environment, and I’d like it to be stateless which means the user-agent needs to hold their credentials, which is usually done using cookies. Cookies make me a little nervous since the GDPR rules have come in, however consent isn’t required for strictly necessary cookies such as these.

Strictly necessary cookies — These cookies are essential for you to browse the website and use its features, such as accessing secure areas of the site

To comply with the regulations governing cookies under the GDPR and the ePrivacy Directive you must: Receive users’ consent before you use any cookies except strictly necessary cookies.

Choosing an Authentication Grant

Azure B2C is an authorization server supporting OAuth2 as defined in RFC 6749. RFC 6749 defines four roles. In this case two are obvious: the resource-owner is the end-user and the authorization-server is Azure B2C. The distinction between the other two roles is more subtle.

This is a Razor Pages application so the logic for requesting resources resides on the web-server making the web-server the client. The web-server is also the resource server, as it is where the protected resources reside. Assuming a classic 3-tier architecture, we could say the presentation layer is the client, while the domain and store layers are the resource. In practice, the authorization will be checked at the presentation layer which will return a different presentation if authorization fails.

Regardless, we have a client that can keep secrets. This allows us to use the default (and more secure) OAuth2 grant, Authorization Code.

Authorization Code Grant with AzureB2C

Azure B2C needs its own Active Directory instance. Azure calls this a tenant, and it’s known by two identifiers: a domain and a GUID. Following the steps in the Create B2C Tenant tutorial will create that instance, and the domain name and GUID will be displayed in the Azure Directory + subscription filter.

Authorization Code grant requires a client id and secret. The client id tells the authorization server which client is requesting access (on behalf of the user). The client secret is used as a password when the client directly communicates with the authorization server. Azure B2C calls these the Application ID and App Key respectively, and these are set in the Applications area of the Azure B2C blade in Azure Portal.

If you want an access token (as opposed to just an ID token), it is also important to add API Access. This is done in the Azure Portal under B2C by setting the App ID Url (typically to api), then going to Api Access, pressing Add, and selecting the application from the top drop-down and everything from the second. This adds a scope that the application can request.


ASP.NET Core’s documentation for authentication would benefit from focusing beyond Identity, by including how authentication works (i.e. different schemes and providers), and providing information on using OpenIDConnect or JwtBearer, two very common approaches. The best resource I can find at present is the AspNetCore source code which includes a lot of samples under the /src/Security path. In this case, I’ve worked from the OpenIdConnectSample project.

The second challenge is configuration. Some documentation suggests you get application information from App Registrations, however Azure Portal currently indicates this isn’t fully supported, and it’s the same information that comes from the Azure AD B2C – Applications blade. The terminology in that blade is a little confusing as it refers to the client ID as Application ID, and the tenant ID varies depending on which Active Directory you allow your application users to come from. The most common case is to use the directory you created earlier, so the tenant value will be your domain.
The following configuration, with values from Azure, goes into the root level of the appsettings.json.

"AzureAdB2C": {
  "ClientId": "Application ID from Azure AD B2C - Applications",
  "ClientSecret": "Key from Azure AD B2C - Applications",
  "Domain": "Domain from Directory + subscription filter",
  "SignUpSignInPolicyId": "Policy name from Azure AD B2C - User flows (policies)",
  "Tenant": "Tenant Name (first part of URL from Directory + subscription filter)",
  "TenantId": "TenantID Guid (from Directory + subscription filter)"
}

This configuration is loaded by the following class:

public class AzureAdB2COptions
{
  public string Authority => $"https://{Tenant}.b2clogin.com/{TenantId}/{SignUpSignInPolicyId}/v2.0/";
  public string ClientId { get; set; }
  public string ClientSecret { get; set; }
  public string Domain { get; set; }
  public string Scope => $"https://{Tenant}";
  public string SignUpSignInPolicyId { get; set; }
  public string Tenant { get; set; }
  public string TenantId { get; set; }
}

Finally, to include this in your ASP.NET Core application, it needs to be configured in Startup.

// in ConfigureServices(IServiceCollection services)
services.AddAuthentication(sharedOptions =>
{
  sharedOptions.DefaultAuthenticateScheme = CookieAuthenticationDefaults.AuthenticationScheme;
  sharedOptions.DefaultSignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
  sharedOptions.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
})
.AddCookie()
.AddOpenIdConnect(options =>
{
  var b2cOptions = new AzureAdB2COptions();
  Configuration.Bind("AzureAdB2C", b2cOptions);

  options.Authority = b2cOptions.Authority;
  options.ClientId = b2cOptions.ClientId;
  options.ClientSecret = b2cOptions.ClientSecret;
  options.ResponseType = OpenIdConnectResponseType.Code;
});

// in Configure(IApplicationBuilder app, IWebHostEnvironment env) before app.UseEndpoints()
app.UseAuthentication();

ASP.NET Core Inject All Dependencies in the Assembly

ASP.NET Core would very much like you to use the built-in dependency injection system rather than a substitute like Autofac:

The built-in service container is meant to serve the needs of the framework and most consumer apps. We recommend using the built-in container unless you need a specific feature that it doesn’t support.

While I think I’d struggle without “Func support for lazy initialization”, the feature I want the most is assembly-wide service registration. This means instead of having to create a new services.AddTransient() for every class, I can add a new interface and class pair and be confident it will be added to the dependency container.

My solution is to find all the classes in my assembly that have an interface named exactly as the class name preceded by a capital I, and register them as transients.

One further twist is that some services need to be registered manually, and thus may need to be removed from the collection of automatically added services. They may also need a different lifetime. To support this the AddService method removes any existing registrations for the interface type before creating the new one. It also takes the implementation type as a factory allowing transient, scoped, and singleton implementations in the one method.

public virtual void RegisterServices(IServiceCollection services)
{
  var assy = Assembly.GetCallingAssembly();
  var types = assy.GetTypes().Where(t => t.IsClass && !t.IsAbstract && !t.IsGenericType && !t.IsNested);
  foreach (var type in types)
  {
    var iface = type.GetInterface("I" + type.Name);
    if (iface != null && iface.Assembly.FullName == assy.FullName)
      AddService(services, iface, (_) => Activator.CreateInstance(type), ServiceLifetime.Transient);
  }

  // register services that take configuration data
  var fileStorage = new FileStorage(Configuration.GetConnectionString("..."));
  AddService(services, typeof(IFileStorage), (_) => fileStorage, ServiceLifetime.Singleton);
}

public void AddService(IServiceCollection services, Type tInterface, Func<IServiceProvider, object> factory, ServiceLifetime lifetime)
{
  var existing = services.SingleOrDefault(s => s.ServiceType.FullName == tInterface.FullName);
  if (existing != null)
    services.Remove(existing);

  services.Add(new ServiceDescriptor(tInterface, factory, lifetime));
}

Creating a .NET Core PDF Library

I’ve been working towards hosting a website for my musical compositions and one thing I wanted to do is to add text and images to my PDFs to indicate the music is a preview, i.e. a watermark.

There are a great many existing PDF libraries out there, but I opted to build something myself. This would be a poor economic decision smacking of severe NIH-syndrome if it were done in a business setting. However, this was a personal project, meaning cost was a factor, and the solutions out there for .NET either come at considerable cost (which I can understand, having now spent time with the spec), have hard-to-judge quality, or are ports from other languages that don’t take advantage of .NET features. Finally, it has been quite some time since I wrote a lexer and parser, so it was a nice exercise.

The library, in the state it is in, is available on GitHub. There is no NuGet package thus far, so using it requires cloning the repo and then following one of the examples from it. The classes created focus on loading and saving PDFs and working with the objects found directly in the document. When it comes to manipulating the contents of a page, the user must (at present) understand the format being used (i.e. sections 8 and 9 of the PDF 1.7 spec).

Taking a first look at the PDF format was quite interesting. Its syntax is based on PostScript, so for instance dictionaries are surrounded by double angle brackets. It structures items as objects, which can be referenced from, or embedded in, the objects that use them. Binary objects, like images, are typically stored within compressed streams.
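For a flavour of that syntax, a minimal hand-written page object might look like the following (a sketch based on the spec, not output from the library; the object numbers are arbitrary):

```
1 0 obj                 % object number 1, generation 0
<< /Type /Page          % a dictionary, delimited by double angle brackets
   /Parent 2 0 R        % an indirect reference to object 2
   /Contents 4 0 R      % the page's content stream (often compressed)
>>
endobj
```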

I look forward to putting this library into practice, and maybe it will find some uses for other people too.

Institutional Knowledge

It is hard to escape the chatter around software contracting and the ‘gig economy’, but there is an aspect that isn’t raised as often: institutional knowledge. Although the term institutional might bring to mind buildings full of people in straitjackets, in reality institutional knowledge is critical to the longevity of organizations.

Wikipedia’s dry definition of institutional knowledge is: “Institutional knowledge is gained by organizations translating historical data into useful knowledge and wisdom.” That sounds very nice, but gives the impression of something well-organized and easily accessible. Reality is far more haphazard.

I think of institutional knowledge as: the ideas that were evaluated but discarded; the intricate understanding of customers through long relationships; knowing where ‘the bodies lie’ in the source code; understanding which bits of the product are important to whom, and just how important they are; and the seemingly infinite set of answers to ‘why’ questions. Put together they are the lessons learned by the organization and knowing them can prevent bad decisions and avoid wasting time and resource.

In an ideal world all this knowledge would be documented somewhere. But even if that were possible (can you really write down the nuances of an inter-personal customer relationship?), who’d have the capacity to read and digest it – in fact, who would even know relevant information is there to be read? The answer, unsurprisingly, is the people who’ve been around long enough to establish institutional knowledge.

The trouble is that in the modern economy of maximum efficiency, waste is to be discarded – well, at least the waste that is easily visible on a balance sheet – which means using the bare minimum of head-count possible. This translates into turning the spigot of contractors on and off according to the project plan. As a result, a significant percentage of the people actually doing the work, the people who understand the details, will not be working with you for long. So when a maintenance issue comes up (software never needs maintenance, right!? bytes don’t decay….) the people who could’ve zeroed in on the fix quickly are gone, and the people who are trying to fix it really don’t have a good concept of what they might consequently break. Certainly, that risk can be minimized via comprehensive test suites and documentation (assuming it’s up to date), but fundamentally, by not retaining people with detailed knowledge you’re consciously increasing your maintenance costs.

Obviously having a huge stack of people sitting around waiting for maintenance issues to appear is a cost too. However, unless the organization has opted to do something in-house that is outside their usual business activities (which one might then argue they ought to have contracted-out to a specialized organization), those people are likely to be useful on future projects. In fact, they’ll be more useful than short-term hires because they’ll have a solid understanding of company strategy, goals, and practices, and (where relevant) a better understanding of systems they might be integrating with. This is institutional knowledge adding value and ultimately saving the company money.

It is not an easy balancing act, keeping the right number of people and having them shuffle between maintenance and new projects, but it’s also nothing new: organizations have developed all kinds of management tactics to handle that.

Running a business focused on efficiency seeks to minimize immediate costs. But it does so at the expense of long-term savings, like the cost reductions to be had by having the right people able to make smart decisions for the business because they’re already on the team and can apply institutional knowledge.


Professionalism

I can’t stand the term ‘professionalism’. Whenever I hear it, my BS-detector jumps immediately to red-alert, because it’s a way of shutting people down with a power play or an appeal to tradition rather than positively engaging with them.

Unlike, say, law, where there are defined professional standards, professionalism in the software industry is highly subjective. Take for example a relatively simple aspect of development like commenting. A more traditional perspective on commenting is that every method (i.e. not just the public ones) should have comments indicating what the method does and the purposes of its parameters and return values. My concept of quality code is that clear naming, good cohesion and design will provide you that information – the ‘what’ if you will – and comments should be reserved for the ‘why’, i.e. to explain decisions, whether design-level or business-level, that are not intuitive. There are tomes on this subject and ongoing debates about the discipline of maintaining documentation versus letting the current clean code speak for itself. Regardless of where you fall in this debate, sweeping it all away by calling one approach ‘professional’ and the other not removes the opportunity for reasoned debate, for improved understanding, and for finding the most appropriate solution for the specific circumstances.

The net result of invoking ‘professionalism’ is disempowerment. If you shut down a discussion on the basis of this power play then you’ve rejected without consideration an opinion of a peer. They will likely resent this lack of consideration, and it certainly won’t encourage them to contribute further opinions. As is well documented in the literature on teamwork, it is the consideration of a combination of perspectives and ideas that enables the best solutions to be found. In effect, by invoking ‘professionalism’ you’re throwing away the investment your company is making into expensive human resources.

This doesn’t mean it should be an anarchic free-for-all. Coding standards are fine: they create consistency that allows code to be more easily understood by unfamiliar parties. There can also be other factors at play like external forces which push decisions in a certain direction, and those should be the answer to questions of “why do we do it this way?” rather than the dismissive ‘professionalism’. Standards and practices that are built on solid reasoning will stand up to scrutiny, and scrutiny of them should be welcomed because if they don’t stand up to the challenge of re-examination then there is an opportunity to make an improvement within the organization.

‘Professional’ needs to become another corporate punchline, like ‘synergy’, and that way we keep our people empowered and can keep improving our organizations.