Responsive Tables

It’s not unusual to have data which suits being displayed in a tabular format. In the web world we have for this the table element, a classical feature of HTML, and once upon a time also a misused staple of layout. Today we have far better tools for layout, but there remains a challenge with tables: how to display them as the screen-width shrinks, or to use the current parlance, how to make them responsive.

The internet offers a number of suggestions for handling tables on narrow screens, including hiding columns and using horizontal scroll. However, I’m operating under the assumption that all the table data must still be displayed, and that it should remain grouped by its data rows (which may or may not be rendered horizontally).

I considered three general approaches to a responsive tabular layout: what I will call div-flex; CSS grids; and tables.

In div-flex, each data row is rendered as a div with display: flex, and each data element is a block element within. As the screen-width reduces, the flex layout will wrap the elements onto subsequent lines. The big drawback with this approach is that all the block elements need to be sized consistently, and as it wraps it loses the context of any header row.

In CSS grids each data element is a block element, but unlike div-flex, all those elements are direct children of some container, i.e. there is no element representing the data row. As the screen-width decreases, media selectors are used to change the shape of the grid so it gradually moves from, say, 6 columns to 1 column as the page width shrinks. The biggest drawback of CSS grids is that they have no row grouping – no way of indicating various blocks belong together as the size changes. They simply aren’t designed to have a notion of row-association.

Finally we return to the table. It is semantically the preferred choice and there is a good solution out there which changes the table into a card layout when the media-width is reached. However the drawback with that solution is the need to specify column headers in CSS, something that at best separates content from layout (HTML is for content, not CSS) and, if the headers are dynamically set, is impractical.

My change is to create a small amount of content duplication as a penalty: given a table with headers, we add the header information into a hidden field in each td that is only visible once the screen width is suitably reduced. This is a lot of duplication, but most of us are using HTML generated by templates which support looping, so it’s very little extra effort on our part. The resulting HTML will look something like this (using Angular syntax):

<table class="table-responsive">
  <tr *ngFor="let shirt of shirts">
    <td><span class="hidden-label">Name</span>{{shirt.name}}</td>
    <td><span class="hidden-label">Color</span>{{shirt.color}}</td>
    <td><span class="hidden-label">Size</span>{{shirt.size}}</td>
  </tr>
</table>

And the CSS:

/* Hide the labels at full width */
table.table-responsive td > span.hidden-label { display: none; }

@media (max-width: 768px) {
    /* Collapse the table into a stack of blocks */
    table.table-responsive,
    table.table-responsive thead,
    table.table-responsive tbody,
    table.table-responsive th,
    table.table-responsive td,
    table.table-responsive tr { display: block; }

    /* Move the table headers off-screen */
    table.table-responsive thead tr { position: absolute; top: -9999px; left: -9999px; }

    table.table-responsive tr { border: 1px solid #ccc; }
    table.table-responsive td { border: 0; }

    /* Show the per-cell labels */
    table.table-responsive td > span.hidden-label { display: inline-block; width: 7em; }
}

The result is that at widths of 768px and below the table switches to a card layout, displaying each data row as a card that pairs each header label with its data value.

Midpoint Rounding

Today I had a strange issue with currencies being 0.01 off. I’ve always been wary of float/double representation of decimals so I’ve stuck to the decimal type in C# for currency.
Despite this, Math.Round(222.485m, 2) was giving me 222.48 – which is not what I learnt at school!

Turns out .NET Core defaults to something called banker’s rounding:

This kind of rounding is sometimes called round half to even or banker’s rounding. It minimizes rounding errors that result from consistently rounding a midpoint value in a single direction.

This, according to the documentation, follows IEEE Standard 754, section 4, and means that 222.485 rounds to 222.48, but 222.495 rounds to 222.50.

Thankfully, since IEEE compliance wasn’t needed in my case, the behaviour can be changed with an extra argument: Math.Round(222.485m, 2, MidpointRounding.AwayFromZero) gives my expected answer of 222.49.
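The two midpoint strategies are easy to mix up, so here is a small sketch of the logic, in TypeScript rather than C#, operating on integer thousandths of a currency unit so that binary floating-point representation doesn’t muddy the waters:

```typescript
// Round a non-negative amount in thousandths (222485 = 222.485) to hundredths
// using round-half-to-even ("banker's rounding"), as .NET Core does by default.
function roundHalfToEven(thousandths: number): number {
  const q = Math.trunc(thousandths / 10); // candidate result in hundredths
  const r = thousandths - q * 10;         // the discarded digit
  if (r < 5) return q;
  if (r > 5) return q + 1;
  return q % 2 === 0 ? q : q + 1;         // exactly halfway: round to the even neighbour
}

// The same, using round-half-away-from-zero (what we learnt at school),
// i.e. the equivalent of passing MidpointRounding.AwayFromZero.
function roundHalfAwayFromZero(thousandths: number): number {
  const q = Math.trunc(thousandths / 10);
  const r = thousandths - q * 10;
  return r >= 5 ? q + 1 : q;
}

console.log(roundHalfToEven(222485));       // 22248 -> 222.48
console.log(roundHalfAwayFromZero(222485)); // 22249 -> 222.49
console.log(roundHalfToEven(222495));       // 22250 -> 222.50
```

Working in integers makes the halfway case exact, which is precisely what the decimal type gives you in C#.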

Angular, Jest, and async NgOnInit

It is common to fetch data in Angular’s ngOnInit method with the expectation that the page will update when the data is returned and bound. In TypeScript it is nice to use async methods to improve readability, leading to components that have async ngOnInit() methods.

This presents a challenge when working with a testing framework, as we need to wait for promises to be resolved before our component is set up and tests can continue. To handle this, Angular’s TestBed includes async support, and in particular the whenStable() method for waiting for promises to resolve. This is useful, but means having a whenStable()/detectChanges() pair in every test method. Instead I found that whenStable() runs fine in the beforeEach() method where the fixture is created (via TestBed.createComponent()). I’m not clear on why this works when it’s not in an Angular async() (as opposed to a TypeScript async) block, with my best guess being that the TestBed.createComponent automatically creates the NgZone that whenStable() needs.

As the beforeEach is calling whenStable() it now needs to be (TypeScript) async and Jest will wait for the resulting promise to finish before it considers beforeEach to be done. With this approach the unit tests no longer need to be wrapped in an (Angular test) async.

The following code illustrates the full pattern, and also uses a mocking library, ts-jest. The code is all in TypeScript and uses (TypeScript) async for handling promises. It is organized so each inner describe block (e.g. ‘with specificMockDataset’) covers a specific test data set.

// --- MyComponent.Component.ts ---
// … usual imports …
export class MyComponent implements OnInit {
    async ngOnInit() {
        this.thing = await this.myService.getFromServer();
    }
}

// --- MyComponent.Component.test.ts ---
import { ComponentFixture, TestBed } from '@angular/core/testing';
import { By } from '@angular/platform-browser';
import { mocked } from 'ts-jest/utils';
import { MyComponent } from 'app/components/my.component';
import { MyService } from 'app/services/my.service';
import { AppModule } from 'app/app.module';

const mockService = mocked(MyService, true);

describe('MyComponent component', () => {
    let fixture: ComponentFixture<MyComponent>;
    beforeEach(async () => {
        await TestBed.configureTestingModule({
            declarations: [MyComponent],
            providers: [mockService],
            imports: [AppModule]
        }).compileComponents();
    });

    describe('with specificMockDataset', () => {

        beforeEach(async () => {
            mockService.prototype.getFromServer.mockImplementation(() => { return Promise.resolve({ field: "value" }); });

            fixture = TestBed.createComponent(MyComponent);
            fixture.detectChanges(); // calls ngOnInit()
            await fixture.whenStable(); // waits for promises to complete
            fixture.detectChanges(); // detect changes made to fields during ngOnInit
        });

        it('should have options', () => {
            const selectClient: HTMLSelectElement = fixture.debugElement.query(By.css('#someEle')).nativeElement;
            expect(selectClient.options.length).toBeGreaterThan(0);
        });
    });
});
Edit 23-Feb-2021:

The above approach, which worked with Angular 5, no longer seems to work with Angular 10: whenStable() never resolves, causing a timeout in the beforeEach. My solution has been to replace the whenStable with a setTimeout call which gives up the event loop, allowing the ngOnInit to run.
The beforeEach() now looks like this:

beforeEach((done) => {
    mockService.prototype.getFromServer.mockImplementation(() => Promise.resolve({ field: "value" }));
    fixture = TestBed.createComponent(MyComponent);

    fixture.detectChanges(); // calls ngOnInit()
    setTimeout(() => {
        fixture.detectChanges(); // detect changes made to fields during ngOnInit
        done(); // tell Jest the async setup is finished
    }, 1); // let the event loop run
});

REST and Actions

I’ve always felt there is a tension when creating RESTful APIs. REST focuses on managing the present contents of a resource, supporting the basic CRUD (create/read/update/delete) operations via HTTP verbs. However, when a user is modifying something they are not simply updating state, they are executing an action. This is reflected in some architectures where we issue a command that expresses the action we want to take. Often that action is simply to update an entity, which is nicely modeled in REST via a PUT to a resource, but in other cases it is not.

One example where an action doesn’t correspond to a CRUD operation is in implementing workflow. Take a simple bug reporting system with a workflow state model (shown as a state diagram in the original post). In this workflow it is not possible to change between some pairs of states: for instance, I can’t go from Closed to Testing, or between Open and Testing.

In a purely RESTful implementation, I would PUT my bug to the API and it would check whether the current state has changed and then check the legality of that transition against the workflow. I would also PUT my bug to the API if a few fields needed changing. This means we have two different actions being represented by the same endpoint, and the intent of the user has been lost. Having different intentions normally tells us that each intent should be considered separately: they are likely to be triggered by different elements in the UI; their validation will be different; and from a code-design perspective we usually seek cohesive designs where each code element has a single responsibility. Furthermore, a single endpoint allows both actions to be performed in one call, making the handling logic considerably more complex. So handling multiple actions behind a single call flies in the face of normal design principles and creates complexity (which is software-developer speak for ‘creates bugs’).

I believe the normal solution to this problem is to introduce sub-paths on the resource. e.g. PUT resource/:id/statechange*. At this point the purity of our RESTful implementation has been violated: specifically we cannot update all the fields in our resource with the underlying PUT. Thus we have introduced a method that’s not a verb-resource combination, and we need to know which fields can be updated via the main end-point and which ones can only be updated via sub-paths.
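As a sketch of what such a sub-path handler validates, consider a transition table for the bug workflow. The state names here are my assumptions (the original diagram isn’t reproduced), but the shape of the check is the point: the statechange endpoint owns this validation, and the plain PUT never touches the state field.

```typescript
// Hypothetical bug-tracker states; treat the names as placeholders.
type BugState = "Open" | "InProgress" | "Testing" | "Closed";

// Legal transitions keyed by the current state. Note there is no
// Closed -> Testing, and no transition between Open and Testing.
const legalTransitions: Record<BugState, BugState[]> = {
  Open: ["InProgress"],
  InProgress: ["Open", "Testing"],
  Testing: ["InProgress", "Closed"],
  Closed: ["Open"],
};

// The check behind PUT resource/:id/statechange.
function canTransition(from: BugState, to: BugState): boolean {
  return legalTransitions[from].includes(to);
}

console.log(canTransition("Open", "InProgress")); // true
console.log(canTransition("Closed", "Testing"));  // false
```

Keeping this table in one place also gives the UI a single source of truth for which buttons to show in each state.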

This resolves the tension I’ve found with RESTful APIs and is a solution I’m happy with. My experience has been that being ‘purist’ about any technology usually results in more effort than it’s worth, and this mix of resource-based verbs for the standard CRUD operations and sub-paths for other actions on the resource is simple and pragmatic. Documentation will tell us what those actions are and which fields on the resource can’t be updated via the PUT.

* Some might argue POST is more correct here. I was always told PUT is idempotent while POST is not, i.e. doing a PUT multiple times will have the same outcome. As we’re always targeting the same state, I’d argue this is idempotent, although if the action has side-effects like sending notification emails then maybe POST is more accurate.

Authenticating Azure B2C in ASP.NET Core

Generally I prefer the holistic approach of .NET, as opposed to the small-core plus ‘lots of libraries that haven’t been tested together’ approach in other ecosystems as it tends to provide a more predictable platform. However one area where I struggle with Microsoft’s approach is authentication. The .NET Core documentation makes it clear they want you to use Identity, and everything else is a second-class citizen. But Identity, with database backed roles, seems like an all-or-nothing proposition, and overkill for a basic solution simply asking ‘who are you?’.

What I want is a signed token with identifying information. I’m using Razor Pages, so this is a postback environment, and I’d like it to be stateless which means the user-agent needs to hold their credentials, which is usually done using cookies. Cookies make me a little nervous since the GDPR rules have come in, however consent isn’t required for strictly necessary cookies such as these.

Strictly necessary cookies — These cookies are essential for you to browse the website and use its features, such as accessing secure areas of the site

To comply with the regulations governing cookies under the GDPR and the ePrivacy Directive you must: Receive users’ consent before you use any cookies except strictly necessary cookies.

Choosing an Authentication Grant

Azure B2C is an authorization server supporting OAuth2 as defined in RFC 6749. RFC 6749 defines four roles. In this case two are obvious: the resource-owner is the end-user and the authorization-server is Azure B2C. The distinction between the other two roles is more subtle.

This is a Razor Pages application so the logic for requesting resources resides on the web-server making the web-server the client. The web-server is also the resource server, as it is where the protected resources reside. Assuming a classic 3-tier architecture, we could say the presentation layer is the client, while the domain and store layers are the resource. In practice, the authorization will be checked at the presentation layer which will return a different presentation if authorization fails.

Regardless, we have a client that can keep secrets. This allows us to use the default (and more secure) OAuth2 grant, Authorization Code.

Authorization Code Grant with AzureB2C

Azure B2C needs its own Active Directory instance. Azure calls this a tenant and it’s known by two identifiers: a domain and a GUID. Following the steps in the Create B2C Tenant tutorial will create that instance, and the domain name and GUID will be displayed in the Azure Directory + subscription filter.

Authorization Code grant requires a client id and secret. The client id tells the authorization server which client is requesting access (on behalf of the user). The client secret is used as a password when the client directly communicates with the authorization server. Azure B2C calls these the Application ID and App Key respectively, and these are set in the Applications area of the Azure B2C blade in Azure Portal.

If you want an access token (as opposed to just an ID token), it is also important to add API Access. This is done in the Azure Portal under B2C by setting the App ID Url (typically to api), then going to Api Access, pressing Add, and selecting the application from the top drop-down and everything from the second. This will add a scope to the application.


ASP.NET Core’s documentation for authentication would benefit from focusing beyond Identity, by including how authentication works (i.e. different schemes and providers), and providing information on using OpenIDConnect or JwtBearer, two very common approaches. The best resource I can find at present is the AspNetCore source code which includes a lot of samples under the /src/Security path. In this case, I’ve worked from the OpenIdConnectSample project.

The second challenge is configuration. Some documentation suggests you get application information from App Registrations, however Azure Portal currently indicates this isn’t fully supported, and it’s the same information that comes from the Azure AD B2C – Applications blade. The terminology in that blade is a little confusing as it refers to the client ID as Application ID, and the tenant ID varies depending on which Active Directory you allow your application users to come from. The most common case is to use the directory you created earlier, so the tenant value will be your domain.
The following configuration, with values from Azure, goes into the root level of the appsettings.json.

"AzureAdB2C": {
  "ClientId": "Application ID from Azure AD B2C - Applications",
  "ClientSecret": "Key from Azure AD B2C - Applications",
  "Domain": "Domain (from Directory + subscription filter)",
  "SignUpSignInPolicyId": "Policy name from Azure AD B2C - User flows (policies)",
  "Tenant": "Tenant name (first part of URL from Directory + subscription filter)",
  "TenantId": "Tenant ID GUID (from Directory + subscription filter)"
}

This configuration is loaded by the following class:

public class AzureAdB2COptions
{
  public string Authority => $"https://{Tenant}.b2clogin.com/{TenantId}/{SignUpSignInPolicyId}/v2.0/";
  public string ClientId { get; set; }
  public string ClientSecret { get; set; }
  public string Scope => $"https://{Tenant}";
  public string SignUpSignInPolicyId { get; set; }
  public string Tenant { get; set; }
  public string TenantId { get; set; }
}

Finally, to include this in your ASP.NET Core application, it needs to be configured in Startup.

// in ConfigureServices(IServiceCollection services)
services.AddAuthentication(sharedOptions =>
{
  sharedOptions.DefaultAuthenticateScheme = CookieAuthenticationDefaults.AuthenticationScheme;
  sharedOptions.DefaultSignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
  sharedOptions.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
})
.AddCookie()
.AddOpenIdConnect(options =>
{
  var b2cOptions = new AzureAdB2COptions();
  Configuration.Bind("AzureAdB2C", b2cOptions);

  options.Authority = b2cOptions.Authority;
  options.ClientId = b2cOptions.ClientId;
  options.ClientSecret = b2cOptions.ClientSecret;
  options.ResponseType = OpenIdConnectResponseType.Code;
});

// in Configure(IApplicationBuilder app, IWebHostEnvironment env) before app.UseEndpoints()
app.UseAuthentication();

ASP.NET Core Inject All Dependencies in the Assembly

ASP.NET Core would very much like you to use the built-in dependency injection system rather than a substitute like Autofac:

The built-in service container is meant to serve the needs of the framework and most consumer apps. We recommend using the built-in container unless you need a specific feature that it doesn’t support.

While I think I’d struggle without “Func<T> support for lazy initialization”, the feature I want the most is assembly-wide service registration. This means instead of having to create a new services.AddTransient<IService, Service>() for every class, I can add a new interface and class pair and be confident it will be added to the dependency container.

My solution is to find all the classes in my assembly which have an interface named exactly as the class name preceded by a capital I, and register them as transients.

One further twist is that some services need to be registered manually, and thus may need to be removed from the collection of automatically added services. They may also need a different lifetime. To support this the AddService method removes any existing registrations for the interface type before creating the new one. It also takes the implementation type as a factory allowing transient, scoped, and singleton implementations in the one method.

public virtual void RegisterServices(IServiceCollection services)
{
  var assy = Assembly.GetCallingAssembly();
  var types = assy.GetTypes().Where(t => t.IsClass && !t.IsAbstract && !t.IsGenericType && !t.IsNested);
  foreach (var type in types)
  {
    var iface = type.GetInterface("I" + type.Name);
    if (iface != null && iface.Assembly.FullName == assy.FullName)
      AddService(services, iface, (_) => Activator.CreateInstance(type), ServiceLifetime.Transient);
  }

  // register services that take configuration data
  var fileStorage = new FileStorage(Configuration.GetConnectionString("..."));
  AddService(services, typeof(IFileStorage), (_) => fileStorage, ServiceLifetime.Singleton);
}

public void AddService(IServiceCollection services, Type tInterface, Func<IServiceProvider, object> factory, ServiceLifetime lifetime)
{
  var existing = services.SingleOrDefault(s => s.ServiceType.FullName == tInterface.FullName);
  if (existing != null)
    services.Remove(existing);

  services.Add(new ServiceDescriptor(tInterface, factory, lifetime));
}

Creating a .NET Core PDF Library

I’ve been working towards hosting a website for my musical compositions and one thing I wanted to do is to add text and images to my PDFs to indicate the music is a preview, i.e. a watermark.

There are a great many existing PDF libraries out there, but I opted to build something myself. This would be a poor economic decision smacking of severe NIH-syndrome if it were done in a business setting. However, this was a personal project, meaning cost was a factor, and the solutions out there for .NET either come at considerable cost (which I can understand, having now spent time with the spec), have hard-to-judge quality, or are ports from other languages that don’t take advantage of .NET features. Finally, it has been quite some time since I wrote a lexer and parser, so it was a nice exercise.

The library, in its current state, is available on GitHub. There is no NuGet package thus far, so using it requires cloning the repo and then following one of the examples from it. The classes created focus on loading and saving PDFs and working with the objects found directly in the document. When it comes to manipulating the contents of a page, any user must (at present) understand the format being used (i.e. sections 8 and 9 of the PDF 1.7 spec).

Taking a first look at the PDF format was quite interesting. Its syntax is based on PostScript, so for instance dictionaries are surrounded by double-angle-brackets. It structures items as objects, which can be referenced-from or embedded-in objects that use them. Binary objects, like images, are typically stored within compressed streams.
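For a flavour of that syntax, here is a small hand-written sketch of a PDF object; the values are illustrative rather than taken from a real document:

```
3 0 obj                      % object number 3, generation 0
<< /Type /Page               % dictionaries are delimited by << and >>
   /Parent 2 0 R             % "2 0 R" is an indirect reference to object 2
   /MediaBox [0 0 612 792]   % arrays use square brackets
>>
endobj
```

A stream object follows its dictionary with the keywords stream … endstream, with the binary payload between them typically Flate-compressed.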

I look forward to putting this library into practice, and maybe it will find some uses for other people too.

Institutional Knowledge

It is hard to escape the chatter around software contracting and the ‘gig economy’, but there is an aspect that isn’t raised as often: institutional knowledge. Although the term institutional might bring to mind buildings full of people in straitjackets, in reality institutional knowledge is critical to the longevity of organizations.

Wikipedia’s dry definition of institutional knowledge is: “Institutional knowledge is gained by organizations translating historical data into useful knowledge and wisdom.” That sounds very nice, but gives the impression of something well-organized and easily accessible. Reality is far more haphazard.

I think of institutional knowledge as: the ideas that were evaluated but discarded; the intricate understanding of customers through long relationships; knowing where ‘the bodies lie’ in the source code; understanding which bits of the product are important to whom, and just how important they are; and the seemingly infinite set of answers to ‘why’ questions. Put together they are the lessons learned by the organization and knowing them can prevent bad decisions and avoid wasting time and resource.

In an ideal world all this knowledge would be documented somewhere. But even if that were possible (can you really write down the nuances of an inter-personal customer relationship?), who’d have the capacity to read and digest it – in fact, who would even know relevant information is there to be read? The answer, unsurprisingly, is the people who’ve been around long enough to establish institutional knowledge.

The trouble is that in the modern economy of maximum efficiency, waste is to be discarded – well, at least the waste that is easily visible on a balance sheet – which means using the bare minimum of head-count possible. This translates into turning the spigot of contractors on and off according to the project plan. As a result, a significant percentage of the people actually doing the work, the people who understand the details, will not be working with you for long. So when a maintenance issue comes up (software never needs maintenance, right? bytes don’t decay…) the people who could’ve zeroed in on the fix quickly are gone, and the people who are trying to fix it don’t have a good concept of what they might consequentially break. Certainly, that risk can be minimized via comprehensive test suites and documentation (assuming it’s up to date), but fundamentally, by not retaining people with detailed knowledge you’re consciously increasing your maintenance costs.

Obviously having a huge stack of people sitting around waiting for maintenance issues to appear is a cost too. However, unless the organization has opted to do something in-house that is outside their usual business activities (which one might then argue they ought to have contracted-out to a specialized organization), those people are likely to be useful on future projects. In fact, they’ll be more useful than short-term hires because they’ll have a solid understanding of company strategy, goals, and practices, and (where relevant) a better understanding of systems they might be integrating with. This is institutional knowledge adding value and ultimately saving the company money.

It is not an easy balancing act, keeping the right number of people and having them shuffle between maintenance and new projects, but it’s also nothing new: organizations have developed all kinds of management tactics to handle that.

Running a business focused on efficiency seeks to minimize immediate costs. But it does so at the expense of long-term savings, like the cost reductions to be had by having the right people able to make smart decisions for the business because they’re already on the team and can apply institutional knowledge.


Professionalism

I can’t stand the term ‘professionalism’. Whenever I hear it, my BS-detector jumps immediately to red-alert, because it’s a way of shutting people down with a power play or an appeal to tradition rather than positively engaging with them.

Unlike, say, law, where there are defined professional standards, professionalism in the software industry is highly subjective. Take, for example, a relatively simple aspect of development like commenting. A more traditional perspective is that every method (i.e. not just the public ones) should have comments indicating what the method does and the purposes of its parameters and return values. My concept of quality code is that clear naming, good cohesion, and good design will provide you that information – the ‘what’ if you will – and comments should be reserved for the ‘why’, i.e. to explain decisions, whether design-level or business-level, that are not intuitive. There are tomes on this subject and ongoing debates about the discipline of maintaining documentation versus letting clean code speak for itself. Regardless of where you fall in this debate, sweeping it all away by calling one approach ‘professional’ and the other not removes the opportunity for reasoned debate, for improved understanding, and for finding the solution most appropriate to the circumstances.

The net result of invoking ‘professionalism’ is disempowerment. If you shut down a discussion on the basis of this power play then you’ve rejected without consideration an opinion of a peer. They will likely resent this lack of consideration, and it certainly won’t encourage them to contribute further opinions. As is well documented in the literature on teamwork, it is the consideration of a combination of perspectives and ideas that enables the best solutions to be found. In effect, by invoking ‘professionalism’ you’re throwing away the investment your company is making into expensive human resources.

This doesn’t mean it should be an anarchic free-for-all. Coding standards are fine: they create consistency that allows code to be more easily understood by unfamiliar parties. There can also be other factors at play like external forces which push decisions in a certain direction, and those should be the answer to questions of “why do we do it this way?” rather than the dismissive ‘professionalism’. Standards and practices that are built on solid reasoning will stand up to scrutiny, and scrutiny of them should be welcomed because if they don’t stand up to the challenge of re-examination then there is an opportunity to make an improvement within the organization.

‘Professional’ needs to become another corporate punchline, like ‘synergy’, and that way we keep our people empowered and can keep improving our organizations.

Free and Open Source

I’ve never quite understood the appeal of FOSS (free and open source software) as a developer. There’s ego to be sure; it also functions as self-marketing in many cases; it may be a chance to work on something more interesting than your day job; but often it is just outright altruism in the form of sharing your solution to a hard problem.

The challenge is that once software is out there, whether as a library or an application, people may use it and even come to depend on it. This is particularly so with libraries that form parts of library ecosystems, such as Json.NET in the .NET world. At that point the software becomes ‘alive’, and like a living creature needs ongoing attention. However, for many a developer there was never any intention of becoming a parent, and instead they look for ways to pass on the child they are proud of but would rather not have to bring up (I might be stretching the analogy a little here…).

This is something that was highlighted recently when the maintainer of a heavily used library opted to retire and a subsequent volunteer allowed malicious dependencies into the library. Said maintainer raised a number of excellent points in his follow-up.

We have to eat

Commerce exists because giving things away means starving. Seriously! We’ve developed complex and highly specialized societies that enable people to focus their talents for significant gain for all of us, but we still ultimately need to fulfill basic human needs. In this context FOSS doesn’t seem rational. However, our actual contexts are more complex than that: we’ve generally met our basic needs through other means and are instead searching for self-actualization, and FOSS is one way of achieving it.

Altruism, or the appearance of it, is hardly a new phenomenon. People have been giving away the fruits of their day jobs throughout history. So what is different with FOSS, if anything?

There are two analogies that spring to mind: working pro-bono and establishing a charity. Working pro-bono implies a short-term service and doesn’t tend to have consequences after the transaction is finished, which is not analogous to putting software on the internet; establishing a charity, on the other hand, is a more permanent endeavor.

To be charitable to a great many people you have to commit to running a non-profit organization with all its regulatory and societal demands. That’s a high bar to cross and demands significant organizational skills. By comparison, FOSS allows you to achieve that kind of scale of charity without any bar to entry at all, and so perhaps the problem is the unpreparedness of the creators.

FOSS can work, commercially

Continuing the charity analogy, an alternative to establishing a charity is to donate to an existing organization who manages your generosity for you (taking a cut). This is where FOSS has been most successful, ironically, as a tool for commercial organizations.

Open source is a lovely way of increasing demand for your product by reducing the cost of its complements. If, say, I’m Microsoft, and I want to sell operating systems, then giving away for free the software that enables people to build products for said operating system is a solid business strategy. Furthermore, because Microsoft want their developers to be as happy as possible, they’ll also enable them to make their own fixes to that software (think .NET libraries) rather than be bound to the priorities and release cycle of Microsoft. On top of this, Microsoft now get the benefit of a whole lot of free development and QA. (I’ll admit I don’t understand Microsoft’s current strategy in this area with cross-platform .NET Core, as now they’re making it easier to run on their competitors’ products, whether that competitor is Linux or AWS.)

Something similar can be said for freely giving away APIs for Facebook and Google services, all of which solidify their market position and/or drive traffic to their products. It’s not quite so clear what benefits providing libraries like React brings, although I imagine one could make business cases around brand management via goodwill, or control of a complementary market.

There’s also the smaller, but at times successful, model of selling support to FOSS products, as is the approach of Red Hat. This arises because enterprises tend to demand a strong level of support and well-known life-cycle models as part of making any significant investment and they are quite happy to pay for that.

In all these cases an organization exists to tangentially profit from the existence of the charity, FOSS. Non-profits fit the charity model even more aptly, requiring some revenue to promote or maintain their charitable activities (I’ve effectively been paying $100 a year to use Firefox for quite some time now).


It’s clear that any piece of software requires ongoing support. So how can the supporter be supported?

One option is to take the F out of FOSS, and acknowledge the cost of providing the software. After all, FOSS is not just about lack of upfront cost. With open source the consumer has the option to fix or improve the software for their specific needs and not be at the mercy of a commercial organization’s release schedule. However, paying for open-source software would be a sociological change, creating a barrier to entry that hasn’t existed for several decades in the software world, which would entrench incumbents, severely inhibit the proliferation of ideas, and remove the ability of people to lift themselves up without the provision of a great deal of capital.

Even if it were possible to make FOSS cost something, it wouldn’t deal with the underlying problem that sometimes people just want to move on. In commercial practice this means ‘selling the business’, but FOSS encompasses many vital but tiny pieces of software which are too small to create a single business around. Take for instance the library made by the maintainer in the introduction: it’s currently 352 lines of code plus packaging and test artifacts, and over five years (before 2018) it had around 30 issues logged. It’s hard to imagine a business built around just that.

However, you could have a business around lots of these. Is it possible to have a business that would take on well-used libraries if their creator no longer wishes to maintain them? Such a business might be able to generate paid enterprise support contracts supporting its FOSS suite, and that enterprise oversight would also prevent them from ‘going rogue’ – there may already be organizations like this out there. I’m quite sure there are lots of software consultancies that will gladly enhance a library for a particular client – how big a stretch would it be for them to take over a library when a creator didn’t want to maintain it? Perhaps Github can create a marketplace for passing on control of libraries and thus make it easier for players on both sides of the library maintenance business to come together?

There needs to be some industry maturity in the lifetime management of FOSS, particularly for small libraries, and my impression is the best option would be the formation of businesses that specialize in this, paid for through support contracts.