ASP.NET Core Inject All Dependencies in the Assembly

ASP.NET Core would very much like you to use the built-in dependency injection system rather than a substitute like Autofac:

“The built-in service container is meant to serve the needs of the framework and most consumer apps. We recommend using the built-in container unless you need a specific feature that it doesn’t support.”

While I think I’d struggle without “Func support for lazy initialization”, the feature I want most is assembly-wide service registration. This means that instead of having to write a new services.AddTransient() call for every class, I can add a new interface and class pair and be confident it will be added to the dependency container.
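As an aside, the built-in container won’t resolve Func<T> automatically the way Autofac can, but a minimal workaround is to register the factory yourself. A sketch, using the IFileStorage service that appears later in this post (and assuming Microsoft.Extensions.DependencyInjection is in scope):

services.AddTransient<IFileStorage, FileStorage>();

// register a Func<T> so consumers can lazily create instances on demand
// without taking a dependency on the container itself
services.AddTransient<Func<IFileStorage>>(sp => () => sp.GetRequiredService<IFileStorage>());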

My solution to this is to find all the classes in my assembly that implement an interface named exactly like the class, preceded by a capital I, and register them as transients.

One further twist is that some services need to be registered manually, and thus may need to be removed from the collection of automatically added services. They may also need a different lifetime. To support this, the AddService method removes any existing registrations for the interface type before creating the new one. It also takes the implementation as a factory delegate, allowing transient, scoped, and singleton registrations in the one method.

public virtual void RegisterServices(IServiceCollection services)
{
  // requires: using System; using System.Linq; using System.Reflection;
  //           using Microsoft.Extensions.DependencyInjection;
  var assy = Assembly.GetCallingAssembly();

  // candidate implementations: concrete, non-generic, top-level classes
  var types = assy.GetTypes().Where(t => t.IsClass && !t.IsAbstract && !t.IsGenericType && !t.IsNested);
  foreach (var type in types)
  {
    // convention: class Foo pairs with interface IFoo from the same assembly
    var iface = type.GetInterface("I" + type.Name);
    if (iface != null && iface.Assembly.FullName == assy.FullName)
      // ActivatorUtilities (rather than Activator) resolves constructor
      // arguments from the container, so implementations can have
      // dependencies of their own
      AddService(services, iface, provider => ActivatorUtilities.CreateInstance(provider, type), ServiceLifetime.Transient);
  }

  // register services that take configuration data
  var fileStorage = new FileStorage(Configuration.GetConnectionString("..."));
  AddService(services, typeof(IFileStorage), _ => fileStorage, ServiceLifetime.Singleton);
}

public void AddService(IServiceCollection services, Type tInterface, Func<IServiceProvider, object> factory, ServiceLifetime lifetime)
{
  // remove any automatic registration so manual registrations
  // (and their lifetimes) take precedence
  var existing = services.SingleOrDefault(s => s.ServiceType == tInterface);
  if (existing != null)
    services.Remove(existing);

  services.Add(new ServiceDescriptor(tInterface, factory, lifetime));
}
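For completeness, a minimal sketch of how this might be wired up, assuming ASP.NET Core 3.x or later and that the two methods above live in (or are inherited by) a Startup class exposing the usual IConfiguration property:

public class Startup
{
  public IConfiguration Configuration { get; }

  public Startup(IConfiguration configuration) => Configuration = configuration;

  public void ConfigureServices(IServiceCollection services)
  {
    // convention-based registrations, plus the manual overrides
    RegisterServices(services);

    // framework services are registered as usual
    services.AddControllers();
  }
}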

Creating a .NET Core PDF Library

I’ve been working towards hosting a website for my musical compositions, and one thing I wanted to do was add text and images to my PDFs to indicate that the music is a preview, i.e. a watermark.

There are a great many existing PDF libraries out there, but I opted to build something myself. This would be a poor economic decision smacking of severe NIH syndrome if it were done in a business setting. However, this was a personal project, meaning cost was a factor, and the .NET solutions out there either come at considerable cost (which I can understand, having now spent time with the spec), are of hard-to-judge quality, or are ports from other languages that don’t take advantage of .NET features. Finally, it had been quite some time since I’d written a lexer and parser, so it was a nice exercise.

The library, in the state it is in, is available on GitHub. There is no NuGet package thus far, so using it requires cloning the repo and then following one of the examples from it. The classes created focus on loading and saving PDFs and working with the objects found directly in the document. When it comes to manipulating the contents of a page, any user must (at present) understand the format being used (i.e. sections 8 and 9 of the PDF 1.7 spec).

Taking a first look at the PDF format was quite interesting. Its syntax is based on PostScript, so, for instance, dictionaries are delimited by double angle brackets. It structures items as objects, which can be referenced from, or embedded in, the objects that use them. Binary data, like images, is typically stored within compressed streams.
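To illustrate, here is a tiny hand-written fragment of the kind of structure described above – an indirect object holding a dictionary that references another object:

1 0 obj                 % indirect object number 1, generation 0
<< /Type /Catalog       % a dictionary, delimited by double angle brackets
   /Pages 2 0 R >>      % 'R' marks an indirect reference to object 2
endobj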

I look forward to putting this library into practice, and maybe it will prove useful to other people too.

Institutional Knowledge

It is hard to escape the chatter around software contracting and the ‘gig economy’, but there is an aspect that isn’t raised as often: institutional knowledge. Although the term ‘institutional’ might lead one to think of buildings full of crazy people in straitjackets, in reality institutional knowledge is critical to the longevity of organizations.

Wikipedia’s dry definition of institutional knowledge is: “Institutional knowledge is gained by organizations translating historical data into useful knowledge and wisdom.” That sounds very nice, but gives the impression of something well-organized and easily accessible. Reality is far more haphazard.

I think of institutional knowledge as: the ideas that were evaluated but discarded; the intricate understanding of customers through long relationships; knowing where ‘the bodies lie’ in the source code; understanding which bits of the product are important to whom, and just how important they are; and the seemingly infinite set of answers to ‘why’ questions. Put together they are the lessons learned by the organization and knowing them can prevent bad decisions and avoid wasting time and resource.

In an ideal world all this knowledge would be documented somewhere. But even if that were possible (can you really write down the nuances of an inter-personal customer relationship?), who’d have the capacity to read and digest it – in fact, who would even know relevant information is there to be read? The answer, unsurprisingly, is the people who’ve been around long enough to establish institutional knowledge.

The trouble is that in the modern economy of maximum efficiency, waste is to be discarded – well, at least the waste that is easily visible on a balance sheet – which means using the bare minimum of head-count possible. This translates into turning the spigot of contractors on and off according to the project plan. As a result, a significant percentage of the people actually doing the work, the people who understand the details, will not be working with you for long. So when a maintenance issue comes up (software never needs maintenance, right?! bytes don’t decay…), the people who could’ve zeroed in on the fix quickly are gone, and the people trying to fix it don’t have a good concept of what they might break as a consequence. Certainly, that risk can be minimized via comprehensive test suites and documentation (assuming it’s up to date), but fundamentally, by not retaining people with detailed knowledge you’re consciously increasing your maintenance costs.

Obviously having a huge stack of people sitting around waiting for maintenance issues to appear is a cost too. However, unless the organization has opted to do something in-house that is outside their usual business activities (which one might then argue they ought to have contracted-out to a specialized organization), those people are likely to be useful on future projects. In fact, they’ll be more useful than short-term hires because they’ll have a solid understanding of company strategy, goals, and practices, and (where relevant) a better understanding of systems they might be integrating with. This is institutional knowledge adding value and ultimately saving the company money.

It is not an easy balancing act, keeping the right number of people and having them shuffle between maintenance and new projects, but it’s also nothing new: organizations have developed all kinds of management tactics to handle that.

Running a business focused on efficiency minimizes immediate costs. But it does so at the expense of long-term savings, like the cost reductions to be had from having the right people able to make smart decisions for the business because they’re already on the team and can apply institutional knowledge.

‘Professionalism’

I can’t stand the term ‘professionalism’. Whenever I hear it, my BS-detector jumps immediately to red alert, because it’s a way of shutting people down with a power play or an appeal to tradition rather than positively engaging with them.

Unlike, say, law, where there are defined professional standards, professionalism in the software industry is highly subjective. Take, for example, a relatively simple aspect of development like commenting. A more traditional perspective is that every method (i.e. not just the public ones) should have comments indicating what the method does and the purposes of its parameters and return values. My concept of quality code is that clear naming, good cohesion, and good design will provide you that information – the ‘what’, if you will – and comments should be reserved for the ‘why’, i.e. to explain decisions, whether design-level or business-level, that are not intuitive. There are tomes on this subject and ongoing debates about the discipline of maintaining documentation versus letting clean code speak for itself. Regardless of where you fall in this debate, sweeping it all away by calling one approach ‘professional’ and the other not removes the opportunity for reasoned debate, for improved understanding, and for finding the solution most appropriate to the circumstances.

The net result of invoking ‘professionalism’ is disempowerment. If you shut down a discussion on the basis of this power play then you’ve rejected, without consideration, the opinion of a peer. They will likely resent this lack of consideration, and it certainly won’t encourage them to contribute further opinions. As is well documented in the literature on teamwork, it is the consideration of a combination of perspectives and ideas that enables the best solutions to be found. In effect, by invoking ‘professionalism’ you’re throwing away the investment your company is making in expensive human resources.

This doesn’t mean it should be an anarchic free-for-all. Coding standards are fine: they create consistency that allows code to be more easily understood by unfamiliar parties. There can also be other factors at play like external forces which push decisions in a certain direction, and those should be the answer to questions of “why do we do it this way?” rather than the dismissive ‘professionalism’. Standards and practices that are built on solid reasoning will stand up to scrutiny, and scrutiny of them should be welcomed because if they don’t stand up to the challenge of re-examination then there is an opportunity to make an improvement within the organization.

‘Professional’ needs to become another corporate punchline, like ‘synergy’, and that way we keep our people empowered and can keep improving our organizations.

Free and Open Source

I’ve never quite understood the appeal of FOSS (free and open source software) as a developer. There’s ego, to be sure; it also functions as self-marketing in many cases; it may be a chance to work on something more interesting than your day job; but often it’s just outright altruism in the form of sharing your solution to a hard problem.

The challenge is that once software is out there, whether as a library or an application, people may use it and even come to depend on it. This is particularly so with libraries that form parts of library ecosystems, such as Json.NET in the .NET world. At that point the software becomes ‘alive’ and, like a living creature, needs ongoing attention. However, many a developer never had any intention of becoming a parent, and instead looks for ways to pass on the child they are proud of but would rather not have to bring up (I might be stretching the analogy a little here…).

This was highlighted recently when the maintainer of a heavily used library opted to retire and the volunteer who took over allowed malicious dependencies into the library. Said maintainer raised a number of excellent points in his follow-up.

We have to eat

Commerce exists because giving things away means starving. Seriously! We’ve developed complex and highly specialized societies that enable people to focus their talents for significant gain for all of us, but we still ultimately need to fulfill basic human needs. In this context FOSS doesn’t seem rational. However, our actual contexts are more complex than that: we’ve generally met our basic needs through other means and are instead searching for self-actualization, and FOSS is one way of achieving it.

Altruism, or the appearance of it, is hardly a new phenomenon. People have been giving away the fruits of their day jobs throughout history. So what is different with FOSS, if anything?

There are two analogies that spring to mind: working pro bono and establishing a charity. Working pro bono implies a short-term service and doesn’t tend to have consequences after the transaction is finished, which is not analogous to putting software on the internet; establishing a charity, on the other hand, is a more permanent endeavor.

To be charitable to a great many people you have to commit to running a non-profit organization with all its regulatory and societal demands. That’s a high bar to cross and demands significant organizational skills. By comparison, FOSS allows you to achieve that kind of scale of charity without any bar to entry at all, and so perhaps the problem is the unpreparedness of the creators.

FOSS can work, commercially

Continuing the charity analogy, an alternative to establishing a charity is to donate to an existing organization that manages your generosity for you (taking a cut). This is where FOSS has been most successful – ironically, as a tool for commercial organizations.

Open source is a lovely way of increasing demand for your product by reducing the cost of its complements. If, say, I’m Microsoft and I want to sell operating systems, then giving away for free the software that enables people to build products for said operating system is a solid business strategy. Furthermore, because Microsoft want their developers to be as happy as possible, they’ll also enable them to make their own fixes to that software (think .NET libraries) rather than be bound to Microsoft’s priorities and release cycle. On top of this, Microsoft get the benefit of a whole lot of free development and QA. (I’ll admit I don’t understand Microsoft’s current strategy in this area with cross-platform .NET Core, as now they’re making it easier to run on their competitors’ products, whether that competitor is Linux or AWS.)

Something similar can be said for freely giving away APIs for Facebook and Google services, all of which solidify their market position and/or drive traffic to their products. It’s not quite so clear what benefits providing libraries like React brings, although I imagine one could make business cases around brand management via goodwill, or control of a complementary market.

There’s also the smaller, but at times successful, model of selling support to FOSS products, as is the approach of Red Hat. This arises because enterprises tend to demand a strong level of support and well-known life-cycle models as part of making any significant investment and they are quite happy to pay for that.

In all these cases an organization exists to tangentially profit from the existence of the charity, FOSS. Non-profits fit the charity model even more aptly, requiring some revenue to promote or maintain their charitable activities (I’ve effectively been paying $100 a year to use Firefox for quite some time now).

Alternatives

It’s clear that any piece of software requires ongoing support. So how can the supporter be supported?

One option is to take the F out of FOSS and acknowledge the cost of providing the software. After all, FOSS is not just about a lack of upfront cost: with open source, the consumer has the option to fix or improve the software for their specific needs rather than being at the mercy of a commercial organization’s release schedule. However, paying for open-source software would be a sociological change, creating a barrier to entry that hasn’t existed for several decades in the software world. It would entrench incumbents, severely inhibit the proliferation of ideas, and remove the ability of people to lift themselves up without a great deal of capital.

Even if it were possible to make FOSS cost something, it wouldn’t deal with the underlying problem that sometimes people just want to move on. In commercial practice this means ‘selling the business’, but FOSS encompasses many vital but tiny pieces of software which are too small to create a single business around. Take, for instance, the library made by the maintainer mentioned in the introduction: it’s currently 352 lines of code plus packaging and test artifacts, and over five years (before 2018) it had around 30 issues logged. It’s hard to imagine a business built around just that.

However, you could have a business around lots of these. Is it possible to have a business that takes on well-used libraries when their creators no longer wish to maintain them? Such a business might generate paid enterprise support contracts around its FOSS suite, and that enterprise oversight would also prevent libraries from ‘going rogue’ – there may already be organizations like this out there. I’m quite sure there are lots of software consultancies that will gladly enhance a library for a particular client – how big a stretch would it be for them to take over a library when its creator didn’t want to maintain it? Perhaps GitHub could create a marketplace for passing on control of libraries, and thus make it easier for players on both sides of the library-maintenance business to come together.

There needs to be some industry maturity in the lifetime management of FOSS, particularly for small libraries, and my impression is the best option would be the formation of businesses that specialize in this, paid for through support contracts.

No Pure Ideology

Welcome to another essay reaching Brooks’ “No Silver Bullet” conclusion.

Genesis from Analogy

I’ve been reading Klein’s somewhat harrowing The Shock Doctrine and I found an interesting (to my twisted mind) analogy between the application of pure capitalism and the application of pure software concepts/paradigms/technologies, or as I have titled this, ideology.

Around 1990 the Cold War was over and capitalism had emerged as the triumphant economic ideology. But the capitalism which had underpinned the great scientific and social achievements of the 1950s-1970s was very different – vastly more state-controlled – from the free-market approach of ‘pure’ capitalism espoused by Friedman, with which we are more familiar today. During the 1990s there were major attempts to achieve this pure capitalism in Russia and East Asia, and both were dramatic failures. The attempted conversion of Russia to capitalism was a disaster for all but a few scavenger capitalists, and even the IMF admitted that its approach to the East Asian financial crisis late in the decade significantly exacerbated what was a fairly minor problem. Since then, such unfettered free-market reforms – ‘pure’ capitalism – have been viewed with much more suspicion and caution, and it’s quite ironic that in the demise of its ideological enemy, communism, capitalism should lose such face.

My supposition is that no ideology enacted in its purest form will be successful, as there will always be aspects of the target domain that the ideology doesn’t address effectively. Oddly enough, most economic-political ideologies fail in the same area: the negative impact they have on the population, a population which typically has no input into the selection of the ideology. The most successful outcomes arise when ideologies are mixed and moderated, as in ‘Swedish socialism’ or the aforementioned success of the tightly controlled United States of the third quarter of the 1900s. I believe that this mixing of concepts to achieve success applies just as much to software development as it does to nation building.

Back to software development

When I started software development, object-orientation was the great new hope, heralded through its flagship, the recently released Java. Java, with its fairly pure object-oriented implementation, solved the problems of imperative languages like C and Pascal through encapsulation – the idea that data and the actions that manipulate it should be bound together and hidden behind some contract.

Except that Java sometimes made things more difficult. One aspect that has consistently bugged me is event handling, where Java demanded an extra object to implement an event interface separately from the object that wanted to respond to the events. By comparison, C was happy with a function pointer (albeit hidden behind some typedef, because the type signatures for function pointers were not exactly intuitive).

Another example of Java’s purity, although not related to object-orientation, was the need to state all the exceptions a function could possibly throw. This meant that aggregating functions either needed a ridiculously unmanageable list of exception types appended to them, or every function was decorated with the base exception, making the whole thing pointless.

C#, some five years later, was much more pragmatic about both of these cases, supporting function references (delegates) and keeping exception handling in the code body rather than in declarations. C# is a much less pure object-oriented language than Java, but (in my experience at least) is better for ‘getting things done’.
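To make the contrast concrete, here’s a small C# sketch (with invented names) showing an event handled by a plain lambda – no separate listener object required:

using System;

class Downloader
{
  // the event's type is a delegate, so any matching method or lambda can subscribe
  public event Action<int> ProgressChanged;

  public void Report(int percent) => ProgressChanged?.Invoke(percent);
}

class Program
{
  static void Main()
  {
    var downloader = new Downloader();

    // subscribe directly with a lambda, where Java (pre-8) would demand
    // an object implementing a listener interface
    downloader.ProgressChanged += percent => Console.WriteLine($"{percent}% complete");
    downloader.Report(42);
  }
}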

Another example

Functional programming as a pure ideology has struggled to gain traction outside of academia. For all the expressive power of writing ‘what’ a program needs to do rather than ‘how’ to do it, in its purest form functional programming makes the most basic computational task – modifying data – surprisingly hard. I’ll admit it’s been a while since I last got my head fully stuck into monads and lenses, but I recall that genuinely understanding them was fairly challenging compared to understanding, say, manual memory management and pointer manipulation.

In its attempt to bring functional programming more mainstream, F# takes the non-pure approach of allowing explicit mutability (while defaulting to a functional style). Naturally this weakens the benefits of the pure paradigm, but it also allows people to get things done! Again, the pragmatic compromise provides a more workable solution, while the pure concept remains a curiosity.

Conversely, functional concepts, like list-processing, have made their way into other paradigms where their expressive power is a welcome respite from pure imperative constructs, like nested loops.
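For instance, here’s a small C# sketch (with invented data) where LINQ’s list-processing style replaces a nested loop while producing the same result:

using System;
using System.Linq;

class Example
{
  static void Main()
  {
    var orders = new[]
    {
      new { Customer = "Ada",   Items = new[] { 3, 5 } },
      new { Customer = "Grace", Items = new[] { 8, 1 } },
    };

    // imperative: nested loops and mutable state
    var total = 0;
    foreach (var order in orders)
      foreach (var item in order.Items)
        total += item;

    // functional: declare 'what' (flatten and sum), not 'how'
    var total2 = orders.SelectMany(o => o.Items).Sum();

    Console.WriteLine($"{total} == {total2}");  // 17 == 17
  }
}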

Conclusion

Whether it’s micro-services, CQRS, REST, TDD, or any of the myriad other software concepts, the costs of a pure implementation always outweigh the benefits. This is because, like our political ideologies, each concept focuses on a specific type of problem – its core domain – and simply can’t handle all the situations that real software requirements will throw at it. Every concept has its pros and cons, problems where it is a good fit and places where it isn’t. To solve problems effectively we must understand which options best fit a given situation and how a combination of concepts can be applied to achieve the desired end in a quick, inexpensive, and good-quality fashion. Or, as Brooks put it, there is no silver bullet.

Criticizing the Past

Hindsight is supposedly 20/20, yet when it comes to appraising existing code and solutions we often run our mouths with the blinders on.

Existing code nearly always seems poor because people look at it from their current perspective, not the perspective of the time it was written. As the writer, we’ve likely learned new coding techniques or learned more about the domain since, and had we known those things we’d have chosen different approaches. As the reader, we seldom understand the constraints in place when the code was created, i.e. considerations like time-to-market, customers demanding fixes, and the time and resources available to developers for research.

Human Factors

It is important that we evaluate past decisions – we can’t learn without analysis – and very often it is necessary in order to allow changes to the solution. Trouble arises because we’re human and have feelings, and don’t particularly enjoy being criticized, even in the pursuit of learning.

One solution to the human problem, egoless programming (summarized here), seems like a pretty healthy concept. However, it essentially asks people to be detached from what they’ve achieved, which to me seems like a recipe for mediocrity. I’m from the software craftsmanship camp, where people take pride in the work they’ve done (while still accepting that nobody’s perfect) – and people take better care of the things they have pride in.

Feelings exist, so how do we manage them? Egoless programming highlights a key message that is surprisingly often forgotten: “critique code instead of people”; or, to use the sporting metaphor I prefer, “play the ball, not the player”. At times it seems like people forget that other people are in the room when evaluating old code – rather than “this doesn’t make sense to me”, it’s “this is spaghetti-tangled rubbish” or “what idiot put this together?”. At some point our frustration at trying to understand something turns into an utter lack of tact, and as peers we shouldn’t tolerate that – we should be quick to point out the potential for hurt feelings, and our own limitations.

Reader’s Perspective

As mentioned in the introduction, anyone reading code or analyzing solutions brings with them their perspectives based on their knowledge and experience. When that experience doesn’t include the relevant business constraints, their evaluation can sound horrible – like the coding equivalent of saying “you’d be much better off if your parents had paid for a better school” to someone from a poor background – circumstances matter!

Our experience can also bias us towards solutions that we understand, and make us uncertain of different approaches. It seems anachronistic to have to say this in the era of equality, but just because something is different doesn’t mean it’s wrong! And we don’t learn anything by sticking to what we know!

Instead, solutions should be considered and appraised for what they are, while being considerate of circumstances. For instance, I find the stack depth that results from the design-patterns-heavy approach used in Java quite frustrating to debug. But I also appreciate that Java was envisaged as a pure object-oriented language, which places some significant constraints on what the language can do – in particular, excluding first-class functions. Rather than saying anything too negative about Java, the lesson I’d take from this is that it’s good to be pragmatic rather than ideological when designing a language.

Conclusion

Don’t get me wrong – I haven’t written this from the perspective of a saint. I’ve been on both sides of the scenarios I’ve mentioned. The key is that I’ve learned from those experiences and hope by sharing that others can learn too.

Finally, don’t forget the lesson of self-deprecating humor: criticizing your own code in negative terms is fine; criticizing someone else’s code in those terms is not.