Automated Testing Priorities

There’s a common theme in automated software testing that the greatest effort should go into unit tests, less into integration tests, and least into UI tests. This is known as the testing pyramid. However, I’m not convinced this is the best use of automated-test developers for web applications, because the nature of web standards and a commonly used web application architecture invalidate some of the assumptions behind the pyramid.

Testing Goals

Before we continue, we need to state the goals of automated testing. In my mind they are:

  1. Validate the correctness of the thing under test
  2. Allow refactoring of the thing under test

Note that correctness includes ‘non-functional’ things, like authorization security.

From a business perspective, we want to know that the product works as intended. Working as intended means that the application performs correctly when used through its interfaces. This would suggest that UI tests are the most important, which is the opposite of conventional wisdom.

The reason often given for placing less focus on UI tests is that they are notoriously fragile. However, I posit that this fragility has been due to the nature of the interfaces being tested, which tended to make automatically identifying and interacting with UI elements very hard; e.g. having to use control ids with WinForms applications. I’m also suspicious that the de-emphasis of UI tests is a consequence of Agile methodologies that insist on jamming all testing into the same cycle as development, resulting in automation being written against a UI in an extreme state of flux.

Unit Test Fragility

One problem I have with unit testing is that developers are encouraged to test the internals of the unit. This happens when mock objects are checked to see if certain methods were called on the mock.

The purpose of functions and classes is to expose a contract and hide the details of how that contract is fulfilled. Testing how a unit does its work means looking inside the black box, which defeats the purpose of using tests to support refactoring, because now we can’t change the implementation of a unit without breaking its tests.
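To make this concrete, here is a minimal TypeScript sketch (all names are hypothetical) contrasting a white-box test that verifies calls on a mock with a black-box test that checks only the contract:

```typescript
// Hypothetical unit: a PriceCalculator that consults a DiscountPolicy.
interface DiscountPolicy {
  discountFor(customerId: string): number; // fraction, e.g. 0.1 for 10%
}

class PriceCalculator {
  private policy: DiscountPolicy;
  constructor(policy: DiscountPolicy) {
    this.policy = policy;
  }
  total(customerId: string, subtotal: number): number {
    return subtotal * (1 - this.policy.discountFor(customerId));
  }
}

// White-box style: asserts *how* the unit works. This breaks if total()
// ever caches the discount or restructures its calls, even though the
// observable behaviour is unchanged.
let calls = 0;
const spyPolicy: DiscountPolicy = {
  discountFor: () => { calls++; return 0.1; },
};
new PriceCalculator(spyPolicy).total("c1", 100);
console.assert(calls === 1, "asserting an implementation detail");

// Black-box style: asserts only the contract, so the implementation is
// free to change underneath it.
const stubPolicy: DiscountPolicy = { discountFor: () => 0.1 };
const total = new PriceCalculator(stubPolicy).total("c1", 100);
console.assert(total === 90, "10% discount on 100 should yield 90");
```

Both tests pass today, but only the second survives a refactoring of `total()`.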

UI Test Fragility

In his 2012 article on the test pyramid, Martin Fowler says:

An enhancement to the system can easily end up breaking lots of such tests, which then have to be re-recorded. You can reduce this problem by abandoning record-playback tools, but that makes the tests harder to write. Even with good practices on writing them, end-to-end tests are more prone to non-determinism problems, which can undermine trust in them. In short, tests that run end-to-end through the UI are: brittle, expensive to write, and time consuming to run.

I believe that some of these assumptions are less valid in modern web test automation.

Automated web tests tend to be hand-written because (in my experience) recording tools can create quite fragile paths, usually because they don’t know which information is least likely to vary. Hand-writing UI tests is straightforward thanks to CSS selectors, which are easy to use, well supported, and, when kept simple (i.e. id and class selectors rather than paths), not hugely prone to change. These selectors are usually wrapped into page objects that further insulate the tests from changes.
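A minimal sketch of the page-object idea, using a hand-rolled driver abstraction rather than any particular WebDriver API (all names are illustrative):

```typescript
// Minimal driver abstraction; in a real suite this would wrap
// Selenium/WebDriver or Playwright.
interface Driver {
  find(cssSelector: string): UiElement;
}
interface UiElement {
  click(): void;
  type(text: string): void;
}

// Page object: the only place that knows this page's selectors.
// Simple id/class selectors change far less often than recorded paths.
class LoginPage {
  private driver: Driver;
  constructor(driver: Driver) {
    this.driver = driver;
  }
  login(user: string, password: string): void {
    this.driver.find("#username").type(user);
    this.driver.find("#password").type(password);
    this.driver.find(".login-form button[type=submit]").click();
  }
}

// A recording fake so the sketch runs without a browser.
const actions: string[] = [];
const fakeDriver: Driver = {
  find: (sel) => ({
    click: () => { actions.push(`click ${sel}`); },
    type: () => { actions.push(`type ${sel}`); },
  }),
};
new LoginPage(fakeDriver).login("alice", "s3cret");
console.assert(actions.length === 3);
```

Tests then read as user intent (`loginPage.login(...)`), and a selector change touches exactly one class.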

The HTML DOM also exposes an event model that allows tests to mimic the vast majority of UI actions, removing the complexity of older-style tools, which involved a lot of mouse coordinates and button states.
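As a sketch of that event model: in a browser the target would be a real DOM element (e.g. from `document.querySelector`); here a bare `EventTarget`, which shares the same dispatch mechanism, keeps the example self-contained:

```typescript
// Stand-in for a DOM element; EventTarget provides the same
// addEventListener/dispatchEvent mechanism elements inherit.
const button = new EventTarget();

let saved = false;
button.addEventListener("click", () => { saved = true; });

// A test drives the UI by synthesizing the event -- no mouse
// coordinates or button-state juggling required.
button.dispatchEvent(new Event("click"));
console.assert(saved, "click handler should have run");
```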

And finally, in web development, UI testing has the added benefit of enabling testing across multiple browsers – something less applicable to installed desktop applications.

However, I agree that UI tests remain time-consuming to run, and if many developers are committing to the repository then running them in continuous integration on every commit may not be feasible, reducing their value for quickly catching problems.

Middle-Ground – Integration Testing the API

It is increasingly common for web applications to be built as a web API plus a JavaScript client. This is my personal preference over server-side rendering, as it nicely decouples the presentation from the logic and allows the application to integrate more easily with other applications. There is some development overhead in this approach, but given that most web pages perform some dynamic interaction with the server, and thus already require some level of client richness, the overhead is quite minimal.

Having an API provides an excellent seam for automated testing. An API is a contract, and it will express most, if not all, of the business rules through its inputs and outputs. It also requires basic security, allowing validation and authorization to be checked. It can easily be extended to more extensive security testing (e.g. manipulating HTTP headers and sending malicious data) and to performance tests.
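A sketch of this contract-style testing, using a hypothetical in-process handler in place of a real HTTP endpoint (the endpoint, token, and rules are all invented for illustration):

```typescript
type ApiRequest = { headers: Record<string, string>; body: unknown };
type ApiResponse = { status: number; body?: unknown };

// Hypothetical endpoint: create a widget, subject to an authorization
// rule and a validation rule.
function createWidget(req: ApiRequest): ApiResponse {
  if (req.headers["authorization"] !== "Bearer valid-token") {
    return { status: 401 };               // authorization rule
  }
  const body = req.body as { name?: string };
  if (!body || !body.name) {
    return { status: 400 };               // validation rule
  }
  return { status: 201, body: { name: body.name } };
}

// The contract is exercised purely through inputs and outputs:
console.assert(createWidget({ headers: {}, body: { name: "x" } }).status === 401);
console.assert(createWidget({ headers: { authorization: "Bearer valid-token" }, body: {} }).status === 400);
console.assert(createWidget({ headers: { authorization: "Bearer valid-token" }, body: { name: "x" } }).status === 201);
```

Nothing in these assertions cares how the handler is implemented, so the implementation can be refactored freely.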

Integration testing the API doesn’t mean a full environment is required. It is still reasonable to use test doubles for calls that are slow or for resources that aren’t available. For instance, my integration tests use .NET Core’s TestServer rather than running a web server, EF’s UseInMemoryDatabase rather than instantiating a database, and stubs for AWS service calls. These are reasonable compromises because I’m confident those components will perform to their contracts.
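The same idea in a TypeScript sketch (my actual stack is .NET, as above; the service, repository, and notifier here are illustrative): the service depends only on contracts, so the test substitutes fast in-memory fakes for the database and the cloud call.

```typescript
interface OrderRepo {
  save(order: { id: string }): void;       // real impl: the database
}
interface Notifier {
  send(message: string): void;             // real impl: e.g. an AWS call
}

class OrderService {
  private repo: OrderRepo;
  private notifier: Notifier;
  constructor(repo: OrderRepo, notifier: Notifier) {
    this.repo = repo;
    this.notifier = notifier;
  }
  place(id: string): void {
    this.repo.save({ id });
    this.notifier.send(`order ${id} placed`);
  }
}

// In-memory fakes: no database, no network, same contracts.
const store = new Map<string, { id: string }>();
const sent: string[] = [];
new OrderService(
  { save: (o) => { store.set(o.id, o); } },
  { send: (m) => { sent.push(m); } },
).place("42");

console.assert(store.has("42") && sent[0] === "order 42 placed");
```

The business logic is fully exercised; only the components I already trust to honour their contracts are faked.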


This is my ‘testing pyramid’ from highest to lowest priority:

  1. API integration tests
  2. Integration or unit tests for things that can’t be reached from the API
  3. UI tests for success scenarios

In my current application, the first two items on this list give me 98% code coverage and 90% branch coverage of my business layer (the branch figure is dragged down by untested exception paths inside C# using statements), and they have enabled considerable evolution and refactoring of the code-base over the last six months.

Generalists and Specialists

While I’m on my theme of people value, there is a group of technology professionals who are often quite undervalued – the generalists. Until the last couple of years there had been a trend of increasing specialization in technology development roles, notably the back-end/front-end split, which is now being replaced again by that great generalist role, the full-stack developer. And here’s the thing – overall, the cost of generalists and specialists doesn’t vary heavily: subject specialists – e.g. InfoSec, DBAs, Authentication Architects (I saw that one advertised in Wellington) – cost a bit more, and platform specialists – e.g. Java devs, UI devs – a little less. On this continuum from generalist to specialist, generalists represent an absolute bargain.

The Generalist Developer

An experienced developer can do 90% of what a specialist can do in their field.

Need DevOps? Why not get your developer to do it? They can research potential solutions, read and understand API documentation, pick up Bash or PowerShell pretty quickly, and set up basic configurations based on vendors’ recommended best practice. Plus, when they’re done, they can go back to development rather than twiddling their thumbs.

Need QA automation? Need requirements analysis? Need basic network setup? Need project management? Need customer support? Need internal IT? Need architecture? Need a DBA? These are all things I’ve done to a production level in my 14 years primarily as a ‘developer’.

The vast majority of software out there is about understanding, automating, and transforming processes, and generalists are amply qualified to solve these problems. And where they can’t solve a problem from experience, they are expected to research the plethora of alternatives – spanning a huge gamut of potential technologies (and therefore specializations) – and pick a solution.
Sure, they may not create the algorithm that founds the next Google, but those companies represent a minuscule segment of the field and require a level of specialization more likely found in academia than in industry anyway.

In software product development you want generalists. These are people who know that, for instance, information security is important, so they pick technologies, tools, and solutions that promote good security practice. And because they’re not as sure of themselves, they are more likely to test and verify their assumptions and designs. They also have a wide view of the world, so they can evaluate trade-offs between different solutions much more effectively, and solve a wider range of problems, than a specialist can. And typically for little difference in price!

The Specialist

I’m not suggesting we don’t need specialists at all. I’ve found their value to be acting in advisory or consultancy roles where they are the checks-and-balances that warn people about traps and pitfalls and educate the generalists on best practices. I freely acknowledge I wouldn’t have some of the knowledge I have today without the support specialists have been able to provide.

However, this very act of education decreases the relative value of the specialist: by receiving more knowledge, the generalists ‘level up’ and shrink the knowledge gap in the specialist’s field. That makes the need for the specialist more tenuous, and some people find it challenging to overcome the instinct to protect their space. This assumes specialists are static creatures – I would expect they too are continually learning and levelling up – but within a single organization the usefulness of that new knowledge may be limited.

Another problem with specialists in a small organization is that they effectively constrain your solutions. The company thinks, “well, I’ve got an Oracle DBA, so we’d better use Oracle and stored procedures”, even if that’s not the best solution. A generalist, by contrast, will evaluate a range of solutions based on their experience, the company’s accumulated knowledge and environment, industry trends, future hiring needs, relative costs, etc. to inform the choice.


If you’re a five-person development shop, a specialist doesn’t make sense; if you need that expertise, hire a consultant. If you’re a five-hundred or five-thousand person development enterprise, then those specialists should have plenty to do to make it worth having them on the payroll.