Musings on Unit Testing and Architecture in .NET Core

One of the challenges I’ve found in architecture has been how to effectively mock the data layer for unit testing domain classes. I’ve worked with various combinations of tiers and repositories, and what I consider the optimum approach is a domain-first approach. In a domain-first approach we construct operations in terms of the models and interfaces needed to fulfil the operation, then rely on an overseer — the dependency injection container at the ‘composition root’ — to serve up objects that implement those interfaces. The nice thing about this approach is that it allows for very granular operations, which at their extreme can be single-operation classes in the style used by the command pattern. This granularity fits well with SOLID design principles: a single operation has a clear responsibility, its dependencies are injected, and we can define highly specific interfaces, giving us excellent interface segregation.
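As a sketch (all of these names are hypothetical, not from any real library), a single-operation class in this style might look like:

```csharp
using System;

public class Something
{
    public string Name { get; set; }
}

// The narrow interface this one operation needs, and nothing more.
public interface IGetSomethingByName
{
    Something Get(string name);
}

// A single-operation class in the command style: one clear responsibility,
// with its dependency injected as a highly specific interface.
public class RenameSomethingOperation
{
    private readonly IGetSomethingByName _source;

    public RenameSomethingOperation(IGetSomethingByName source)
    {
        _source = source;
    }

    public void Execute(string oldName, string newName)
    {
        var something = _source.Get(oldName);
        if (something != null)
        {
            something.Name = newName;
        }
    }
}

// Because the dependency is a single method, a hand-rolled fake for tests
// is only a few lines.
public class FakeSomethingSource : IGetSomethingByName
{
    public Something Item { get; set; }

    public Something Get(string name) =>
        Item != null && Item.Name == name ? Item : null;
}
```

The composition root’s only job is then to decide which implementation of IGetSomethingByName each operation receives.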

Typically a good chunk of these interfaces will be for accessing data, and the result of this approach would be a repository class something like

public class SomethingRepository : IGetSomethingByName, IGetSomethingByGuid, IGetSomethingByNameAndType, ...

This is often somewhat confusing, because we’re encouraged to create repository classes that are as generic as possible in order to avoid repetition.

// A classic generic repository interface
public interface IRepository<Something> {       
  IEnumerable<Something> Get();
  Something GetById(int id);
  IEnumerable<Something> Find(Expression<Func<Something, bool>> predicate);
  void Add(Something something);
  void Delete(Something something);
  void Edit(Something something);
  void Save();
}

Already there is a mismatch. The domain behaviour expressed by the interfaces acts in terms of targeted methods like IGetSomethingByName.Get(string name), while the generic repository uses a more general Find(predicate). Some compromise must be made: either we let the domain know more about the data layer by having it specify predicates, diluting our domain-first approach and interface segregation; or we extend the generic repository for Something with the tailored methods.
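The second option might be sketched like this (assumed names throughout; an in-memory list stands in for the real data access so the shape is clear):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public class Something
{
    public string Name { get; set; }
}

// The tailored domain interface.
public interface IGetSomethingByName
{
    Something Get(string name);
}

// The generic repository for Something, extended with the tailored method
// so the predicate stays inside the data layer.
public class SomethingRepository : IGetSomethingByName
{
    private readonly List<Something> _items = new List<Something>();

    public void Add(Something something) => _items.Add(something);

    // The generic-repository style query.
    public IEnumerable<Something> Find(Expression<Func<Something, bool>> predicate) =>
        _items.AsQueryable().Where(predicate);

    // The domain's targeted method, implemented on top of Find.
    public Something Get(string name) =>
        Find(s => s.Name == name).FirstOrDefault();
}
```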

Then we get to more complex operations that involve multiple data sources, and we either have to get units of work involved, which means sharing a context between repositories and in turn makes creation (by injection) awkward; or we create wider-scoped repositories more suitable for the whole bounded context, which tends to reduce cohesion. And then we have to consider how to deal with transactions.

The point is, that after all this we’ve created a very extensive plumbing layer to fulfil two purposes: to get a gold star for architectural design; and to allow the domain to be effectively tested.

How do we implement the repository behemoth layer? If we’re dealing with a database then the default today is to get out Entity Framework because writing raw SQL comes with maintenance penalties. And here is where it all goes a little wrong…
Here is the opening paragraph on the Repository pattern from P of EAA:

A system with a complex domain model often benefits from a layer, such as the one provided by Data Mapper, that isolates domain objects from details of the database access code. In such systems it can be worthwhile to build another layer of abstraction over the mapping layer where query construction code is concentrated.

This is what Entity Framework is. When we use Entity Framework (and I’m thinking code-first here) we define a domain model and then tell EF how to map that model to a schema, e.g. how to map inheritance, keys, constraints, etc. Each DbSet in the DbContext is a repository, and the DbContext itself is a unit of work.
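A minimal code-first sketch (this assumes an EF Core package reference; Something’s properties are invented for illustration, and SqlContext matches the context used later in this post):

```csharp
using Microsoft.EntityFrameworkCore;

public class Something
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Code-first: the context is the unit of work, each DbSet plays the role
// of a repository, and OnModelCreating is where we tell EF how the domain
// model maps to a schema.
public class SqlContext : DbContext
{
    public SqlContext(DbContextOptions<SqlContext> options) : base(options) { }

    public DbSet<Something> Somethings { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Something>().HasKey(s => s.Id);
        modelBuilder.Entity<Something>().Property(s => s.Name).IsRequired();
    }
}
```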

So if we create a custom repository layer that calls EF we’re basically trying to re-implement EF using EF, which is not a very good use of time. If instead we expressed our domain behavior in terms that EF understands, like IQueryable, then we could just use EF.
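For example (assumed names), the domain dependency could be declared with IQueryable — a shape EF can satisfy directly, and a plain list can satisfy in tests:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Something
{
    public string Name { get; set; }
}

// The domain dependency expressed in terms EF understands.
public interface ISomethingQueries
{
    IQueryable<Something> Somethings { get; }
}

// A consumer filters with ordinary LINQ; against EF this becomes SQL,
// against an in-memory source it just runs in process.
public static class SomethingFinder
{
    public static Something ByName(ISomethingQueries queries, string name) =>
        queries.Somethings.FirstOrDefault(s => s.Name == name);
}

// A trivial in-memory implementation for tests.
public class InMemorySomethingQueries : ISomethingQueries
{
    private readonly List<Something> _items;

    public InMemorySomethingQueries(IEnumerable<Something> items) =>
        _items = items.ToList();

    public IQueryable<Something> Somethings => _items.AsQueryable();
}
```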

At this point you could argue that using DbContext as a dependency is not a well segregated interface at all, and overall I’d agree as EF doesn’t map to our domain interfaces. But the granularity of its methods allows us to express domain behavior in terms of domain objects and limited tools for manipulating those, so I feel satisfied it is a good clean boundary. And of course, we’re in business, so let’s not waste valuable time and mental resources on extra layers whose only purpose is to earn an architecture award.

But this lack of a concise interface is a problem for testing, because adequately mocking something of the scope of EF is an enormous challenge. And historically this is where having that extra layer wrapping EF was beneficial; even necessary.

Finally we’ve reached the tool that inspired this post.

In Entity Framework 7 there is a new feature, an in-memory database provider. To quote the docs:

InMemory is designed to be a general purpose database for testing, and is not designed to mimic a relational database.

With this tool our testing problem has gone. We can now effectively mock a DbContext by setting it up with pre-canned data, just as we would have via manual repository mocks, and then inject it into the test. It’s that simple: the same DbContext class used for production can be used in test by giving it a different database provider.

Here is the builder my unit tests use. NewGuid() gives the instance a unique name because in-memory databases are identified by name; without one, the same in-memory database instance is shared by all instances of a given context type.

var dbContextOptions = new DbContextOptionsBuilder<SqlContext>()
  .UseInMemoryDatabase(databaseName: "UnitTest" + Guid.NewGuid())
  .Options;

var ctx = new SqlContext(dbContextOptions);
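Put together, a complete test looks something like this sketch — it assumes the Microsoft.EntityFrameworkCore.InMemory package, xUnit, and a SqlContext exposing a Somethings set (invented here for illustration):

```csharp
using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class SomethingTests
{
    [Fact]
    public void Finds_pre_canned_data()
    {
        var options = new DbContextOptionsBuilder<SqlContext>()
            .UseInMemoryDatabase("UnitTest" + Guid.NewGuid())
            .Options;

        // Arrange: seed the in-memory store with pre-canned data.
        using (var ctx = new SqlContext(options))
        {
            ctx.Somethings.Add(new Something { Name = "widget" });
            ctx.SaveChanges();
        }

        // Act/Assert: a fresh context built from the same options
        // sees the same named store.
        using (var ctx = new SqlContext(options))
        {
            Assert.Equal("widget", ctx.Somethings.Single().Name);
        }
    }
}
```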
