Rate Limited Async Loop

A recent project included some modest load testing. For this we created a small console application to hit our API over HTTPS. A key metric in load testing is the number of requests an endpoint can handle per second, so it’s useful to be able to control and configure the rate at which requests are made.

This in itself is not difficult: a basic sleep-wait of duration 1/requests-per-sec will achieve this. However, we had an additional constraint that called for a slightly more complex solution.
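For illustration, the naive approach looks something like this (a minimal sketch; ProcessAsync, items, and requestsPerSec are illustrative names, not from the real project):

// naive serial rate limiting: process one item, then sleep for the period
// e.g. at 5 requests/sec this waits 200ms between requests
foreach (var item in items)
{
    await ProcessAsync(item);
    await Task.Delay(1000 / requestsPerSec);
}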

The application uses Auth0, an authentication-as-a-service provider, and it rate limits use of its API. Exceeding the rate results in failed HTTP requests, and if frequent enough, can result in users being blocked. Furthermore, it is a remote and relatively slow API, with round-trip times in the order of 3 seconds (i.e. fetching 100 users serially would take 5 minutes), so it’s important that we access it concurrently, up to our limit. Additionally, the token received from calling it is cachable until its expiry, and if we can get the token from our cache then we want to skip any sleep-wait in order to minimize running time.

This leads to the goal: to maximize the number of concurrent requests made to an API up to a fixed number of requests per second; and to use cached data (and therefore not use a request) where possible. To solve this I want a rate-limited concurrent loop.

Implementation

A little searching on the internet turned up either extensive libraries that implemented a different paradigm, like Reactive, or things that didn’t quite meet my requirements. I therefore – having taken the appropriate remedies to treat potential Not-Invented-Here Syndrome – went ahead and put something together myself.

public class RateLimitedTaskProperties
{
    public bool IgnoreRateLimit { get; set; }
}

public static async Task RateLimitedLoop<T>(int perSec, IEnumerable<T> enumerable, Func<T, Task<RateLimitedTaskProperties>> action)
{
    int periodMs = 1000 / perSec;
    var tasks = new List<Task<RateLimitedTaskProperties>>();
    foreach (T item in enumerable)
    {
        T capture = item;
        Task<RateLimitedTaskProperties> task = action(capture);
        tasks.Add(task);

        // a task that completed synchronously (e.g. a cache hit) can ask us
        // to skip the rate limit and start the next task immediately
        if (task.IsCompleted && task.Result.IgnoreRateLimit)
            continue;

        await Task.Delay(periodMs);
    }

    await Task.WhenAll(tasks);
}

The loop starts a new task every periodMs. Concurrency is achieved by using tasks, which are non-blocking, and waiting for their completion outside the loop with await Task.WhenAll(tasks). The case where something has been retrieved from a cache is handled by the task returning synchronously and setting the IgnoreRateLimit flag. This combination causes the loop to skip the delay and move straight on to triggering the next task.

The following is an example of its use, where MyOperation() is a method that returns a flag indicating whether or not it performed a fetch from the rate-limited API.

const int tokenReqsPerSec = 5;
await RateLimitedLoop(tokenReqsPerSec, items, async (item) =>
{
    bool requiredFetch = await item.MyOperation();
    // don't rate limit if I got it from the cache (fetch wasn't required)
    return new RateLimitedTaskProperties { IgnoreRateLimit = !requiredFetch };
});

django ms-sql datetimeoffset

My current project has me dealing with python, which is a language I’ve dabbled with for many years, but I think this is the first time I’ve used it professionally. It’s quite interesting seeing how the language has evolved: I recall having previously been enamored with the quasi-Lisp approach of processing lists with map and filter, but list and dictionary comprehensions seem to be the current standard.

The project is a small Django API (if I were selling it to a VC then it’d be called a microservice :rolleyes:) and as the back-end is predominantly in the Microsoft stack, it references a SQL Server database. This database includes some DateTimeOffset columns. I’m not sure I see the need for this type – dates should always be stored as UTC for maximum portability, and clients can display the local time based on client settings. If it’s necessary for a service to work with those dates, then the database should store a user timezone name or offset, but that is specific to the user and not the date. Anyway, I digress… Unfortunately DateTimeOffset columns are not natively supported by the common python ODBC connectors, and thus something of a workaround was required.

This was made extra challenging by Django, which intermediates the database relationship via its models, and therefore thwarted some early attempts to treat the columns as bytes. What it does expose is a connection_created signal, which allows the connection to be intercepted before it is used, and that connection includes a method add_output_converter for handling ODBC types. In this case the ODBC type code is -155 (SQL_SS_TIMESTAMPOFFSET), and with a little struct magic we can construct a python datetime.

One area of concern was ensuring that the signal handling was tidied up, even if exceptions were thrown. To handle this, the DateTimeOffset handling code was wrapped into a class that supports the with statement.

Apologies in advance if some of this python code is highly naive – as already noted, it’s my first professional python foray 🙂

import struct
import datetime
from django.db.backends.signals import connection_created

class DjangoSqlDateTimeOffset(object):

    def __enter__(self):
        connection_created.connect(self.on_connection_created)
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # to see connection info, including queries, uncomment and look at cnx with settings.py DEBUG = True 
        # cnx = connections['qs-sql']
        connection_created.disconnect(self.on_connection_created)

    def on_connection_created(self, sender, **kwargs):
        conn = kwargs['connection']
        conn.connection.add_output_converter(-155, self.handle_datetimeoffset)

    def handle_datetimeoffset(self, dto_value):
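        # DATETIMEOFFSET binary layout: six shorts (year, month, day, hour,
        # minute, second), an unsigned int fraction in nanoseconds, then two
        # shorts for the timezone offset hours and minutes; the // 10000 below
        # turns the nanosecond fraction into the 5-digit value parsed by %f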
        tup = struct.unpack("<6hI2h", dto_value)  # e.g., (2017, 3, 16, 10, 35, 18, 0, -6, 0)
        tweaked = [tup[i] // 10000 if i == 6 else tup[i] for i in range(len(tup))]
        dto_string = "{:04d}-{:02d}-{:02d} {:02d}:{:02d}:{:02d}.{:05d} {:+03d}{:02d}".format(*tweaked)
        return datetime.datetime.strptime(dto_string, '%Y-%m-%d %H:%M:%S.%f %z') 

With that class available, querying DateTimeOffset columns becomes nice and simple:

with DjangoSqlDateTimeOffset():
  item = ModelName.objects.raw('SELECT ... ')

PowerShell History

I do like PowerShell, but sometimes find myself pressing the up-arrow a lot to find commands made in previous sessions. Unfortunately the F8 search shortcut only works with the current session, so I wanted a way to find older commands more easily.

Knowing that PowerShell can retrieve history from older sessions, I assumed it must be stored on disk, and after a bit of guessing found this file: %appdata%\Microsoft\Windows\PowerShell\PSReadline\ConsoleHost_history.txt

To make it a bit more useful, I’ve removed common commands and duplicates using the following script.

$patterns = @("^cls", "^cd.*", "^\w:", "^exit", "^mkdir")

Get-Content "$env:APPDATA\Microsoft\Windows\PowerShell\PSReadline\ConsoleHost_history.txt" | 
    Select-String -pattern ($patterns -join "|") -notmatch | 
    Select -Unique |
    Out-File commands.txt

Auth0 Mock

Auth0 is a well-known authentication-as-a-service provider. Its database connection storage option allows organizations to reference a custom database, which is very useful if you want to store your user information with your business data and maintain integrity between those using foreign key constraints. You can do this in Auth0 by setting up a connection that accesses your hosted database (with appropriate firewall restrictions!) to add, update, and remove users.

A challenge with this is that each new environment requires a new database and Auth0 setup. This is particularly difficult if that environment is a developer’s machine that isn’t reachable from the internet (due to firewalls/NAT), so Auth0 cannot connect to it. One option is for each developer to have their own cloud database, but that gets expensive quickly, and adds unrealistic latency to database calls from their machine, making development more difficult.

I was faced with this problem while building integration tests using Auth0 and .NET Core, and opted to create a mock object.

Implementation

The top level interface for Auth0 in C# is IManagementApiClient. This consists of a number of client interface properties, and it’s these that I found most appropriate to mock using Moq. This leads to a basic structure as follows:

using System;
using Auth0.Core;
using Auth0.Core.Collections;
using Auth0.Core.Http;
using Auth0.ManagementApi;
using Auth0.ManagementApi.Clients;
using Auth0.ManagementApi.Models;
using Moq;

public class Auth0Mock : IManagementApiClient
{
  Mock<IUsersClient> _usersClient = new Mock<IUsersClient>();
  Mock<ITicketsClient> _ticketsClient = new Mock<ITicketsClient>();

  public Auth0Mock()
  {
    // setup for _usersClient and _ticketsClient methods
  }

  public IUsersClient Users => _usersClient.Object;
  public ITicketsClient Tickets => _ticketsClient.Object;

  public IBlacklistedTokensClient BlacklistedTokens => throw new NotImplementedException();
  // etc. for ClientGrants, Clients, Connections, DeviceCredentials, EmailProvider, Jobs, Logs, ResourceServers, Rules, Stats, TenantSettings, UserBlocks

  public ApiInfo GetLastApiInfo()
  {
    throw new NotImplementedException();
  }
}

In this project only a small number of Auth0 methods were used (something I expect would be true for most projects), so only a few Auth0 client methods actually needed to be mocked. However, it is quite important for integration testing that these methods replicate the key behaviors of Auth0, including writing to a database and storing user metadata (which isn’t always in the database). To support these, the mock class includes some custom SQL and a small cache, which are used by the mocked methods. The following code illustrates this using two methods. They are set up in the constructor, and implemented in separate methods.

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using System.Text.RegularExpressions;
using System.Threading.Tasks;
using Dapper;

private string _sql;

// local cache storing information that our sql table doesn't
private Dictionary<string, Auth0.Core.User> _users = new Dictionary<string, Auth0.Core.User>();

public Auth0Mock(/* injection for _sql connection string */)
{
  _usersClient.Setup(s => s.CreateAsync(It.IsAny<UserCreateRequest>())).Returns((UserCreateRequest req) => CreateAsync(req));
  _usersClient.Setup(s => s.DeleteAsync(It.IsAny<string>())).Returns((string id) => DeleteAsync(id));
}

private async Task<Auth0.Core.User> CreateAsync(UserCreateRequest request)
{
  int userId = 0;
  using (var conn = new SqlConnection(_sql))
  {
    var rows = await conn.QueryAsync(@"INSERT INTO [MyUserTable] ...", new { ... });
    userId = (int)rows.Single().userId;
  }

  var user = new Auth0.Core.User
  {
    AppMetadata = request.AppMetadata,
    Email = request.Email,
    FirstName = request.FirstName,
    LastName = request.LastName,
    UserId = "auth0|" + userId
  };
  _users[user.UserId] = user;
  return user;
}

private async Task DeleteAsync(string id)
{
  var match = Regex.Match(id, @"auth0\|(.+)");
  string userId = match.Groups[1].Value;

  using (var conn = new SqlConnection(_sql))
    await conn.ExecuteAsync(@"DELETE FROM [MyUserTable] ...", new { userId });

  if (_users.ContainsKey(id))
    _users.Remove(id);
}

Being a mock object there are limitations. For instance, in this example the cache only includes users added via CreateAsync, not all the users in the test database. However where these limitations lie depends entirely on your testing priorities, as the sophistication of the mock is up to you.

One downside to this approach is that Moq doesn’t support optional parameters, so the signatures for some methods can get quite onerous:

_usersClient.Setup(s => s.GetAllAsync(0, 100, null, null, null, null, null, It.IsAny<string>(), "v2"))
  .Returns((int? i1, int? i2, bool? b3, string s4, string s5, string s6, bool? b7, string q, string s9) => GetAllAsync(i1, i2, b3, s4, s5, s6, b7, q, s9));

private Task<IPagedList<User>> GetAllAsync(int? page, int? perPage, bool? includeTotals, string sort, string connection, string fields, bool? includeFields, string query, string searchEngine)
{
  // regex to match query and fetch from SQL and/or _users cache
}

Authorization

The Auth0 mock class provides authentication, but not authorization, and it would be nice if any integration tests could also check authorization policies. At run time the system expects to process a cookie or token on each request and turn that into a ClaimsPrincipal with a set of claims. Therefore our tests must also populate the ClaimsPrincipal, and do so before authorization is checked.

For this we need a piece of middleware that goes into the pipeline before authorization (which is part of UseMvc()). My approach was to place the call to UseAuthentication() into a virtual method in Startup and override that method in the test’s Startup:

public class TestStartup : Startup
{
  protected override void SetAuthenticationMiddleware(IApplicationBuilder app)
  {
    app.UseMiddleware<TestAuthentication>();
  }
  
  protected override void SetAuthenticationService(IServiceCollection services)
  {
    // This is here to get expected responses on Authorize failures.
    // Authentication outcomes (user /claims) will be set via TestAuthentication middleware,
    // hence there are no token settings.
    services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme).AddJwtBearer();
  }
}
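
For context, the corresponding virtual hooks in the real Startup look something like this (a minimal sketch; the method bodies are assumptions based on the description above, not the project’s actual code):

public class Startup
{
  public void ConfigureServices(IServiceCollection services)
  {
    SetAuthenticationService(services);
    services.AddMvc();
  }

  public void Configure(IApplicationBuilder app)
  {
    // must run before UseMvc() so authentication happens before authorization
    SetAuthenticationMiddleware(app);
    app.UseMvc();
  }

  protected virtual void SetAuthenticationService(IServiceCollection services)
  {
    // real JWT bearer configuration, e.g. Auth0 domain and audience
    services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
      .AddJwtBearer(options => { /* token settings */ });
  }

  protected virtual void SetAuthenticationMiddleware(IApplicationBuilder app)
  {
    app.UseAuthentication();
  }
}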

The middleware, TestAuthentication, remembers the last user that was set. It must be registered as a singleton with the dependency-injection framework so that the user is remembered between service calls. Testing code can set the user at any time by calling SetUser().
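Registration and use look something like this (a sketch; fetching the middleware back out of the TestServer’s service provider is one approach, and the user values are illustrative):

// in TestStartup.ConfigureServices: IMiddleware implementations are resolved
// from DI, and a singleton ensures the same instance sees SetUser() calls
services.AddSingleton<TestAuthentication>();

// in a test, before making requests
// (GetRequiredService is from Microsoft.Extensions.DependencyInjection)
var auth = _server.Host.Services.GetRequiredService<TestAuthentication>();
auth.SetUser("1234", "Admin");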

When a request is made, TestAuthentication’s InvokeAsync method applies claims based on that user. These claims will be processed as policies in the normal way so that Authorize attributes work as intended.

public class TestAuthentication : IMiddleware
{
  private string _userId;
  private string _roleName;

  public async Task InvokeAsync(HttpContext context, RequestDelegate next)
  {
    if (!string.IsNullOrEmpty(_userId))
    {
      // the explicit authentication type makes Identity.IsAuthenticated true,
      // so [Authorize] sees a signed-in user
      var identity = new ClaimsIdentity(new List<Claim>
      {
        new Claim("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier", "auth0|" + _userId),
        new Claim("http://myuri/", $"Role:{_roleName}")
      }, "Test");

      var principal = new ClaimsPrincipal(identity);
      context.User = principal;
    }
    await next(context);
  }
    await next(context);
  }

  public void SetUser(string userId, string roleName)
  {
    _userId = userId;
    _roleName = roleName;
  }
}

With this combination we are able to successfully mock Auth0 while retaining our ability to work with our database, test non-Auth0 functionality, and test authorization.

Sharing Test Dependencies with Startup

An issue I’ve had while developing integration tests in .NET Core is sharing information between my TestContext and the Startup class.

The documented approach looks something like this:

var hostBuilder = new WebHostBuilder().UseStartup<Startup>();
_server = new TestServer(hostBuilder);

The problem is that Startup is called from deep within new TestServer, making it impossible to pass a reference from the calling context. This is particularly a problem with integration tests on an API, where we need an HttpClient created from the TestServer instance in order to call API methods.

_client = _server.CreateClient();

Dependency Injection into Startup

What I hadn’t originally appreciated is that the Startup class accepts dependencies defined by the host. Therefore anything already configured in the services collection, which is the container for ASP.NET’s dependency injection system, is available for injection into Startup.

For instance, to pass a reference to the current TestContext we register the current instance as a singleton before calling UseStartup:

var hostBuilder = new WebHostBuilder()
  .ConfigureServices(s => { s.AddSingleton(this); })
  .UseStartup<Startup>();

Now the TestContext parameter in the following Startup class will be populated:

public class Startup {
  private TestContext _ctx;
  public Startup(IConfiguration config, TestContext ctx) {
     _ctx = ctx;
  }
...

Passing a Shared Object

A more cohesive approach is to place mutual dependencies in another class and make it available via much the same approach. The following is an example allowing any class access to the TestServer’s client.

public interface ITestDependencies {
  TestContext Context {get;}
  // also various Mock objects...
}

public class TestDependencies : ITestDependencies {
  public TestContext Context {get; private set;}

  public TestDependencies(TestContext ctx) {
    Context = ctx;
  }
}

public class Startup {
  private readonly ITestDependencies _testDependencies;
  public Startup(IConfiguration configuration, ITestDependencies testDependencies) {
    _testDependencies = testDependencies;
  }
  // other methods - use _testDependencies.Context.Client
}

public class TestContext {
  public HttpClient Client {get; private set;}
  private readonly TestServer _server;

  public TestContext() {
    var builder = new WebHostBuilder()
      .ConfigureServices((IServiceCollection services) => {
        services.AddSingleton(typeof(ITestDependencies), new TestDependencies(this));
      })
      .UseStartup<Startup>();
    _server = new TestServer(builder);
    Client = _server.CreateClient();
  }
}
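
A test can then spin up the server and call the API through the shared client. A sketch of what that might look like (xUnit and the endpoint are illustrative, not from the original project):

public class ApiTests
{
  private readonly TestContext _ctx = new TestContext();

  [Fact]
  public async Task GetValuesReturnsSuccess()
  {
    // the client came from TestServer.CreateClient() inside TestContext
    var response = await _ctx.Client.GetAsync("/api/values");
    response.EnsureSuccessStatusCode();
  }
}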

Measurement and Agile Software Development

Introduction

I’m going to start this politically, but I promise it’ll get to software development. The trigger for this scribbling of thoughts was an article discussing the under-funding of many areas of the public sector and the quote from the finance spokesperson for New Zealand’s recently-ousted opposition party: “… the government should be thanking [the] National [party] for inheriting such a strong economy.” And it struck me that economic performance was the sole benchmark by which they gauged success. In reality, the country is vastly more complex than one set of economic indicators, and different people have very different perspectives on what constitutes success.

The ‘duh’ disclaimer

As I’ve said in some previous articles, none of this will be new to anyone who has spent time in, studied, or even thought about management. And it certainly isn’t the first time I’ve thought about it, but the above article engaged some dormant mental spirit to write things down 🙂

You are what you measure

Different people’s values mean that what they consider important and unimportant will vary and that is fine and healthy. The challenges with measurement are the consequences of measuring and how people’s behavior changes in response to the measure.

To take a non-software example, the New Zealand education system places a strong emphasis on NCEA achievement, which has translated into students being encouraged to take easier courses and teachers being encouraged to teach towards the tests. In this case the goal of giving students the best high school education has been subverted by a measurement which effectively demands certain pass rates.

The classic example in software development is measuring lines of code. Lines of code is a basic metric for measuring the overall size and therefore likely cost of learning and maintaining a code base. It is an appalling measure of programmer productivity: good programmers will write less code through reuse; refactoring may end up removing code altogether; and on the other hand, readability is far more important than concision.

Thankfully I believe the industry is well past measuring productivity by LoC, or even the highly amorphous function points. However the beast is far from slain, for instead we have story points and velocity.

Agile Software Development

Agile Software Development, according to Dave Thomas, author of The Pragmatic Programmer and co-author of The Manifesto for Agile Software Development, can be summarized by this process:

  • find out where you are
  • take a small step towards your goal
  • adjust your understanding based on what you’ve learned
  • repeat

And when faced with alternatives that deliver similar value, take the path that makes future changes easier.

This is very idealistic and quickly crashes into commercial reality, where managers, usually on behalf of customers, want to know: when will it be ‘done’ and what will it cost? Of course, this ignores all the benefits of learning-as-we-go, Lean style (which is essentially the same thing as agile software development but applied to business), and that you get much better, albeit far less predictable-at-the-outset, outcomes than with any upfront-planning-based process. But we can’t really ask everyone to be rational, can we?

Nevertheless, marketing release dates and the like meant we had to invent ways to measure progress and estimate ‘completion’ (I keep using inverted commas because I think we all know that done or complete are very subjective terms). And so Agile (sorry Dave T, I’m going to be using it as a noun) planning evolved from concepts of managing risk and uncertainty via loose estimation in Agile Estimating and Planning to full-blown methodologies that are so militaristic they require specialized commanders like Scrum Masters.

A plague of story points

And here’s where I feel agile software development goes wrong. The people involved are so invested in the process they forget the actual goals of their organization or of agile software development. Having the ‘right’ ceremonies and getting the points right become the focus. More significantly, people become concerned with the consequences of their measurement, so they will avoid having a high-scoring sprint because it’ll increase expectations on their future performance (and by this stage the team probably isn’t feeling all that empowered, but that’s another story).

So now the process is about having accurate estimates, and consistent or slightly growing measurements, regardless of the impact on the delivered product. Because although it might be possible to explain to your manager that your productivity (as measured by story points) has bombed in the last month because you decided to refactor X in order to speed up lots of expected future work, by the time it’s aggregated to their manager and so on, that nuance is lost. And now that manager is getting shafted based on that measurement which doesn’t actually reflect whether or not your team is doing a good job.

My favorite Agile

The first time I ‘did agile’ was almost by accident. We had a three-person development team working on a product, and a product manager who had a three-page Word table with a prioritized list of well broken-down features. And every fortnight, we wrote down on a whiteboard what, from the list, each of us was going to work on and how many days we thought it would take. If something needed re-prioritizing, the product manager would come in (at any time) and we’d change what we were doing and update the whiteboard.

The point is that we were focused on delivering the outcomes that the business wanted almost as soon as it knew it wanted them. Sometimes we’d be asked to have a bit of a guess at how long half a page of priorities might take, leading to a 6-8 week kind of estimate. But all parties also understood that estimates were exactly that and things might change, both in terms of time taken, and in terms of what was critical to get done. Unfortunately I don’t believe this approach really scales, and it requires serious buy-in from stakeholders (despite all the evidence of the value of Agile/Lean approaches).

Conclusion

As is normal for these drawn-out discussion posts, I can’t conclude with ‘the answer’ – and there are a lot of people out there who’ve spent a lot of time trying to find ‘the answer’ and haven’t found one.

What I am confident of is that measurements can’t show nuance and they subvert the behavior of what they intend to measure. So it’s incredibly important to continually reflect on whether your measurements, and their driving processes, are serving you well or whether people are now just optimizing for that measurement at the expense of actually achieving things.

I understand that an organization needs to gauge how it’s performing – whether it can be more productive, achieve different goals, eliminate waste. To do this it needs concise explanations of whether it is meeting relevant sub-goals. But the consequence of this concision is a loss of nuance that sands off the random edges that create effectiveness.

Workplace Flexibility

There’s a risk with this post that I’m going to shoot myself in the foot, but I believe in being as open and honest as possible, so I’m going to share my thoughts on good working environments, particularly remote ones.

This comes about because I’m trying to find some flexible, part-time, and remote work. I’ll come to each of these in turn during this post, but to start with, why am I looking?

I’ve been working as the technical partner in a small self-funded team developing a new product, and the level of product development required fluctuates. At present we’re going through a cycle of taking what we’ve learned, realizing our existing strategy isn’t going to work, and changing direction to suit. This is healthy – it takes time to learn from the market what is really needed (especially with large prospective customers, where it’s often a month from scheduling a meeting to having it) and to understand the costs, risks, and rewards of different commercial options. While I think it’s fair to say we haven’t done a good job of ‘failing fast’ so far, we continue to figure out what we can sell to the market and target the product to suit.

The end result is that I need to periodically attend meetings in another city and push forward with product development, but ultimately have time on my hands that I’d like to put to use somewhere. So I’ve been looking for flexible, part-time, remote work.

Silicon Valley Culture

Nothing turns me off in a job ad more than the term “Silicon Valley Culture”. Silicon Valley is very exciting in terms of getting VC money, but I remain unconvinced it’s a good place to work. If there is one word I’d use to describe what I’ve seen and heard about working in Silicon Valley as a developer, it’s insular.

Insular

What! How could it be insular to be working with so many great technical minds!

When birds of a feather work together there’s a strong chance they will create a nest to suit their flock. The Silicon Valley stereotype is the cloistered geek. How can someone so insular understand real-world problems, when being at the office 12 hours a day means they hardly experience the real world? Where do they cross paths with tradespeople, nurses, children, (non-IT) engineers; or have experiences requiring empathy; or have a multi-faceted political discussion that doesn’t end up with people not speaking to each other (i.e. the real-world equivalent of unfriending)?

Face Time

Which brings me to my next point: Silicon Valley Culture values hours in the office – free lunches, and dinners, and we’ll bring a masseuse on site, and… – basically, we’ll do anything cheap (relative to your salary) to keep you in the office, because (we assume) if you’re in the office then you’re making progress, and we don’t pay any extra for that.

If you’re a twenty-something with no partner, family, or life, that’s great. If you’re anything else, forget it. Silicon Valley Culture is a big red flag that says “you may not have a family”, “you may not have a life (outside work)”. I’ve already addressed how this makes you an insular person, but it’s also terrible for productivity.

Productivity

Anyone who has done 6 hours of serious development in a day – by which I mean uninterrupted building or debugging of some significant chunk of code twisting its way through layers of the application – knows that afterwards your brain will be dead. Development was once described to me as sitting two three-hour exams a day, and there have been plenty of days where I’d agree with that. Encouraging (or worse, measuring) time in the office means that hours beyond those six are a waste of both my time and the organization’s, and we both resent having our time wasted. And the more overtime goes on, the more it flows into people’s personal lives, and into the next day, and so on until the developer is just burnt out. I’ve been there – multiple times – and it’s not always easy to swing back from.

Don’t believe me? Then go and read chapter nine of Slack, or search the index of Peopleware for overtime. We’ve known for generations the hours people can repeatedly handle without it being detrimental. I’m not sure why Silicon Valley Culture hasn’t figured it out.

Consequent Demographics

Developers have an average age of 30-32. Assuming a constant working-age population between 20 and 70, the average should be 45. Certainly some developers will become managers or analysts, and this is still a new industry so we’d expect to undershoot the average, but by 15 years (60%)? Is it possible that Silicon Valley Culture makes being an ‘old’ developer a problem? The culture is certainly incompatible with having a family. It’s also incompatible with people with more life experience – people who have reached that point where the bottom four levels of Maslow’s Hierarchy are met and realize there’s an awful lot more to life than work, especially if work isn’t able to allow them to fulfill their potential.

Silicon Valley Culture also implies a boys’ club. Look at the developers in the Silicon Valley TV program: young single guys. There have also been long-standing issues with sexism. So it’s not surprising that the earlier-referenced survey put the percentage of female developers at around 10%. Given the stereotypes, the sexism, the family-unfriendliness (at the risk of being lambasted, mums are still more likely to stay at home with the kids than dads), we shouldn’t be surprised at that figure.

In short, Silicon Valley Culture is a terrible culture. If that’s how you describe your organization you are not going to get wise or mature developers.

[Photo: Here’s a picture of a baby to break up the text. It in no way reflects how Silicon Valley Culture treats developers.]

Part-Time

Why does everyone want full-time staff?

My gut reply, thinking as an employer, is that I want commitment. But I’m going to step deeper into this and ask: why do I want commitment? What does being committed mean in an organization?

Commitment

The first commitment given by an employee comes when they sign an agreement saying “I’m committing this much time into your organization in return for compensation which reflects that commitment”.

Hiring someone comes with an on-boarding cost, and to maximize the return the organization wants the employee to be useful as quickly as possible, which can only be achieved through time and experience. In that scenario being full-time will reach this goal faster (in terms of calendar days), although the cost to reach a certain experience level won’t change. Depending on the role, this ‘calendar time to usefulness’ may or may not be a factor. For instance, domain-knowledge-intensive roles like architecture and product management often have much longer on-boarding periods than development, making the additional time delay of part-time too big an opportunity cost for the organization.

From another perspective, most new hires describe their early weeks as “drinking from the fire-hose”. I imagine if less is drunk per day, because a part-time day is shorter, more of it will be retained.

The second commitment is the level of energy or zeal the employee chooses to bring to their job. This commitment is a function of finding a personality which can engage with the organization and then providing them the environment that makes them want to engage (rather than just turn up). I imagine that once hours drop considerably the employee may find it hard to really engage; but conversely, someone working fewer hours may have more energy to bring. So I conclude that being part-time (say in the 20+ hours/week range) wouldn’t have a significant impact on this kind of commitment.

Focus Time

Earlier I mentioned that development can be brain-intensive, and that beyond a certain level the productivity of each extra hour diminishes quickly. So why pay for those hours? My experience has been that people working shorter days tend to plan better and be more focused, and I estimate that someone working 5-hour days probably gets 7 hours’ worth of work done, i.e. you’re paying for 25 hours of time per week and getting 35 hours’ worth of productivity (all else being equal).

At this point hiring full-time by default seems like a tradition more than anything, a tradition that is worth re-examining.

Flexibility

The concept of a two-parent family where only one parent works is history. The rise of day-care and after-school care has made that quite clear (I’m not entirely comfortable with this concept from a sociological perspective, but each to their own, and I digress…). This means there are an awful lot of experienced workers out there who have to juggle family and work. If you make that juggling difficult then they can’t work for you because, like it or not, family ultimately comes first.

For my part, I have pre-school and school-age children, so between my wife and me someone needs to be home by 14:30 to pick them up and look after them. Once we’re home and they’re fed, I can typically resume whatever I was doing earlier. Sometimes they’re sick and someone needs to be home with them, but usually they’re just quietly sleeping and there is little to impede working. A strict “X hours a week in the office” contract doesn’t cope with these scenarios. It is, like default-to-full-time, a hangover from bygone days. The 21st-century workplace requires flexibility, and full credit to New Zealand, which does enshrine this concept in law. But regardless of legislation, flexibility is something organizations should embrace because it’s better for them.

Being flexible with hours:

  • increases the talent pool available to you;
  • tends to result in breaking up the workday, which makes for fresher and more productive minds;
  • allows staff to manage their creativity, meaning you’re not paying for mental downtime;
  • creates a feeling of mutual respect and reciprocity, which means asking for a little urgent work outside of hours is a fair exchange, rather than leaving the employee feeling that they gave their time for free.

When combined with remote work it opens up opportunities to access the global talent pool. For instance I’ve applied for several jobs in the EU, despite being in New Zealand. This would work out nicely for me because my wife is home in the evenings and I can be available from 7pm-midnight three nights a week. That’s 15 hours of overlap a week (give or take daylight saving). Equally if I applied in the Americas then I can work early mornings or Saturday (American Friday) because it’s outside standard NZ working hours.

[Photo: Babies are flexible. Is your organization better than a baby?]

Remote

Have you ever worked in an open-plan office that nobody complained about? Depending on who you ask, they are too hot, cold, noisy, distracting, constrained, or impersonal. What they are is cheap, and they allow poor managers to sit watch upon their domain (micromanagement). It has been clear for years that working environment affects productivity, a result which continues to be reinforced.

If you’re like me and need quiet and a means to control distractions, then working remotely is bliss. I recall that when I first started working remotely my productivity immediately doubled. Thankfully that was in an organization with a good remote culture, because having remote staff requires a level of organizational discipline: inclusive communication and decision-making processes, something that makes everyone happier. For this to work, leaders must ensure communication happens in public forums and decisions are reached by an inclusive process. This is good business practice anyway, but with remote people it is more important because it is harder for them to see when they are being left out.

Essentially a remote organization must revolve around a text/audio/video chat application like Slack or HipChat. People share their thoughts in writing for everyone to think about and provide feedback on. In this shared space all voices can be heard, so people are informed and decisions are made inclusively. These tools can also be controlled so that people are not disturbed when they don’t need to be and can thus focus on the task at hand.

Agile Software Development talks about the importance of information ‘convection currents’, i.e. the information accidentally shared by people in proximity hearing each other. This is something that is lost with remote workers. It also talks about ‘drafts’, the information wafting about which is completely irrelevant or distracting, and in my experience the drafts tend to outweigh the useful information. The beauty of a remote work culture is that information sharing is recorded in text, available to anyone who is interested, and, crucially, searchable. As a result that information is available to everybody, not just those who happened to be in earshot and paying attention at the time.

One concern with a remote team is that remote workers might be slacking off on company time. I don’t buy this: it’s usually pretty obvious whether a developer is contributing at the level expected simply by looking at their commits and documentation.

So yes, remote workers require a culture shift, but it’s a positive one, and it opens up huge benefits of being able to access talent pools well beyond your current city.

Conclusion

Despite acknowledging that the nature of work is changing, our workplaces seem very slow to catch up, especially given the benefits of wider talent pools and increased staff happiness and productivity that part-time, flexible, and remote work (both independently and together) create.

So if you are interested… I’m flexible if you’re flexible. I can legally work in New Zealand, Australia, United Kingdom, and Europe (at least until Brexit goes through, if it goes through), and I’m sure contract terms can be worked out elsewhere. You can get an approximation of my expertise from this blog and the about page, and I will say I’m a generalist and proud of it! 🙂

You can contact me through the contact page.