‘Professionalism’

I can’t stand the term ‘professionalism’. Whenever I hear it, my BS-detector jumps straight to red alert, because it’s a way of shutting people down with a power play or an appeal to tradition rather than positively engaging with them.

Unlike, say, law, where there are defined professional standards, professionalism in the software industry is highly subjective. Take for example a relatively simple aspect of development like commenting. A more traditional perspective on commenting is that every method (i.e. not just the public ones) should have comments indicating what the method does and the purposes of its parameters and return values. My concept of quality code is that clear naming, good cohesion, and good design will provide that information – the ‘what’, if you will – and comments should be reserved for the ‘why’, i.e. to explain decisions, whether design-level or business-level, that are not intuitive. There are tomes on this subject and ongoing debates about the discipline of maintaining documentation versus letting the current clean code speak for itself. Regardless of where you fall in this debate, sweeping it all aside by calling one approach ‘professional’ and the other not removes the opportunity for reasoned debate, for improved understanding, and for finding the most appropriate solution for the specific circumstances.
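To make the ‘what’ versus ‘why’ distinction concrete, here is a minimal sketch in Java (the class, method, and billing rule are entirely hypothetical): a ‘what’ comment would only restate what clear naming already says, while a ‘why’ comment records a decision the code cannot express on its own.

    import java.util.List;

    public class InvoiceCalculator {

        // A 'what' comment here ("loops over the line amounts and adds them up")
        // would only repeat what the naming already tells the reader.
        double total(List<Double> lineAmounts) {
            double sum = 0;
            for (double amount : lineAmounts) {
                sum += amount;
            }
            // 'Why' comment: totals are rounded to the nearest cent because the
            // (hypothetical) billing provider rejects sub-cent precision.
            return Math.round(sum * 100) / 100.0;
        }
    }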

The net result of invoking ‘professionalism’ is disempowerment. If you shut down a discussion on the basis of this power play then you’ve rejected a peer’s opinion without consideration. They will likely resent this lack of consideration, and it certainly won’t encourage them to contribute further opinions. As is well documented in the literature on teamwork, it is the combination of perspectives and ideas that enables the best solutions to be found. In effect, by invoking ‘professionalism’ you’re throwing away the investment your company is making in expensive human resources.

This doesn’t mean it should be an anarchic free-for-all. Coding standards are fine: they create consistency that allows code to be more easily understood by unfamiliar parties. There can also be other factors at play like external forces which push decisions in a certain direction, and those should be the answer to questions of “why do we do it this way?” rather than the dismissive ‘professionalism’. Standards and practices that are built on solid reasoning will stand up to scrutiny, and scrutiny of them should be welcomed because if they don’t stand up to the challenge of re-examination then there is an opportunity to make an improvement within the organization.

‘Professional’ needs to become another corporate punchline, like ‘synergy’; that way we keep our people empowered and can keep improving our organizations.

Free and Open Source

I’ve never quite understood the appeal of FOSS (free and open source software) as a developer. There’s ego, to be sure; it also functions as self-marketing in many cases; it may be a chance to work on something more interesting than your day job; but often it’s just outright altruism in the form of sharing your solution to a hard problem.

The challenge is that once software is out there, whether as a library or an application, people may use it and even come to depend on it. This is particularly so with libraries that form parts of library ecosystems, such as Json.NET in the .NET world. At that point the software becomes ‘alive’ and, like a living creature, needs ongoing attention. However, for many a developer there was never any intention of becoming a parent, and instead they look for ways to pass on the child they are proud of but would rather not have to bring up (I might be stretching the analogy a little here…).

This was highlighted recently when the maintainer of a heavily used library opted to retire and the volunteer who took over allowed malicious dependencies into the library. Said maintainer raised a number of excellent points in his follow-up.

We have to eat

Commerce exists because giving things away means starving. Seriously! We’ve developed complex and highly specialized societies that enable people to focus their talents for significant gain for all of us, but we still ultimately need to fulfill basic human needs. In this context FOSS doesn’t seem rational. However, our actual contexts are more complex than that: we’ve generally met our basic needs through other means and are instead searching for self-actualization, and FOSS is one way of achieving it.

Altruism, or the appearance of it, is hardly a new phenomenon. People have been giving away the fruits of their day jobs throughout history. So what is different with FOSS, if anything?

There are two analogies that spring to mind: working pro bono and establishing a charity. Working pro bono implies a short-term service and doesn’t tend to have consequences after the transaction is finished, which is not analogous to putting software on the internet; establishing a charity, by contrast, is a more permanent endeavor.

To be charitable to a great many people you have to commit to running a non-profit organization with all its regulatory and societal demands. That’s a high bar to cross and demands significant organizational skills. By comparison, FOSS allows you to achieve that kind of scale of charity without any bar to entry at all, and so perhaps the problem is the unpreparedness of the creators.

FOSS can work, commercially

Continuing the charity analogy, an alternative to establishing a charity is to donate to an existing organization that manages your generosity for you (taking a cut). This is where FOSS has been most successful, ironically: as a tool for commercial organizations.

Open source is a lovely way of increasing demand for your product by reducing the cost of its complements. If, say, I’m Microsoft and I want to sell operating systems, then giving away for free the software that enables people to build products for said operating system is a solid business strategy. Furthermore, because Microsoft want their developers to be as happy as possible, they’ll also enable them to make their own fixes to that software (think .NET libraries) rather than be bound to Microsoft’s priorities and release cycle. On top of this, Microsoft now get the benefit of a whole lot of free development and QA. (I’ll admit I don’t understand Microsoft’s current strategy in this area with cross-platform .NET Core, as now they’re making it easier to run on their competitors’ products, whether that competitor is Linux or AWS.)

Something similar can be said for freely giving away APIs for Facebook and Google services, all of which solidify their market position and/or drive traffic to their products. It’s not quite so clear what benefits providing libraries like React brings, although I imagine one could make business cases around brand management via goodwill, or control of a complementary market.

There’s also the smaller, but at times successful, model of selling support for FOSS products, as is the approach of Red Hat. This works because enterprises tend to demand a strong level of support and well-known life-cycle models as part of making any significant investment, and they are quite happy to pay for that.

In all these cases an organization exists to tangentially profit from the existence of the charity, FOSS. Non-profits fit the charity model even more aptly, requiring some revenue to promote or maintain their charitable activities (I’ve effectively been paying $100 a year to use Firefox for quite some time now).

Alternatives

It’s clear that any piece of software requires ongoing support. So how can the supporter be supported?

One option is to take the F out of FOSS, and acknowledge the cost of providing the software. After all, FOSS is not just about lack of upfront cost: with open source the consumer has the option to fix or improve the software for their specific needs and not be at the mercy of a commercial organization’s release schedule. However, paying for open-source software would be a sociological change, creating a barrier to entry that hasn’t existed for several decades in the software world – one that would entrench incumbents, severely inhibit the proliferation of ideas, and remove the ability of people to lift themselves up without a great deal of capital.

Even if it were possible to make FOSS cost something, it wouldn’t deal with the underlying problem that sometimes people just want to move on. In commercial practice this means ‘selling the business’, but FOSS encompasses many vital but tiny pieces of software which are too small to create a single business around. Take for instance the library made by the maintainer mentioned in the introduction: it’s currently 352 lines of code plus packaging and test artifacts, and over the five years before 2018 it had around 30 issues logged. It’s hard to imagine a business built around just that.

However, you could have a business around lots of these. Is it possible to have a business that would take on well-used libraries when their creators no longer wish to maintain them? Such a business might be able to generate paid enterprise support contracts for its FOSS suite, and that enterprise oversight would also prevent the libraries from ‘going rogue’ – there may already be organizations like this out there. I’m quite sure there are plenty of software consultancies that will gladly enhance a library for a particular client – how big a stretch would it be for them to take over a library when its creator no longer wants to maintain it? Perhaps GitHub could create a marketplace for passing on control of libraries and thus make it easier for players on both sides of the library-maintenance business to come together.

There needs to be some industry maturity in the lifetime management of FOSS, particularly for small libraries, and my impression is the best option would be the formation of businesses that specialize in this, paid for through support contracts.

No Pure Ideology

Welcome to another essay reaching Brooks’ “No Silver Bullet” conclusion.

Genesis from Analogy

I’ve been reading Klein’s somewhat harrowing The Shock Doctrine and I found an interesting (to my twisted mind) analogy between the application of pure capitalism and the application of pure software concepts/paradigms/technologies, or as I have titled this, ideology.

Around 1990 the Cold War was over and capitalism had emerged as the triumphant economic ideology. But this capitalism, which had underpinned the great scientific and social achievements of the 1950s-1970s, was vastly more state-controlled than the ‘pure’ free-market capitalism espoused by Friedman, with which we are more familiar today. During the 1990s there were major attempts to achieve this pure capitalism in Russia and East Asia, and both were dramatic failures. The attempted conversion of Russia to capitalism was a disaster for all but a few scavenger capitalists, and even the IMF admitted that its approach to the East Asian financial crisis late in the decade significantly exacerbated what was a fairly minor problem. Since then, such unfettered free-market reforms – ‘pure’ capitalism – have been viewed with much more suspicion and caution, and it’s quite ironic that in the demise of its ideological enemy, communism, capitalism should lose such face.

My supposition is that no ideology when enacted in its purest form will be successful, as there will always be aspects of the target domain that the ideology doesn’t address effectively. Oddly enough most economic-political ideologies fail in the same area, which is the negative impact they have on the population, a population which typically has no input into the selection of the ideology. The most successful outcomes arise when ideologies are mixed and moderated, as in ‘Swedish socialism’ or the aforementioned success of the tightly controlled United States of the third quarter of the 1900s. I believe that this mixing of concepts to achieve success applies just as much to software development as it does to nation building.

Back to software development

When I started software development, object-orientation was the great new hope, heralded through its flagship, the recently released Java. Java, with its fairly pure object-oriented implementation, solved the problems of imperative languages like C and Pascal through encapsulation – the idea that data and the actions that manipulate it should be bound together and hidden behind some contract.

Except that Java sometimes made things more difficult. One aspect that has consistently bugged me is event handling, where Java demanded an extra object to implement an event interface, separate from the object that wanted to respond to the events. By comparison, C was happy with a function pointer (albeit hidden behind a typedef, because the type signatures for function pointers were not exactly intuitive).
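As a minimal sketch of what that extra object looks like in practice (using Swing’s ActionListener; the class and counter are hypothetical), the object that wants to respond can’t simply hand over a function – it has to supply a second, anonymous object that implements the event interface:

    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import javax.swing.JButton;

    public class ClickCounter {
        private int clicks;

        void wireUp(JButton button) {
            // Pre-lambda Java: an extra object implementing ActionListener is needed
            // just to route the event back to the enclosing object's state.
            button.addActionListener(new ActionListener() {
                @Override
                public void actionPerformed(ActionEvent e) {
                    clicks++;
                }
            });
        }
    }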

Another example of Java’s purity, although not related to object-orientation, was the need to declare all the exceptions a method could possibly throw. This meant that aggregating methods either needed a ridiculously unmanageable list of exception types appended to them, or every method was decorated with the base exception, making the whole thing pointless.
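Here’s a minimal sketch of that trade-off (the methods and exception mix are hypothetical): either every checked exception is re-declared all the way up the call chain, or the declaration collapses to the base type and tells callers nothing.

    import java.io.IOException;
    import java.sql.SQLException;

    public class ReportJob {

        // Option 1: re-declare every checked exception the callees can throw.
        void buildReport() throws IOException, SQLException, InterruptedException {
            loadTemplate();
            queryDatabase();
            waitForWorkers();
        }

        // Option 2: give up and declare the base exception, which tells callers nothing.
        void buildReportLazily() throws Exception {
            loadTemplate();
            queryDatabase();
            waitForWorkers();
        }

        void loadTemplate() throws IOException { /* read a file */ }
        void queryDatabase() throws SQLException { /* run a query */ }
        void waitForWorkers() throws InterruptedException { /* join threads */ }
    }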

C#, some five years later, was much more pragmatic about both of these cases, supporting function references (delegates) and keeping exception handling in the code body rather than in declarations. C# is a much less pure object-oriented language than Java, but (in my experience at least) is better for ‘getting things done’.

Another example

Functional programming as a pure ideology has struggled to get traction outside of academia. For all the expressive power of being able to write ‘what’ a program needs to do rather than ‘how’ to do it, in its purest form functional programming makes it surprisingly hard to do the most basic computational task: modify data. I’ll admit it’s been a while since I last got my head fully stuck into monads and lenses, but I recall finding them genuinely challenging to understand compared to, say, manual memory management and pointer manipulation.

In its attempt to bring functional programming into the mainstream, F# takes the non-pure approach of allowing explicit mutability (while defaulting to a functional style). Naturally this weakens the benefits of the pure paradigm, but it also allows people to get things done! Again, the pragmatic compromise provides a more workable solution, while the pure concept remains a curiosity.

Conversely, functional concepts, like list-processing, have made their way into other paradigms where their expressive power is a welcome respite from pure imperative constructs, like nested loops.
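Java’s streams are one example of that borrowing. A minimal sketch (the Order record is hypothetical, and assumes a recent Java version): the imperative version spells out the ‘how’ with nested loops and a mutable accumulator, while the stream version declares the ‘what’.

    import java.util.List;

    public class OrderTotals {

        record Order(String customer, List<Double> lineAmounts) {}

        // Imperative 'how': nested loops and a mutable accumulator.
        static double totalImperative(List<Order> orders) {
            double total = 0;
            for (Order order : orders) {
                for (double amount : order.lineAmounts()) {
                    total += amount;
                }
            }
            return total;
        }

        // Functional 'what', borrowed into an imperative language via streams.
        static double totalDeclarative(List<Order> orders) {
            return orders.stream()
                    .flatMap(order -> order.lineAmounts().stream())
                    .mapToDouble(Double::doubleValue)
                    .sum();
        }
    }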

Conclusion

Whether it’s micro-services, CQRS, REST, TDD, or any other of the myriad of software concepts, the costs of a pure implementation of a concept always outweigh the benefits. This is because, like our political ideologies, each concept focuses on a specific type of problem – its core domain – and simply can’t handle all the situations that real software requirements will throw at it. Every concept has its pros and cons, problems where it is a good fit and places where it isn’t. To solve problems effectively we must understand which options best fit a given situation and how a combination of concepts can be applied to achieve the desired end in a quick, inexpensive, and good-quality fashion. Or as Brooks put it, there is no silver bullet.

Criticizing the Past

Hindsight is supposedly 20/20, yet when it comes to appraising existing code and solutions we often run our mouths with the blinders on.

Existing code nearly always seems poor because when people look at code they look with their current perspective, not with the perspective of the time it was written. As the writer, we’re likely to have learned new coding techniques or learned more about the domain, and had we known those things we’d have chosen different approaches. As the reader, we seldom understand the constraints that were in place when that code was created, i.e. considerations like time-to-market, customers demanding fixes, and the time and resources available to developers for research.

Human Factors

It is important that we evaluate past decisions – we can’t learn without analysis – and very often it is necessary in order to allow changes to the solution. Trouble arises because we’re human and have feelings, and don’t particularly enjoy being criticized, even in the pursuit of learning.

One solution to the human problem, egoless programming (summarized here), seems like a pretty healthy concept. However, it essentially asks people to be detached from what they’ve achieved, which to me seems like a recipe for mediocrity. I’m from the software craftsmanship camp, where people take pride in the work they’ve done (while still accepting that nobody’s perfect); and people take better care of the things they have pride in.

Feelings exist, so how do we manage them? Egoless programming highlights a key message that is surprisingly often forgotten: “Critique code instead of people”; or, to use the sporting metaphor I prefer, “play the ball, not the player”. At times it seems like people forget that other people are in the room when evaluating old code – rather than “this doesn’t make sense to me”, it’s “this is spaghetti-tangled rubbish” or “what idiot put this together?”. At some point our frustration at trying to understand something turns into an utter lack of tact, and as peers we shouldn’t tolerate that – we should be quick to point out both the potential for hurting feelings and our own limitations.

Reader’s Perspective

As mentioned in the introduction, anyone reading code or analyzing solutions brings with them their perspectives based on their knowledge and experience. When that experience doesn’t include the relevant business constraints, their evaluation can sound horrible – like the coding equivalent of saying “you’d be much better off if your parents had paid for a better school” to someone from a poor background – circumstances matter!

Our experience can also bias us towards solutions that we understand, and make us uncertain of different approaches. It seems anachronistic to have to say this in the era of equality, but just because something is different doesn’t mean it’s wrong! And we don’t learn anything by sticking to what we know!

Instead, solutions should be considered and appraised for what they are, while being considerate of circumstances. For instance, I find the stack depth that results from the design-pattern-heavy approach used in Java quite frustrating to debug. But I also appreciate that Java was envisaged as a pure object-oriented language, which places some significant constraints on what the language can do and in particular excluded first-class functions. Rather than saying anything too negative about Java, the lesson I’d take from this is that it’s good to be pragmatic rather than ideological when designing a language.

Conclusion

Don’t get me wrong – I haven’t written this from the perspective of a saint. I’ve been on both sides of the scenarios I’ve mentioned. The key is that I’ve learned from those experiences and hope by sharing that others can learn too.

Finally, don’t forget the lesson of self-deprecating humor: criticizing your own code in negative terms is fine; criticizing someone else’s code in those terms is not.

Hype

I like shiny new things.

It’s pretty universal that we humans are attracted to what is new, the latest-and-greatest, whether out of the likes of curiosity, restlessness, or envy. On top of this is layered a positive feedback cycle – our desire to be part of the group and be in on the topics of conversation – that reinforces the shiniest and most popular objects out there. These can be called fads or trends, and at times they are rightfully the center of attention, but when the reality can’t live up to the expectation then we’ve entered the world of hype.

It has been very clear right from my first days at university that ongoing professional development is essential to maintaining employability, and few industries move as fast as software development. Learning about new technologies by playing with them a little is both healthy and important, and gives us the ability to make better choices about them in our professional careers. What I want to focus on, however, is when businesses succumb to the hype.

A business becomes a victim of hype when it adopts a new technology for the sake of adopting it rather than considering whether it adds business value. Lest I sound like a stick-in-the-mud, there are myriad ways that new technologies can add business value: they may make development faster, more flexible, or smoother to integrate with other enabling products; they may simply have lower costs, whether by virtue of licensing, labour, or lifetime/support costs; they may make the company more attractive to potential employees.

But a new technology also comes with costs: time for individuals to come up to speed with the technology and ecosystem; reduced ability to deliver product, and therefore to respond to organizational needs and competitive threats (Joel Spolsky described rewriting your codebase as the “single worst strategic mistake that any software company can make”); and staff turnover as specialists opt to take their existing expertise elsewhere, while replacements require extensive training in the organization’s domain.

Failing to evaluate the costs and benefits of adopting new technology, and/or planning how it is adopted, can seriously damage a business. And yet it still happens, and I posit this is partially to do with our pursuit of the new and shiny, but equally important is the ‘resume factor’, which brings us to the next section…

Recruitment

Hype is driven by recruitment. This is more so in IT than in many other industries, where professional standards or outright experience tend to carry far more weight. In IT there remains a prejudice that, because the industry moves so fast, existing practitioners are more likely to be lagging behind the forefront of the industry.

Firstly, this is a poor assumption. Whilst I have seen some individuals flat-line their professional development, the vast majority of people I’ve worked with are doing what they can to keep up with the changes and trends in the industry, both in their work and spare time. Secondly, with experience comes wisdom, and we need that to grow as an industry or we will keep making the same mistakes.

Finally, what behaviour is this going to drive in a mature developer? They’re going to put their weight behind adopting the new technology. As altruistic as we like to try to be, there is always a tension between career needs and business needs, and if there is an opportunity to improve their career prospects, employees are likely to favor that over overall business value, because they can move on more easily.

Is Newer Better?

In situations where you’re talking about replacing some part of an existing codebase, then a cost-benefit analysis has to compare the existing technology with the new option(s). And the first question that should come up is, is it better?

I know that sounds both obvious and a little stupid, but hype has a way of making the decision-making process turn a bit stupid at times. Perhaps I should clarify by adding the implied rest of the sentence: is it better for your organization?

At present my web framework of choice is Aurelia. Whilst it isn’t the best known, it has solid support, ongoing development, and just makes things easy. But I need to keep up with what is going on in the industry so I had a play with React, and have subsequently met it more (reading rather than writing) in my current job. Based on my needs, I feel like React is hype. Why (in my eyes)?… The way state and event management works is a big step back from the fairly seamless data-binding of Aurelia or Angular or most non-web UI frameworks, and it requires quite particular data flows for props. Whilst it doesn’t require Redux, I’m not sure I’ve ever seen a job ad for React that doesn’t include Redux, and Redux is massive overkill. I’m not sure I’ve ever needed to centralize state like that in a single page application, nor manage it in a pseudo-CQRS style. If I was building something as interactive as a spreadsheet then Redux might be handy, but the majority of web applications are still focused on one UI element at a time and can fetch data on demand without it really interrupting the UX. What React does very well is componentize – it certainly feels more natural to make components in React than Aurelia.

In short, my cost-benefit analysis says that Aurelia is better for my purposes than React. Decisions are always made with limited information, and people with more knowledge of React would certainly disagree with my assessment. And this is the point – my environment, background, and needs are different from theirs, so we should reach different conclusions.

Conclusion

I want to work with things that are new because I like to learn things, but I also work for a business that needs sound decision making that considers far more than ‘what’s cool’. As software professionals we have to keep on learning so we can provide and evaluate all the options to the businesses that we are involved with, but we also have to be able to step back from the hype and make sound technology decisions based on our environment and the good of the business.

Measurement and Agile Software Development

Introduction

I’m going to start this politically, but I promise it’ll get to software development. The trigger for this scribbling of thoughts was an article discussing the under-funding of many areas of the public sector and the quote from the finance spokesperson for New Zealand’s recently-ousted opposition party: “… the government should be thanking [the] National [party] for inheriting such a strong economy.” And it struck me that economic performance was the sole benchmark by which they gauged success. In reality, the country is vastly more complex than one set of economic indicators, and different people have very different perspectives on what constitutes success.

The ‘duh’ disclaimer

As I’ve said in some previous articles, none of this will be new to anyone who has spent, studied, or even thought about management. And it certainly isn’t the first time I’ve thought about it, but the above article engaged some dormant mental spirit to write things down 🙂

You are what you measure

Different people’s values mean that what they consider important and unimportant will vary and that is fine and healthy. The challenges with measurement are the consequences of measuring and how people’s behavior changes in response to the measure.

To take a non-software example, the New Zealand education system places a strong emphasis on NCEA achievement, which has translated into students being encouraged to take easier courses and teachers being encouraged to teach towards the tests. In this case the goal of giving students the best high school education has been subverted by a measurement which effectively demands certain pass rates.

The classic example in software development is measuring lines of code. Lines of code is a basic metric for measuring the overall size and therefore likely cost of learning and maintaining a code base. It is an appalling measure of programmer productivity: good programmers will write less code through reuse; refactoring may end up removing code altogether; and on the other hand, readability is far more important than concision.

Thankfully I believe the industry is well past measuring productivity by LoC, or even the highly amorphous function points. However the beast is far from slain, for instead we have story points and velocity.

Agile Software Development

Agile Software Development, according to Dave Thomas, co-author of The Pragmatic Programmer and of The Manifesto for Agile Software Development, can be summarized by this process:

  • find out where you are
  • take a small step towards your goal
  • adjust your understanding based on what you’ve learned
  • repeat

And when faced with alternatives that deliver similar value, take the path that makes future changes easier.

This is very idealistic and quickly crashes into commercial reality, where managers, usually on behalf of customers, want to know: when will it be ‘done’ and what will it cost? Of course, this ignores all the benefits of learning-as-we-go, Lean style (which is essentially agile software development applied to business), and the fact that you get much better, albeit far less predictable-at-the-outset, outcomes than with any upfront-planning-based process. But we can’t really ask everyone to be rational, can we?

Nevertheless, marketing release dates and the like meant we had to invent ways to measure progress and estimate ‘completion’ (I keep using inverted commas because I think we all know that ‘done’ and ‘complete’ are very subjective terms). And so Agile (sorry Dave T, I’m going to be using it as a noun) planning evolved from concepts of managing risk and uncertainty via loose estimation in Agile Estimating and Planning to full-blown methodologies so militaristic they require specialized commanders like Scrum Masters.

A plague of story points

And here’s where I feel agile software development goes wrong. The people involved are so invested in the process they forget the actual goals of their organization or of agile software development. Having the ‘right’ ceremonies and getting the points right become the focus. More significantly, people become concerned with the consequences of their measurement, so they will avoid having a high-scoring sprint because it’ll increase expectations on their future performance (and by this stage the team probably isn’t feeling all that empowered, but that’s another story).

So now the process is about having accurate estimates, and consistent or slightly growing measurements, regardless of the impact on the delivered product. Because although it might be possible to explain to your manager that your productivity (as measured by story points) has bombed in the last month because you decided to refactor X in order to speed up lots of expected future work, by the time it’s aggregated to their manager and so on, that nuance is lost. And now that manager is getting shafted based on that measurement which doesn’t actually reflect whether or not your team is doing a good job.

My favorite Agile

The first time I ‘did agile’ was almost by accident. We had a three-person development team working on a product, and a product manager who had a three-page Word table with a prioritized list of well broken-down features. Every fortnight we wrote down on a whiteboard what, from the list, each of us was going to work on and how many days we thought it would take. If something needed re-prioritizing, the product manager would come in (any time) and we’d change what we were doing and update the whiteboard.

The point is that we were focused on delivering the outcomes that the business wanted almost as soon as it knew it wanted them. Sometimes we’d be asked to have a bit of a guess at how long half a page of priorities might take, leading to a 6-8 week kind of estimate. But all parties also understood that estimates were exactly that and things might change, both in terms of time taken, and in terms of what was critical to get done. Unfortunately I don’t believe this approach really scales, and it requires serious buy-in from stakeholders (despite all the evidence of the value of Agile/Lean approaches).

Conclusion

As is normal for these drawn-out discussion posts, I can’t conclude with ‘the answer’ – and there are a lot of people out there who’ve spent a lot of time trying to find ‘the answer’ and haven’t found one.

What I am confident of is that measurements can’t show nuance, and they subvert the behavior they intend to measure. So it’s incredibly important to continually reflect on whether your measurements, and the processes driving them, are serving you well, or whether people are now just optimizing for the measurement at the expense of actually achieving things.

I understand that an organization needs to gauge how it’s performing – whether it can be more productive, achieve different goals, eliminate waste. To do this it needs concise explanations of whether it is meeting relevant sub-goals. But the consequence of this concision is a loss of nuance that sands off the random edges that create effectiveness.

Workplace Flexibility

There’s a risk with this post that I’m going to shoot myself in the foot, but I believe in being as open and honest as possible, so I’m going to share my thoughts on good working environments, particularly remote ones.

This comes about because I’m trying to find some flexible, part-time, and remote work. I’ll come to each of these in turn during this post, but to start with, why am I looking?

I’ve been working as the technical partner in a small self-funded team developing a new product, and the level of product development required fluctuates. At present we’re going through a cycle of taking what we’ve learned, realizing our existing strategy isn’t going to work, and changing direction to suit. This is healthy – it takes time to learn from the market what is really needed (especially with large prospective customers, where it’s often a month from scheduling a meeting to having it) and to understand the costs, risks, and rewards of different commercial options. While I think it’s fair to say we haven’t done a good job of ‘failing fast’ so far, we continue to figure out what we can sell to the market and target the product to suit.

The end result is that I need to periodically attend meetings in another city and push forward with product development, but ultimately have time on my hands that I’d like to put to use somewhere. So I’ve been looking for flexible, part-time, remote work.

Silicon Valley Culture

Nothing turns me off in a job ad more than the term “Silicon Valley Culture”. Silicon Valley is very exciting in terms of getting VC money, but I remain unconvinced it’s a good place to work. If there is one word I’d use to describe what I’ve seen and heard about working in Silicon Valley as a developer, it’s insular.

Insular

What! How could it be insular to be working with so many great technical minds!

When birds of a feather work together there’s a strong chance they will create a nest to suit their flock. The Silicon Valley stereotype is the cloistered geek. How can someone so insular understand real-world problems, when being at the office 12 hours a day means they hardly experience the real world? Where do they cross paths with tradespeople, nurses, children, or (non-IT) engineers; have experiences requiring empathy; or have a multi-faceted political discussion that doesn’t end up with people not speaking to each other (i.e. the real-world equivalent of unfriending)?

Face Time

Which brings me to my next point: Silicon Valley Culture values hours in the office – free lunches, and dinners, and we’ll bring a masseuse on site, and… – basically, we’ll do anything cheap (relative to your salary) to keep you in the office, because (we assume) if you’re in the office then you’re making progress, and we don’t pay any extra for that.

If you’re a twenty-something with no partner, family, or life, that’s great. If you’re anything else, forget it. Silicon Valley Culture is a big red flag that says “you may not have a family”, “you may not have a life (outside work)”. I’ve already addressed how this makes you an insular person, but it’s also terrible for productivity.

Productivity

Anyone who has done 6 hours of serious development in a day – by which I mean uninterrupted building or debugging of some significant chunk of code twisting its way through layers of the application – knows that afterwards your brain will be dead. Development was once described to me as sitting two three-hour exams a day, and there have been plenty of days where I’d agree with that. Encouraging (or worse, measuring) time in the office means that hours beyond those six are a waste of my time and the organization’s, and we both resent having our time wasted. And the more overtime goes on, the more it flows into people’s personal lives, and into the next day, and so on until the developer is just burnt out. I’ve been there – multiple times – and it’s not always easy to swing back from.

Don’t believe me? Then go and read chapter nine of Slack, or search the index of Peopleware for overtime. We’ve known for generations the hours people can repeatedly handle without it being detrimental. I’m not sure why Silicon Valley Culture hasn’t figured it out.

Consequent Demographics

Developers have an average age of 30-32. Assuming a constant working-age population between 20 and 70, the average should be 45. Certainly some developers will become managers or analysts, and this is still a new industry so we’d expect to undershoot the average, but by 15 years (60%)? Is it possible that Silicon Valley Culture makes being an ‘old’ developer a problem? The culture is certainly incompatible with having a family. It’s also incompatible with people with more life experience – people who have reached that point where the bottom four levels of Maslow’s Hierarchy are met and realize there’s an awful lot more to life than work, especially if work isn’t able to allow them to fulfill their potential.

Silicon Valley Culture also implies a boys’ club. Look at the developers in the Silicon Valley TV program: young single guys. There have also been long-standing issues with sexism. So it’s not surprising that the earlier-referenced survey put the percentage of female developers at around 10%. Given the stereotypes, the sexism, the family-unfriendliness (at the risk of being lambasted, mums are still more likely to stay at home with the kids than dads), we shouldn’t be surprised at that figure.

In short, Silicon Valley Culture is a terrible culture. If that’s how you describe your organization you are not going to get wise or mature developers.

Here’s a picture of a baby to break up the text. It in no way reflects how Silicon Valley Culture treats developers.

Part-Time

Why does everyone want full-time staff?

My gut reply, thinking as an employer, is that I want commitment. But I’m going to dig deeper into this and ask: why do I want commitment? What does being committed mean in an organization?

Commitment

The first commitment given by an employee comes when they sign an agreement saying “I’m committing this much time into your organization in return for compensation which reflects that commitment”.

Hiring someone comes with an on-boarding cost, and to maximize their return the organization wants the employee to be useful as quickly as possible, which can only be achieved through time and experience. In that scenario being full-time will reach this goal faster (in terms of calendar days), although the cost to reach a certain experience level won’t change. Depending on the role, this ‘calendar time to usefulness’ may or may not be a factor. For instance, domain-knowledge-intensive roles like architecture and product management often have much longer on-boarding periods than development, making the additional time delay of part-time too big an opportunity cost for the organization.

From another perspective, most new hires describe their early weeks as “drinking from the fire-hose”. I imagine if less is drunk per day, because a part-time day is shorter, more of it will be retained.

The second commitment is the level of energy or zeal the employee chooses to bring to their job. This commitment is a function of finding a personality which can engage with the organization and then providing them the environment that makes them want to engage (rather than just turn up). I imagine that once hours drop considerably the employee may find it hard to really engage; but conversely, working fewer hours, they may have more energy to bring. So I conclude that being part-time (say in the 20+ hours/week range) wouldn’t have a significant impact on this kind of commitment.

Focus Time

Earlier I mentioned that development can be brain-intensive, and that beyond a certain level the productivity of each extra hour diminishes quickly. So why pay for those? My experience has been that people working shorter days tend to plan better and be more focused, and I estimate that someone working 5-hour days probably gets 7 hours worth of work done i.e. you’re paying 25 hours worth of time per week for 35 hours worth of productivity (all else being equal).

At this point hiring full-time by default seems like a tradition more than anything, a tradition that is worth re-examining.

Flexibility

The concept of a two-parent family where only one parent works is history. The rise of day-care and after-school-care have made that quite clear (I’m not entirely comfortable with this concept from a sociological perspective, but each to their own, and I digress…). This means there are an awful lot of experienced workers out there who have to juggle family and work. If you make that juggling difficult for people then they can’t work for you because, like it or not, family ultimately comes first.

For my part, I have pre-school and school-age children, so between my wife and me someone needs to be home by 1430 to pick them up and look after them. Once we’re home and they’re fed, I can typically resume whatever I was doing earlier. Sometimes they’re sick and someone needs to be home with them, but usually they’re just quietly sleeping and there is little to impede working. A strict “X hours a week in the office” contract doesn’t cope with these scenarios. It is, like default-to-full-time, a hangover from bygone days. The 21st century workplace requires flexibility, and full credit to New Zealand, which does enshrine this concept in law. But regardless of legislation, flexibility is something organizations should do because it’s better for them.

Being flexible with hours:

  • increases the talent pool available to you;
  • tends to result in breaking up the workday, which makes for fresher and more productive minds;
  • allows staff to manage their creativity, meaning you’re not paying for mental downtime;
  • creates a feeling of mutual respect and reciprocity, which means asking for a little urgent work outside of hours is a fair exchange, rather than leaving the employee feeling that they gave their time for free.

When combined with remote work, it opens up access to the global talent pool. For instance I’ve applied for several jobs in the EU, despite being in New Zealand. This would work out nicely for me because my wife is home in the evenings and I can be available from 7pm to midnight three nights a week – that’s 15 hours of overlap a week (give or take daylight saving). Equally, if I applied in the Americas I could work early mornings or Saturday (American Friday), because those are outside standard NZ working hours.

Babies are flexible. Is your organization better than a baby?

Remote

Have you ever worked in an open-plan office that nobody complained about? Depending on who you ask, they are too hot, cold, noisy, distracting, constrained, or impersonal. What they are is cheap, and they allow poor managers to sit watch over their domain (micromanagement). It has been clear for years that working environment affects productivity, a result which continues to be reinforced.

If you’re like me and need quiet and a means to control distractions, then working remotely is bliss. I recall that when I first started working remotely my productivity immediately doubled. Thankfully that was in an organization with a good remote culture, because having remote staff requires a level of organizational discipline: inclusive communication and decision-making processes, something that makes everyone happier. For this to work, leaders must ensure communication happens in public forums and decisions are reached by an inclusive process. This is good business practice anyway, but with remote people it is more important, because it is harder for them to see when they are being left out.

Essentially a remote organization must revolve around a text/audio/video chat application like Slack or Hipchat. People share their thoughts in writing for everyone to think about and provide feedback on. In this shared space all voices can be heard, so people are informed and decisions are made inclusively. These tools can also be controlled so that people are not disturbed when they don’t need to be and can thus focus on the task at hand.

Agile Software Development talks about the importance of information ‘convection currents’, i.e. the information accidentally shared by people in proximity hearing each other. This is something that is lost with remote workers. It also talks about ‘drafts’ – the information that wafts about which is completely irrelevant or distracting – and in my experience the drafts tend to outweigh the useful information. The beauty of a remote work culture is that the information sharing is recorded in text, available to anyone who is interested, and, crucially, searchable. As a result that information is available to everybody, not just those who happened to be in earshot and paying attention at the time.

One concern with a remote team is that remote workers might be slacking off on company time. I don’t buy this excuse: it’s usually pretty obvious whether a developer is contributing at the level expected simply by looking at their commits and documentation.

So yes, remote workers require a culture shift, but it’s a positive one, and it opens up huge benefits of being able to access talent pools well beyond your current city.

Conclusion

Despite widespread acknowledgment that the nature of work is changing, our workplaces seem very slow to catch up, especially given the benefits of wider talent pools and increased staff happiness and productivity that part-time, flexible, and remote work (both independently and together) create.

So if you are interested… I’m flexible if you’re flexible. I can legally work in New Zealand, Australia, United Kingdom, and Europe (at least until Brexit goes through, if it goes through), and I’m sure contract terms can be worked out elsewhere. You can get an approximation of my expertise from this blog and the about page, and I will say I’m a generalist and proud of it! 🙂

You can contact me through the contact page.