Measurement and Agile Software Development

Introduction

I’m going to start this politically, but I promise it’ll get to software development. The trigger for this scribbling of thoughts was an article discussing the under-funding of many areas of the public sector and the quote from the finance spokesperson for New Zealand’s recently-ousted opposition party: “… the government should be thanking [the] National [party] for inheriting such a strong economy.” And it struck me that economic performance was the sole benchmark by which they gauged success. In reality, the country is vastly more complex than one set of economic indicators, and different people have very different perspectives on what constitutes success.

The ‘duh’ disclaimer

As I’ve said in some previous articles, none of this will be new to anyone who has spent, studied, or even thought about management. And it certainly isn’t the first time I’ve thought about it, but the above article engaged some dormant mental spirit to write things down 🙂

You are what you measure

Different people’s values mean that what they consider important and unimportant will vary and that is fine and healthy. The challenges with measurement are the consequences of measuring and how people’s behavior changes in response to the measure.

To take a non-software example, the New Zealand education system places strong emphasis on NCEA achievement, which has translated into students being encouraged to take easier courses or teachers being encouraged to teach towards the tests. In this case the goal of giving students the best high school education has been subverted by a measurement which effectively demands certain pass rates.

The classic example in software development is measuring lines of code. Lines of code is a basic metric for measuring the overall size and therefore likely cost of learning and maintaining a code base. It is an appalling measure of programmer productivity: good programmers will write less code through reuse; refactoring may end up removing code altogether; and on the other hand, readability is far more important than concision.

Thankfully I believe the industry is well past measuring productivity by LoC, or even the highly amorphous function points. However the beast is far from slain, for instead we have story points and velocity.

Agile Software Development

Agile Software Development, according to Dave Thomas, author of The Pragmatic Programmer and co-author of The Manifesto for Agile Software Development, can be summarized by this process:

  • find out where you are
  • take a small step towards your goal
  • adjust your understanding based on what you’ve learned
  • repeat

And when faced with alternatives that deliver similar value, take the path that makes future changes easier.

This is very idealistic and quickly crashes into commercial reality where managers, usually on behalf of customers, want to know: when will it be ‘done’ and what will it cost? Of course, this ignores all the benefits of learning-as-we-go, Lean style (which is essentially the same thing as agile software development but applied to business), and that you get much better, albeit far less predictable-at-the-outset, outcomes than any upfront planning based process. But we can’t really ask everyone to be rational can we?

Nevertheless, marketing release dates and the like meant we had to invent ways to measure progress and estimate ‘completion’ (I keep using inverted commas because I think we all know that done or complete are very subjective terms). And so Agile (sorry Dave T, I’m going to be using it as a noun) planning evolved from concepts of managing risk and uncertainty via loose estimation in Agile Estimating and Planning to full blown methodologies that are so militaristic they require specialized commanders like Scrum Masters.

A plague of story points

And here’s where I feel agile software development goes wrong. The people involved are so invested in the process they forget the actual goals of their organization or of agile software development. Having the ‘right’ ceremonies and getting the points right become the focus. More significantly, people become concerned with the consequences of their measurement, so they will avoid having a high-scoring sprint because it’ll increase expectations on their future performance (and by this stage the team probably isn’t feeling all that empowered, but that’s another story).

So now the process is about having accurate estimates, and consistent or slightly growing measurements, regardless of the impact on the delivered product. Because although it might be possible to explain to your manager that your productivity (as measured by story points) has bombed in the last month because you decided to refactor X in order to speed up lots of expected future work, by the time it’s aggregated to their manager and so on, that nuance is lost. And now that manager is getting shafted based on that measurement which doesn’t actually reflect whether or not your team is doing a good job.

My favorite Agile

The first time I ‘did agile’ was almost by accident. We had a three person development team working on a product and a product manager who had a three page Word table with a prioritized list of well broken-down features. And every fortnight, we wrote down on a whiteboard what, from the list, each of us was going to work on and how many days we thought it would take. If something needed re-prioritizing the product manager would come in (any time) and we’d change what we were doing and update the whiteboard.

The point is that we were focused on delivering the outcomes that the business wanted almost as soon as it knew it wanted them. Sometimes we’d be asked to have a bit of a guess at how long half a page of priorities might take, leading to a 6-8 week kind of estimate. But all parties also understood that estimates were exactly that and things might change, both in terms of time taken, and in terms of what was critical to get done. Unfortunately I don’t believe this approach really scales, and it requires serious buy-in from stakeholders (despite all the evidence of the value of Agile/Lean approaches).

Conclusion

As is normal for these drawn out discussion posts, I can’t conclude with ‘the answer’ – and there are a lot of people out there who’ve spent a lot of time trying to find ‘the answer’ and haven’t found one.

What I am confident of is that measurements can’t show nuance and they subvert the behavior of what they intend to measure. So it’s incredibly important to continually reflect on whether your measurements, and their driving processes, are serving you well or whether people are now just optimizing for that measurement at the expense of actually achieving things.

I understand that an organization needs to gauge how it’s performing – whether it can be more productive, achieve different goals, eliminate waste. To do this it needs concise explanations of whether it is meeting relevant sub-goals. But the consequence of this concision is a loss of nuance that sands off the random edges that create effectiveness.

Workplace Flexibility

There’s a risk with this post that I’m going to shoot myself in the foot, but I believe in being as open and honest as possible, so I’m going to share my thoughts on good working environments, particularly remote ones.

This comes about because I’m trying to find some flexible, part-time, and remote work. I’ll come to each of these in turn during this post, but to start with, why am I looking?

I’ve been working as the technical partner in a small self-funded team developing a new product, and the level of product development required fluctuates. At present we’re going through a cycle of taking what we’ve learned, realizing our existing strategy isn’t going to work, and changing direction to suit. This is healthy – it takes time to: learn from the market what is really needed, especially with large prospective customers where it’s often a month from scheduling to having a meeting; and understand the costs, risks, and rewards of different commercial options. While I think it’s fair to say we haven’t done a good job of ‘failing fast’ so far, we continue to figure out what we can sell to the market and target the product to suit.

The end result is that I need to periodically attend meetings in another city and push forward with product development, but ultimately have time on my hands that I’d like to put to use somewhere. So I’ve been looking for flexible, part-time, remote work.

Silicon Valley Culture

Nothing turns me off in a job ad more than the term “Silicon Valley Culture”. Silicon Valley is very exciting in terms of getting VC money, but I remain unconvinced it’s a good place to work. If there is one word I’d use to describe what I’ve seen and heard about working in Silicon Valley as a developer, it’s insular.

Insular

What! How could it be insular to be working with so many great technical minds!

When birds of a feather are working together there’s a strong chance they will create a nest to suit their flock. The Silicon Valley stereotype is the cloistered geek. How can someone so insular understand real-world-problems, since being at the office 12 hours a day means they hardly experience the real world? Where do they: cross paths with tradespeople, nurses, children, (non-IT) engineers; or have experiences requiring empathy; or have a multi-faceted political discussion that doesn’t end up with people not speaking to each other (i.e. the real world equivalent of unfriending)?

Face Time

Which brings me to my next point: Silicon Valley Culture values hours in the office – free lunches, and dinners, and we’ll bring a masseuse on site, and… – basically we’ll do anything cheap (relative to your salary) to keep you in the office because (we assume) that if you’re in the office then you’re making progress, and we don’t pay any extra for that.

If you’re a twenty-something with no partner, family, or life, that’s great. If you’re anything else, forget it. Silicon Valley Culture is a big red flag that says “you may not have a family”, “you may not have a life (outside work)”. I’ve already addressed how this makes you an insular person, but it’s also terrible for productivity.

Productivity

Anyone who has done 6 hours of serious development in a day – by which I mean uninterrupted, building or debugging of some significant chunk of code twisting its way through layers of the application – knows that afterwards your brain will be dead. Development was once described to me as sitting two three-hour exams a day, and there have been plenty of days where I’d agree with that. Encouraging (or worse, measuring) time in the office means that hours beyond that six are a waste of my time and the organization’s, and we both resent having our time wasted. And the more overtime goes on, the more it flows into people’s personal life, and into the next day, and so on until the developer is just burnt-out. I’ve been there – multiple times, and it’s not always easy to swing back from.

Don’t believe me? Then go and read chapter nine of Slack, or search the index of Peopleware for overtime. We’ve known for generations the hours people can repeatedly handle without it being detrimental. I’m not sure why Silicon Valley Culture hasn’t figured it out.

Consequent Demographics

Developers have an average age of 30-32. Assuming a constant working age population between 20 and 70, the average should be 45. Certainly some developers will become managers or analysts, and this is still a new industry so we’d expect to undershoot the average, but by 15 years (60%)? Is it possible that Silicon Valley Culture makes being an ‘old’ developer a problem? The culture is certainly incompatible with having a family. It’s also incompatible with people with more life experience – people who have reached that point where their bottom four levels of Maslow’s Hierarchy are met and realize there’s an awful lot more to life than work, especially if work isn’t able to allow them to fulfill their potential.

Silicon Valley Culture also implies a boys’ club. Look at the developers in the Silicon Valley TV program: young single guys. There have also been long-standing issues with sexism. So it’s not surprising that developer surveys put the percentage of female developers at around 10%. Given the stereotypes, the sexism, the family-unfriendliness (at the risk of being lambasted, mums are still more likely to stay at home with the kids than dads), we shouldn’t be surprised at that figure.

In short, Silicon Valley Culture is a terrible culture. If that’s how you describe your organization you are not going to get wise or mature developers.

[Photo: a baby]
Here’s a picture of a baby to break up the text. It in no way reflects how Silicon Valley Culture treats developers.

Part-Time

Why does everyone want full-time staff?

My gut reply to thinking as an employer is that I want commitment. But I’m going to step deeper into this and ask: why do I want commitment? What does being committed mean in an organization?

Commitment

The first commitment given by an employee comes when they sign an agreement saying “I’m committing this much time into your organization in return for compensation which reflects that commitment”.

Hiring someone comes with an on-boarding cost and to maximize their return the organization wants the employee to be useful as quickly as possible, which can only be achieved through time and experience. In that scenario being full-time will reach this goal faster (in terms of calendar days), although the cost to reach a certain experience level won’t change. Depending on the role this ‘calendar time to usefulness’ may or may not be a factor. For instance, domain-knowledge intensive roles like architecture and product management often have much longer on-boarding periods than development, making the additional time delay of part-time too big an opportunity cost for the organization.

From another perspective, most new hires describe their early weeks as “drinking from the fire-hose”. I imagine if less is drunk per day, because a part-time day is shorter, more of it will be retained.

The second commitment is the level of energy or zeal the employee chooses to bring to their job. This commitment is a function of finding a personality which can engage with the organization and then providing them the environment that makes them want to engage (rather than just turn up). I imagine that once hours drop considerably the employee may find it hard to really engage; but conversely, working fewer hours they may have more energy to engage with. So I conclude that being part-time (say in the 20+ hours/week range) wouldn’t have a significant impact on this kind of commitment.

Focus Time

Earlier I mentioned that development can be brain-intensive, and that beyond a certain level the productivity of each extra hour diminishes quickly. So why pay for those? My experience has been that people working shorter days tend to plan better and be more focused, and I estimate that someone working 5-hour days probably gets 7 hours worth of work done i.e. you’re paying 25 hours worth of time per week for 35 hours worth of productivity (all else being equal).

At this point hiring full-time by default seems like a tradition more than anything, a tradition that is worth re-examining.

Flexibility

The concept of a two-parent family where only one parent works is history. The rise of day-care and after-school-care have made that quite clear (I’m not entirely comfortable with this concept from a sociological perspective, but each to their own, and I digress…). This means there are an awful lot of experienced workers out there who have to juggle family and work. If you make that juggling difficult for people then they can’t work for you because, like it or not, family ultimately comes first.

For my part, I have pre- and school-age children, so between my wife and me someone needs to be home by 1430 to pick them up and look after them. Once we’re home and they’re fed, I can typically resume whatever I was doing earlier. Sometimes they’re sick and someone needs to be home with them, but usually they’re just quietly sleeping and there is little to impede working. A strict “X-hours a week in the office” contract doesn’t cope with these scenarios. It is, like default-to-full-time, a hangover from bygone days. The 21st century workplace requires flexibility, and full credit to New Zealand which does enshrine this concept in law. But regardless of legislation, flexibility is something organizations should do because it’s better for them.

Being flexible with hours:

  • increases the talent pool available to you;
  • tends to result in breaking up the workday, which makes for fresher and more productive minds;
  • allows staff to manage their creativity, meaning you’re not paying for mental downtime;
  • creates a feeling of mutual respect and reciprocity, which means asking for a little urgent work outside of hours is a fair exchange, rather than leaving the employee feeling that they gave their time for free.

When combined with remote work it opens up opportunities to access the global talent pool. For instance I’ve applied for several jobs in the EU, despite being in New Zealand. This would work out nicely for me because my wife is home in the evenings and I can be available from 7pm-midnight three nights a week. That’s 15 hours of overlap a week (give or take daylight saving). Equally if I applied in the Americas then I can work early mornings or Saturday (American Friday) because it’s outside standard NZ working hours.

[Photo: a baby]
Babies are flexible. Is your organization better than a baby?

Remote

Have you ever worked in an open plan office that nobody complained about? Depending on who you ask they are too hot, cold, noisy, distracting, constrained, or impersonal. What they are is cheap, and they allow poor managers to sit watch upon their domain (micromanagement). It has been clear for years that working environment affects productivity, a result which continues to be reinforced.

If you’re like me and need quiet and a means to control distractions then working remotely is bliss. I recall when I first started working remotely, my productivity immediately doubled. Thankfully that was in an organization with a good remote culture, because having remote staff requires a level of organizational discipline: inclusive communication and decision-making processes, something that makes everyone happier. For this to work leaders must ensure communication happens in public forums and decisions are reached by an inclusive process. This is good business practice anyway, but with remote people it is more important because it is harder for them to see when they are being left out.

Essentially a remote organization must revolve around a text/audio/video chat application like Slack or HipChat. People share their thoughts in writing for everyone to think about and provide feedback on. In this shared space, all voices can be heard, so people are informed and decisions are made inclusively. These tools can also be controlled so that people are not disturbed when they don’t need to be and can thus focus on the task at hand.

Agile Software Development talks about the importance of information ‘convection currents’ i.e. the information accidentally shared by people in proximity hearing each other. This is something that is lost with remote workers. It also talks about ‘drafts’ – the information that wafts about which is completely irrelevant or distracting, and in my experience the drafts tend to outweigh the useful information. The beauty of a remote work culture is that the information sharing is recorded in text, available for anyone who is interested, and crucially searchable. As a result that information is available to everybody, not just those who happened to be in earshot and paying attention at that time.

One concern with a remote team is that remote workers might be slacking around on company time. I don’t buy this excuse: it’s usually pretty obvious if a developer is contributing to the level expected simply by looking at their commits and documentation.

So yes, remote workers require a culture shift, but it’s a positive one, and it opens up huge benefits of being able to access talent pools well beyond your current city.

Conclusion

Despite acknowledging that the nature of work is changing, our workplaces seem very slow to catch up, especially given the benefits of wider talent pools and increased staff happiness and productivity that part-time, flexible, and remote work (both independently and together) create.

So if you are interested… I’m flexible if you’re flexible. I can legally work in New Zealand, Australia, United Kingdom, and Europe (at least until Brexit goes through, if it goes through), and I’m sure contract terms can be worked out elsewhere. You can get an approximation of my expertise from this blog and the about page, and I will say I’m a generalist and proud of it! 🙂

You can contact me through the contact page.

Value of Data

“Big data is the future” or so we are told. With enough data we can create models that provide us with good outputs for unknown inputs using an array of techniques like: using probabilities to estimate likely relationships; regression to find trends and interpolate answers; or by training general purpose learning algorithms.
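To make the regression case concrete, here is a minimal sketch: fitting a line y = a·x + b to known points by least squares, then interpolating an answer for an unseen input. The data and function names are made up for illustration.

```javascript
// Least-squares fit of y = a*x + b to a set of known points.
// Returns a function that predicts y for a previously unseen x.
function linearFit(points) {
  const n = points.length;
  const sx = points.reduce((s, p) => s + p.x, 0);
  const sy = points.reduce((s, p) => s + p.y, 0);
  const sxy = points.reduce((s, p) => s + p.x * p.y, 0);
  const sxx = points.reduce((s, p) => s + p.x * p.x, 0);
  const a = (n * sxy - sx * sy) / (n * sxx - sx * sx); // slope
  const b = (sy - a * sx) / n;                          // intercept
  return (x) => a * x + b;
}

const predict = linearFit([{ x: 1, y: 2 }, { x: 2, y: 4 }, { x: 3, y: 6 }]);
console.log(predict(2.5)); // 5 (exact here, because the sample data is perfectly linear)
```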

In particular, Machine Learning (ML) is in vogue, and although the underpinning concepts aren’t new (I had assignments combining computer vision and artificial neural networks at university back in 2001), the capabilities of machines and easy access to massive levels of computing power now allow much more practical application of these concepts.

Regardless of the technology or the hype, there are universal concepts that are paramount to the successful application of a technology. For instance it is important to understand what the technology can and can’t do, and what properties are intrinsic and what are variable. Continuing with ML as an example, it is more effective to pre-process an image and extract key attributes and feed those into a neural network than to give it a million pixels per data entry.

One universal concept is that the technology needs to solve a real problem, or to use business terms, needs to ‘add value’. There is a cost to using a technology – for big data, collecting data can be expensive, notably in mitigating the risk of failing to manage the data i.e. ensuring it is secure and compliant. To offset this cost we need to establish value, which means asking:

  • How does having this give us a competitive advantage?
  • How can I monetize this?

For some of the big and famous organizations the answers to these are fairly clear: Amazon wants shopping data to provide better choices than competitors, drawing more customers and therefore more sales; Google and Facebook want information that targets their adverts to more of the right people, resulting in more buying per advert, incentivizing customers to buy more adverts.

One strategy for answering these questions is to create data which is so much better than competitors’ data, that customers will pay to access the data. This is not a new concept as software products have been up-selling reporting since time immemorial, but recently there seems to be more inclination to answer modelling questions rather than just provide charts. This is where the business questions need to be applied. For instance, if it is possible to mine data to answer questions like “what impact does doing X have on Y”, then ask yourself whether these answers are something that customers will pay for and competitors don’t have. If so, then you’re onto an excellent strategy. If not, then is having that data valuable?

React-Redux

I’ve been trying to get a clear picture of the data flows in React-Redux, and I thought I’d share that mental flowering. I’m still quite new with React-Redux so I’d suggest reading it with a note of caution but knowing that the end result was working software.

Redux

In Redux itself the data flow seems relatively straight-forward:
1. When something happens we dispatch an action to the store i.e. store.dispatch({ type: "TYPE_KEY", ... }).
2. Redux then calls the reducer which returns a new state back to Redux.
3. Functions that were previously registered using store.subscribe(fn) are called and those functions call store.getState() to get the new state.
This seems like a fairly classic observer pattern around an event-store.
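The three steps above can be sketched in a few lines of plain JavaScript. This is not Redux itself, just a toy illustration of the observer pattern it describes (the reducer and action here are invented for the example):

```javascript
// Minimal sketch of the store/dispatch/subscribe cycle described above.
function createMiniStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch: (action) => {
      state = reducer(state, action); // step 2: reducer returns the new state
      listeners.forEach((fn) => fn()); // step 3: notify subscribers
    },
    subscribe: (fn) => listeners.push(fn),
  };
}

// A reducer is a pure function: (state, action) => newState
const counter = (state, action) =>
  action.type === "INCREMENT" ? { count: state.count + 1 } : state;

const store = createMiniStore(counter, { count: 0 });
store.subscribe(() => console.log("new count:", store.getState().count));
store.dispatch({ type: "INCREMENT" }); // step 1: logs "new count: 1"
```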

If I were translating this to React then I’d expect that within my component I’d register a listener against the store that would call this.setState({ ... }). Then setState() would trigger the framework to re-render that component. The problem with this plan and react-redux is that subscribe() is nowhere to be seen. Nor for that matter is store – so where did they go?

The answer is in the connect() method.

Connect()

When connect() is called on a component used inside a <Provider> tag, connect() automatically picks up the Provider’s store. This store is configured at the highest level of the application, something like this:

import React from "react";
import ReactDOM from "react-dom";
import { createStore } from "redux";
import { Provider } from "react-redux";
import Router from "./Router"; // the app's top-level component
import reducer from "./state/reducers";

let store = createStore(reducer); // reducer is the outcome of combineReducers()

ReactDOM.render(
    <Provider store={store}>
        <Router />
    </Provider>,
    document.getElementById("example")
);

Because connect() encapsulates the store, it needs to provide an interface to allow the component to (implicitly) access the store. It does this via two arguments, which the documentation calls mapStateToProps and mapDispatchToProps.

mapStateToProps

mapStateToProps is effectively a filter that selects the bits of the global state from the store that are of interest to the component. The bits of state returned from this method are added to the this.props object.

This was the slightly counter-intuitive part in migrating from storing the data in React state. React documentation indicates that the data owned by a component should be stored in this.state. When it’s transferred to the store, that component is no longer strictly the owner and shouldn’t be changing it without informing the store, so it makes sense for it to be read-only, and therefore it moves to this.props.

mapStateToProps is the replacement for store.subscribe(). With it in place, any updates made to the store are filtered by the object returned from this method, and are then applied to the component’s this.props, causing the standard component updating lifecycle including the render.
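As a concrete sketch (the state shape and the `users`/`ui` slices here are hypothetical, not anything react-redux prescribes), a mapStateToProps that only cares about a user list would look like this:

```javascript
// Hypothetical global state: { users: [...], ui: { theme: "dark" } }.
// This component only needs the user list, so the function returns just
// that slice; react-redux merges the result into this.props.
const mapStateToProps = (state) => {
  return {
    users: state.users,
  };
};

// Given a sample state, the component receives props.users and nothing else:
const props = mapStateToProps({ users: ["ada", "bob"], ui: { theme: "dark" } });
console.log(props); // { users: ["ada", "bob"] }
```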

mapDispatchToProps

mapDispatchToProps gives us access to store.dispatch(). It receives the dispatch function as an argument, and should return an object containing methods that call dispatch with various actions. e.g.

const mapActionToProps = (dispatch) => {
    return {
        requestUsers: () => dispatch({ type: "REQUEST_USERS" }),
        updateUser: (user) => dispatch({ type: "UPDATE_USER", user: user })
    }
}

This enables the component to trigger actions by calling this.props.methodname(args).

connect(mapStateToProps, mapDispatchToProps) returns a function which should be passed the class or function of the component using the store. Convention also has that class returned as the default export so it can be directly imported. e.g.

// in component file
export default connect(mapStateToProps, mapActionToProps)(ComponentName)

// in file that is using the component
import ComponentName from "./ComponentName"

React-Redux Cycle

Putting it all together we get a data flow something like this:
1. When something happens we dispatch an action to the store by calling an action property. i.e. this.props.requestUsers().
2. This calls dispatch(action-object) inside the method used in step 1, whose definition is found in mapActionToProps.
3. Redux then calls the reducer which returns a new state back to Redux.
4. The component receives the new state, filters it according to mapStateToProps and applies the filtered object to this.props triggering the normal React component update lifecycle and render.
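The four steps above can be demonstrated end-to-end with a toy sketch. To be clear, this is not how react-redux is implemented internally; it just mimics the observable behavior (dispatch → reducer → new state → mapStateToProps → re-render with new props), with a hand-rolled store standing in for Redux and a plain function standing in for the component render:

```javascript
// Invented reducer and action for illustration.
const reducer = (state, action) =>
  action.type === "REQUEST_USERS" ? { users: ["ada", "bob"] } : state;

// Hand-rolled store standing in for Redux.
let state = { users: [] };
const listeners = [];
const store = {
  getState: () => state,
  dispatch: (action) => {
    state = reducer(state, action);  // step 3: reducer returns new state
    listeners.forEach((fn) => fn()); // notify the "component"
  },
  subscribe: (fn) => listeners.push(fn),
};

const mapStateToProps = (s) => ({ users: s.users });
const mapActionToProps = (dispatch) => ({
  requestUsers: () => dispatch({ type: "REQUEST_USERS" }), // step 2
});

// "connect": rebuild props from fresh state on every store change (step 4).
let props;
const render = () => {
  props = { ...mapStateToProps(store.getState()), ...mapActionToProps(store.dispatch) };
};
store.subscribe(render);
render(); // initial render

props.requestUsers();     // step 1: component calls an action property
console.log(props.users); // re-rendered with ["ada", "bob"]
```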

Starting React

I’ve been dabbling with the concept of a remote part-time job to provide a bit of personal funding as our business venture slowly gets off the ground, and this has highlighted the current popularity of React. It’s not like I’ve had my head completely in the sand, but the number of roles requiring a specific JS framework, rather than just JavaScript (plus a framework preference) did surprise me a little.

With all that in mind I decided to take a proper look at React, and compare it to my current front-end framework of choice, Aurelia. The fruits of this exploration are in GitHub and hopefully not too embarrassing :).

Philosophy

There is a continuum in current web libraries between being a full framework, like Angular or Aurelia, and a view library, where non-rendering activities like routing and dependency injection are not included. React sits at the view library end of this spectrum, but with well established and supported choices for the excluded activities. In this respect I’ve found React easier to learn than Aurelia because it enables the ecosystem to be learned step-at-a-time – although it could also be said that significant experience with any of these frameworks would make learning the next easier.

React is strong on conventions, although at times it has felt like these conventions obscure what is going on and make ‘magic happen’, which is not a feeling I enjoy when developing. This is particularly so with react-redux.

It also leans more towards JavaScript’s dynamic typing than a static typing style, emphasized in the event handler example which does this.setState({ [name]: value });. However it also encourages the use of typed systems like TypeScript which requires some trawling through the index.d.ts file to come to grips with.

Components

React is driven by the notion of small and composable components, and it does make creating and working with these very easy. The use of JSX/TSX files allows advanced logic to be mixed with markup giving complete flexibility over the creation of elements. There is a risk to this, which is that a component gets bloated instead of being split into smaller parts, so good code maintenance practices are important.

Styling is fairly flexible, allowing anything from basic CSS use, referenced from the index.html, through to compiled and bundled SCSS using a packing system like webpack.

Data

My basic understanding is that data that is owned by a component goes into this.state and is modified by setState(), while data that is used by a component goes into this.props and is read-only. In a basic scenario (i.e. without Redux) data flows down via properties on this.props, and up via callback methods on this.props. e.g.

// in parent component
private handleChange(data) {
  this.setState({ filtered: data });
}

render() {
  return <Subcomponent all={this.state.all} onChange={(data) => this.handleChange(data)} />;
}

// in a subcomponent method
this.props.onChange(this.props.all.filter(f => f.something > 0));

This is certainly more laborious than Aurelia’s two-way binding, something React’s documentation acknowledges: “It can sometimes be tedious to use controlled components, because you need to write an event handler for every way your data can change and pipe all of the input state through a React component.” However this approach does make data passing and ownership very explicit and therefore understandable. Going to react-redux changes all this, and that’s something I’ll leave for a future post.

So there you have it – my first couple of days building something with React.

Using aurelia-ux v0.6

aurelia-ux is the pre-release UI component kit for aurelia. It has been developed throughout 2017 including a significant re-organization between v0.3 and v0.4. Due to this reorganization and its pre-release status, there is limited information out there on how to use it, with the best example at present being aurelia-store-todo. There is an indication that documentation is on the way and hopefully as part of that the team’s app-ux-showcase will be updated past v0.3. In the meantime I decided to dig into the source a bit and build a little something to share here.

Setup

This setup assumes you have the aurelia-cli installed, so that you can go to a command line and type au new to create the project. Open the project folder, add the following dependencies to package.json, then run npm install (or the yarn equivalent) to get the aurelia-ux packages into your node_modules folder:

  "@aurelia-ux/components": "^0.6.0",
  "@aurelia-ux/core": "^0.6.0",

aurelia-store-todo uses webpack as the bundler; however, I will be sticking with the aurelia-cli. This means editing aurelia.json to include the dependencies. The first dependency is @aurelia-ux/core, which contains the core capabilities. Then each component needs to be added individually. There is also an @aurelia-ux/components package which may be intended to include all components, but including it causes build errors because the require exports for the referenced components cannot be resolved. The additions to the vendor-bundle dependencies in aurelia.json look something like this:

{
  "name": "@aurelia-ux/core",
  "path": "../node_modules/@aurelia-ux/core/dist/amd",
  "main": "index",
  "resources": ["**/*.{css,html}"]
},
{
  "name": "@aurelia-ux/input",
  "path": "../node_modules/@aurelia-ux/input/dist/amd",
  "main": "index",
  "resources": ["**/*.{css,html}"]
},
{
  "name": "@aurelia-ux/button",
  "path": "../node_modules/@aurelia-ux/button/dist/amd",
  "main": "index",
  "resources": ["**/*.{css,html}"]
},
{
  "name": "@aurelia-ux/form",
  "path": "../node_modules/@aurelia-ux/form/dist/amd",
  "main": "index",
  "resources": ["**/*.{css,html}"]
}

Using Components

The components are added as plugins to main.ts, like so:

// in main.ts
import { PLATFORM } from 'aurelia-pal';

aurelia.use
    .plugin(PLATFORM.moduleName('@aurelia-ux/core'))
    .plugin(PLATFORM.moduleName('@aurelia-ux/input'))
    .plugin(PLATFORM.moduleName('@aurelia-ux/button'))
    .plugin(PLATFORM.moduleName('@aurelia-ux/form'));

This makes them available everywhere, without needing further require tags.

Each component has a tag and properties which can be used in a template file. For instance the button is <ux-button>, and an input can be bound to a view model variable using value.bind, e.g. <ux-input value.bind="rows"></ux-input>. The elements appear unmodified in the output HTML and can be styled using CSS. Here is an example of using three components in a view.
app.html:

<template>
  <require from="./app.css"></require>
  <ux-form>
    Rows: <ux-input value.bind="rows"></ux-input>
    Columns: <ux-input value.bind="cols"></ux-input>
    <ux-button type="raised" click.trigger="generate()">Generate</ux-button>
  </ux-form>
</template>

app.ts:

export class App {
    protected rows: number = 5; // arbitrary default
    protected cols: number = 5; // arbitrary default
    protected generate() { ... }
}

app.css (you may need to create this):

ux-form {
    padding: 2px;
    max-width: 300px;
    border: 1px solid transparent;
    border-color: var(--ux-design--primary-light);
}

CSS Variables

You may have noticed in the CSS above the use of a variable: var(--ux-design--primary-light). This is a mechanism for globally styling aurelia-ux, and is set up in the app constructor as follows:

import { autoinject } from 'aurelia-framework';
import { AureliaUX } from '@aurelia-ux/core';

@autoinject
export class App {
    protected rows: number = 5; // arbitrary default
    protected cols: number = 5; // arbitrary default

    // ux.design members: 
    //     appBackground, appForeground, controlBackground, controlForeground, 
    //     primary, primaryForeground, primaryLight, primaryLightForeground, primaryDark, primaryDarkForeground, 
    //     accent, accentForeground, accentLight, accentLightForeground, accentDark, accentDarkForeground
    constructor(private ux: AureliaUX) {
        ux.design.primary = '#DD0088';
        ux.design.accent = '#CC88FF';
        // unfortunately we don't seem to be able to change the tints by just setting the base color, so set them as well
        ux.design.primaryLight = '#DB9BBF';
        ux.design.accentLight = '#E7C9FF';
    }

    protected generate() { ... }
}

The name of the CSS variable is --ux-design--<property-name> where the property name is one of the ux.design members above. You can see them all in your browser’s CSS viewer:
CSS Design Variables
It is worth noting that CSS variables do not work in IE11 so if you expect to support that browser then aurelia-ux may not be suitable for you.
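The variable naming follows a simple camelCase-to-kebab-case convention. A quick sketch of the mapping (this helper is my own, not part of aurelia-ux):

```typescript
// Convert an aurelia-ux design property name to its CSS custom property name,
// e.g. 'primaryLight' -> '--ux-design--primary-light'
function designVar(property: string): string {
  const kebab = property
    .replace(/([a-z])([A-Z])/g, '$1-$2')  // insert a dash at each case boundary
    .toLowerCase();
  return `--ux-design--${kebab}`;
}

designVar('accent');        // '--ux-design--accent'
designVar('primaryLight');  // '--ux-design--primary-light'
```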

More

This is as far as I’ve gone with aurelia-ux so far, but there’s certainly more out there.

There are 11 components as it stands: button; checkbox; chip-input (a tag list); datepicker; form; input; input-info; list; radio; textarea; and switch. To find out what properties they expose, download and run app-ux-showcase, or open node_modules/@aurelia-ux/<name>/dist/amd/ux-<name>.d.ts and look at the public properties and methods.

There is also a theming system which enables you to define your own component themes by extending implementations of the UxTheme class. Any properties in those classes can be used as CSS variables. To fully grasp this at present I’d suggest downloading the aurelia-ux source.

Automated Testing Priorities

There’s a common theme in automated software testing that the greatest effort should go into unit tests, lesser into integration tests, and least into UI tests. This is known as the testing pyramid. However I’m not convinced this is the best use of automated-testing effort for web applications, because the nature of web standards and a commonly used web application architecture invalidate some of the assumptions behind the pyramid.

Testing Goals

Before we continue we need to state the goals of automated testing. In my mind they are:

  1. Validate the correctness of the thing under test
  2. Allow refactoring of the thing under test

Note that correctness includes ‘non-functional’ things, like authorization security.

From a business perspective, we want to know that the product works as intended. Working as intended means that the application performs correctly when used through its interfaces. This would suggest that UI tests are the most important, which is the opposite of conventional wisdom.

The reason often given for placing less focus on UI tests is that they have been notoriously fragile. However I posit that this has been due to the nature of the interfaces being tested, which have tended to make automatically identifying and interacting with UI elements very hard; e.g. having to use control ids with WinForms applications. I’m also suspicious that the lesser focus on UI tests is a consequence of Agile methodologies that insist on jamming all testing into the same cycle as development, resulting in automation being written against a UI in an extreme state of flux.

Unit Test Fragility

One problem I have with unit testing is that developers are encouraged to test the internals of the unit. This happens when mock objects are checked to see if certain methods were called on the mock.

The purpose of functions and classes is to expose some contract and hide the details of how that contract is fulfilled. Testing how a unit does its work means looking inside the black box, which defeats the purpose of using tests to support refactoring, because now we can’t change the implementation of a unit without breaking its tests.
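To make the distinction concrete, here is a sketch (all names hypothetical) of testing a unit’s contract rather than its internals:

```typescript
// A unit whose contract is: total(ids, prices) returns the sum of prices.
interface PriceSource { priceOf(id: string): number; }

function total(ids: string[], prices: PriceSource): number {
  return ids.reduce((sum, id) => sum + prices.priceOf(id), 0);
}

// Contract-style check: assert only on the observable result.
// This keeps passing if total() is refactored to cache or batch lookups.
const stubPrices: PriceSource = { priceOf: () => 5 };
const result = total(['a', 'b'], stubPrices);
// result === 10

// Brittle mock-style check (avoid): asserting that priceOf was called
// exactly twice couples the test to the current implementation, and
// would break if total() later memoized repeated ids.
```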

UI Test Fragility

In his 2012 article on the test pyramid, Martin Fowler says:

An enhancement to the system can easily end up breaking lots of such tests, which then have to be re-recorded. You can reduce this problem by abandoning record-playback tools, but that makes the tests harder to write. Even with good practices on writing them, end-to-end tests are more prone to non-determinism problems, which can undermine trust in them. In short, tests that run end-to-end through the UI are: brittle, expensive to write, and time consuming to run.

I believe that some of these assumptions are less valid in modern web test automation.

Automated web testing tends to be hand-written because (in my experience) recording tools can create quite fragile paths, usually because they don’t know what the least-variant information is. Hand-writing UI tests is straightforward thanks to CSS selectors, which are easy to use, well-supported, and, when kept simple (i.e. id and class selectors rather than paths), not hugely prone to change. These selectors are usually wrapped into page objects that further insulate the tests from changes.
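As a sketch of that insulation (the driver interface here is hypothetical, loosely WebDriver-shaped), a page object owns the id and class selectors while tests only call intent-level methods:

```typescript
// Hypothetical minimal driver interface, loosely WebDriver-shaped.
interface Driver {
  find(cssSelector: string): { click(): void; type(text: string): void };
}

// The page object owns the selectors; if the markup changes, only this
// class needs updating, not every test that logs in.
class LoginPage {
  constructor(private driver: Driver) {}

  login(user: string, password: string): void {
    this.driver.find('#username').type(user);    // id selector
    this.driver.find('#password').type(password);
    this.driver.find('.login-button').click();   // class selector
  }
}
```

A test then reads as intent (“log in, then assert the dashboard loaded”) rather than as a sequence of DOM lookups.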

The HTML DOM also exposes an event model which allows tests to mimic the vast majority of UI actions, removing the complexity of older-style tools, which involved a lot of mouse coordinates and button states.

And finally, in web development, UI testing has the added benefit of enabling testing across multiple browsers – something less applicable to downloaded applications.

However I agree that they remain time-consuming to run, and if many developers are committing to the repository then running continuous integration on every commit may not be possible, reducing the tests’ value for quickly catching problems.

Middle-Ground – Integration Testing the API

It is increasingly common for web applications to be built as a web API plus a web (JavaScript) client. This is my personal preference over server-side rendering, as it nicely decouples the presentation from the logic and allows the application to integrate more easily with other applications. There is some development overhead in this approach, but given that most web pages perform some dynamic interaction with the server, and therefore already require some client-side richness, the overhead is quite minimal.

Having an API provides an excellent place for automated testing. An API is a contract and will express most, if not all, of the business rules through its inputs and outputs. It also requires basic security, and allows validation and authorization to be checked. It can be easily extended to run more extensive security testing (i.e. by manipulating HTTP headers and sending malicious data) and performance tests.

Integration testing the API doesn’t mean a full environmental setup is required. It is still reasonable to use mocks for calls that are slow or resources that aren’t available. For instance my integration tests use .NET Core’s TestServer rather than running a web server, EF’s UseInMemoryDatabase rather than instantiating a database, and stub out AWS service calls. These are reasonable compromises because I’m confident those areas will perform to their contracts.
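The same compromise can be sketched in TypeScript terms (all names here are hypothetical): the handler depends on narrow interfaces, so an integration test can swap in in-memory fakes for the database and the AWS call while still exercising the real validation and business logic.

```typescript
// Hypothetical API handler wired against interfaces rather than real services.
interface UserStore { save(name: string): number; }   // stands in for the database
interface Notifier { send(msg: string): void; }       // stands in for an AWS call

function createUserHandler(store: UserStore, notifier: Notifier) {
  return (name: string): { status: number; id?: number } => {
    if (!name.trim()) return { status: 400 };         // real validation still runs
    const id = store.save(name);
    notifier.send(`created user ${id}`);              // real business flow still runs
    return { status: 201, id };
  };
}

// In-memory fakes: reasonable compromises, since we trust those
// dependencies to perform to their contracts.
const users: string[] = [];
const handler = createUserHandler(
  { save: n => users.push(n) },                       // push returns the new length, used as an id
  { send: () => { /* no-op instead of a real notification service */ } }
);
```

The test then exercises the handler exactly as the client would, through its contract, without a database or network in sight.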

Conclusion

This is my ‘testing pyramid’ from highest to lowest priority:

  1. API integration tests
  2. Integration or unit tests for things that can’t be reached from the API
  3. UI tests for success scenarios

In my current application I have 98% code coverage and 90% branch coverage of my business layer (the branch figure largely ruined by not throwing exceptions inside all the C# using statements) using the first two items on my list, and it has enabled considerable evolution and refactoring of the code base over the last six months.