What does it mean for a software application to be good? Here’s my rank-ordered list of “good software” attributes:

  1. Solution: Provides a service or solves a problem, and does it well
  2. Design: Cohesive, intuitive interface
  3. Quality: Minimal bugs and defects; good performance
  4. Delivery: Continual, helpful updates

Solution and Design: What we don’t control

The Solution and Design are the most visible and obvious parts of “good software”, and when the chips are down, they can make up for tons of issues to a shocking degree. Users will put up with a hell of a lot of ugly, stale, slow, and buggy software if it does something they vitally need (see caveat in next section).

In the same way, a beautiful design can paper over a lot of defects and performance problems. In the inverse case, few users will appreciate a 3x speed improvement, an objectively phenomenal improvement, if it’s made to the same ugly, tired UI; and nobody cares if an application is beautiful and defect-free if nobody needs, uses, or wants it.

As much as we’d like to take credit for these first two attributes, often the credit really lies with the designers and product managers exercising experience and judgment while working tirelessly to observe customer behavior and collect feedback, which they then communicate to us. Engineers are usually downstream of the Solution and Design process.

Does that mean we ignore Solution and Design as an external concern?

God forbid.

Because so much of that work happens outside of our view, it represents a huge risk of miscommunication and failed hand-offs. One of the good themes elevated by Agile project management was the need to improve communication and coordination between Engineering and other teams.

To cope with these risks, I recommend building proactive and reactive mechanisms into your process for improving coordination.

  • Proactive mechanisms: how can you align on goals, approach, and outcomes before the knowledge is needed?
  • Reactive mechanisms: how can you get feedback on in-progress or newly delivered work to both check your success and also prepare your follow-up adjustments?

This area of software development is hugely important but largely outside what typical programmers directly influence, so I won’t spend much additional time discussing it here.

Quality and Delivery: What we do control

Quality and Delivery are where Engineers get to shine. We have direct control over both of them, and they are vital. I’ll now reverse what I said earlier about users putting up with shitty software if it looks good enough or solves an important enough problem. That’s all still theoretically true, but practical reality has something else to say:

The software ecosystem is so saturated with competition that it is exceptionally unlikely that any one application is the best or only solution to any particular problem.

Most applications aren’t solving important or unique enough problems to get away with all that much. This is especially true in consumer software; less true in business software, but the gap between the two is continually shrinking. In this way, competition is doing exactly what a market enthusiast would expect: driving down customer tolerance for poor quality and the length of time they are willing to tolerate it.

Quality and its nuances

Quality is a fixed standard set by the people building the product. It can be objectively measured by quantifying all known defects, bugs, security vulnerabilities, and performance bottlenecks. This doesn’t mean that a team must block all work until absolutely positive that the product is bug- and defect-free. It means that the people building the app should set a standard of what is and isn’t acceptable quality in their application, and stick to it. In the same way manufactured parts have acceptable tolerances and spec-compliant variation, so too can software have its WONTFIX bugs and put off certain performance improvements because the current behavior is good enough.

Often, when Engineers talk Quality they think of bugs, but they do not always think of security vulnerabilities and poor performance. These are also defects, and addressing them is fundamental to Quality.

Quality in a software application can be improved by (a) better defect discovery mechanisms and (b) raising the standard (i.e. lowering the threshold of tolerable defects). Better discovery often involves better reporting; chances are good that most defects have already been discovered by somebody, and the trick is making it easy for that somebody to tell you.
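As a minimal sketch of what “better reporting” can look like in practice (the tracker URL and payload shape here are hypothetical, not any particular service’s API), consider a global exception hook that ships uncaught errors to a tracker instead of letting them vanish:

```python
import json
import sys
import traceback
import urllib.request

# Hypothetical endpoint; substitute your team's issue tracker or error service.
TRACKER_URL = "https://tracker.example.com/api/defects"

def report_uncaught(exc_type, exc_value, exc_tb):
    """Ship uncaught exceptions to the tracker so defects get discovered
    instead of silently swallowed."""
    payload = json.dumps({
        "type": exc_type.__name__,
        "message": str(exc_value),
        "stack": "".join(traceback.format_exception(exc_type, exc_value, exc_tb)),
    }).encode("utf-8")
    req = urllib.request.Request(
        TRACKER_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    try:
        urllib.request.urlopen(req, timeout=5)
    except OSError:
        pass  # Reporting must never crash the app itself.
    # Still surface the error locally for the developer.
    sys.__excepthook__(exc_type, exc_value, exc_tb)

sys.excepthook = report_uncaught
```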

Formal testing is certainly one way to discover bugs and defects. Manual testing is the prime way to do this because of its looser and often exploratory nature. Automated testing is largely not useful for discovering new bugs, but very useful for catching regressions. In the next section I will argue that automated testing makes a relatively small impact on Quality, but a very large impact on Delivery.
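For instance, once a human has found a bug, an automated test can pin it down so it never silently returns. A minimal pytest-style sketch, where `parse_price` and its empty-input crash are hypothetical:

```python
# A hypothetical pricing helper and the regression test that pins a
# (hypothetical) bug report: an empty price field once crashed checkout.
def parse_price(text: str) -> int:
    """Parse a price like '3.50' into cents; empty input means zero."""
    return int(float(text) * 100) if text.strip() else 0

def test_parse_price_handles_empty_string():
    # Automated tests rarely *find* bugs like this one; a human did.
    # But once written, the test guarantees the fix stays fixed.
    assert parse_price("") == 0
```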

In some project management interpretations, Quality is considered to be one of a project’s available interrelated constraint levers along with the big three: time, resources, and scope. Adjust one, and the others are affected. Increase resources, and you can accomplish more project scope, or do the same scope in less time. Lower the scope, and you can also lower the resources or the time required. But can you lower Quality and cause time, resources, or scope to change? Hell no. This is a foolish notion, and Software Engineers (and especially their managers) should expunge it from their brains so fast that it dislodges all the neighboring bad ideas nested nearby.

Do not sacrifice quality by shipping a feature you know is broken. There is no upside, but plenty of downside. It will only upset your users when they realize it doesn’t work. Instead, delay the feature’s release; i.e. reduce the scope. In practice this can take many forms: delay the whole release, continue the release with that feature removed, or hide the feature behind a flag or UI mechanism.
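As a rough sketch of that last option (the flag name and env-var lookup are hypothetical stand-ins for a real feature-flag service), the known-broken feature ships dark while everything else releases on schedule:

```python
import os

def feature_enabled(name: str) -> bool:
    """Hypothetical flag lookup: an env var stands in for a real
    feature-flag service or config table."""
    return os.environ.get(f"FEATURE_{name.upper()}", "off") == "on"

def render_dashboard() -> list[str]:
    sections = ["summary", "history"]
    # The new reports section is known-broken, so it ships dark:
    # scope is reduced without delaying the whole release.
    if feature_enabled("reports"):
        sections.append("reports")
    return sections
```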

The idea that cutting Quality can be advantageous for short-term gains in scope/time/resources is ridiculous.

When someone thinks Quality is on the table, the lever they should actually be adjusting is Scope.

Notice something I have intentionally elided to this point:

Quality has no connection with source code structure, abstractions, or architectures (design patterns).

They are exceptionally important, but not for Quality. Remember, Quality is an objective measure, but the relevance of the different patterns of organizing your source code is subjective. Instead, I posit that design patterns are key drivers of Delivery.

Delivery: The Magic Unicorn

Delivery is an extra special magic unicorn of an attribute. Users care the least about it in the short term, but the most about it in the long term. On our side, good Delivery is what can fix problems in any and all of the attributes above it.

Delivery includes two aspects: time and value. What is delivered must be the right thing and arrive soon enough to matter. Low-value changes delivered quickly and high-value changes delivered slowly are two sides of the same coin, and they result in the same negative consequence: the user becomes frustrated and abandons the application.

So what drives Delivery? Two practices stand out: design patterns and automated testing. First, design patterns. They exist as mechanisms to help squishy-brained humans make sense of the highly rigid logic of computer programs. Our brains evolved to work well with patterns, and when we can map a difficult problem to an existing pattern, or break it down into a collection of patterns, we are substantially better able to reason about it, and to alter it to create new desired behaviors. The machine does not care about our puny abstractions and architectures. The CPU, compiler, and virtual machine are indifferent to the best-laid plans of IDEs and men. Patterns exist for people.

Code being well-designed is highly context-dependent and directly tied to the subjective interpretation of the team working on that system; what may be easily maintainable for one group of engineers may be unworkable for another. Human brains don’t have identical preferences for patterns, so we should expect engineers to come up with different patterned approaches.

Second, automated testing. Remember, I said that it somewhat impacts Quality, but largely impacts Delivery. Why? Because automated testing becomes a prime way to verify, get feedback, and grow confidence in the work being done now.

The tenets of Test Driven Development (TDD) tell us that there’s a difference in kind between “testing for defects” and “testing to drive code design”. They are not the same thing, even though they may use the same tooling and involve similar code. Unit tests in TDD are valuable not only because they catch defects, but because of when they can catch defects: in a tight feedback loop of the development process, at the moment of the pertinent change. Because they are used this way, unit tests enable many things (see the sketch after this list):

  • API experimentation: immediate feedback while taking the viewpoint of the consumer of the API, rather than only as its author.
  • Correctness: the code empirically passes the assertions of the given test cases
  • Isolation: your change maintains the existing assertions of the rest of the system (i.e. your new feature doesn’t blow up something in another module)
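Here’s a minimal sketch of that loop using the standard library’s unittest; the `ShoppingCart` API is hypothetical, written as if the test came first and drove the interface:

```python
import unittest

class ShoppingCart:
    """Hypothetical API whose shape was driven by the test below:
    writing the test first forced a simple add/total interface."""
    def __init__(self):
        self._items = []

    def add(self, name: str, price_cents: int) -> None:
        self._items.append((name, price_cents))

    def total_cents(self) -> int:
        return sum(price for _, price in self._items)

class TestShoppingCart(unittest.TestCase):
    def test_total_sums_item_prices(self):
        cart = ShoppingCart()
        cart.add("tea", 350)
        cart.add("scone", 275)
        # Correctness: the code empirically passes this assertion, and it
        # does so at the moment of the change, not weeks later.
        self.assertEqual(cart.total_cents(), 625)

if __name__ == "__main__":
    unittest.main()
```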

Next question: what makes well-designed code valuable? We can attack the same question backwards by asking: what does it mean for code to be “well-designed”? Let’s make another list:

  • Maintainable
  • Changeable
  • Discoverable
  • Understandable
  • Efficient (-able?)

Are there objective measures for these things? None that I know of. Generally, these tend to be subjective things that Engineers feel about their code base. They also have very weak connections to the objective measurements of Quality, which are independent of an Engineer’s feelings about the code base. I’ve never met a Senior Engineer who could keep a straight face while claiming their application’s Superior Architecture is why their code base has so few bugs. These qualities do, however, have incredibly strong connections to the speed and ease of development. That’s Delivery, baby! We care about software design so that we can easily and quickly deliver predictable changes to the code.

Now, don’t be fooled: all that talk about Delivery resting on subjective qualities doesn’t mean it can’t be objectively measured. Delivery can be objectively measured, but with a different set of tools than the average engineer is likely to employ. In fact, Managers (product or engineering) have an array of metrics available to measure delivery. Here are just a few:

  • Sprint velocity
  • Lead time to production
  • Cycle time by developer
  • Downtime during deployment

My favorite is “lead time to production”, i.e. the length of time between when a feature is requested and when it’s available in production. If lead time for the average task on your team is very low (e.g. a small number of days), Delivery is probably doing very well.
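Measuring it can be as simple as diffing two timestamps your tracker already records. A sketch with hypothetical ticket data (requested-at and deployed-at pairs, as might be exported from an issue tracker and a deploy log):

```python
from datetime import datetime
from statistics import median

# Hypothetical export: (requested_at, deployed_at) pairs per completed task.
tickets = [
    (datetime(2024, 3, 1, 9, 0), datetime(2024, 3, 4, 16, 30)),
    (datetime(2024, 3, 2, 11, 15), datetime(2024, 3, 3, 10, 0)),
    (datetime(2024, 3, 5, 14, 0), datetime(2024, 3, 12, 9, 45)),
]

lead_times_days = [
    (deployed - requested).total_seconds() / 86400
    for requested, deployed in tickets
]

# Median resists the occasional monster task skewing the picture.
print(f"median lead time: {median(lead_times_days):.1f} days")
```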

While both Delivery and Quality have objective measurements, Delivery’s metrics are primarily driven by underlying subjective components like code design and maintainability. You might argue that Quality’s are as well, but that requires claiming there’s strong subjectivity in what counts as a bug or defect, and I don’t think there is.

Notice that the objective measurements of Delivery are largely gathered and tracked by managers, not by engineers, specifically not by those technicians performing the actual work of writing software source code. This disconnect is a big risk, and is the primary reason why software projects will often devolve into overrun timelines, poor outcomes, and huge expense. The subjective decisions of the engineers regarding code design, maintainability, changeability, understandability, discoverability, etc. combine to drive the objective measurements. Because of this, engineers should be actively involved in gathering, tracking, and monitoring those “project managey” measurements.

Conclusion

Every effort to improve the process of creating good software should map to one of the following efforts:

  1. Indirect Improvements to Solutions and Design
    1. Better communication and coordination mechanisms
    2. Proactive integration of external work and feedback
    3. Quick reactions to external changes
  2. Direct Improvements to Quality and Delivery
    1. Improve defect discovery mechanisms
    2. Increase the standard of quality for the project
    3. Decrease lead time of tasks by improving the software design so that changes take less time

So yeah. Give this a think over. Reach me at @TommyGroshong on Twitter.