Admitting shortcomings

Question: Why do we test software?
Answer: Because history tells us that there will be defects. If software always worked, we wouldn’t verify that it did.

Question: Why then, are there defects in software?
Answer: Every process of developing software known to man is defective.

We can pretend the process isn’t defective, but reality tells us otherwise: its end result, software, is defective.

The agile manifesto

Individuals and interactions over processes and tools.
(processes and tools aren’t good enough)

Working software over comprehensive documentation.
(software can’t be accurately documented)

Customer collaboration over contract negotiation.
(we need to constantly query the customer for accurate information)

Responding to change over following a plan.
(the plan doesn’t work when the environment changes)

These rules are not the perfect way of developing software, but they may well be the best answer to the current reality of software development and changing customer requirements.

Dealing with uncertainty

An agile team knows that they don’t know. An agile team knows that no one knows. They are agile because they need to deal with not knowing.

An agile team is telling you:
“We know what to do this month, and will evaluate what to do next month depending on what next month looks like. It’s weird, we know, but there you have it.”
“Until you figure out what you really need, we will remain agile so we can react to your changes.”

Sounds reactive? It is.

It is short-term thinking. It is the kind of short-term thinking that remains a total necessity in a changing environment. It is also expensive (wasteful) thinking.

In those statements you also find a big reason why agile can be a hard sell. How far up in an organization can you sell the truth that the project really is out of control?

Iteration length and efficiency

The most efficient team is one developer writing code for herself. The second most efficient team is a customer and a developer/designer working as a pair, one requesting and one programming. Some development work is well suited to this, for example prototyping and GUI development.

This is about as short an iteration cycle as you can get:

Customer says: Looks bad.
Developer tweaks.
Customer says: Looks good.

What about really long iterations? A six-month iteration would need a very detailed specification to complete. There seems to be some kind of correlation between iteration length and specification detail.

The manifesto tells us: “customer collaboration”. In the iteration cycle example above, the time investment is split 50/50 between figuring out what to do and doing it. So, in an iteration where you have four coders working 40 hours a week each, 640 hours over a four-week iteration (sprint), how much customer collaboration do you need?

We don’t know the answer to that question, but it’s significantly more than 4 hours of monthly sprint planning work. Given an iteration cycle time, there is an optimum ratio between figuring out what to do and doing it.

Most of the time this ratio is grossly skewed in favor of the doing. On average, teams need to spend more time discussing what to do with the customer, and less time writing code.
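The sprint arithmetic above can be sketched as a quick calculation. The 50/50 split from the customer/developer pairing example and the 4-hour planning figure are the numbers used in this text; the function itself is just an illustration.

```python
# Sketch of the sprint-budget arithmetic. The 50/50 ratio comes from the
# customer/developer pairing example; any other ratio is hypothetical.
def collaboration_hours(coders, hours_per_week, weeks, ratio):
    """Return (doing, figuring-out) hours for one iteration, given the
    fraction of total effort spent deciding what to do."""
    total = coders * hours_per_week * weeks
    return total * (1 - ratio), total * ratio

# Four coders, 40 h/week, four-week sprint = 640 hours total.
doing, deciding = collaboration_hours(coders=4, hours_per_week=40, weeks=4, ratio=0.5)
print(doing, deciding)  # 320.0 320.0 at the pairing ratio

# A single 4-hour monthly planning meeting corresponds to this ratio:
_, planning = collaboration_hours(4, 40, 4, ratio=4 / 640)
print(planning)  # 4.0
```

Whatever the optimum ratio is, it sits somewhere between these two extremes.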


Waterfall is a more efficient way of developing software, provided there is accuracy in describing what should be done in the long term. However, experience tells us otherwise: we can’t predict the long term.

Why? Why do we have to develop software in short increments?

  1. Complexity

    Designing the logic circuits of a complete system is a near impossible task. Agile admits this, and prefers to develop empirically in short-term increments.

  2. Change

    If you design for the long term and then need to change something, the system becomes subject to the butterfly effect, known in chaos theory as “sensitive dependence on initial conditions”. A small change in a complex system can have large effects elsewhere in the system, so a change in requirements can have devastating effects on overall design.

    Customer: “The print function doesn’t work”
    Developer (shrugging): “You wanted us to change the date format in the product import, and that change seems to have broken the whole print function. Sorry.”
    Customer: “It’s okay. I got more $$$.”

No one can accurately predict the effect of any change. We can minimize the risk, but we can’t remove it.

An agile team is in effect telling the customer: “If you’re gonna keep changing stuff, we’re gonna deal with it by developing in short-term steps.”
An agile team is also admitting to itself: “We don’t know how to design complex systems except by short empirical steps.”

Perfect systems

If we remove complexity and change from the equation, there is no need for iterative development. We would be dealing with a closed system that could be designed, coded, and be done with. Unfortunately, complexity and change exist. What we want to do is minimize them, and use a method that minimizes their effects.

The waterfall model had a few faulty assumptions:

  1. The team can deal with any level of complexity
  2. Requirements never change

Since both assumptions are false, we remain “stuck” with agile as the best (but still imperfect) way we have so far.


Design for total cost

Approximate cost to find a software defect:

Requirement review: 10 minutes
Code review: 20 minutes
Unit-level testing: 1 hour
Automated tests: 10 hours
Manual tests: 15 hours
User testing: 20 hours
Customer finds it: 30+ hours
(from “Quality through Change-Based Test Management” — IBM)

Most software projects think they are doing well with a process for user testing in place. That is at a cost of 20 hours per defect. Sounds good to you?

Spending money at the maintenance stage is spending it wrong. Spending it right means focusing on the small requirements stage. That is where you find real leverage (10 minutes to find a defect versus 30+ hours if an end user finds it).
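The leverage is easy to see if the IBM table is normalized against the cheapest stage. The figures below are the approximate costs from the table, converted to hours:

```python
# Approximate cost (in hours) to find one defect, per stage,
# taken from the IBM table above.
defect_cost_hours = {
    "requirement review": 10 / 60,
    "code review": 20 / 60,
    "unit-level testing": 1,
    "automated tests": 10,
    "manual tests": 15,
    "user testing": 20,
    "customer finds it": 30,  # really 30+, so this is a lower bound
}

baseline = defect_cost_hours["requirement review"]
for stage, cost in defect_cost_hours.items():
    print(f"{stage}: {cost / baseline:.0f}x the cost of a requirement review")
```

A defect that slips all the way to the customer costs at least 180 times what it would have cost to catch in a requirement review.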

Cost to user

The cost to user is difficult to estimate, but very important. Sometimes the cost is visible, for example a software defect that prevents 1000 users from working would be quite noticeable.

Sometimes the cost is hidden and more insidious. A design flaw that makes something difficult to accomplish carries a cost to the user as well, but in the form of stress. Imagine a bad design flaw staying with the application for five years, stressing out thousands of users who have to deal with it on a daily basis.

  • Aim for user friendliness
  • Thorough testing before release
  • Easy way to report bugs
  • Knowledgeable help-desk

Avoid maintenance cost through increased reliability

Software should be designed with maintenance goals in mind. Most of the time we design for unneeded complexity. Use a simple infrastructure, and make it complex only if you need to. Every item you add to the chain of items needed for operation increases the risk of failure of said chain.

  • 24h service, or office hours?
  • Automated surveillance reduces downtime
  • Avoid complexity in server infrastructure!
  • Surveillance tools for server admins
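The claim that every added item in the chain increases the risk of failure can be made concrete: the availability of a serial chain is the product of its components’ availabilities, so each added link lowers the total. The 99.9% per-component figure below is a made-up illustration:

```python
import math

# Availability of a serial chain: every component required for operation
# multiplies in. The 0.999 per-component availability is illustrative.
def chain_availability(components):
    return math.prod(components)

simple = chain_availability([0.999, 0.999])      # app server + database
longer = chain_availability([0.999] * 6)         # six required components
print(f"two links: {simple:.4%}, six links: {longer:.4%}")
```

Tripling the number of required components triples the expected downtime, which is the arithmetic behind “avoid complexity in server infrastructure”.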

Minimize developer hours spent on data repair

Sometimes we need to alter the data (or metadata) of the database. The need can come from a requirements change, from a database crash, from badly designed user input validation, or from external data-load. If you are using very complex data structures in the database, these changes will be very costly to carry out.

  • A database designed with enough normalization is easier to salvage
  • Non-dynamic data structures are easier to repair
  • Establish data contracts for integration early
  • Use strict database validation; never allow input or import of erroneous data
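The last bullet, strict database validation, can be sketched with database-level CHECK constraints. This example uses SQLite through Python’s standard sqlite3 module; the table and column names are made up for illustration:

```python
import sqlite3

# In-memory database with validation enforced at the database boundary.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE product (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL CHECK (length(name) > 0),
        price_cents INTEGER NOT NULL CHECK (price_cents >= 0)
    )
""")

# Valid data is accepted.
conn.execute("INSERT INTO product (name, price_cents) VALUES (?, ?)",
             ("Widget", 995))

try:
    # A negative price violates the CHECK constraint and is rejected
    # before it can corrupt the data.
    conn.execute("INSERT INTO product (name, price_cents) VALUES (?, ?)",
                 ("Broken", -1))
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Validation done here, rather than only in application code, means erroneous data cannot enter even through imports or ad-hoc scripts, which is exactly the repair work the bullet list is trying to avoid.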

Turnover costs

The development team will experience turnover sooner or later. These costs can be reduced with relevant documentation and the use of standards and standard technology. If the technology is special-special, and no one knows how to do anything except the person who is leaving, replacing them will obviously be very costly.

  • Good specifications make team turnover and knowledge transfer less costly
  • Standard technology makes it easier to replace team members
  • Standard guidelines make it easier to replace team members

Software design can increase maintainability

It is no surprise that good design increases maintainability. But what does that really mean? It means that making changes to the application will be cheaper. Changes can be both developing something new and repairing something that is broken. The challenge is to keep the design working while adding complexity. Shortcuts introduce software rot and must be avoided.

  • Domain-driven design reduces complexity
  • If two things can be separate, they should be separate. DRY (don’t repeat yourself) is a double-edged sword.
  • Good design is not an awesome ball of yarn that can do anything
  • Good design is as simple as possible, not simpler.

Be aware of software entropy

The development team is in a constant fight against software entropy. Every change to the codebase carries with it the chance to mess something up. It can be by introducing new bugs, or simply by destroying the design. A lot of team turnover coupled with feature creep will guarantee software rot.

The team must be made aware of this entropy, because the application is constantly moving towards it. If it sets in too far, you will have an unmaintainable application where changes are so costly and risky that it’s not worth doing them.

When you get the feeling that developers are very hesitant to add any new features, that they prefer to poke at the application from afar (with a tall rod), you are probably looking at the putrid pile of an unmaintainable application.

That is why every new feature must not only be weighed against the direct cost to code it, but also against the hidden cost of adding complexity to the application as a whole, making it more expensive to maintain.

  • Allocate time for redesign and the fight of software rot
  • Be extremely wary of feature creep
  • Keep conceptual integrity in mind when adding new features