Wednesday, January 19, 2011

Can we really control the quality/cost trade-off?

An old rule in software project management (also known as the "Project Triangle") is that you can have a system that:
- has low development cost (Cheap)
- has short development time (Fast)
- has good quality (Good)
except you can only choose two :-). That seems to suggest that if you want to go fast, and don't want to spend more, you can only compromise on quality. Actually, you can also compromise on feature set, but in many real-world cases, that happens only after cost, time and quality have already been compromised.

So, say that you want to compromise on quality. Ok, I know, most engineers don't want to compromise on quality; it's almost like compromising on ethics. Still, we actually do, more often than we'd like to, so let's face the monster. We can:

- Compromise on external quality, or quality as perceived by the user.
Ideally, this would be simply defined as MTBF (Mean Time Between Failures), although for many users, the perceived "quality" is more akin to the "user experience". In practice, it's quite hard to predict MTBF and say "ok, we won't do error checking here because that will save us X development days, and only decrease MTBF by Y". Most companies don't even measure MTBF. Those who do, use it just as a post-facto quality goal, that is, they fix bugs until they reach the target MTBF. We just don't know enough, in the software engineering community, to actually plan the MTBF of our product.

- Compromise on internal quality, or quality as perceived by developers.
Say goodbye to clarity of names, separation of concerns, little or no duplication of logic, extendibility in the right places, low coupling, etc. In some cases, poor internal quality may resurface as poor external quality (say, there is no error handling in place, so the code just crashes on unexpected input), but in many cases, the code can be rather crappy but appear to work fine.

- Compromise on process, or quality as perceived by the Software Engineering Institute Police :-).
What usually happens is that people start bypassing any activity that may seem to slow them down, like committing changes into a version control system :-) or being thorough about testing. Again, some issues may resurface as external quality problems (we don't test, so the user gets the thrill of discovering our bugs), but users won't notice if you cut corners in many process areas (well, they won't notice short-term, at least).

Of course, a "no compromises" mindset is dangerous anyway, as it may lead to an excess of ancillary activities not directly related to providing value (to the project and to users). So, ideally, we would choose to work at the best point in the quality/time and/or quality/cost continuum (the "best point" depends on a number of factors: more on this later). That assumes, however, that there is indeed a continuum. Experience taught me the opposite: we can only move in discrete steps, and those are few and rather far apart.

Quality and Cost
Suppose I'm buying a large quantity of forks, wholesale. Choices are almost limitless, but overall the real choice is between:

- the plastic kind (throwaway)

It works, sort of, but you can't use it for all kinds of food, you can't use it while cooking, and so on.

- the "cheap steel" kind.

It's surely sturdier than plastic: you can actually use it. Still, the tines are irregular and feel uncomfortable in your mouth. The weight isn't properly balanced. And I wouldn't bet on that steel actually being stainless.

- the good stainless steel kind.

Properly shaped, properly balanced, good tasting :-), truly stainless, easy to clean. You may or may not get decorations; the price doesn't change much either way (more on this below).

- the gold-plated kind.

It shows that you've got money, or bad taste, or both :-). It's not really better than the good stainless fork, and it must be handled more carefully. It's a real example of "worse is better".

Of course, there are other kinds of forks, like those with resin or wood handles (which aren't really cheaper than the equivalent steel kind, and are much worse on maintenance), or the sturdy plastic kind for babies (which leverages parental anxiety to charge a premium). Still, the quality/cost landscape does not get much richer than that. Which brings us to prices. I did a little research (not really exhaustive – it would take forever :-), but overall it seems that, normalizing on the price of the plastic kind, we get this price scale:

Plastic: 1
Cheap steel: 10
Stainless steel: 20
Gold-plated: 80

The range is huge (1-80). Still, we have a big jump from throwaway to cheap (10x), a relatively small jump from cheap to good (2x), and another significant jump between good and gold-plated (4x). I think software is not really different. Except that a 2x jump is usually considered huge :-).

Note that I'm focusing on building cost, which is the significant factor in market price for a commodity like forks. Once you consider operating costs, stainless steel is the obvious winner.

The Quality/Cost landscape for Software
Again, I'll consider only building costs here, and leave maintenance and operation costs aside for a moment.

If you build throwaway software, you can get really cheap. You don't care about error handling (which is a huge time sink). You copy&paste code all over. You proceed by trial and error, and sometimes patch code without understanding what is broken, until it seems to work. You keep no track of what you're doing. You use function names like foo and bar. Etc. After all, you just want to run that code once, and throw it away (well, that's the theory, anyway :-).
We may think that there is no place at all for that kind of code but, actually, end-user programming often falls into that case. I've also seen system administration scripts, lots of scientific code, and quite a bit of legacy business code that would easily fall into this category. They were written as throwaways, or by someone who only had experience writing throwaways, and somehow they didn't get thrown away.

The next step is very far apart: if you want your code to be any good, you have to spend much more. It's like the plastic/"cheap steel" jump. Just put decent error handling in place, and you'll spend way more. Do a reasonable amount of systematic testing (automated or not) and you're gonna spend more. Aim for your code to be reasonably readable, and you'll have to think more, go back and refactor as you learn more, etc. At this stage, internal quality is still rather low; code is not really stainless. External quality is acceptable. It doesn't feel good, but it gets [some] work done. A lot of "industrial" code is actually at this level. Again, maintenance may not be cheap, but I'm only considering building costs here; a 5x to 10x jump compared to throwaway code is quite reasonable.
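
To make that jump concrete, here is a minimal sketch (in Java; the task and names like readThreshold are invented for illustration) of the same small job written first in throwaway style, then in "cheap steel" style. Nothing but error handling separates the two, and the second is already several times the code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class Threshold {

    // Throwaway style: no checks. A missing file, an empty file or
    // malformed content becomes an unexplained crash, possibly far
    // from the actual cause.
    static int foo(String p) throws Exception {
        return Integer.parseInt(Files.readAllLines(Path.of(p)).get(0).trim());
    }

    // "Cheap steel" style: same task, but every failure mode is checked
    // and reported. Still not stainless (no logging, no configurable
    // default, no retry policy), yet already several times the code.
    static int readThreshold(String p) {
        try {
            var lines = Files.readAllLines(Path.of(p));
            if (lines.isEmpty() || lines.get(0).isBlank())
                throw new IllegalArgumentException(p + " is empty");
            return Integer.parseInt(lines.get(0).trim());
        } catch (IOException e) {
            throw new IllegalStateException("cannot read " + p, e);
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException(p + " does not contain a number", e);
        }
    }

    public static void main(String[] args) throws Exception {
        Files.writeString(Path.of("threshold.txt"), "42");
        System.out.println(foo("threshold.txt"));           // 42, if you're lucky
        System.out.println(readThreshold("threshold.txt")); // 42, or a clear error
    }
}
```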

The next step is when you invest more in -ilities, as well as in external quality through more systematic and deeper testing. You're building something that you and your users like. It's truly stainless. It feels good. However, it's hard to be "just a little stainless". It's hard to be "somehow modular" or "sort of thread-safe". Either you're on one side, or you're on the other. If you want to be on the right side, you're gonna spend more. Note that, just like with forks, long-term cost is lower for good, stainless code, but as far as building cost is concerned, a 2x increase compared to "cheap steel" code is not surprising.
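
As a small illustration of the "sort of thread-safe" point, consider this hypothetical counter (the class is invented): synchronizing only the write may feel like being halfway there, but the class as a whole is simply not thread-safe.

```java
public class Counter {
    private long count = 0;

    // The write is synchronized...
    public synchronized void increment() { count++; }

    // ...but the read is not, so it has no visibility guarantee and,
    // for a long, not even atomicity: it can observe a stale or torn
    // value. The fix is synchronizing get() too (or using AtomicLong).
    public long get() { return count; }

    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment();
        System.out.println(c.get()); // fine single-threaded; unreliable under concurrency
    }
}
```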

Gold-plating is obvious: grandiose design that serves no purpose (and that would appear as gratuitous complexity to a good designer), coding standards that add no value (like requiring a comment on each and every parameter of every function), etc. I've seen systems spend a lot of energy trying to be as scalable as Amazon, even though they only had to serve a few hundred concurrent clients. Gold-plated systems cost more to build, and may also incur higher maintenance costs afterward (be careful not to scratch the gold :-). Of course, it takes experience to recognize gold-plating. To someone who has never designed a system for testability, interfaces for mocking may seem like a waste. To an experienced designer, they tell a different story.
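
Here is a minimal sketch of that testability point, with invented names (Clock, SessionTimeout): depending on a tiny interface instead of calling System.currentTimeMillis() directly looks like gratuitous indirection, until you need a test that controls time.

```java
// Clock and SessionTimeout are invented names; the point is the seam.
interface Clock {
    long now();
}

class SessionTimeout {
    private final Clock clock;
    private final long startedAt;

    SessionTimeout(Clock clock) {
        this.clock = clock;
        this.startedAt = clock.now();
    }

    boolean expired(long ttlMillis) {
        return clock.now() - startedAt > ttlMillis;
    }
}

public class ClockDemo {
    public static void main(String[] args) {
        // Production wiring: the real clock.
        Clock real = System::currentTimeMillis;

        // Test wiring: a fake clock the test can move forward,
        // so no sleep() calls and no flakiness.
        long[] fakeTime = { 0 };
        Clock fake = () -> fakeTime[0];

        SessionTimeout session = new SessionTimeout(fake);
        fakeTime[0] = 10_000;
        System.out.println(session.expired(5_000)); // true, deterministically
    }
}
```

That one extra interface is the difference between a deterministic test and a test full of sleeps; to the untrained eye, it's just gold-plating.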

But software is... soft!
As usual, a metaphor can be stretched only so far. Software quality can be easily altered over time. Plastic can't be transformed into gold (if you can, give me a call :-), but we can release a bug-ridden version and fix issues later. We may incur intentional technical debt and pay it back in the next release. This could be the right strategy in the right market at the right time, but that's not always the case.

Indeed, although start-ups, customer discovery, innovative products, etc. seem to get all the attention lately, if you happen to work in a mature market you may want to focus on quality early on. Some customers are known to be forgiving about quality issues in exchange for early access to innovative products; that's simply not the general case.

Modular Quality?
Theoretically, we could invest more in some "critical" components and less in others. After all, we don't need the same quality (external and internal) in every single portion. We can accept quirks here and there. Due to the fractal nature of software, this can be extended to the entire hierarchy of artifacts. It's surely possible. However, it takes a lot of maturity to develop on the edge of quality. Having to constantly reason about the quality/cost trade-off can easily cost you more than you're saving: I'd rather spend that reasoning on creating quality code.

Once again, however, we may have a plan: start with plastic and grow into stainless steel over time. We must be aware that this is easier said than done. I've seen it tried many times over my long career. Improving software quality is just another case of moving our software in the decision space. We decided to make software out of plastic early on, and now we want to move it to another place, say the cheap steel spot. Some work is required. Work is proportional to the distance (which is large – see above), but also to the opposing force. Whatever form that force takes, a significant factor is always the mass to be moved, and here lies the key. It's hard to move a monolithic mass of code into another place (in the decision space). It's easier if you can move it a module at a time.

Within a small modular unit, you can easily improve local quality. Change a switch/case into polymorphism. Add an interface to decouple two concrete classes. It's easy and it's cheap, because mass is small. Say, however, that you got your modular structure wrong: an important concept has not been identified, and is now scattered all around. You have to tweak a lot of code. While doing so, you'll find that you're not just fighting the mass you have to move: you have to fight the gravitational attraction of the surrounding code. That may reveal more scattered concepts. It's a damn lot of work, possibly more than it's worth. Software is not necessarily soft.
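
For instance, here is the switch-to-polymorphism clean-up mentioned above, as a minimal sketch (the Shape example and names are mine, not from any specific codebase):

```java
// Before (sketched): behavior dispatched on a type tag; every new
// shape means hunting down and extending each switch in the codebase.
//
//   switch (shape.kind) {
//       case CIRCLE: return Math.PI * shape.r * shape.r;
//       case SQUARE: return shape.side * shape.side;
//       ...
//   }

// After: each class owns its behavior; adding a shape is local.
interface Shape {
    double area();
}

record Circle(double r) implements Shape {
    public double area() { return Math.PI * r * r; }
}

record Square(double side) implements Shape {
    public double area() { return side * side; }
}

public class Shapes {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Square(2.0) };
        for (Shape s : shapes)
            System.out.println(s.area()); // 3.14159..., 4.0
    }
}
```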

So, can we really move software around? Yes, in the small. Yes, if you got the modular structure right, and you're only cleaning up within that structure. Once again, the fractal nature of software can be our best friend or our worst enemy. Get the right partitioning between applications / services, and you only have to work inside. Get the right partitioning between components, and you only have to work inside. And so on. It's fine to have a plastic component, as long as the interface is right. Get the modular structure wrong, and sooner or later you'll need to make wide-range changes, or accept "cheap steel" quality. If we got it wrong at a small scale, it's no big deal. At a large scale, it can be a massacre.

Conclusions
Once again, modularity is the key. Not just "any" modularity, but a modularity aligned with the underlying forcefield. If you get that right, you can build plastic components and evolve them to stainless steel quality over time. Been there, done that. Get your modular structure wrong, and you'll lack locality of action. In other words: if you plan on cutting corners inside a module, you have to invest at the interface level. That's simpler if you design top-down than if you're in the "emergent design" camp, because the module will be underengineered inside, and we can't expect a clean interface to emerge from underengineered code.

5 comments:

Unknown said...

Is modularity aligned with the underlying forcefield the same as finding the least entangled design? I've been thinking about entanglement a bit recently, but it's somehow difficult to really understand where a change in a module can effectively bring modifications to the behaviour of other ones. Shaping the forcefield can help, but this sounds much like what you are trying to do for the design/forcefield dichotomy, which could bring us to introduce yet another meta-level to shape the forcefield :S
These are just some thoughts, and I know that a designer has to find the point where he can only rely on intuition, knowledge and experience, at least if he wants to do his homework on time and on budget ;)

Carlo Pescio said...

I was thinking more in terms of centers, like: build your classes/modules/services around centers as they emerge from the forcefield.
On the other hand, entanglement is one of the forces shaping centers, as entangled centers on one level want to be a center on the next level.
We shape the forcefield through decisions. That's indeed the meta-level, the decision space.
About the difficulty of understanding entanglement in practice: there is still much to be said. I've also been thinking about a short post about intensional vs. extensional knowledge, which is an important concept in understanding the sources of change. It's not a new concept, but I'm not so sure everyone is familiar with it.
Most likely, it's going to be the next in the NOSD series. However, my forthcoming post will be on software design literature (I guess :-)

Unknown said...

Not a new concept, but very, very well expressed, valid, and certainly of interest to anyone who has made building forks his trade. Congratulations.
