
Saturday, May 21, 2011

The CAP Theorem, the Memristor, and the Physics of Software

Where I present seemingly unrelated facts that are, in fact, not unrelated at all :-).

The CAP Theorem
If you keep current on technology, you're probably familiar with the proliferation of NoSQL, a large family of non-relational data stores. Most NoSQL stores have been designed for internet-scale applications, where large data stores are preferably deployed using an array of loosely connected, low-cost servers. That brings a set of well-known issues to the table:

- we'd like data to be consistent (that is, to respect all the underlying constraints at any time, especially replication constraints). We call this property Consistency.

- we'd like fast response time, even when some server is down or under heavy load. We call this property Availability.

- we'd like to keep working even if the system gets partitioned (a connection between servers fails). We call this property Partition tolerance.

The CAP Theorem (formerly Brewer's Conjecture) says that we can only choose two of the three properties.

There is a lot of literature on the CAP Theorem, so I don't need to go in further details here: if you want a very readable paper covering the basics, you can go for Brewer's CAP Theorem by Julian Browne, while if you're more interested in a formal paper, you can refer to Brewer’s Conjecture and the Feasibility of Consistent, Available, Partition-Tolerant Web Services by Seth Gilbert and Nancy Lynch.

There is, however, a relatively subtle point that is often brought up and seldom clarified, so I'll give it a shot. Most people realize that there is some kind of "connection" between the concept of availability and the concept of partition. If a server is partitioned away, it is by definition not available. In that sense, it may seem like there is some overlap between the two concepts, and overlapping concepts are bad (orthogonal concepts should be preferred). However, it is not so:

- Availability is an external property, something the clients of the data store can see. They either can get data when they need it, or they can't.

- Partitioning is an internal property, something clients are totally unaware of. The data store has to deal with that on the inside.

Of course, given the fractal nature of software, in a system of systems we may see lack of availability at one level, because of the lack of partition tolerance at a lower level (or vice-versa, as an unavailable node may not be distinguishable from a partitioned node).

To sum up, while the RDBMS world has usually approached the problem by choosing Consistency and Availability over Partition tolerance, the NoSQL world has often chosen Availability and Partition tolerance, leading to the now famous Eventually Consistent model.

The Memristor, or the Value of a Theory
As a kid, I spent quite a bit of time hacking electronic stuff. I used to scavenge old TV sets and the like, and recover resistors, capacitors, and inductors to use in very simple projects. Little did I know that, theoretically, there was a missing passive element: the memristor.

Leon Chua developed the theory in 1971. It took until 2008 before someone could finally create a working memristor, using nanoscale techniques that would have looked like science fiction in 1971. Still, the theory was there, long before, pointing toward the future. At the bottom of the theory was the realization that the three known passive elements could be defined by a relationship between four concepts (current, voltage, charge, and flux-linkage). By investigating all possible relationships, he found a missing element, one that was theoretically possible, yet still undiscovered. He used the term completeness to indicate this line of reasoning, although we often call this kind of property symmetry.

Software Entanglement
In Chapter 12 of my ongoing series on the nature of software, I introduced the concept of Entanglement, further discussed in Chapter 13 and Chapter 14 (for the artifact world).

Here, I'll reuse my early definition from Chapter 12: Two clusters of information are entangled when performing a change on one immediately requires a change on the other.

Entanglement is constantly at work in the artifact world: change a class name, and you'll have to fix the name everywhere; add a member to an enumeration, and you'll have to update any switch/case statement over the enumeration; etc. It's also constantly at work in the run-time world.
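
As a minimal illustration of artifact-side entanglement, here is a hypothetical enum and a switch over it (just a sketch): extend one, and the other must change immediately to stay consistent.

enum class Shape { Circle, Square };   // adding Triangle here...

double area_factor( Shape s )
{
    switch( s )
    {
        case Shape::Circle: return 3.14159;
        case Shape::Square: return 1.0;
        // ...requires updating this switch as well (and every other one like it)
    }
    return 0.0;
}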

Although I haven't formally introduced entanglement for the run-time world (that would/will be the subject of the forthcoming Chapter 15), you can easily see that maintaining a database replica creates a run-time entanglement between data: update one, and the other must be immediately (atomically, as we usually say) updated. Perhaps slightly more subtle is the fact that referential integrity is strongly related to run-time entanglement as well. Consistency in the CAP theorem, therefore, is basically Entanglement satisfaction.

Once we understand that, it's perhaps easier to see that the CAP theorem applies well beyond our traditional definition of distributed system. Consider a multi-core CPU with independent L1 caches. Cache contents become entangled whenever they map to the same address. CPU designers usually go with CA, forgetting P. That makes a lot of sense, of course, because we're talking about in-chip connections.

That's sort of obvious, though. Things get more interesting when we start to consider the run-time/artifact symmetry. That's part of the value of a [good] theory.

A CAP Theorem for Artifacts
My perspective on the Physics of Software is strongly based on the run-time/artifact dualism. Most forces apply in both worlds, so it is just natural to bring ideas from one world to another, once we know how to translate concepts. Just like symmetry allowed Chua to conceive the memristor, symmetry may allow us to extend concepts from the run-time world to the artifact world (and vice-versa). Let's give the CAP theorem a shot, by moving each run-time concept to the corresponding notion in the artifact world:

Consistency: just like in the run-time world, it's about satisfying Entanglement. While change in the run-time world is a C/U/D action on data, here it is a C/U/D action on artifacts. If you add a member to an enumerated type, your artifacts are consistent when you have updated every portion of code that was enumerating over the extension of that type, etc.

Availability: just like in the run-time world, it is the ability to access a working, up-to-date system. If you can't deploy your new artifacts, your system is not available (as far as the artifact world is concerned). The old system may be up and running, but your new system is not available to the user.

Partition tolerance: just like in the run-time world, it means you want your system to be usable even if some changes can't be propagated. That is, some artifacts have been partitioned away, and cannot be reached. Therefore, some sort of partial deployment will take place.

Well, indeed, you can only have two :-). Consider the artifacts involved in the usual distributed system:

- one or more servers
- one or more clients
- a contract in between

The clients and the server (as artifacts – think source code not processes) are U/D entangled over the contract: change a method signature (in RPC speak), or the WSDL, or whatever represents your contract, and you have to change both to maintain consistency.

What happens if you update [the source code of] a server (say, to fix a serious bug), and by doing so you need to update the contract, but cannot update [the source code of] a client [in time]? This is just like a partition. Think of a distributed development team: the team working on the client, for one reason or another, has been partitioned away.

You only have two choices:

- let go of Availability: you won't deploy your new server until you can update the client. This is the artifact side of not having your database available until all the replicas are connected (in the run-time world).

- let go of Consistency: you'll have to live with an inconsistent client, using a different contract.

Consequences
Chances are that you:
- Shiver at the idea of letting go of Consistency.
- Think versioning may solve the problem.

Of course, versioning is a coping strategy, not a solution. In fact, versioning is usually proposed in the run-time world of data stores as well. However, versioning your contract is like pretending that the server has never been updated. It may or may not work (what if your previous contract had major security flaws that can't be plugged without changing the contract? What if you changed your server in a way that makes the old contract impossible to keep? etc.). It also puts a significant burden on the development team (maintaining more source code than strictly needed) and may significantly slow down development, which is another way to say it's impacting availability (of artifacts).

It is important, however, that we thoroughly understand context. After all, any given function call is a contract, and we surely want to maintain consistency as much as we can. So, when does the CAP Theorem apply to Artifacts? Partition tolerance and Availability must be part of the equation. That requires one of the following conditions to apply:

- Independent development teams. The Facebook ecosystem, for instance, is definitely prone to the artifact side of the CAP Theorem, just like any other system offering a public API to the world at large. You just can't control the clients.

- A large system that cannot be updated in every single part (unless you accept to slow down deployment/forgo availability). A proliferation of clients (web based, rich, mobile, etc) may also bring you into partitioning problems.

If you work on a relatively small system, and you have control of all clients, you're basically on safe ground – you can go with Consistency and Availability, because you just don't have significant partitions. Your development team is more like a multi-core CPU than a large distributed system.

Once we understand that versioning is basically a way to pretend changes never happened on the server side, is there something similar to Eventual Consistency for the artifact side?

TolerantReader may help. If you put extra effort into your clients, you can actually have a working system that will eventually be consistent (when the clients' source code is finally updated) but still be functional during transition times, without delaying the release of server updates. Of course, anything that requires discipline on the client side is rather useless in a Facebook-like ecosystem, but it might be an excellent strategy for a large system where you control the clients.
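
To give a rough idea of what that extra effort looks like, here is a hedged sketch of a TolerantReader-style client (hypothetical message and field names, with a plain map standing in for whatever decoding layer you actually use): known fields are read if present, unknown fields are ignored, missing optional fields get defaults.

#include <map>
#include <string>

struct OrderView
{
    std::string id;
    std::string currency;
};

OrderView read_order( const std::map< std::string, std::string >& fields )
{
    OrderView v;
    auto id = fields.find( "id" );
    if( id != fields.end() )
        v.id = id->second;                                          // read what we know, don't crash if it's missing
    auto cur = fields.find( "currency" );
    v.currency = ( cur != fields.end() ) ? cur->second : "EUR";     // default what's optional
    // anything else a newer server sends is simply ignored
    return v;
}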

Interestingly, Fowler talks about the virtues of TolerantReader as if it were always warranted for a distributed system. I tend to disagree: it makes sense only when some form of development-side partitioning can be expected. In many other cases, an enforced contract through XSD and code generation is exactly what we need to guarantee Consistency and Availability (because code generation helps to quickly align the syntactical bits – you still have to work on the semantic parts on both sides). Actually, the extra time required to create a TolerantReader will impact Availability in the short term (the server is ready, the client is not). Context, context, context. This is one more reason why I think we need a Physics of Software: without a clear understanding of forces and materials, it's too easy to slip into the fallacy that just because something worked very well for me [in some cases], it should work very well for you as well.

In fact, once you recognize the real forces, it's easy to understand that the problem is not limited to service-oriented software, XML contracts, and the like. You have the same problem between a database schema and a number of different clients. Whenever you have artifact entanglement and the potential for development partitioning (see above) you have to deal with the artifact-side CAP Theorem.

Moving beyond TolerantReader (which is not helping much when your client is sending the wrong data anyway :-), when development partitioning is expected, you need to think about a flexible contract from the very beginning. Facebook, for instance, allows clients to specify which fields they want to receive, but is still pretty rigid in the fields they have to send.

This is one more interesting R&D challenge that is not easily solved with today's technology. Curiously enough, in the physical world we have long understood the need to use soft materials at the interface level to compensate for imperfections and unexpected variations of contact surfaces. Look at any recent, thermally insulated window, and most likely you're gonna find a seal between the sash and the frame. Why are seals still needed in a world of high-precision manufacturing? ;-)

Conclusions
Time for a confession: when I first thought about all the above, I had already read quite a bit on the CAP Theorem, but not the original presentation from Eric Brewer where the conjecture (it wasn't a theorem back then) was first introduced. The presentation is about a larger topic: challenges in the development of large-scale distributed systems.

As I started to write this post, I decided to do my homework and go through Brewer's slides. He identified three issues: state distribution, consistency vs. availability, and boundaries. The first two are about the run-time world, and ultimately lead to the CAP theorem. While talking about boundaries, however, Brewer begins with run-time issues (independent failure) and then moves into the artifact world, talking (guess what!) about contracts, boundary evolution, XML, etc. Interestingly, nothing in the presentation suggests any relationship between the problems. Here, I think, is the value of a good theory: as for the memristor, it provides us with leverage, moving what we know to a higher level.

In fact, another pillar of the Physics of Software is the Decision Space. I'm pretty sure the CAP theorem can be applied in the decision space as well, but that will have to wait.

Well, if you liked this, stay tuned for Chapter 15 of my NOSD series, share this post, follow me on twitter, etc.

Monday, October 04, 2010

Notes on Software Design, Chapter 11: Friction in the Artifacts world

When I first "got" the concept of run-time friction, I thought it made sense only in the run-time world. I was thinking of friction as "everything that gets in the way as we process the Function", and since there is no Function in the artifact world, there can be no friction as well. That disturbed me a little, because every other concept was present in both worlds.

Later on, I realized I could extend that notion to the artifact side in two distinct ways. One didn't survive scrutiny. It was too informal, although somehow I'd like to bring back some of the underlying reasoning, probably as part of a different property. The other proved more solid. Since a few people asked me (in real life) how I get these ideas, I think it might be interesting to tell the story behind the concept; after all, a blog ought to be a... log :-). The story is not really linear, but then, very few things in life are linear.

Step 1
It was an early morning back in July. I was running. At some point (within the first few kilometers) I had a flash that perhaps the notion of mass was not a primitive concept. Perhaps there was a concept of volume (and LOC would give volume, not mass) and a concept of density (after all, lines can be quite different). I spent a few minutes thinking about density (the simplest idea being that perhaps something like cyclomatic complexity could explain density), then thinking that I didn't like volume because it reminded me of Halstead's Software Science, and I didn't want my work to be so disconnected from practice. Then I started to think about a concept of surface; maybe there was an ideal volume / surface ratio too? Then the zen effect of running took over and I blissfully stopped thinking :-)). [most of those ideas were good, and at some point I'll have to reconsider a few things].

Step 2
Days later, I was thinking about giving a name to this stuff I'm writing. I came up with a few ideas, and also ran a trademark / domain name search, because you never know. Looking for "Physics of Software", I found a remotely related entry in the C2 wiki: Physical Cues In Software Development.
Now, that stuff seems more concerned with the geometry/topology of code than with the physics of software, but while reading that page I was slightly tantalized by this sentence: "Too dense to refactor easily". Interesting. I did some literature research on density vs. refactoring, but nothing substantial came up.

Step 3
Days later again. I was writing some code (yeah, I write real world code too :-). I tend to write short methods: I practice what I preach. Still, I was in the middle of a rather long function (by my standards, that is, about 100 lines). I was looking at it, trying to understand how it got to be that big. I could see the gravitational effect of having used a massive third party component; that was consistent with my current understanding of the scale-free nature of software (more on this another time).
I could also see a few small, simple improvements, but in the end it was not trivial to refactor that method into a few shorter functions. Sure, it could be done, just not by selecting a few lines and choosing "extract method" from the refactoring menu. I had to create new classes, shuffle responsibilities around. Overall, it was a large effort (given the relatively small mass). I contemplated the idea of leaving that function alone :-). Too dense to refactor easily? Hmm.

Step 4
Perhaps a couple of weeks later. This thing kept bouncing in my head. I always assumed I could refactor every single method into a set of smaller methods, perhaps introducing new classes. Sure, run-time friction could grow as a result, but that's just something to be balanced. But was that actually true? Could I write a function that was basically impossible to refactor, that is, where extracting a few lines required either a huge effort or, even better, where to move N symbols outside, you basically have to add N symbols inside (to call the extracted method)? Having too much to do, I left the question unanswered.

Step 5
Just a few days later. Late evening, but unwilling to call it a day :-), I sat down trying to write a gordian function :-), a simple sequence of lines that couldn't easily be refactored. This is what I ended up with:

void f( int a, int b )
{
    int c = a + b;
    int d = a + c;
    int e = b + c;
    int f = a + d;
    int g = b + d;
    int h = c + d;
    int i = c + e;
    int j = b + e;
    int k = a + e;
    // ... use f, g, h, i, j, k as above
}

It may be easier to visualize the pattern through a graphical view:



The idea is pretty simple: at every level, I'm using nearby concepts and distant concepts; I'm also creating nodes for subsequent use in lower levels. Now, this function is trivial. Cyclomatic complexity is just 1. Yet it is hard to refactor. It is hard to move things (lines, symbols, concepts) around. So I thought perhaps this was "density".

Step 6
Out of nowhere, a few days later I realized that density was not the right name. When you have trouble moving things around, we call it viscosity, not density. That triggered an internal alert: viscosity has already been used in some computing literature, and I hate to redefine existing terms, so let's check the literature again.

Step 7
To my knowledge, "viscosity" has been used in:
- The Cognitive Dimensions of Notations literature, where it is defined as "the difficulty of making small changes to the information structure" or "resistance to change", both of which are rather similar to what I'm thinking. Note that although the papers above are about the notation, not the code, I discussed extending those concepts from tools to materials almost three years ago.
- The well-known Design Principles and Design Patterns paper by Robert Martin, where it is defined as "When the design preserving methods are harder to employ than the hacks, then the viscosity of the design is high". This is completely unrelated, and honestly I'm not sure that "viscosity" is the right term. Indeed, Martin is using several physics-lookalike properties (immobility, fragility, rigidity), but it seems like they've been adopted on the basis of some vernacular usage of the terms, not on the basis of a strong correlation between the software world and the physical world (there is certainly no notion of "preserving a design vs. hacks" in viscosity as defined in physics).

In the end, I considered viscosity as a good choice for the difficulty of moving knowledge around in the artifact world.

Step 8
Guess what. Viscosity is basically friction (in fluids). Ok, I got it :-).
Just like run-time friction kicks in when you try to move run-time knowledge around, artifact-side friction kicks in when you try to move artifact-side knowledge around. Code is one way of representing artifact-side knowledge. Diagrams are another way. They both manifest some resistance to change.
Once you get the concept (almost) right, it's time to clean things up, come up with a more precise definition, see if it's useful and what you can learn from it.

Defining viscosity
Why is the artifact in step 5 viscous, that is, what makes it difficult to move knowledge around? The main issue is that we can't find sub-centers, because every line has local interactions (with nearby knowledge) and non-local interactions (with distant knowledge). So although every single line is using just a few symbols, you can't simply find a subset of lines that is relatively isolated from the rest. Every line is also very simple on its own, and not worth moving outside alone.

So, I could define a viscous artifact A like this:

A is viscous when given any subset B of A, the mass of knowledge inside B is not significantly higher than the amount of knowledge exchanged between B and A-B

I crafted this definition rather carefully :-). In fact, the problem with the function above is not just that every line depends on nearby and distant knowledge. It's also that it's doing very little. Otherwise, we could move a significant portion of code (doing a "lot" of stuff) outside the function, of course by passing parameters. But then, the mass of knowledge inside the subset B would be higher than the mass of knowledge exchanged (the parameters). That is not the case in function f.
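
To see the definition at work on function f, consider extracting just the last three lines into a helper (a sketch): we move three trivial lines out, and we have to pass four values in. The knowledge exchanged is on par with the knowledge moved.

struct Tail { int i; int j; int k; };

Tail tail( int a, int b, int c, int e )    // four values in...
{
    return Tail{ c + e, b + e, a + e };    // ...for three lines (i, j, k) moved out
}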

Viscous artifacts resist shear/tensile stress, that is, they resist extraction of knowledge. As we try to move the knowledge in B outside A, the knowledge exchanged with A-B is opposing the movement. The effort we have to spend to reorganize viscous artifacts is the equivalent of friction energy in the run-time world. Only, this time, it's human work, not CPU work.

Note: I could come up with some kind of formula for a viscosity coefficient. I did not because I feel I'm not yet at that stage. Still, the minimum ratio between exchanged and internal knowledge (quantified over the subsets B, not over A) seems like a good candidate.
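
Just to make that candidate explicit (a sketch, nothing more, using the same informal notion of "knowledge" as above):

viscosity( A ) ≈ min over all subsets B of A of ( knowledge exchanged between B and A-B ) / ( knowledge inside B )

The function in step 5 scores high because even the best candidate subset exchanges about as much as it contains.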

It is important to understand that viscosity is an internal property of an artifact. It has nothing to do with the artifact interface. It's about the artifact internals. Of course, given the hierarchy of artifacts, an artifact may have low [internal] viscosity, yet be part of a viscous higher-level artifact. For instance, a low-viscosity function can be part of a high-viscosity class. In that case, it would be easy to move some portions of code outside the function, but not outside the class.

Consequences
Once again, we have to resist the temptation to define some property as "bad". In the physical world, viscosity is not "bad". It can be useful, or it can be a problem: it's a matter of context.

Besides, when you look at the definition above, you may see some relationship with a vague notion of cohesion: a viscous artifact is "more cohesive". Cohesion is usually considered a good property. What's wrong here?

My current understanding is that it's fine to be viscous when the mass is small. Actually, it's good to be viscous when the mass is small. A small viscous chunk of knowledge is a good center: it stands on its own, and can't be easily broken. However, viscosity should decrease as mass increases. That gives an opportunity for extraction of knowledge, thereby forming new centers. Note that by saying so, I'm implicitly accepting the existence of high-mass artifacts. Still:

- high-mass, low-viscosity artifacts can be considered rather innocent underengineering, a form of technical debt that can be easily repaid (see also the latest comments to Chapter 0). The high gravity of the artifact will bring in more stuff: that would be the ideal time to refactor the code, which will be easy, since viscosity is low.

- high-mass, high-viscosity artifacts are a serious design weakness that should be dealt with as soon as you notice, possibly while you're still creating the artifact. Gravity will bring in more stuff, and things can only get worse.

Note: a trivial refactoring of a viscous artifact will significantly increase run-time friction, as we'll have to pass a lot of parameters around. In most cases, we'll have to rethink the artifact and perhaps a significant portion of the surroundings, as we may have chosen the wrong centers.

Conclusions
It's sort of revealing that I started with a notion of density but ended up with a notion of viscosity, and had to reject an existing definition of viscosity in the process. Labeling software phenomena after physical phenomena is simple. Doing so in a meaningful way is not so trivial.

I think that keeping the duality run-time / artifact in mind is helping me a lot in this journey. Things are much easier once you can clearly see that you're dealing with two different worlds. A recent post by Rico Mariani, for instance, raises an interesting point, which is trivially explained within my frame of reasoning, but seems unnecessarily obscure when you simply talk about "coupling" and ignore the artifact/run-time duality.

As I progress in cleaning up some ideas on the decision space, I hope to bring in even more clarity. Which reminds me that I have something really important to say about design and decisions. It's short, and I'll let it preempt :-) the next chapter on tangling.

Sunday, September 12, 2010

Notes on Software Design, Chapter 10: Run-Time Friction

So, here is the story. I keep a lot of notes. Some are text files with relevant links and organized ideas. Most are rather embarrassing scribbles on just about any piece of paper that is lying around when I need it. From time to time, I move some notes from paper to files, discard concepts that didn't prove themselves, and rearrange paragraphs to fake some kind of logical, sequential reasoning over a process that was, in fact, rather chaotic. Not surprisingly (since software is just another way to encode knowledge), David Parnas suggested long ago that we could do the same while documenting software design (see "A rational design process: How and why to fake it").
Well, it's not always easy. Sometimes, I try to approach the storytelling from an angle, see that it doesn't work out so well, and look for another. Sometimes I succeed, sometimes I don't (although, of course, the reader is the ultimate judge). This time, I have to confess, I feel like I couldn't find the right angle, the right way to start, to unfold a concept in a way that makes it look simple and natural. So I'll trust you to be smart enough to make sense of what follows :-). It's a very long post, and you may want to digest it in more than one session.

The physical world
I guess you all had to push some furniture around at one time or another. You have probably felt a stronger resistance in the beginning, followed by a milder form of resistance as soon as you got some movement.
The mild resistance is due to kinetic friction, while the initial, stronger resistance is usually due to static friction, that you have to overcome before moving the object (if you're not familiar with kinetic and static friction, wikipedia will tell you more than you want to know :-).

As you move your stuff around, friction makes you waste some energy, in a way that is basically proportional to the normal force, the distance, and the coefficient of kinetic friction (see the page above for the actual equation). I'll get back to this later, but if you move a constant mass on a flat surface, the energy you waste is proportional to the mass you move, the distance you go, and the magic coefficient of friction.

The beauty of all this is that it's simple and rather unambiguous. Friction is always present in mechanical engineering, but it's a well understood concept (as far as engineering is concerned; it's still blurry at the quantum level, at least for the uninitiated like myself), and there is usually no wishy-washy talking about friction. It's not a broad concept, that is, you won't be able to design the next-generation jet engine if all you have in your conceptual toolbox is friction, yet you won't be able to design an engine at all without an appreciation of friction.

The software world
I'm first and foremost a software design practitioner: I design software, almost every day. Sometimes by myself, most often with other people; therefore, I do a lot of "design talk". In many cases, at one point or another, someone is going to bring in "performance" or "efficiency" to support (or reject) a design decision.
We use those words a lot, with different meanings depending on who's saying it and why. It seems like I'm never tired of linking wikipedia, so here is a page on computer performance. Just look at the initial list of different, context-dependent meanings. It's not surprising, then, to find out some people have very peculiar views of performance. "I use arrays because they're more efficient". Sure, except that then you do a linear search because you need multiple indexes; say "efficient" again :-)?

One might expect Computer Science (with capital letters :-) to come to the rescue and define terms more precisely, and hopefully with some relevance for practice. However, computer science is more concerned with computational complexity theory than with the nitty-gritty details of being "fast".
Now, don't get me wrong. You won't get too far as a programmer (and definitely not as a software designer) if you don't get the concept of complexity classes, if you can't see that an algorithm is O( n^2 ) and another is O( n log n ), or if you don't even know what the Big Oh notation is all about. You have to know this stuff, period. In a sense, complexity theory is part of the math of software, and there is little point in investigating a physics of software if you don't get the math first. But math alone won't cut it. However, once we get past the complexity class we get very little assistance from computer science (and I'm purposely ignoring the fact that just because an algorithm is in the O( n log n ) class in the average case doesn't mean I can't beat it with an O( n^2 ) algorithm in my practical cases).

On the "software engineering" side, the usual advice is to build the program and then use a profiler. Yeah, well, sure, beats banging your head against the wall :-), but it's not exactly like knowing what you're doing all along. Still, we make a lot of low-level design decisions while coding, and many of them will ultimately impact "performance". Lacking the basic terminology to think (and talk) about this kind of stuff is rather depressing, so why don't we try to move just a tiny step forward?

Wasting energy in software
So, here I have this piece of software (executable knowledge). For most practical applications, what I need is to get some data (interactively, from a DB, through some kind of device, whatever), transform it in a meaningful way (which could be a complex process encoded in thousands of lines), and spit out some results (which is still data, anyway). The transformation is the Function.

On the artifact side, our software may be using global variables all around, or be based on a nice polymorphic structure, yet the Function doesn't care. The structure we provide on the artifact side is the domain of Form (by now, you're probably familiar with all this stuff).

Now, transformation is a process, and no real-world process is 100% efficient; it's always going to waste something. Perhaps we should look better at that "waste" part. Something I learnt a long time ago, while pondering on principles and patterns, is that overly general concepts (like "performance") must give way to more specialized notions.

Some code, please :-)
Consider this short portion of C code. I'm using C because it's a low-level language, where the implications of any given choice are relatively easy to understand.
double max( double x, double y )
{
    if( x > y )
        return x;
    else
        return y;
}

double max3( double x, double y, double z )
{
    double d = max( x, y );
    d = max( d, z );
    return d;
}
The code is pretty obvious. In a common, stack-based CPU architecture, max3 will copy x and y on the stack and call max; then it will copy d and z and call max again. The return value might be stored in a CPU register or in RAM, depending on the compiler.

Copying those values is a waste of energy, of course. I could manually inline max inside max3 and get rid of that waste. I would sacrifice reusability and perhaps clarity for "higher performance" or "higher efficiency" or "reduced waste". Alternatively, the compiler could inline the function on my behalf (see Chapter 6 for the role of languages on balancing the two worlds).
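
Just to make the trade-off visible, here is the manually inlined version (a sketch): no parameters copied on the stack for max, at the price of giving up its reuse.

double max3_inlined( double x, double y, double z )
{
    double d = ( x > y ) ? x : y;
    return ( d > z ) ? d : z;
}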

What if I'm working on some large data structure? The Fortran guy around the corner will suggest that by keeping your structures in the common area / global memory, you won't even have to pass parameters around: every function knows exactly where to get input and where to store output! Again, we'll sacrifice reusability, and perhaps duplicate large portions of code, for the sake of efficiency. As you go through most literature on High Performance Computing (see, for instance, The Ideal HPC Programming Language, recently reprinted in Communications of the ACM), you'll see that the HPC community is constantly facing the problem of wasted cycles, and is wasting a lot of LOC to prevent that.

Moving data around is not the only way to waste energy. Consider this portion of real-world code, written by a (supposedly) performance-conscious "little meritocracy":
static int is_rfc2822_header(char *line)
{
    int ch;
    char *cp = line;
    if (!memcmp(line, "From ", 5) || !memcmp(line, ">From ", 6))
        return 1;
    while ((ch = *cp++)) {
        if (ch == ':')
            return cp != line;
        if ((33 <= ch && ch <= 57) ||
            (59 <= ch && ch <= 126))
            continue;
        break;
    }
    return 0;
}
Yeah, it's ugly as hell, but it's also wasteful (which is funny, for reasons that are too long to explain here). If you read it carefully, you'll find a way to optimize the "while" body quite a bit, and while you're at it, you can easily make it more readable. Also, the two memcmp calls in the beginning are wasting cycles (going through the first 5 characters twice), but just like the coefficient of friction must be measured in practice, at this level any alternative should really be measured on a real-world CPU.

Note: as we optimize code, we have to assume that it is correct. We don't change Form unless we know that the Function is right. Before posting this, I checked for any update to the codebase (I got that code a few years ago, and it never looked right to me). The code is still the same, but now there is also a comment explaining what the function is intended to do. Unfortunately, that's not what it's doing, which is even more ironic, for the same unspoken reasons above. Anyway, we could easily fix the bug and still optimize the code.
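
For the record, here is one possible streamlining of the loop (a sketch, not a tested patch, and it deliberately leaves the rest of the original logic alone): since ':' is already handled before the range test, the two ranges 33..57 and 59..126 collapse into a single 33..126 test.

    while ((ch = *cp++)) {
        if (ch == ':')
            return cp != line;
        if (ch < 33 || ch > 126)    /* one range check instead of two */
            break;
    }
    return 0;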

Run-time Friction
Just like in the physical world we move object around, in the run-time world of software we move knowledge around. More exactly, we move data around (we call it data flow) and we move the execution point around (we call it control flow). We move that stuff around to calculate some Function. In the process of calculating the Function, we usually waste some cycles. We waste cycles because we have to copy data on the stack, or from one data structure to another. We waste cycles because we do unnecessary comparison, computations, jumps. We waste cycles because we process the same data more than once. Most often, we waste those cycles because we get something in exchange in the Form (artifact) domain. Sometimes, we waste cycles just because of bad coding.

The energy waste is not a constant: copying an integer is different from copying an array of integers (that's weight, of course). Also, if your array has been swapped out to the paging file, the copy is going to cost you more: that's the contribution of distance, and I'll get back to this later. Right now, remember that wasted energy is a consequence of friction, but is not friction.

Causes and types of software friction
We have already seen a few cases of software friction: when you copy data, you waste cycles. Max3 didn't strictly need to copy data: the Function didn't care about reusing max, only Form did. Before we try to define friction more precisely, it's interesting to see how deep the analogy with real-world friction really is. Indeed, we even have static software friction, and kinetic software friction!

Consider a Java (or .NET) virtual machine. When you hit a function for the first time, the code is compiled just in time. This has nothing to do with Function. It is a byproduct of a technological choice. It will cost you some cycles: that's friction. Also, it happens only once, to "put things in motion": that's static friction. In general, static friction will increase latency, while kinetic friction will reduce throughput. Good: we just sorted out the two main components of "performance".

Consider a web service. Before you can call the server, you go through a relatively lengthy process, from high-level stuff (marshaling your data) to low level stuff (establishing a network connection). This is all friction: the Function is happening on the other side, inside the service code. Here we see both static and kinetic friction at play: establishing a connection adds latency, exchanging data over the network reduces throughput.

Consider stored procedures. The ideal stored procedure takes little data in input, does significant CRUD inside, and returns little. This way, we have minimal waste due to kinetic friction, as we exchange little data with the database. Of course, this is not the only way to minimize energy waste: another approach would be to reduce distance, by bringing the database itself in-process. Interestingly, most real-time databases use the second approach.

So, what is causing friction in software? Friction is caused by:
  • A copy of data from one place to another (e.g. parameter passing, temporary variables, etc), as this adds no meaning to data, and therefore is useless as far as Function is concerned.

  • Syntactical transformation of data (e.g. marshaling) which adds no semantics (as above: this processing is not part of the Function). This includes any form of data transformation needed to talk over a non-native protocol.

  • Unnecessary statements (like those that could be removed in the C function above).

  • Redundant access / processing (some will be removed by the compiler, but some won't)

  • Bookkeeping (allocation, deallocation, reference counting, heap defragmentation, garbage collection, paging, etc). All this adds no semantics, and it's irrelevant for the Function: indeed, a well-written garbage collected program should behave properly under the so-called null garbage collector.

  • Unnecessary indirection. This is a long story and I'll leave for another time, as I've yet to talk about indirection in the physics of software.

  • In general, everything that is not strictly necessary to calculate the Function, but has been added because of Form, or because of the programmer's inability to streamline the code to the mere Function, is a source of friction and will waste run-time energy.


Defining friction
At this stage in my understanding of the physics of software, it's still hard to come up with numbers, coefficients, sometimes even formulas. Actually, I'm usually happy when I get some concept right. Still, let's look at a simplified formula for the energy wasted through friction (in the real world):

Normal Force * Coefficient of Friction * Distance.

That would hold pretty well in the software world as well, both at the qualitative (easier) and probably quantitative (not there yet) level. At the qualitative level, it tells us what we can control and perhaps leverage. I'll explore this in the next paragraph. At the quantitative level, it could help to evaluate low-level choices. First, however, we have to define Normal Force, Coefficient of Friction, and Distance.

I've defined distance in the run-time world in Chapter 9. Unfortunately, it's an ordinal scale, so we can't do math with distance. This sort of rules out any chance to have a quantitative definition of friction, but we can also look at it from the other side: a better understanding of friction energy (like: wasted cycles) could shed light on the right measurement scale for distance!

Assuming a flat world (I have no reason to think otherwise) the Normal Force is just weight. Weight could be easily defined as the number of bytes involved. For instance, the cost of a copy is linear with the number of bytes you copy.

The coefficient of friction is a dimensionless parameter. Interestingly, if we decide to measure energy in cycles (which makes some sense, although we usually think of cycles as time, not energy) that would imply that the unit of measurement for Distance is cycles/byte. I'll have to think more about this.

Although the coefficient of friction, in the real world, cannot be predicted but only measured, we have some intuitive grasp of it being related to the materials. As the aforementioned wikipedia page explains, it's a relatively complex "system property", depending on many factors. The same applies in the software world. The cost to move a bunch of bytes from one position to another depends on a bunch of factors. If we want to raise the abstraction level and think in terms of objects, and not bytes, things become more complex. The exact copy semantics (reference, shallow, deep) kicks in. That's fine: a software material with shallow copy semantics would have a different coefficient of friction than one with reference copy semantics.

Overall, I think we have little control over the coefficient of friction (I might be wrong), so for any practical purpose, distance and weight are the most interesting parameters.

Is it useful, anyway?
A good theory, and a good concept, must have a good explanatory power, that is, we should be able to use them to explain known phenomena, explain why something works, rationalize widespread practice or beliefs, etc.

As I've already discussed, the evolution of programming languages can be largely seen as an attempt to balance the world of artifact / form with the run-time / function world. In this sense, we can look for instance at the perfect forwarding problem, solved by rvalue references in the next C++ standard, as a further attempt to remove some energy waste, by avoiding unnecessary copies of data. C++ provides many ways to control friction energy, mostly in the area of generic programming and template metaprogramming. The Curiously Recurring Template Pattern, for instance, provides a form of static polymorphism exactly to avoid some friction due to unnecessary indirection (virtual dispatch).
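
A minimal sketch of the CRTP idea (hypothetical class names): the "virtual" call is resolved at compile time, so there is no vtable indirection to pay for at run time, and the compiler is free to inline the whole thing.

#include <cstdio>

template< typename Derived >
class Shape
{
public:
    void draw()
    {
        // static dispatch: the exact target is known at compile time
        static_cast< Derived* >( this )->drawImpl();
    }
};

class Circle : public Shape< Circle >
{
public:
    void drawImpl() { std::printf( "circle\n" ); }
};

int main()
{
    Circle c;
    c.draw();   // no virtual dispatch involved
    return 0;
}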

More generally, the simple equation for energy waste provides a clue on what we can actually control: weight, distance, coefficient of friction. This is it. As we shape software, this is what we can actually change if we want to reduce friction energy.

Consider HTTP compression: distance couldn't be changed, so we had to change weight.

Also, understanding the difference between static and kinetic friction explains a lot of existing practices. Think of Nagle's algorithm. It works by increasing static friction (therefore latency) in exchange for lower kinetic friction (and therefore higher throughput). Once you get your concepts right, so many things unfold so easily :-).
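
When latency is what matters most, we routinely turn that trade-off around by disabling Nagle's algorithm altogether; a tiny POSIX sketch (assuming a connected TCP socket):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

void prefer_latency_over_throughput( int sock )
{
    int flag = 1;
    /* TCP_NODELAY disables Nagle: small writes go out immediately,
       trading lower static friction (latency) for more per-packet overhead */
    setsockopt( sock, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof( flag ) );
}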

Finally, the analogy holds to the extremes: just like excessive friction in mechanical systems can lead to jamming, excessive friction due to paging can jam a software system. This is commonly known as thrashing.

I think a caveat is in order: friction in the physical world is not necessarily evil. Were it not for friction, we couldn't even walk. Mechanical devices have to deal with friction all the time, but they also exploit friction all the time. It's harder to exploit friction in software (although Nagle's algorithm does). Most often, we must see friction as a trade-off with other properties, mostly on the artifact side. Still, an understanding of the different types of friction, and of the constituents of friction energy, can help evaluate alternatives and even generate new, better ideas in a more systematic and (dare I say it :-) scientific way.

A different angle
I chose friction as a physical analogy because it's a simple, familiar concept. Intuition and everyday experience can easily compensate for any lack of engineering knowledge. Still, I've been tempted to use different analogies, like hydraulic or electrical analogies. Indeed, there are several analogies between electrical, mechanical, hydraulic and even acoustic and optical systems (see here for a start), so it's always possible to choose a different reference system.

Anyway, my alternative would have been to model everything after resistance and current. Current would be the equivalent of throughput, or "performance", and resistance would cause thermal dissipation. In the end, I didn't go this way for a number of reasons; for instance, one-shot stuff like JIT would require something like a thermistor (think of a PTC in CRT degaussing), but I would lose a few readers that way :-).

Still, if you followed so far, there is an interesting result I'd like to share. Consider a trivial circuit where we apply 1V to a 1 ohm resistor, resulting in 1A of current. Now, I'll replace the resistor with two resistors in series, of (1-P) and P ohms. Nothing changes, same current. Resistors represent processes.

Now say that we have this concept of parallel execution, so the process carried out by P can be parallelized. By way of the analogy, to increase throughput (current) I can simply add up to N resistors in parallel. Now the circulating current is obviously 1 / (1-P + P/N) A. Guess what, I just rediscovered Amdahl's Law using Ohm's Law. That's cute :-).
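
A quick sanity check of the analogy, with numbers picked just for illustration: take P = 0.9 and N = 10; the current becomes 1 / (0.1 + 0.09) ≈ 5.26 A, which is exactly the 5.26x speedup Amdahl's Law predicts for a 90% parallelizable workload on 10 processors.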

Ok guys, next time I'll have a much shorter post on the artifact-side notion of friction. If we survive that, we'll be ready for tangling.

Thursday, August 19, 2010

Notes on Software Design, Chapter 9. A simple property: Distance (part 2)

What is Distance in the run-time world? As I began pondering on this, it turned out I could define distance in several meaningful ways. Some were redundant with other notions I have yet to present. Some were useless for a theory of software design. In the end, I picked up a very simple definition, with some interesting ramifications.

Just like the notion of distance in the artifact world is based on the hierarchy of artifacts, the notion of distance in the run-time world is based on a hierarchy of locations. These are the locations where executable knowledge is located at any given time. So, given two pieces of executable knowledge P1 and P2, we can define an ordinal scale:

P1 and P2 are inside the CPU registers or the CPU execution pipeline - that's minimum distance.
P1 and P2 are inside the same L1 cache line
P1 and P2 are inside the L1 cache
… etc

for the full scale, see the (updated) Summary at the Physics of Software website.

Dude, I don't care about cache lines!
Yeah, well, but the processor does, and I'm looking for a good model of the real world, not for an abstract model of computing disconnected from reality (which would be more like the math of software, not the physics).
You may be writing your code in Java or C#, or even in C++, and be blissfully unaware of what is going on under the hood. But the caches are there, and their effect is absolutely visible. For a simple, experimental, and well-written summary, see Igor Ostrovsky's Gallery of Processor Cache Effects (the source code is in C#, but results wouldn't be different in Java or C++).
Interestingly enough, most algorithms and data structures are not optimized for modern processors with N levels of cache. Still, there is an active area of research on cache-oblivious algorithms which, despite the name, are supposed to perform well with any cache line size across any number of cache levels (you can find a few links to specialized algorithms here but you'll have to work around broken links).
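
Just to make the point concrete, here is a hedged sketch (assuming 64-byte cache lines and a large array): both calls below sum exactly the same elements, but the second wastes most of every cache line it pulls in, because consecutive accesses are far apart in memory.

#include <cstddef>
#include <vector>

long long sum_by_stride( const std::vector< int >& v, std::size_t stride )
{
    long long acc = 0;
    for( std::size_t start = 0; start < stride; ++start )
        for( std::size_t i = start; i < v.size(); i += stride )
            acc += v[ i ];
    return acc;
}

// sum_by_stride( v, 1 )  : consecutive elements, minimal run-time distance
// sum_by_stride( v, 16 ) : same work, but consecutive accesses land 64 bytes apart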

What about virtual memory? Again, we can ignore the magic most of the times, but when we're looking for high-performance solutions, we have to deal with it. Unconvinced? Take a look at You're Doing It Wrong, where Poul-Henning Kamp explains (perhaps with a bit too much "I know it all and you don't" attitude :-)) why textbooks are not really talking about real-world computers [anymore].

Consequences
What happens when run-time distance grows? We're bound to see a staircase-like behavior, as in the second picture in the gallery above, just with more risers/treads. When you move outside your process, you have context switching. When you move outside your computer, you have network latency. When you move outside your LAN, you also have name lookup and packet hops. We'll understand all this stuff much better as we get to the concept of friction.

There is more. When talking about artifact distance, I said that coupling (between artifacts) should decrease as distance increases. In the run-time world, coupling to the underlying platform should decrease as distance increases. This must be partially reflected in the artifacts themselves, but it is also a language / platform / transformation concern.
Knowledge at short distance can be tightly coupled to a specific hw / sw platform. For instance, all code inside one component can be tightly bound to:
  • An internal object model, say the C++ object model of a specific compiler version, or the C# or Java object model for a specific version of the virtual machine.

  • A specific operating system, if not virtualized (native code).

  • A specific hardware, if not virtualized.

This is fine at some level. I can even accept the idea that all the components inside a single application have to share some underlying assumptions. Sure, it would be better if components were relatively immune from binary issues (a plague in the C++ world). But overall (depending on the size of the application) I can "control" things and make sure everything is aligned.
But when I'm talking to another service / application over the network, my degree of control is much smaller. If everything is platform-dependent (with a broad definition of platform, mind you: Java is a platform), we're in for major deployment / maintenance issues. Even worse, it would represent a huge platform lock-in (I can't use a different technology for a new service). Things get just worse on a global network scale. This is why XML took the world by storm, why web services have been rather successful in the real world, and so on. This is also why I like technologies / languages that take integration with other technologies / languages seriously, and not religiously.
As usual, the statement above is bi-directional. That is, it makes very little sense to pursue strong decoupling from the underlying platforms at micro-level. Having a class talking to itself in XML is not a brilliant strategy. Again, design is about balance: in this case, balance between efficiency and convenience on one side, and flexibility and evolvability on the other. Balance is obtained when you can depend on your platform locally, and be increasingly independent as you move farther.

Run-time Distance is not a constant
Not necessarily, anyway; it depends on small-scale technical choices. In C++, for instance, once you get two objects in the same cache line, they will stay there for their entire life, because identity in C++ is address-based. In a garbage collected environment, this is not true anymore: objects can move freely during collection.
Moreover, once we move from the cache line to the entire cache, things come and go, they become near and distant over time. This contributes to complex performance patterns, and indeed, modern hardware makes accurate performance prediction almost impossible. I'm pretty sure there are some interesting phenomena to be studied here - a concept of oscillating distance, perhaps the equivalent of a performance beat when two concurrent threads have slightly different oscillation frequencies, and so on, but I'm not currently investigating any of this - it's just too early.
At some point, distance becomes more and more "constant". Sure, a local service may migrate to LAN and then to WAN, but usually it does so because of human intervention (decisions!), and may require changes on the artifact side as well. Short-range distance is a fluid notion, changing as executable knowledge is, well, executed :-).

By the way: distance in the artifact world is not a constant, either. It is constant when the code is frozen. As soon as we change it, we change some distance relationships. In other words, when the computer is processing executable knowledge, run-time distance changes. When we process encoded knowledge (artifacts), artifact distance changes.

Distance-preserving transformations
Knowledge encoded in artifacts can be transformed several times before becoming executable knowledge. Most of those transformations are distance-preserving, that is, they map nearby knowledge to nearby knowledge (although with some jumps here and there).

For instance, pure sequences of statements (without choices, iterations, calls) are "naturally" converted into sequential machine-level instructions that not only will likely sit in the same cache line, but won't break the prefetch pipeline either. Therefore, code that is near in the artifact world will end up near in the run-time world as well.
In the C/C++ family, data that is sequentially declared in a structure (POD) is sequential in memory as well. Therefore, there is a good chance of sharing the same cache line if you keep your structures small.
Conversely, data and code in different components are normally mapped to different pages (in virtual memory systems). They won't share the same cache line (they may compete for the same cache line, but won't be present in the same line at once). So distant things will be distant.
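
A hedged illustration (hypothetical struct, typical alignment assumed): fields declared one after the other end up one after the other in memory, so a small POD has a good chance of fitting in a single cache line.

#include <cstddef>
#include <cstdio>

struct Point    // POD: sequential declaration, sequential layout
{
    double x;
    double y;
    double z;
};

int main()
{
    std::printf( "offsets: %zu %zu %zu, size: %zu\n",
                 offsetof( Point, x ), offsetof( Point, y ), offsetof( Point, z ),
                 sizeof( Point ) );    // typically 0 8 16, 24 bytes: one cache line
    return 0;
}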

Even "recent" advances in processor architecture are increasing the similarity between the artifact and run-time class of distance. Consider predicated execution: it's commonly used to remove branches at machine-level for short sequence of statements in if/else conditionals (see Fig. 1 in the linked paper). In terms of distance, it allows nearby code in the artifact space to stay close in the run-time space, by eliminating branches and therefore maximizing proximity in the execution pipeline.

Some transformations, however, are not distance-preserving. Inlining of code (think of C++ inline functions, for instance) will shrink distance while moving from the artifact world to the run-time world.
Aspect Oriented Programming is particularly interesting from the point of view of distance. On the artifact side, aspects allow us to isolate cross-cutting concerns. Therefore, they allow us to increase distance between the advice, which is factored out, and the join points. A non-distance-preserving transformation (weaving) brings the two concepts back together as we move toward execution.

Curiously enough, some non-preserving transformations work in the opposite way: they allow things to be near in the artifact space, yet be distant in the run-time world. Consider the numerous technologies (dating back to remote procedure calls) that allow you to code as if you were invoking a local function, or a method of a local object, while in fact you are executing a remote function, or a method of a remote object (through some kind of local proxy). This creates an illusion of short distance (in the artifact world) while in fact maintaining high distance in the run-time world. As usual, whenever we create software over a thin layer of illusion, there is a potential for problems. When you look at The 8 fallacies of distributed computing, you can immediately recognize that dealing with remote objects as if they were local objects can be rather dangerous. Said otherwise, the illusion of short distance is a leaky abstraction. More on distributed computing when I get to the concept of friction.

A closing remark: in my previous post, I said that gravity (in the artifact world) tends to increase performance (a run-time property). We can now understand that better, and say that it is largely (not entirely) because:
- the most common transformations are distance-preserving.
- performance increases as the run-time distance decreases.
Again, friction is also at play here, but I have yet to introduce the concept.

Addenda on the artifact side
Some concepts on the artifact side are meant to give the illusion of a shorter distance, while maintaining separation. Consider extension methods in .NET or the more powerful concept of category in Objective-C. They both give the illusion of being very close to a class (when you use them) while in fact they are just as distant as any other class. (By the way: I've been playing with extension methods [in C#] as a way to get something like partial specialization in C++; it kinda works, but not inside generics, which is exactly where I would need it.)

Distance in the Decision Space
While thinking about distance in the artifact and in the run-time world, I realized that the very first notion of distance I introduced was in the decision space. Still, I haven't defined that notion at the level of detail (or lack thereof :-) at which I've defined distance in the artifact and run-time world. I have a few ideas, of course, the simplest definition being "the number of decisions that must be undone + the number of [irreversible?] decisions that must be taken to move your artifacts". Those decisions would involve some mass of code. Moving that mass across that "space" would give a notion of the necessary work. Anyway, it's probably too early to say more, as I have to understand the decision space better.

Coming soon, I hope, the notion of friction.

Monday, March 08, 2010

Why you should learn AOP

A few days ago, I spent some time reading a critique of AOP (The Paradoxical Success of Aspect-Oriented Programming by Friedrich Steimann). As often happens, I felt compelled to read some of the bibliographical references too, which took me a little more (week-end) time.

Overall, in the last few years I've devoted quite some time to learn, think, and even write a little about AOP. I'm well aware of the problems Steimann describes, and I share some skepticism about the viability of the AOP paradigm as we know it.

Too much literature, for instance, is focused on a small set of pervasive concerns like logging. I believe that as we move toward higher-level concerns, we must make a clear distinction between pervasive concerns and cross-cutting concerns. A concern can be cross-cutting without being pervasive, and in this sense, for instance, I don't really agree that AOP is not for singletons (see my old post Some notes on AOP).
Also, I wouldn't dismiss the distinction between spectators and assistants so easily, especially because many pervasive concerns can be modeled as spectators. Overall, the paradigm seems indeed a little immature when you look at the long-term maintenance effects of aspects as they're known today.

Still, I think the time I've spent pondering on AOP was truly well spent. Actually, I would suggest that you spend some time learning about AOP too, even if you're not planning to use AOP in the foreseeable future.

I don't really mean learning a specific language - unless you want/need to try out a few things. I mean learning the concepts, the AOP perspective, the AOP terminology, the effects and side-effects of an Aspect Oriented solution.

I'm suggesting that you learn all that despite the obvious (or perhaps not so obvious) deficiencies in the current approaches and languages, the excessive hype and the underdeveloped concepts. I'm suggesting that you learn all that because it will make you a better designer.

Why? Because it will expand your mind. It will add a new, alternative perspective through which you can look at your problems. New questions to ask. New concepts. New names. Sometimes, all we need is a name. A beacon in the brainstorm, and a steady hand.

As I've said many times now, as designers we're shaping software. We can choose many shapes, and ideally, we will find a shape that is in frictionless contact with the forcefield. Any given paradigm will suggest a set of privileged shapes, at macro and micro-level. Including the aspect-oriented paradigm in your thinking will expand the set of shapes you can apply and conceive.

Time for a short war story :-). In the past months I've been thinking a lot about some issues in a large CAD system. While shaping a solution, I'm constantly getting back to what I could call aspect-thinking. There are many cross-cutting concerns to be resolved. Not programming-level concerns (like the usual, boring logging stuff). Full-fledged application-domain concerns, that tend to cross-cut the principal decomposition.

Now, you see, even thinking "principal decomposition" and "cross-cutting" is making your first step into aspect-thinking. Then you can think about ways to bring those concerns inside the principal decomposition (if appropriate and/or possible and/or convenient) or think about the best way to keep them outside without code-level tangling. Tangling. Another interesting name, another interesting concept.

Sure, if you ain't using true AOP (for instance, we're using plain old C++), you'll have to give up some obliviousness (another name, another concept!), but it can be done, and it works fine (for a small-scale example, see part 1 and part 2 of my "Can AOP inform OOP?").
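Just to make the idea tangible, here is a generic C++14 sketch (not the technique from those posts, and all names are made up): a cross-cutting concern factored out of the principal decomposition, at the price of giving up some obliviousness at the call site.

#include <iostream>
#include <utility>

// The cross-cutting concern (here: tracing) lives in one place.
// The join points are no longer oblivious: callers go through traced().
template <typename F, typename... Args>
auto traced(const char* name, F&& f, Args&&... args) {
    std::cout << "enter " << name << '\n';
    auto result = std::forward<F>(f)(std::forward<Args>(args)...);
    std::cout << "leave " << name << '\n';
    return result;
}

// Part of the principal decomposition, unaware of tracing.
int computeArea(int w, int h) { return w * h; }

int main() {
    int area = traced("computeArea", computeArea, 3, 4);
    return area == 12 ? 0 : 1;
}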

So far, the candidate shape is causing some discomfort. That's reasonable. It's not a "traditional" solution. Which is fine, because so far, tradition didn't work so well :-). Somehow, I hope the team will get out of this experience with a new mindset. Nobody used to talk about "principal decomposition" or "cross-cutting concern" in the company. And you can't control what you can't name.

I hope they will gradually internalize the new concepts, as well as the tactics we can use inside traditional languages. That would be a major accomplishment. Much more important than the design we're creating, or the tons of code we'll be writing. Well, we'll see...

Sunday, January 10, 2010

Delaying Decisions

Since microblogging is not my thing, I decided to start 2010 by writing my longest post ever :-). It will start with a light review of a well-known principle and end up with a new design concept. Fasten your seatbelt :-).

The Last Responsible Moment
When we develop a software product, we make decisions. We decide about individual features, we make design decisions, we make coding decisions, we even decide which bugs we really want to fix before going public. Some decisions are taken on the fly; some, at least in the old school, are somewhat planned.

A key principle of Lean Development is to delay decisions, so that:
a) decisions can be based on (yet-to-discover) facts, not on speculation
b) you exercise the wait option (more on this below) and avoid early commitment

The principle is often spelled as "Delay decisions until the last responsible moment", but a quick look at Mary Poppendieck's website (Mary co-created the Lean Development approach) shows a more interesting nuance: "Schedule Irreversible Decisions at the Last Responsible Moment".

Defining "Irreversible" and "Last Responsible" is not trivial. In a sense, there is nothing in software that is truly irreversible, because you can always start over. I haven't found a good definition for "irreversible decision" in literature, but I would define it as follows: if you make an irreversible decision at time T, undoing the decision at a later time will entail a complete (or almost complete) waste of everything that has been created after time T.

There are some documented definitions for "last responsible moment". A popular one is "The point when failing to decide eliminates an important option", which I found rather unsatisfactory. I've also seen some attempts to quantify that better, as in this funny story, except that in the real world you never have a problem which is that simple (very few ramifications in the decision graph) and that detailed (you know the schedule beforehand). I would probably define the Last Responsible Moment as follows: time T is the last responsible moment to make a decision D if, by postponing D, the probability of completing on schedule/budget (even when you factor in the hypothetical learning effect of postponing) decreases below an acceptable threshold. That, of course, allows us to scrap everything and restart, if schedule and budget allow for it, and in this sense it's kinda coupled with the definition of irreversible.

Now, irreversibility is bad. We don't want to make irreversible decisions. We certainly don't want to make them too soon. Is there anything we can do? I've got a few important things to say about modularity vs. irreversibility and passive vs. proactive option thinking, but right now, it's useful to recap the major decision areas within a software project, so that we can clearly understand what we can actually delay, and what is usually suggested that we delay.

Major Decision Areas
I'll skip a few very-high-level, strategic decisions here (scope, strategy, business model, etc.). It's not that they can't be postponed, but I need to give some focus to this post :-). So I'll get down to the decisions we take more routinely.

People
Choosing the right people for the project is a well-known ingredient for success.

Approach/Process
Are we going XP, Waterfall, something in between? :-).

Feature Set
Are we going to include this feature or not?

Design
What is the internal shape (form) of our product?

Coding
Much like design, at a finer granularity level.

Now, "design" is an overly general concept. Too general to be useful. Therefore, I'll split it into a few major decisions.

Architectural Style
Is this going to be an embedded application, a rich client, a web application? This is a rather irreversible decision.

Platform
Goes somewhat hand in hand with Architectural Style. Are we going with an embedded application burnt into an FPGA? Do you want to target a PIC? Perhaps an embedded PC? Is the client a Windows machine, or do you want to support Mac/Linux? A .NET server side, or maybe Java? It's all rather irreversible, although not completely so.

3rd-Party Libraries/Components/Etc
Are we going to use some existing component (of various scale)? Unless you plan on wrapping everything (which may not even be possible), this often ends up being an irreversible decision. For instance, once you commit yourself to using Hibernate for persistence, it's not trivial to move away.

Programming Language
This is the quintessential irreversible decision, unless you want to play with language converters. Note that this is not a coding decision: coding decisions are made after the language has been chosen.

Structure / Shape / Form
This is what we usually call "design": the shape we want to impose on our material (or, if you live on the "emergent design" side, the shape that our material will take as the final result of several incremental decisions).

So, what are we going to delay? We can't delay all decisions, or we'll be stuck. Sure, we can delay something in each and every area, but truth is, every popular method has been focusing on just a few of them. Of course, different methods tried to delay different choices.

A Little Historical Perspective
Experience brings perspective; at least, true experience does :-). Perspective allows us to look at something and see more than is usually seen. For instance, perspective allows us to look at the old, outdated, obsolete waterfall approach and see that it (too) was meant to delay decisions, just different decisions.

Waterfall was meant to delay people decisions, design decisions (which include platform, library, and component decisions) and coding decisions. People decisions were delayed through specialization: you only have to pick the analyst first; everyone else can be chosen later, when you know what you gotta do (it even makes sense :-)). Design decisions were delayed because platforms, including languages, OSes, etc., were way more balkanized than today. Also, architectural styles and patterns were much less understood, and it made sense to look at a larger picture before committing to an overall architecture.
Although this may seem rather ridiculous from the perspective of a 2010 programmer working on Java corporate web applications, most of this stuff is still relevant for (e.g.) mass-produced embedded systems, where choosing the right platform may radically change the total development and production cost, yet choosing the wrong platform may over-constrain the feature set.

Indeed, open systems (another legacy term from the late '80s - early '90s) were born exactly to ease that choice. Choose the *nix world, and forget about it. Of course, the decision was still irreversible, but it granted you some latitude in choosing the exact hw/sw. The entire multi-platform industry (from multi-OS libraries to Java) is basically built on the same foundations. Well, that's the bright side, of course :-).

Looking beyond platform independence, the entire concept of "standard" allows us to delay some decisions. TCP/IP, for instance, allows me to choose modularly (a concept I'll elaborate on later). I can choose TCP/IP as the transport mechanism, and then delay the choice of (e.g.) the client side, and focus on the server side. Of course, a choice is still made (the client must have TCP/IP support), so let's say that widely adopted standards allow for some modularity in the decision process, and therefore let us delay some decisions, mostly design decisions, but perhaps others as well (like people).

It's already going to be a long post, so I won't look at each and every method/principle/tool ever conceived, but if you do your homework, you'll find that a lot of what has been proposed in the last 40 years or so (from code generators to MDA, from spiral development to XP, from stepwise refinement to OOP) includes some magic ingredient that allows us to postpone some kind of decision.

It's 2010, guys
So, if you ain't agile, you are clumsy :-)) and c'mon, you don't wanna be clumsy :-). So, seriously, which kind of decisions are usually delayed in (e.g.) XP?

People? I must say I haven't seen much on this. Most literature on XP seems based on the concept that team members are mostly programmers with a wide set of skills, so there should be no particular reason to delay decision about who's gonna work on what. I may have missed some particularly relevant work, however.

Feature Set? Sure. Every incremental approach allows us to delay decisions about features. This can be very advantageous if we can play the learning game, which includes rapid/frequent delivery, or we won't learn enough to actually steer the feature set.
Of course, delaying some decisions on feature set can make some design options viable now, and totally bogus later. Here is where you really have to understand the concept of irreversible and last responsible moment. Of course, if you work on a settled platform, things get simpler, which is one more reason why people get religiously attached to a platform.

Design? Sure, but let's take a deeper look.

Architectural Style: not much. Quoting Booch, "agile projects often start out assuming a given platform and environmental context together with a set of proven design patterns for that domain, all of which represent architectural decisions in a very real sense". See my post Architecture as Tradition in the Unselfconscious Process for more.
Seriously, nobody ever expected to start with a monolithic client and end up with a three-tier web application built around a MVC pattern just by coding and refactoring. The architectural style is pretty much a given in many contemporary projects.

Platform: sorry guys, but if you want to start coding now, you gotta choose your platform now. Another irreversible decision made right at the beginning.

3rd-Party Libraries/Components/Etc: some delay is possible for modularized decisions. If you wanna use Hibernate, you gotta choose pretty soon. If you wanna use Seam, you gotta choose pretty soon. Pervasive libraries are so entangled with architectural styles that it's relatively hard to delay some decisions here. Modularized components (e.g. the choice of a PDF rendering library) are simple to delay, and can be proactively delayed (see later).

Programming Language: no way guys, you have to choose right here, right now.

Structure / Shape / Form: of course!!! Here we are. This is it :-). You can delay a lot of detailed design choices. Of course, we always postpone some design decision, even when we design before coding. But let's say that this is where I see a lot of suggestions to delay decisions in the agile literature, often using the dreaded Big Upfront Design as a straw man argument. Of course, the emergent design (or accidental architecture) may or may not be good. If I had to compare the design and code coming out of the XP Episode with my own, I would say that a little upfront design can do wonders, but hey, you know me :-).

Practicing
OK guys, what follows may sound a little odd, but in the end it will prove useful. Have faith :-).
You can get better at everything by doing anything :-), so why not get better at delaying decisions by playing Windows Solitaire? All you have to do is set the options in the hardest possible way:

now, play a little, until you have to make some decision, like here:

I could move the 9 of spades or the 9 of clubs over the 10 of hearts. It's an irreversible decision (well, not if you use the undo, but that's lame :-). There are some ramifications for both choices.
If I move the 9 of clubs, I can later move the king of clubs and uncover a new card. After that, it's all unknown, and no further speculation is possible. Here, learning requires an irreversible decision; this is very common in real-world projects, but seldom discussed in literature.
If I move the 9 of spades, I uncover the 6 of clubs, which I can move over the 7 of aces. Then, it's kinda unknown, meaning: if you're a serious player (I'm not) you'll remember the previous cards, which would allow you to speculate a little better. Otherwise, it's just as above, you have to make an irreversible decision to learn the outcome.

But wait: what about the last responsible moment? Maybe we can delay this decision! Now, if you delay the decision by clicking on the deck and moving further, you're not delaying the decision: you're wasting a chance. In order to delay this decision, there must be something else you can do.
Well, indeed, there is something you can do. You can move the 8 of aces above the 9 of clubs. This will uncover a new card (learning) without wasting any present opportunity (it could still waste a future opportunity; life is tough). Maybe you'll get a 10 of aces under that 8, at which point there won't be any choice to be made about the 9. Or you might get a black 7, at which point you'll have a different way to move the king of clubs, so moving the 9 of spades would be a more attractive option. So, delay the 9 and move the 8 :-). Add some luck, and it works:

and you get some money too (total at decision time Vs. total at the end)


Novice solitaire players are also known to make irreversible decisions without necessity. For instance, in similar cases:

I've seen people eagerly moving the 6 of aces (actually, whatever they got) over the 7 of spades, because "that will free up a slot". Which is true, but irrelevant. This is a decision you can easily delay. Actually, it's a decision you must delay, because:
- if you happen to uncover a king, you can always move the 6. It's not the last responsible moment yet: if you do nothing now, nothing bad will happen.
- you may uncover a 6 of hearts before you uncover a king. And moving that 6 might be more advantageous than moving the 6 of aces. So, don't do it :-). If you want to look good, quote Option Theory, call this a Deferral Option and write a paper about it :-).

Proactive Option Thinking
I've recently read an interesting paper in IEEE TSE ("An Integrative Economic Optimization Approach to Systems Development Risk Management", by Michel Benaroch and James Goldstein). Although the real meat starts in chapter 4, chapters 1-3 are probably more interesting for the casual reader (including myself).
There, the authors recap some literature about Real Options in Software Engineering, including the popular argument that delaying decisions is akin to a deferral option. They also make important distinctions, like the one between passive learning through deferral of decisions and proactive learning, but also between responsiveness to change (a central theme in the agility literature) and manipulation of change (relatively less explored), and so on. There is a lot of food for thought in those 3 chapters, so if you can get a copy, I suggest that you spend a little time pondering over it.
Now, I'm a strong supporter of Proactive Option Thinking. Waiting for opportunities (and then reacting quickly) is not enough. I believe that options should be "implanted" in our projects, and that can be done by applying the right design techniques. How? Keep reading :-).

The Invariant Decision
If you look back at those pictures of Solitaire, you'll see that I wasn't really delaying irreversible decisions. All decisions in solitaire are irreversible (real men don't use CTRL-Z). Many decisions in software development are irreversible as well, especially when you are on a tight budget/schedule, so starting over is not an option. Therefore, irreversibility can't really be the key here. Indeed, I was trying to delay Invariant Decisions: decisions that I can take now, or take later, with little or no impact on the outcome. The concept itself may seem like a minor change from "irreversible", but it allows me to do some magic:
- I can get rid of the "last responsible moment" part, which is poorly defined anyway. I can just say: delay invariant decisions. Period. You can delay them as much as you want, provided they are still invariant. No ambiguity here. That's much better.
- I can proactively make some decisions invariant. This is so important I'll have to say it again, this time in bold: I can proactively make some decisions invariant.

Invariance, Design, Modularity
If you go back to the Historical Perspective paragraph, you can now read it under a different... perspective :-). Several tools, techniques, methods can be adopted not just to delay some decision, but to create the option to delay the decision. How? Through careful design, of course!

Consider the strong modularity you get from service-oriented architecture, and the platform independence that comes through (well-designed) web services. This is a powerful weapon to delay a lot of decisions on one side or another (client or server).

Consider standard protocols: they are a way to make some decision invariant, and to modularize the impact of some choices.

Consider encapsulation, abstraction and interfaces: they allow you to delay quite a few low-level decisions, and to modularize the impact of change as well. If your choice turns out to be wrong, but it's highly localized (modularized), you may be able to afford undoing it, thereby turning irreversible into reversible. A barebone example can be found in my old post (2005!) Builder [pattern] as an option.
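A tiny C++ sketch of the idea (all names are hypothetical): by letting the rest of the system depend only on a small interface, the choice of the actual storage technology becomes a modularized, and therefore more easily reversible, decision.

#include <memory>
#include <string>

// The rest of the system depends only on this interface.
struct OrderStore {
    virtual void save(const std::string& orderId) = 0;
    virtual ~OrderStore() = default;
};

// One concrete choice, confined to a single spot.
struct FileOrderStore : OrderStore {
    void save(const std::string&) override { /* write to a file */ }
};

// Revisiting the decision later means adding another implementation
// (say, a DatabaseOrderStore) and changing only this factory.
std::unique_ptr<OrderStore> makeOrderStore() {
    return std::make_unique<FileOrderStore>();
}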

Consider a very old OOA/OOD principle, now somehow resurrected under the "ubiquitous language" umbrella. It states that you should try to reflect the real-world entities that you're dealing with in your design, and then in your code. That includes avoiding primitive types like integer, and creating meaningful classes instead (see the small sketch after this paragraph). Of course, you have to understand what you're doing (that is, you gotta be a good designer) to avoid useless overengineering. See part 4 of my digression on the XP Episode for a discussion about adding a seemingly useless Ball class (that is: implanting a low cost - high premium option).
Names alter the forcefield. A named concept stands apart. My next post on the forcefield theme, by the way, will explore this issue in depth :-).
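For a flavor of the idea (a made-up analogue, not the Ball class itself): wrapping a primitive in a named domain concept is a cheap option that later decisions can exercise.

// A named concept instead of a bare double. Cheap now; later on, units,
// validation, or formatting decisions have an obvious home.
class Temperature {
public:
    explicit Temperature(double celsius) : celsius_(celsius) {}
    double celsius() const    { return celsius_; }
    double fahrenheit() const { return celsius_ * 9.0 / 5.0 + 32.0; }
private:
    double celsius_;
};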

And so on. I could go on forever, but the point is: you can make many (but not all, of course!) decisions invariant, if you apply the right design techniques. Most of those techniques will also modularize the cost of rework if you make the wrong decision. And sure, you can try to do this on the fly as you code. Or you may want to do some upfront design. You know what I'm thinking.

OK guys, it took quite a while, but now we have a new concept to play with, so more on this will follow, randomly as usual. Stay tuned.

Tuesday, June 09, 2009

Design Rationale

In the past few weeks I've taken a little time to write down more about the concept of frequency; while doing so, I realized I had to explore the concept of forcefield better, and while doing so (yeap :-)) I realized there was a rather large overlap between the notion of forcefield and the notion of design rationale.

Design rationale extends beyond software engineering, and aims to capture design decisions and the reasoning behind those decisions. Now, design decisions are (ideally) taken as trade-offs between several competing forces. Those forces create the forcefield, hence the large overlap between the two subjects.

The concept of design rationale has been around for quite a few years, but I haven't seen much progress either in tools or notations. Most often, tools fall into the “rationalize after the fact” family, while I'm more interested in reasoning tools and notations, that would help me (as a designer) get a better picture about my own thoughts while I'm thinking. That resonates with the concept of reflection in action that I've discussed in Listen to Your Tools and Materials a few years ago.

So, as I was reading a recent issue of IEEE Software (March/April 2009), I found a list of recent (and not so recent) tools dealing with design rationale in a paper by Philippe Kruchten, Rafael Capilla, Juan Carlos Dueñas (The Decision View’s Role in Software Architecture Practice), and I decided to take a quick ride. Here is a very quick summary of what I've found.

Seurat
Seurat (see also the PDF tutorial on the same website) is based on a very powerful language / model, but the tool (as implemented) is very limiting. It's based on a tree structure, which makes for a nice todo list, but makes visual reasoning almost impossible. Actually, in the past I've investigated using the tree format myself (and while doing so, I discovered others have done the same: see for instance the Reasoning Tree pattern), but restricting visualization to (hyperlinked) nodes in a tree just does not work when you're facing difficult problems.

Sysiphus
Sysiphus seems to have recently morphed into another tool (UniCase), but from the demo of UniCase it's hard to appreciate any special support for design rationale (so far).

AREL
(see also some papers from Antony Tang on the same page; Antony also had an excellent paper on AREL in the same issue of IEEE Software)
AREL is integrated with Enterprise Architect. Integration with existing case tools (either commercial or free) seems quite a good idea to me. AREL uses a class diagram (through a UML profile) to model design rationale, so it's not limited to a tree format. Still, I've found the results rather hard to read. It seems more like a tool to give structure to design knowledge than a tool to reason about design. As I go through the examples, I have to study the diagram; it doesn't just talk back to me. I have to click around and look at other artifacts. The reasoning is not in the diagram, it's only accessible through the diagram.

PAKME
Honestly, PAKME seems more like an exercise in building a web-based collaboration tool for software development than a serious attempt at providing a useful / usable tool to record design rationale. It does little more than organize artifacts, and it requires so many clicks / page refreshes to get anything done that I doubt a professional designer could ever use it (sorry guys).

ADDSS
ADDSS is very much like PAKME, although it adds a useful Patterns section. It's so far from what I consider a useful design tool (see my paper above for more) that I can't really think of using it (sorry, again).

Knowledge Architect
Again, a tool with some good ideas (like Word integration) but far from what I'm looking for. It's fine to create a structured design document, but not to reason about difficult design problems.

In the end, it seems like most of those tools suffer from the same problems:
- The research is good; a nice metamodel is built, some of the problems faced by professional designers seem to be well understood.
- The tool does little more than organize knowledge, would get in the way of the designer thinking about thorny issues, does not help through visualization, and is at best useful at the end of the design process, possibly to fake some rationality, a-la Parnas/Clements.

That said, AREL is probably the most promising tool of the pack, but in the end I've been doing pretty much the same for years now, using (well, abusing :-) plain old use case diagrams to model goals and issues, with a few ideas taken from KAOS and the like.

Recently, I began experimenting with another standard UML diagram (the activity diagram) to model some portion of design reasoning. I'll show an example in my next post, and then show how we can change our perspective and move from design reasoning to the forcefield.

Sunday, February 22, 2009

Notes on Software Design, Chapter 4: Gravity and Architecture

In my previous posts, I described gravity and inertia. At first, gravity may seem to have a negative connotation, like a force we constantly have to fight. In a sense, that's true; in a sense, it's also true for its physical counterpart: every day, we spend a lot of energy fighting earth gravity. However, without gravity, life as we know it would never exist. There is always a bright side :-).

In the software realm, gravity can be exploited by setting up a favorable force field. Remember that gravity is a rather dumb :-) force, merely attracting things. Therefore, if we come up with the right gravitational centers early on, they will keep attracting the right things. This is the role of architecture: to provide an initial, balanced set of centers.

Consider the little thorny problem I described back in October. Introducing Stage 1, I said: "the critical choice [...] was to choose where to put the display logic: in the existing process, in a new process connected via IPC, in a new process connected to a [RT] database".
We can now review that decision within the framework of gravitational centers.

Adding the display logic into the existing process is the path of least resistance: we have only one process, and gravity is pulling new code into that process. Where is the downside? A bloated process, sure, but also the practical impossibility of sharing the display logic with other processes.
Reuse requires separation. This, however, is just the tip of the iceberg: reuse is just an instance of a much more general force, which I'll cover in the forthcoming posts.

Moving the display logic inside a separate component is a necessary step toward [independent] reusability, and also toward the rarely understood concept of a scaled-down architecture.
A frequently quoted paper from David Parnas (one of the most gifted software designers of all times) is properly titled "Designing Software for Ease of Extension and Contraction" (IEEE Transactions on Software Engineering, Vol. 5 No. 2, March 1979). Somehow, people often forget the contraction part.
Indeed, I've often seen systems where the only chance to provide a scaled-down version to customers is to hide the portion of user interface that is exposing the "optional" functionality, often with questionable aesthetics, and always with more trouble than one could possibly want.

Note how, once we have a separate module for display, new display models are naturally attracted into that module, leaving the acquisition system alone. This is gravity working for us, not against us, because we have provided the right center. That's also the bright side of the thorny problem, exactly because (at that point, that is, stage 2) we [still] have the right centers.
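In code, the "right center" can be as simple as this (hypothetical names, clearly not the actual system): the display component exposes an abstraction that new display models naturally plug into, leaving the acquisition side untouched.

#include <vector>

// Inside the display component: the gravitational center.
struct DisplayModel {
    virtual void render(const std::vector<double>& samples) = 0;
    virtual ~DisplayModel() = default;
};

// New display models are "attracted" here: they show up as new
// implementations inside the display component...
struct StripChart : DisplayModel {
    void render(const std::vector<double>&) override { /* draw a strip chart */ }
};

// ...while the acquisition side only ever sees the abstraction.
void publish(DisplayModel& view, const std::vector<double>& samples) {
    view.render(samples);
}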

Is the choice of using an RTDB to further decouple the data acquisition system and the display system any better than having just two layers?
I encourage you to think about it: it is not necessarily trivial to understand what is going on at the forcefield level. Sure, the RTDB becomes a new gravitational center, but is a 3-pole system any better in this case? Why? I'll get back to this in my next post.

Architecture and Gravity
Within the right architecture, features are naturally attracted to the "best" gravitational center.
The "right" architecture, therefore, must provide the right gravitational centers, so that features are naturally attracted to the right place, where (if necessary) they will be kept apart from other features at a finer granularity level, through careful design and/or careful refactoring.
Therefore, the right architecture is not just helping us cope with gravity: it's helping us exploit gravity to our own advantage.

The wrong architecture, however, will often conspire with gravity to preserve itself.
As part of my consulting activity, I’ve seen several systems where the initial partitioning of responsibility wasn’t right. The development team didn’t have enough experience (with software design and/or with the problem domain) to find out the core concepts, the core issues, the core centers.
The system was partitioned along the wrong lines, and as mass increased, gravity kicked in. The system grew with the wrong form, which was not in frictionless contact with the context.
At some point, people considered refactoring, but it was too costly, because mass brings Inertia, and inertia affects any attempt to change direction. Inertia keeps a bad system in a bad state. In a properly partitioned system, instead, we have many options for change: small subsystems won’t put up much of a fight. That’s the dream behind the SOA concept.
I already said this, but it's worth repeating: gravity is working at all granularity levels, from distributed computing down to the smallest function. That's why we have to keep both design and code constantly clean. Architecture alone is not enough. Good programmers are always essential for quality development.

What about patterns? Patterns can lower the amount of energy we have to spend to create the right architecture. Of course, they can do so because someone else spent some energy re-discovering good ideas, cleaning them up, going through shepherding and publishing, and because we spent some time learning about them. That said, patterns often provide an initial set of centers, balancing out some forces (not restricted to gravity).
Of course, we can't just throw patterns at a problem: the form must be in effortless contact with the real problem we're facing. I've seen too many well-intentioned (and not so experienced :-) software designers start with patterns. But we have to understand forces first, and adopt the right patterns later.

Enough with mass and gravity. Next time, we're gonna talk about another primordial force, pushing things apart.

See you soon, I hope!