Sunday, January 10, 2010

Delaying Decisions

Since microblogging is not my thing, I decided to start 2010 by writing my longest post ever :-). It will start with a light review of a well-known principle and end up with a new design concept. Fasten your seatbelt :-).

The Last Responsible Moment
When we develop a software product, we make decisions. We decide about individual features, we make design decisions, we make coding decisions, we even decide which bugs we really want to fix before going public. Some decisions are taken on the fly; some, at least in the old school, are somewhat planned.

A key principle of Lean Development is to delay decisions, so that:
a) decisions can be based on (yet-to-discover) facts, not on speculation
b) you exercise the wait option (more on this below) and avoid early commitment

The principle is often spelled as "Delay decisions until the last responsible moment", but a quick look at Mary Poppendieck's website (Mary co-created the Lean Development approach) shows a more interesting nuance: "Schedule Irreversible Decisions at the Last Responsible Moment".

Defining "Irreversible" and "Last Responsible" is not trivial. In a sense, there is nothing in software that is truly irreversible, because you can always start over. I haven't found a good definition for "irreversible decision" in literature, but I would define it as follows: if you make an irreversible decision at time T, undoing the decision at a later time will entail a complete (or almost complete) waste of everything that has been created after time T.

There are some documented definitions of "last responsible moment". A popular one is "The point when failing to decide eliminates an important option", which I find rather unsatisfactory. I've also seen some attempts to quantify it better, as in this funny story, except that in the real world you never have a problem that simple (very few ramifications in the decision graph) and that detailed (you know the schedule beforehand). I would probably define the Last Responsible Moment as follows: time T is the last responsible moment to make a decision D if, by postponing D, the probability of completing on schedule/budget (even when you factor in the hypothetical learning effect of postponing) decreases below an acceptable threshold. That, of course, allows us to scrap everything and restart, if schedule and budget allow for it, and in this sense it's somewhat coupled with the definition of irreversible.
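If you prefer a symbolic (if still informal) rendition of the same idea, with θ standing for whatever threshold you consider acceptable:

    LRM(D) = max { T : P(completing within schedule and budget | D is decided at T) ≥ θ }

Postpone D beyond that point, and you've gone past the last responsible moment.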

Now, irreversibility is bad. We don't want to make irreversible decisions. We certainly don't want to make them too soon. Is there anything we can do? I've got a few important things to say about modularity vs. irreversibility and passive vs. proactive option thinking, but right now, it's useful to recap the major decision areas within a software project, so that we can clearly understand what we can actually delay, and what we are usually advised to delay.

Major Decision Areas
I'll skip a few very high-level, strategic decisions here (scope, strategy, business model, etc). It's not that they can't be postponed, but I need to give some focus to this post :-). So I'll get down to the decisions we take more routinely.

People
Choosing the right people for the project is a well-known ingredient for success.

Approach/Process
Are we going XP, Waterfall, something in between? :-).

Feature Set
Are we going to include this feature or not?

Design
What is the internal shape (form) of our product?

Coding
Much like design, at a finer granularity level.

Now, "design" is an overly general concept. Too general to be useful. Therefore, I'll split it into a few major decisions.

Architectural Style
Is this going to be an embedded application, a rich client, a web application? This is a rather irreversible decision.

Platform
Goes somewhat hand in hand with Architectural Style. Are we going with an embedded application burnt into an FPGA? Do you want to target a PIC? Perhaps an embedded PC? Is the client a Windows machine, or do you want to support Mac/Linux? A .NET server side, or maybe Java? It's all rather irreversible, although not completely irreversible.

3rd-Party Libraries/Components/Etc
Are we going to use some existing component (of various scale)? Unless you plan on wrapping everything (which may not even be possible), this often ends up being an irreversible decision. For instance, once you commit yourself to using Hibernate for persistence, it's not trivial to move away.

Programming Language
This is the quintessential irreversible decision, unless you want to play with language converters. Note that this is not a coding decision: coding decisions are made after the language has been chosen.

Structure / Shape / Form
This is what we usually call "design": the shape we want to impose on our material (or, if you live on the "emergent design" side, the shape that our material will take as the final result of several incremental decisions).

So, what are we going to delay? We can't delay all decisions, or we'll be stuck. Sure, we can delay something in each and every area, but truth is, every popular method has been focusing on just a few of them. Of course, different methods tried to delay different choices.

A Little Historical Perspective
Experience brings perspective; at least, true experience does :-). Perspective allows us to look at something and see more than is usually seen. For instance, perspective allows us to look at the old, outdated, obsolete waterfall approach and see that it (too) was meant to delay decisions, just different decisions.

Waterfall was meant to delay people decisions, design decisions (which include platform, library, and component decisions) and coding decisions. The people decision was delayed by specialization: you only have to pick the analyst first; everyone else can be chosen later, when you know what you've got to do (it even makes sense :-)). The design decision was delayed because platforms, including languages, OSes, etc., were way more balkanized than today. Also, architectural styles and patterns were much less understood, and it made sense to look at a larger picture before committing to an overall architecture.
Although this may seem rather ridiculous from the perspective of a 2010 programmer working on Java corporate web applications, most of this stuff is still relevant for (e.g.) mass-produced embedded systems, where choosing the right platform may radically change the total development and production cost, yet choosing the wrong platform may over-constrain the feature set.

Indeed, open systems (another legacy term from the late '80s - early '90s) were born exactly to ease that choice. Choose the *nix world, and forget about it. Of course, the decision was still irreversible, but it granted you some latitude in choosing the exact hw/sw. The entire multi-platform industry (from multi-OS libraries to Java) is basically built on the same foundations. Well, that's the bright side, of course :-).

Looking beyond platform independence, the entire concept of "standard" allows us to delay some decisions. TCP/IP, for instance, allows me to choose modularly (a concept I'll elaborate later). I can choose TCP/IP as the transport mechanism, then delay the choice of (e.g.) the client side, and focus on the server side. Of course, a choice is still made (the client must have TCP/IP support), so let's say that widely adopted standards allow for some modularity in the decision process, and therefore allow us to delay some decisions, mostly about design, but perhaps others as well (like people).

It's already going to be a long post, so I won't look at each and every method/principle/tool ever conceived, but if you do your homework, you'll find that a lot of what has been proposed in the last 40 years or so (from code generators to MDA, from spiral development to XP, from stepwise refinement to OOP) includes some magic ingredient that allows us to postpone some kind of decision.

It's 2010, guys
So, if you ain't agile, you are clumsy :-)) and c'mon, you don't wanna be clumsy :-). So, seriously, which kind of decisions are usually delayed in (e.g.) XP?

People? I must say I haven't seen much on this. Most literature on XP seems based on the concept that team members are mostly programmers with a wide set of skills, so there should be no particular reason to delay decisions about who's gonna work on what. I may have missed some particularly relevant work, however.

Feature Set? Sure. Every incremental approach allows us to delay decisions about features. This can be very advantageous if we can play the learning game, which includes rapid/frequent delivery, or we won't learn enough to actually steer the feature set.
Of course, delaying some decisions on feature set can make some design options viable now, and totally bogus later. Here is where you really have to understand the concept of irreversible and last responsible moment. Of course, if you work on a settled platform, things get simpler, which is one more reason why people get religiously attached to a platform.

Design? Sure, but let's take a deeper look.

Architectural Style: not much. Quoting Booch, "agile projects often start out assuming a given platform and environmental context together with a set of proven design patterns for that domain, all of which represent architectural decisions in a very real sense". See my post Architecture as Tradition in the Unselfconscious Process for more.
Seriously, nobody ever expected to start with a monolithic client and end up with a three-tier web application built around an MVC pattern just by coding and refactoring. The architectural style is pretty much a given in many contemporary projects.

Platform: sorry guys, but if you want to start coding now, you gotta choose your platform now. Another irreversible decision made right at the beginning.

3rd-Party Libraries/Components/Etc: some delay is possible for modularized decisions. If you wanna use Hibernate, you gotta choose pretty soon. If you wanna use Seam, you gotta choose pretty soon. Pervasive libraries are so entangled with architectural styles that it's relatively hard to delay some decisions here. Modularized components (e.g. the choice of a PDF rendering library) are simpler to delay, and can be proactively delayed (see later).
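To make the "proactively delayed" part a bit more concrete, here's a minimal Java sketch (all names are made up; it's not code from any real project): the application talks to its own little interface, so the choice of the actual PDF library can be made, or changed, much later, and its impact stays modularized.

    interface PdfRenderer {
        byte[] render(String content);   // whatever the application actually needs
    }

    class ReportService {
        private final PdfRenderer renderer;   // injected: the concrete library stays out of sight
        ReportService(PdfRenderer renderer) { this.renderer = renderer; }
        byte[] monthlyReport() { return renderer.render("Monthly report..."); }
    }

    // Adapters for the candidate libraries live at the edge of the system; picking one
    // (or replacing it) is now a localized, largely reversible decision.
    class SomePdfLibraryAdapter implements PdfRenderer {
        public byte[] render(String content) {
            // the call to the real library would go here (omitted: we haven't chosen it yet!)
            throw new UnsupportedOperationException("PDF library not chosen yet");
        }
    }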

Programming Language: no way guys, you have to choose right here, right now.

Structure / Shape / Form: of course!!! Here we are. This is it :-). You can delay a lot of detailed design choices. Of course, we always postpone some design decision, even when we design before coding. But let's say that this is where I see a lot of suggestions to delay decisions in the agile literature, often using the dreaded Big Upfront Design as a straw man argument. Of course, the emergent design (or accidental architecture) may or may not be good. If I had to compare the design and code coming out of the XP Episode with my own, I would say that a little upfront design can do wonders, but hey, you know me :-).

Practicing
OK guys, what follows may sound a little odd, but in the end it will prove useful. Have faith :-).
You can get better at everything by doing anything :-), so why not get better at delaying decisions by playing Windows Solitaire? All you have to do is set the options in the hardest possible way:

now, play a little, until you have to make some decision, like here:

I could move the 9 of spades or the 9 of clubs over the 10 of hearts. It's an irreversible decision (well, not if you use undo, but that's lame :-)). There are some ramifications for both choices.
If I move the 9 of clubs, I can later move the king of clubs and uncover a new card. After that, it's all unknown, and no further speculation is possible. Here, learning requires an irreversible decision; this is very common in real-world projects, but seldom discussed in literature.
If I move the 9 of spades, I uncover the 6 of clubs, which I can move over the 7 of diamonds. Then, it's kinda unknown, meaning: if you're a serious player (I'm not) you'll remember the previously seen cards, which would allow you to speculate a little better. Otherwise, it's just as above: you have to make an irreversible decision to learn the outcome.

But wait: what about the last responsible moment? Maybe we can delay this decision! Now, if you delay the decision by clicking on the deck and moving further, you're not delaying the decision: you're wasting a chance. In order to delay this decision, there must be something else you can do.
Well, indeed, there is something you can do. You can move the 8 of diamonds over the 9 of clubs. This will uncover a new card (learning) without wasting any present opportunity (it could still waste a future opportunity; life is tough). Maybe you'll get a 10 of diamonds under that 8, at which point there won't be any choice to be made about the 9. Or you might get a black 7, at which point you'll have a different way to move the king of clubs, so moving the 9 of spades would be the more attractive option. So, delay the 9 and move the 8 :-). Add some luck, and it works:

and you get some money too (total at decision time Vs. total at the end)


Novice solitaire players are also known to make irreversible decisions without necessity. For instance, in similar cases:

I've seen people eagerly moving the 6 of diamonds (actually, whatever they got) over the 7 of spades, because "that will free up a slot". Which is true, but irrelevant. This is a decision you can easily delay. Actually, it's a decision you must delay, because:
- if you happen to uncover a king, you can always move the 6. It's not the last responsible moment yet: if you do nothing now, nothing bad will happen.
- you may uncover a 6 of hearts before you uncover a king. And moving that 6 might be more advantageous than moving the 6 of diamonds. So, don't do it :-). If you want to look good, quote Option Theory, call this a Deferral Option, and write a paper about it :-).

Proactive Option Thinking
I've recently read an interesting paper in IEEE TSE ("An Integrative Economic Optimization Approach to Systems Development Risk Management", by Michel Benaroch and James Goldstein). Although the real meat starts in chapter 4, chapters 1-3 are probably more interesting for the casual reader (including myself).
There, the authors recap some literature about Real Options in Software Engineering, including the popular argument that delaying decisions is akin to a deferral option. They also make important distinctions, like the one between passive learning through deferral of decisions and proactive learning, but also between responsiveness to change (a central theme in agility literature) and manipulation of change (relatively less explored), and so on. There is a lot of food for thought in those 3 chapters, so if you can get a copy, I suggest that you spend a little time pondering over it.
Now, I'm a strong supporter of Proactive Option Thinking. Waiting for opportunities (and then reacting quickly) is not enough. I believe that options should be "implanted" in our project, and that can be done by applying the right design techniques. How? Keep reading :-).

The Invariant Decision
If you look back at those pictures of Solitaire, you'll see that I wasn't really delaying irreversible decisions. All decisions in solitaire are irreversible (real men don't use CTRL-Z). Many decisions in software development are irreversible as well, especially when you are on a tight budget/schedule, so starting over is not an option. Therefore, irreversibility can't really be the key here. Indeed, I was trying to delay Invariant Decisions: decisions that I can take now, or I can take later, with little or no impact on the outcomes. The concept itself may seem like a minor change from "irreversible", but it allows me to do some magic:
- I can get rid of the "last responsible moment" part, which is poorly defined anyway. I can just say: delay invariant decisions. Period. You can delay them as much as you want, provided they are still invariant. No ambiguity here. That's much better.
- I can proactively make some decisions invariant. This is so important I'll have to say it again, this time in bold: I can proactively make some decisions invariant.

Invariance, Design, Modularity
If you go back to the Historical Perspective paragraph, you can now read it under a different... perspective :-). Several tools, techniques, methods can be adopted not just to delay some decision, but to create the option to delay the decision. How? Through careful design, of course!

Consider the strong modularity you get from service-oriented architecture, and the platform independence that comes through (well-designed) web services. This is a powerful weapon to delay a lot of decisions on one side or another (client or server).

Consider standard protocols: they are a way to make some decision invariant, and to modularize the impact of some choices.

Consider encapsulation, abstraction and interfaces: they allow you to delay quite a few low-level decisions, and to modularize the impact of change as well. If your choice turns out to be wrong, but it's highly localized (modularized), you may be able to afford undoing it, therefore turning an irreversible decision into a reversible one. A barebone example can be found in my old post (2005!) Builder [pattern] as an option.
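Just to give the flavor of it in code, here's a barebone, made-up Java sketch (not the actual example from that old post): callers go through the builder interface, so the concrete representation remains a localized decision, cheap to undo.

    interface MessageBuilder {
        MessageBuilder add(String field, String value);
        byte[] build();
    }

    class XmlMessageBuilder implements MessageBuilder {
        private final StringBuilder xml = new StringBuilder("<msg>");
        public MessageBuilder add(String field, String value) {
            xml.append('<').append(field).append('>')
               .append(value)
               .append("</").append(field).append('>');
            return this;
        }
        public byte[] build() {
            return xml.append("</msg>").toString().getBytes();
        }
    }
    // If XML later proves to be the wrong choice, only the builder implementation gets
    // rewritten: the decision was wrong, but it was modularized, so undoing it is affordable.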

Consider a very old OOA/OOD principle, now somehow resurrected under the "ubiquitous language" umbrella. It states that you should try to reflect the real-world entities that you're dealing with in your design, and then in your code. That includes avoiding primitive types like integer, and creating meaningful classes instead. Of course, you have to understand what you're doing (that is, you gotta be a good designer) to avoid useless overengineering. See part 4 of my digression on the XP Episode for a discussion about adding a seemingly useless Ball class (that is: implanting a low cost - high premium option).
Names alter the forcefield. A named concept stands apart. My next post on the forcefield theme, by the way, will explore this issue in depth :-).
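Here is a generic Java sketch of the principle at work (not the actual Ball class, just the same idea with a made-up name): a named concept instead of a naked primitive. The new class becomes a natural gravitational center for behavior and constraints that would otherwise scatter around.

    final class Score {
        private final int points;
        Score(int points) {
            if (points < 0) throw new IllegalArgumentException("a score cannot be negative");
            this.points = points;
        }
        Score plus(Score other) { return new Score(points + other.points); }
        int asInt() { return points; }
    }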

And so on. I could go on forever, but the point is: you can make many (but not all, of course!) decisions invariant, if you apply the right design techniques. Most of those techniques will also modularize the cost of rework if you make the wrong decision. And sure, you can try to do this on the fly as you code. Or you may want to do some upfront design. You know what I'm thinking.

OK guys, it took quite a while, but now we have a new concept to play with, so more on this will follow, randomly as usual. Stay tuned.

Monday, October 05, 2009

A ForceField Diagram

The Design Rationale Diagram I discussed in my previous post is hardly complete, and it could be vastly improved by asking slightly different questions, leading to different decision paths. Still, it's a reasonable first-cut attempt to model the decision process. It can be used to communicate the reasoning behind a specific decision, in a specific context.

That, however, is not the way I really think. Sure, I can rationalize things that way, but it's not the way I store, recall, organize information inside my head. It's not the way I see the decision space.
In the end, software design is about things going together and things staying apart, at all the granularity levels (see also my post on partitioning).
As I progress in my understanding of forces, I tend to form clusters. Clusters are born out of attraction and rejection inside the decision space. I've found that thinking this way helps me reach a better understanding of my design instinct, and to communicate my thoughts more clearly.

Now, although I've been thinking about this for a long while (not full-time, lucky me :-), I can't say I have found the perfect representation. The decision space is inherently multi-dimensional, and I always end up needing more dimensions than I can fit either in 2D or 3D. Over time, I tried several notations, inventing things from scratch or borrowing from other domains. Most were dead ends. In the end, I've chosen (so far :-) a very simple representation, based on just 3 concepts (possibly 4 or 5).

- nodes
Nodes represent information, which is our material. Information has a fractal nature, and I don't bother if I'm mixing up levels. Therefore, a node may represent a business goal, or the adoption of a tool or library, or a nonfunctional requirement, or a specific component, class, or function. While most methods are based on a strict separation of concepts, I find that very limiting.

- an attraction relationship
Nodes can attract each other. For instance, a node labeled "reliable" may attract a node labeled "redundant" when reasoning about the large display problem. I just connect the two nodes using a thick line with little "hands" on the ends. I place attracted nodes close to each other.

- a rejection relationship
Nodes can reject each other. For instance, stateful most clearly rejects stateless :-). Some technology might be at odds with another. A subsystem must not depend on another. And so on. Nodes that reject each other are placed at some distance.

It's all very simple and unsophisticated. Here is an example based on the large display problem, inspired by the discussion on design rationale:



and here are two diagrams I've used in real-world projects recently, scaled down to protect the innocent:





The relationship between a node, a cluster, and an Alexandrian center is better left for another time. Still, a node in one diagram may represent an entire cluster, or an entire diagram. Right now I'm tempted to use a slightly different symbol (which would be the fourth) to represent "expandable" nodes, although I'm really trying to keep symbols to a bare minimum. I'm also using colors, but so far in a very informal way.

As simple as it is, I've found this diagram to be very effective as a reasoning device, while too many diagrams end up being mere documentation devices. I can sit in front of my (large :-) screen, think and draw, and the drawing helps me focus. I can draw this on a whiteboard in a meeting, and everyone gets up to speed very quickly on what I'm doing.

This, however, is just half the story. We can surely work with informal concepts and diagrams, and that's fine, but what I'm trying to do is to add precision to the diagram. Precision is often confused with details, like "a class diagram is more precise if you show all the parameters and types". I'm not looking for that kind of "precision". Actually, I don't want this diagram to be redundant with code at all; we already have many code-like diagrams, and they all get down the same roads (generate code from diagrams or generate diagrams from code). I want a reasoning device: when I want to code, I'm comfortable with code :-).

I mostly want to add precision about relationships. Why, for instance, is there an attraction between Slow Client and Stateful? Informally, because if we have a stateful system, the slow client can poll on its own terms, or alternatively, because the client may use a sophisticated subscription based on the previous state. Those options, by the way, could be represented on the forcefield diagram itself (adding more nodes, or a nested diagram); but that's still the "informal" reasoning. Can we make it any more formal, precise, grounded on sound principles?

This is where the ongoing work on concepts like gravity, frequency, and so on kicks in. Slow Client and Stateful are attracted because on a finer granularity (another, perhaps better, diagram) "Slow Client" means a publisher and a subscriber operating at different frequencies, and a stateful repository is a well-known strategy (a pattern!) to provide Isolation between systems operating at different frequencies (together with synchronization or transactions).

Now, I haven't introduced the concept of Isolation yet (though I mentioned something on my Facebook page :-), so this is sort of a spoiler :-)), but in the end I hope to come up with a simple reasoning system, where you can start with informal concepts and refine nodes and forces until you reach the "universal", fractal forces I'm discussing in the "Notes on Software Design" posts. That would give a solid ground to the entire diagram.

A final note on the forcefield diagram: at this stage, I'm just using Visio, or more exactly, I'm abusing some stencils in the Visio library. I wanted something relatively organic, mindmap-like. Maybe one day I'll move back to some 3D ideas (molecular structures come to mind), but I've yet to see how this scales to newer concepts, larger problems, and so on. If you want to play with it, I can send you the VSS file with the stencils.
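If you'd rather play with text than with stencils, the same three concepts can also be captured as plain data. Here is a purely hypothetical Java sketch (nothing I actually use, just an illustration), which could be fed, for instance, to a force-directed layout tool:

    import java.util.ArrayList;
    import java.util.List;

    final class Forcefield {
        record Node(String label) {}
        enum Kind { ATTRACTS, REJECTS }
        record Edge(Node from, Node to, Kind kind) {}

        final List<Node> nodes = new ArrayList<>();
        final List<Edge> edges = new ArrayList<>();

        Node node(String label) { Node n = new Node(label); nodes.add(n); return n; }
        void attract(Node a, Node b) { edges.add(new Edge(a, b, Kind.ATTRACTS)); }
        void reject(Node a, Node b)  { edges.add(new Edge(a, b, Kind.REJECTS)); }
    }

    // For the large display problem:
    //   Forcefield f = new Forcefield();
    //   f.attract(f.node("reliable"), f.node("redundant"));
    //   f.reject(f.node("stateful"), f.node("stateless"));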

Ok, I'll get back to Frequency (and Interference and Isolation and more :-) soon. Before that, however, I'd like to take a diversion on the Dependency Structure Matrix. See ya!

Monday, August 31, 2009

Representing Design Rationale Inside Activity Diagrams

Design is about making choices. We often do so on the fly, leaning on experience and intuition, by talking about the problem with colleagues, or borrowing from literature (e.g. patterns). We also make some choices by habit, which is a different form of experience, one that has a higher risk of becoming disconnected from the real problem.
Most of this process is tacit, and even when we discuss choices openly, it doesn't get recorded. Sometimes, a list of pros/cons is made when there is some disagreement about the best option.

This all works well when the problem is simple, but sometimes even experienced designers feel like they're not grasping the essential issues, that something has not yet been found, named, disentangled. This is when having yet one more tool can prove useful.
Now, I don't usually go through the effort to model and transcribe the rationale behind each and every design choice I make. It could be interesting, also from a pedagogical point of view, but it would take a lot of time and would probably disrupt my thought processes. However, when the issues are particularly thorny/unclear, or when there is a large disagreement on the best choice (or even on the goals and criteria), I've found that getting design rationale out of our individual heads and talking over a shared representation can move things a little forward.

Over the years, I've tried out a number of tools, approaches, and so on; lately, I've tried using Activity Diagrams in a rather unorthodox way, to represent my reasoning about design, not design itself. The idea is not to encode your decisions ex-post, but ex-ante, while you're thinking (that is, while they're still options, not decisions). Also, the diagram must be considered quite fluid, as it shows our current understanding, and we're building the diagram to improve our understanding.

Enough talk, let's see a realistic example. I'll refer to the Large Display problem I discussed a few months ago. Actually, I'll just cover the initial choice between using a real-time database or an IPC/messaging system. It's gonna be quite a mouthful anyway!

To start, I'll have to draw a line between a messaging system and a RTDB, and that in itself is not easy. I'll go for a very simple distinction, because my goal here is not really to talk about RTDBs, but about design rationale (the usual "look at the moon, not at the finger" concept).
So, consider a control system that reads some data from the field and then needs to publish those data for other processes. It could just send data through a messaging (publish/subscribe) system. Here I define a messaging system as stateless, meaning it simply keeps track of subscriptions, and sends everything that is published to the subscribers (according to some criteria, like message type or tag). It does not keep a history, or a snapshot of what has been last sent. Therefore, it cannot apply some filters, like "notify me only if the difference between the previous value and the current value is above a threshold", because the previous value is just not stored. Also, when a subscriber is started, it cannot get the current snapshot of the system, because it is not there: it will have to wait for messages to come, incrementally. Briefly stated, an RTDB will keep a snapshot of the system, and well, you can figure out the difference. Of course, an RTDB is also more complex.
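If you prefer interfaces to prose, here is a hypothetical Java rendition of the distinction as I'm using it here (not the API of any real product): the messaging system just forwards what gets published; the RTDB also keeps the latest value per tag, which is what makes late-coming subscribers and delta-based filters possible.

    import java.util.Map;
    import java.util.Optional;
    import java.util.function.Consumer;

    interface MessagingSystem {
        void publish(String tag, double value);
        void subscribe(String tag, Consumer<Double> listener);   // only future values flow in
    }

    interface RealTimeDatabase extends MessagingSystem {
        Optional<Double> current(String tag);   // the last written value, if any
        Map<String, Double> snapshot();         // the whole current state, e.g. for a client at startup
    }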

So, how do we choose between a messaging system and an RTDB? We may write down a long list of pros/cons, but that's really unstructured, and that's not the way our brain works. To provide more structure, I use an Activity Diagram with orthogonal swimlanes (all the following pictures are taken from StarUML, a free tool that is rather fast and unobtrusive).
The vertical swimlanes are flexible: they represent the main concerns. The horizontal swimlanes are fixed: they provide structure.


For the Large Display problem, we could start with a few main concerns like performance, reliability, cost, and so on. We just drop the names on the vertical swimlanes. My template is then partitioned in 3 horizontal bands: the root question, the reasoning, the outcome. Everything inside is dynamic, and changes as we understand more: even the root questions may change, as we discover larger or smaller, independent problems. Sometimes, even the main concerns change, as we discover options or issues we didn't consider before.

We can focus on just one concern right now, let's say performance (don't we all like performance? :-). A first-cut, interesting top-level question could be: is the published data rate high or low? If the rate is low and we have no persistent state, when you turn on the large display you see nothing: you have to wait till some data gets published. On the other hand, if the rate is high, it may even overwhelm the display system: there is little need to refresh a value a thousand times per second. That actually depends on the display: if it's a real-time plot, you may want a high refresh rate too.

Ok, we could start modeling this part of our reasoning using the familiar activity diagram symbols. Actually, since most of the nodes here would be decision nodes, I just omit the diamond and use an activity node with multiple outgoing paths to show choices.


Note: The empty boxes are just placeholders for some later reasoning. It's just laziness on my side :-) and they wouldn't appear in a real diagram.

Now, this seems just like a decision tree, but it's slightly different. First, it's a decision graph: common choices between paths are shared, and this is precious information because it shows crucial choices (more on this later). Second, it's a multifaceted graph: every vertical swimlane shows a facet of a more complex reasoning; for instance, what is good for performance might not be good for reliability or cost.

Let's try to move ahead a little. When the incoming data rate is higher than what [most] clients need, we have basically two choices:
1) smarter subscriptions; they could still be rather dumb, like "no more than 3 times per second", or much smarter, like "when the relative change is higher than 5%, but no more than 5 times per second". Note that the latter is more suited to an RTDB than to a stateless messaging system (a little sketch of such a filter follows below).
2) change paradigm and move to client-initiated polling. The clients will ask for data with their own timing. Of course, at this point we give up the possibility of not asking for data if the value has not changed. Anyway, this again requires some kind of stateful middleware; a messaging system won't do.
When the data rate is low, but a long startup time for clients is not an option, we can't wait for data to come: we have to poll, at least at startup. So, polling can solve two problems, though at the expense of bandwidth if it is the only available option.
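Here is the filter sketch mentioned above (hypothetical Java, reduced to a single scalar value): notify only when the relative change exceeds a threshold, and never more often than a minimum interval. Note how it needs the previous value, that is, state: which is precisely why it fits an RTDB better than a stateless messaging system.

    final class DeltaRateFilter {
        private final double minRelativeChange;   // e.g. 0.05 for "5%"
        private final long minIntervalMillis;     // e.g. 200 for "at most 5 times per second"
        private double lastSent = Double.NaN;
        private long lastSentAt = -1;

        DeltaRateFilter(double minRelativeChange, long minIntervalMillis) {
            this.minRelativeChange = minRelativeChange;
            this.minIntervalMillis = minIntervalMillis;
        }

        boolean shouldNotify(double value, long nowMillis) {
            boolean rateOk = lastSentAt < 0 || nowMillis - lastSentAt >= minIntervalMillis;
            boolean changeOk = Double.isNaN(lastSent)
                || Math.abs(value - lastSent) > minRelativeChange * Math.abs(lastSent);
            if (rateOk && changeOk) {
                lastSent = value;
                lastSentAt = nowMillis;
                return true;
            }
            return false;
        }
    }
    // new DeltaRateFilter(0.05, 200) reads as "above 5%, but no more than 5 times per second".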



While drawing this, we may come to the conclusion that we need to ask better questions: are we building a publisher-driven or a client-driven system? If it's client-driven, it cannot be stateless! What do we really know about clients? How many will there be? What about publishers? What is the typical data rate and configuration? What are we aiming for? Do we need to narrow the expectations? This might change the top question (client vs. publisher driven) or even some concern. That's fine: it means the technique is working :-) and that it's helping us think.

Now, it would take quite a lot of time to explore all the facets of even a simple system like this. Actually, most people won't even do it in real life: they will fall in love with one idea, spend most of their time preaching and rationalizing about the virtues of their idea, and never really take the time to go through this kind of process. Still, trying to work out the "Reliability" swimlane would prove interesting. For instance, a common technique to achieve reliability is redundancy. Redundancy is much easier for a stateless system. Redundancy is easier when clients don't have to subscribe at all, but can simply poll. And so on. If you have some spare time, you may want to give it a try.

The notation I use is quite informal. I could improve that easily: UML is fairly flexible; so far I didn't, because people can grasp it anyway, even when I drop in the << or >> to represent options, or when I have just one arrow coming out, meaning that I've just decomposed a choice and a consequence. It's just a reasoning workflow, and I haven't felt the need to make it any more precise than that.

Back to the forcefield: the rationale is not the forcefield. The rationale, however, is talking about forces and centers. Outcomes (messaging and RTDB) are centers. Main choices, like "client driven" or "stateless", are again centers. Those centers are attracting or rejecting each other. This is the forcefield. This is closer to the way I think in the back of my mind, how I "see" the system, how I keep options open. Now, I just need a way to show this. That's for my next post :-).

Tuesday, June 09, 2009

Design Rationale

In the past few weeks I've taken a little time to write down more about the concept of frequency; while doing so, I realized I had to explore the concept of forcefield better, and while doing so (yeap :-)) I realized there was a rather large overlap between the notion of forcefield and the notion of design rationale.

Design rationale extends beyond software engineering, and aims to capture design decisions and the reasoning behind those decisions. Now, design decisions are (ideally) taken as trade-offs between several competing forces. Those forces create the forcefield, hence the large overlap between the two subjects.

The concept of design rationale has been around for quite a few years, but I haven't seen much progress either in tools or notations. Most often, tools fall into the “rationalize after the fact” family, while I'm more interested in reasoning tools and notations that would help me (as a designer) get a better picture of my own thoughts while I'm thinking. That resonates with the concept of reflection in action that I discussed in Listen to Your Tools and Materials a few years ago.

So, as I was reading a recent issue of IEEE Software (March/April 2009), I found a list of recent (and not so recent) tools dealing with design rationale in a paper by Philippe Kruchten, Rafael Capilla, Juan Carlos Dueñas (The Decision View’s Role in Software Architecture Practice), and I decided to take a quick ride. Here is a very quick summary of what I've found.

Seurat
Seurat (see also the PDF tutorial on the same website) is based on a very powerful language / model, but the tool (as implemented) is very limiting. It's based on a tree structure, which makes for a nice todo list, but makes visual reasoning almost impossible. Actually, in the past I've investigated using the tree format myself (and while doing so, I discovered others have done the same: see for instance the Reasoning Tree pattern), but restricting visualization to (hyperlinked) nodes in a tree just does not work when you're facing difficult problems.

Sysiphus
Sysiphus seems to have recently morphed into another tool (UniCase), but from the demo of UniCase it's hard to appreciate any special support for design rationale (so far).

AREL
(see also some papers from Antony Tang on the same page; Antony also had an excellent paper on AREL in the same issue of IEEE Software)
AREL is integrated with Enterprise Architect. Integration with existing case tools (either commercial or free) seems quite a good idea to me. AREL uses a class diagram (through a UML profile) to model design rationale, so it's not limited to a tree format. Still, I've found the results rather hard to read. It seems more like a tool to give structure to design knowledge than a tool to reason about design. As I go through the examples, I have to study the diagram; it doesn't just talk back to me. I have to click around and look at other artifacts. The reasoning is not in the diagram, it's only accessible through the diagram.

PAKME
Honestly, PAKME seems more like an exercise in building a web-based collaboration tool for software development than a serious attempt at providing a useful / usable tool to record design rationale. It does little more than organize artifacts, and it requires so many clicks / page refresh to get anything done that I doubt a professional designer could ever use it (sorry guys).

ADDSS
ADDSS is very much like PAKME, although it adds a useful Patterns section. It's so far from what I consider a useful design tool (see my paper above for more) that I can't really think of using it (sorry, again).

Knowledge Architect
Again, a tool with some good ideas (like Word integration) but far from what I'm looking for. It's fine to create a structured design document, but not to reason about difficult design problems.

In the end, it seems like most of those tools suffer from the same problems:
- The research is good; a nice metamodel is built, some of the problems faced by professional designers seem to be well understood.
- The tool does little more than organize knowledge, would get in the way of a designer thinking about thorny issues, does not help through visualization, and is at best useful at the end of the design process, possibly to fake some rationality, à la Parnas/Clements.

That said, AREL is probably the most promising tool of the pack, but in the end I've been doing pretty much the same for years now, using (well, abusing :-) plain old use case diagrams to model goals and issues, with a few ideas taken from KAOS and the like.

Recently, I began experimenting with another standard UML diagram (the activity diagram) to model some portion of design reasoning. I'll show an example in my next post, and then show how we can change our perspective and move from design reasoning to the forcefield.

Sunday, April 26, 2009

Bad Luck, or "fighting the forcefield"

In my previous post, I used the expression "fighting the forcefield". This might be a somewhat uncommon terminology, but I used it to describe a very familiar situation: actually, I see people fighting the forcefield all the time.

Look at any troubled project, and you'll see people who made some wrong decision early on, and then stood by it, digging and digging. Of course, any decision may turn out to be wrong. Software development is a knowledge acquisition process. We often take decisions without knowing all the details; if we didn't, we would never get anything done (see analysis paralysis for more). Experience should mitigate the number of wrong decisions, but there are going to be mistakes anyway; we should be able to recognize them quickly, backtrack, and take another way.

Experience should also bring us in closer contact with the forcefield. Experienced designers don't need to go through each and every excruciating detail before they can take a decision. As I said earlier, we can almost feel, or see the forcefield, and take decisions based on a relatively small number of prevailing forces (yes, I dare to consider myself an experienced designer :-).
This process is largely unconscious, and sometimes it's hard to rationalize all the internal reasoning; in many cases, people expect very argumentative explanations, while all we have to offer on the fly is aesthetics. Indeed, I'm often very informal when I design; I tend to use colorful expressions like "oh, that sucks", or "that brings bad luck" to indicate a flaw, and so on.

Recently, I've found myself saying that "bad luck" thing twice, while reviewing the design of two very different systems (a business system and a reactive system), for two different clients.
I noticed a pattern: in both cases, there was a single entity (a database table, an in-memory structure) storing data with very different timing/life requirements. In both cases, my clients were a little puzzled, as they thought those data belonged together (we can recognize gravity at play here).
Most naturally, they asked me why I would keep the data apart. Time to rationalize :-), once again.

Had they all been more familiar with my blog, I would have pointed to my recent post on multiplicity. After all, data with very different update frequency (like: the calibration data for a sensor, and the most recent sample) have a different fourth-dimensional multiplicity. Sure, at any given point in time, a sensor has one most recent sample and one set of calibration data; therefore, in a static view we'll have multiplicity 1 for both, suggesting we can keep the two of them together. But bring in the fourth dimension (time) and you'll see an entirely different picture: they have a completely different historical multiplicity.

Different update frequencies also hint at the fact that data is changing under different forces. By keeping together things that are influenced by more than one force, we expose them to both. More on this another time.

Hard-core programmers may want more than that. They may ask for more familiar reasons not to put data with different update frequencies in the same table or structure. Here are a few:

- In multi-threaded software, in-memory structures require locking. If your structure contains data that is seldom updated, that means it's being read more than written (if it's seldom read and seldom written, why keep it around at all?).
Unfortunately, the high-frequency data is written quite often. Therefore, either we accept slowing everything down with a simple mutex, or we aim for higher performance through a more complex locking mechanism (a reader/writer lock), which may or may not work, depending on the exact read/write pattern. Separate structures can each adopt a simpler locking mechanism, as one is being mostly read, the other mostly written; even if you go with an R/W lock, it's almost guaranteed to perform well here (see the sketch after this list).

- Even on a database, high-frequency writes may stall low-frequency reads. You even risk a lock escalation from record to table. Then you either go with dirty reads (betting on your good luck), or you just move the data into another table, where it belongs.

- If you decide to cache database data to improve performance, you'll have to choose between a larger cache with the same structure as the database (with low-frequency data too) or a smaller and more efficient cache with just the high-frequency data (therefore revealing once more that those data do not belong together).

- And so on: I encourage you to find more reasons!
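And here is the sketch I promised for the locking case (hypothetical Java, trimmed to the bone): once calibration data and the latest sample live in separate structures, each can adopt the locking scheme that matches its own read/write pattern.

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    final class SensorCalibration {                  // mostly read, rarely written
        private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        private double offset = 0.0, gain = 1.0;

        double[] get() {
            lock.readLock().lock();                  // many concurrent readers, cheap
            try { return new double[] { offset, gain }; }
            finally { lock.readLock().unlock(); }
        }

        void set(double offset, double gain) {       // the rare write
            lock.writeLock().lock();
            try { this.offset = offset; this.gain = gain; }
            finally { lock.writeLock().unlock(); }
        }
    }

    final class LatestSample {                       // written at high frequency
        private volatile double value;               // a volatile is enough for a single scalar
        void put(double v) { value = v; }
        double get() { return value; }
    }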

In most cases, I tend to avoid this kind of problem instinctively: this is what I really call experience. Indeed, Donald Schön reminds us that good design is not for everyone, and that you have to develop your own sense of aesthetics (see "Reflective Conversation with Materials. An interview with Donald Schön by John Bennett", in Bringing Design To Software, Addison-Wesley, 1996). Aesthetics may not sound too technical, but consider it a shortcut for: you have to develop your own ability to perceive the forcefield, and instinctively know what is wrong (misaligned) and right (aligned).

Ok, next time I'll get back to the notion of multiplicity. Actually, although I've initially chosen "multiplicity" because of its familiarity, I'm beginning to think that the whole notion of fourth-dimensional multiplicity, which is indeed quite important, might be confusing for some. I'm therefore looking for a better term, which can clearly convey both the traditional ("static") and the extended (fourth-dimensional, historical, etc) meaning. Any good idea? Say it here, or drop me an email!

Monday, March 30, 2009

Notes on Software Design, Chapter 5: Multiplicity

Gravity, as we have seen, provides a least resistance path, leading to monolithic software. If gravity were the only force at play, all software would be a monolithic blob. Since that is not the case, there must be other forces at play: pervasive, primitive forces just like gravity, setting up a different forcefield, so that it's more convenient to keep things apart.

Consider an amateur programmer, writing a simple program to keep track of his numerous books. He starts with a database-centric approach, and without much knowledge of conceptual modeling, he jumps into creating tables. He creates a Book table, and adds a few fields:
AuthorFirstName, AuthorLastName, Title, Publisher, ISBN, …
It doesn't take much for him to realize that an author could be present several times in his database. He may begin to realize that he could perhaps add an Author table and move AuthorFirstName and AuthorLastName to that table.

Why? He doesn't know squat about database normalization. It's just a simple matter of multiplicity. One author - many books. Different multiplicity suggests to keep things apart. It is quite a good suggestion, as different multiplicity basically requires different gravitational centers, lest we end up with an unfavorable forcefield.
Consider what happens when our amateur programmer discovers he wants to add more biographical data about authors. Without an Author table, there is not any good gravitational center that could possibly attract those data. There is only the Book table, so there they go - adding more data redundancy.

Our amateur programmer, however, might not be so eager to give in. A single table is easier to manage. No foreign keys, no referential integrity, no nothing. It's just simpler, and he doesn't live in the future. He wants to do the simplest thing that could possibly work, so he keeps the Author fields inside the Book table.

He doesn't need much more, however, to realize that many books have more than one author. One book - many authors. That's a different forcefield again, with a many-to-many relationship. Now, our amateur is rather stubborn. He wants to keep things inside a single table anyway. So he goes on and adds more fields:

AuthorFirstName1, AuthorLastName1, AuthorFirstName2, AuthorLastName2, AuthorFirstName3, AuthorLastName3, Title, Publisher, ISBN, …

Of course, at this point he can basically feel he's no longer going along the path of least resistance. Actually, he's fighting the forcefield. Sure, gravity wants him to keep things together, but multiplicity doesn't. The form he's trying to give to the Book table is not in frictionless contact with the forcefield. The forcefield wants Book and Author to stay on their own.
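In code rather than tables (a made-up Java sketch, same idea): the forcefield wants Author to be its own gravitational center, with the relationship made explicit instead of being flattened into repeated fields.

    import java.util.List;

    record Author(String firstName, String lastName /* , biographical data can go here */) {}

    record Book(String title, String publisher, String isbn, List<Author> authors) {}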

Multiplicity is the primordial force that keeps [software] things apart. It shouldn't come as a surprise, then, that a great emphasis is given to multiplicity in the Entity-Relationship model and also in the static view of OO models (class diagram).
Multiplicity, however, goes much deeper than that. Reusability is a special case of multiplicity. What? :-). Well, if that sounds odd, you're not thinking fourth-dimensionally (as Doc said in "Back to the Future").

Consider a different problem, at a different granularity. Our amateur programmer is writing another small application, to keep track of who has borrowed some of his precious books. He's doing the simplest thing again, so he's basically going GUI-centered, and he's putting all the business logic inside the form itself. When you click on "Ok", the form will validate data and store a record into some table. The form requires, among other things, a phone number, which must be validated. It's the only place where he has to validate a phone number, so he puts the validation logic right inside the OnOk method generously provided by his RAD tool.

What's wrong? Apparently, there is no multiplicity at play here. There is one function, where he's doing two distinct things (validation and insertion), and inside validation he's doing different things, but each one is intended to validate one field, so it wouldn't pay to move the field validation logic elsewhere. Gravity keeps things together.

Multiplicity is hidden in the fourth dimension: time. Reusability means being able to take something you have already written (in the past) and use it again, unchanged, in the future. It means you have multiple callers, just not at the same time. If you think fourth dimensionally, multiplicity comes out quite clearly.
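A tiny, hypothetical Java sketch of the same idea: pull the validation out of the OnOk handler into its own small center. Today it has a single caller; thinking fourth-dimensionally, the next form (a future caller) is already part of its multiplicity.

    final class PhoneNumberValidator {
        static boolean isValid(String phone) {
            // deliberately naive rule, just for illustration
            return phone != null && phone.matches("\\+?[0-9][0-9 \\-]{5,19}");
        }
    }

    // In the form's handler:
    //   void onOk() {
    //       if (!PhoneNumberValidator.isValid(phoneField.getText())) { showError(); return; }
    //       insertRecord();
    //   }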

Multiplicity is an interesting force, one we need to be very familiar with. It will take a few posts to do it justice. Right now, it's time for me to put my running shoes on and hit the road :-). Still, here are a few pointers to some important issues that I'm going to cover in the next weeks (or months :-)

The fractal nature of multiplicity
Maintainability
Scalability
Conway's Law
Tools and Languages - lowering costs
Good questions to ask while doing analysis and design.
Is multiplicity stronger than gravity?
Examples from patterns. On truly understanding Abstract Factory.
N-degrees of separation.
Interfaces and Multiplicity - what is separation, anyway?
Cross-cutting concerns.
Down-to-earth guidelines.
The Display problem, once again.

Sunday, February 22, 2009

Notes on Software Design, Chapter 4: Gravity and Architecture

In my previous posts, I described gravity and inertia. At first, gravity may seem to have a negative connotation, like a force we constantly have to fight. In a sense, that's true; in a sense, it's also true for its physical counterpart: every day, we spend a lot of energy fighting Earth's gravity. However, without gravity, life as we know it would never exist. There is always a bright side :-).

In the software realm, gravity can be exploited by setting up a favorable force field. Remember that gravity is a rather dumb :-) force, merely attracting things. Therefore, if we come up with the right gravitational centers early on, they will keep attracting the right things. This is the role of architecture: to provide an initial, balanced set of centers.

Consider the little thorny problem I described back in October. Introducing Stage 1, I said: "the critical choice [...] was to choose where to put the display logic: in the existing process, in a new process connected via IPC, in a new process connected to a [RT] database".
We can now review that decision within the framework of gravitational centers.

Adding the display logic into the existing process is the path of least resistance: we have only one process, and gravity is pulling new code into that process. Where is the downside? A bloated process, sure, but also the practical impossibility of sharing the display logic with other processes.
Reuse requires separation. This, however, is just the tip of the iceberg: reuse is just an instance of a much more general force, which I'll cover in the forthcoming posts.

Moving the display logic inside a separate component is a necessary step toward [independent] reusability, and also toward the rarely understood concept of a scaled-down architecture.
A frequently quoted paper from David Parnas (one of the most gifted software designers of all times) is aptly titled "Designing Software for Ease of Extension and Contraction" (IEEE Transactions on Software Engineering, Vol. 5 No. 2, March 1979). Somehow, people often forget the contraction part.
Indeed, I've often seen systems where the only chance to provide a scaled-down version to customers is to hide the portion of user interface that is exposing the "optional" functionality, often with questionable aesthetics, and always with more trouble than one could possibly want.

Note how, once we have a separate module for display, new display models are naturally attracted into that module, leaving the acquisition system alone. This is gravity working for us, not against us, because we have provided the right center. That's also the bright side of the thorny problem, exactly because (at that point, that is, stage 2) we [still] have the right centers.

Is the choice of using an RTDB to further decouple the data acquisition system and the display system any better than having just two layers?
I encourage you to think about it: it is not necessarily trivial to understand what is going on at the forcefield level. Sure, the RTDB becomes a new gravitational center, but is a 3-pole system any better in this case? Why? I'll get back to this in my next post.

Architecture and Gravity
Within the right architecture, features are naturally attracted to the "best" gravitational center.
The "right" architecture, therefore, must provide the right gravitational centers, so that features are naturally attracted to the right place, where (if necessary) they will be kept apart from other features at a finer granularity level, through careful design and/or careful refactoring.
Therefore, the right architecture is not just helping us cope with gravity: it's helping us exploit gravity to our own advantage.

The wrong architecture, however, will often conspire with gravity to preserve itself.
As part of my consulting activity, I’ve seen several systems where the initial partitioning of responsibility wasn’t right. The development team didn’t have enough experience (with software design and/or with the problem domain) to find out the core concepts, the core issues, the core centers.
The system was partitioned along the wrong lines, and as mass increased, gravity kicked in. The system grew with the wrong form, which was not in frictionless contact with the context.
At some point, people considered refactoring, but it was too costly, because mass brings Inertia, and inertia affects any attempt to change direction. Inertia keeps a bad system in a bad state. In a properly partitioned system, instead, we have many options for change: small subsystems won’t put up much of a fight. That’s the dream behind the SOA concept.
I already said this, but it's worth repeating: gravity is working at all granularity levels, from distributed computing down to the smallest function. That's why we have to keep both design and code constantly clean. Architecture alone is not enough. Good programmers are always essential for quality development.

What about patterns? Patterns can lower the amount of energy we have to spend to create the right architecture. Of course, they can do so because someone else spent some energy re-discovering good ideas, cleaning them up, going through shepherding and publishing, and because we spent some time learning about them. That said, patterns often provide an initial set of centers, balancing out some forces (not restricted to gravity).
Of course, we can't just throw patterns at a problem: the form must be in effortless contact with the real problem we're facing. I've seen too many well-intentioned (and not so experienced :-) software designers start with patterns. But we have to understand forces first, and adopt the right patterns later.

Enough with mass and gravity. Next time, we're gonna talk about another primordial force, pushing things apart.

See you soon, I hope!

Wednesday, January 14, 2009

Notes on Software Design, Chapter 3: Mass, Gravity and Inertia

I thought I could discuss the whole concept of Gravity and its implications in 2 or 3 (long) posts. While writing, I realized I'll need at least 4 or 5. So, this time I'll talk a little about how we can cope with gravity, and about the concept of Inertia. Next time, I'll discuss how we can exploit gravity, and why (despite the obvious cost) it is important that we do not surrender to (or ignore) gravity.

How do we cope with gravity? Needless to say, we have to spend some energy to move away from the amorphous big blob. As usual, we can also borrow some of that energy from someone (or something) else. Here are a few well-proven ideas:

- Architecture. I used to define architecture as "an overall structure, providing a natural place for features and concepts". I could now say that architecture must provide the right centers, or (from the viewpoint of mass and gravity) the right gravitational centers, so that the system can grow harmoniously. The right architecture is also the key to exploit gravity. More about this (and about the role of design patterns) next time.

- Refactoring. While architecture requires some kind of upfront investment, refactoring fights gravity in a more piecemeal, continuous fashion.
Although Refactoring and Emergent Design are often seen as the arch-enemies of Architecture, they are not. Experienced developers know that both are needed, as they work at different scales.
No amount of architecture, for instance, will ever prevent small-scale gravity from attracting more code into existing functions. When we add a new feature (maybe under a tight deadline), gravity suggests adding that feature in place, often without even carving out the smallest separation unit – a new function (see the small sketch after this list).
Conversely, gravity (and even more so Inertia) does not allow refactoring to scale economically beyond some (hard to identify) threshold.

- Measurement and Correction. While refactoring is often performed on-the-fly by programmers, fixing bad smells as they go, we can also use automatic tools to help us keep the code within some quality bounds. See Simple Metrics and More on Code Clones for a few ideas. Of course, measures provide guidance, but then the usual refactoring techniques must be applied.

- Visualization. More on this another time.

- Better Languages and Technologies. At some granularity level, technology becomes either a boon or a hindrance. Consider components: creating binary, release-to-release compatible components in C++ is a nightmare. .NET, for instance, does a much better job. Languages with a simple grammar, like Java and C#, or with strong support for reflection, also allow better tools to be built (see next point).

- Better Tools. Consider web services. They provide a relatively painless way to create a distributed system. The lack of pain doesn't really come from SOAP (which is hardly a stroke of genius), but from the underlying HTTP/XML infrastructure and from the widely available, easily interoperable WSDL tools. Consider also refactoring: without good tools, it's a relatively error-prone activity. Refactoring tools make it much easier to fight gravity, moving code around with relatively little effort.
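
To make the small-scale gravity point (from the Refactoring item above) concrete, here is a minimal sketch in plain C# – hypothetical names and thresholds, not from any real project. Under deadline pressure, gravity pulls the new discount rule straight into the existing function; resisting it means spending a little energy on a new, smaller separation unit.

static class Pricing
{
    // Gravity wins: the new discount rule is absorbed by the existing function.
    public static decimal TotalWithInlineDiscount(decimal[] prices)
    {
        decimal total = 0;
        foreach (var p in prices) total += p;
        if (total > 100) total *= 0.9m;   // new feature, added in place under deadline pressure
        return total;
    }

    // Gravity resisted: the rule gets its own (small) center.
    public static decimal Total(decimal[] prices)
    {
        decimal total = 0;
        foreach (var p in prices) total += p;
        return ApplyDiscount(total);
    }

    private static decimal ApplyDiscount(decimal total)
    {
        return total > 100 ? total * 0.9m : total;
    }
}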


On Inertia
Mass brings gravity. Gravitational attraction works to preserve the existing structure (at the fractal levels I discussed in Chapter 1). In the physical world, however, we have another interesting manifestation of mass, called Inertia. There are many formulations of the concept (see the wikipedia page for details), but what is most interesting here is the simple F=m*a equation. We apply external forces (human work) to a system, but systems with a large mass won't easily change their state of rest or motion (including their current direction).

What is, then, the state of rest/motion for a software system? We could provide several analogies. To find the best analogy for acceleration, we need the best analogy for speed. To find the best analogy for speed, we need the best analogy for space.

The underlying idea must be that we apply some effort to move our software through space. What is the nature of that space? A few real-world examples are needed. Consider a C++/MFC application; we want to migrate the GUI layer to C#/.NET (interestingly, "migration" is commonly used to indicate motion in space). Consider a monolithic, legacy application that must be exposed as a service; or a web application that requires some performance improvement. Sure, all this may require some change in mass too (as some code will be added, some removed), but what is required is to move the software to a different place. What is that place, or, inside which kind of space do we want to move? I encourage you to think about this on your own for a while, before reading further.

My answer is rather simple: that space is the decision space. Software is built by making a number of decisions: we choose languages, technologies, architectural styles, coding styles (e.g. error-handling styles, readability/efficiency trade-offs, etc.), and so on. We also choose a development process, a team, etc.
Some of those decisions are explicit and carefully worked out. Some are taken on the fly as we code. At any given time, our software is located in a specific (albeit difficult to define) place inside a huge, multi-dimensional decision space. Each decision affects some portion of code. Some are clearly separated. Some are pervasive or cross-cutting.

Software development is a learning process; therefore, some of those decisions will be wrong. Some will be right for a while, but since real-world software does not live in a vacuum, we'll have to change them anyway later.
Changing a decision requires moving our software through the decision space: every decomposition unit affected by that decision will be touched, therefore adding to the mass to be moved (hence the deadly cost of cross-cutting, pervasive concerns).

Inertia explains why some decisions are so hard to change. Any decision we change is bound to require a change in the state of rest, or motion, of our software, because we want to move it into another place.
Some of those decisions impact a large mass of software, and therefore a strong force must be applied. Experience shows that after a critical mass is reached, it becomes so hard to even understand what to do, that software becomes an immovable object (therefore requiring an irresistible force :-).

Of course, small systems won't show much inertia, which explains why the dynamics of programming in the small are different from the dynamics of programming in the large.

Speed and acceleration also depend on time. I'll save this for later, as I still have to understand a few things better :-)

Enough for today. See you guys soon!

Saturday, December 06, 2008

Notes on Software Design, Chapter 2: Mass and Gravity

Mass is a simple concept, which is better understood by comparison. For instance, a long function has bigger mass than a short one. A class with several methods and fields has bigger mass than a class with just a few methods and fields. A database with a large number of tables has bigger mass than a database with a few. A database table with many fields has bigger mass than a table with just a few. And so on.

Mass, as discussed above, is a static concept. We don't look at the number of records in a database, or at the number of instances for a class. Those numbers are not irrelevant, of course, but they do not contribute to mass as discussed here.

Although we can probably come up with a precise definition of mass, I'll not try to. I'm fine with informal concepts, at least at this time.

Mass exerts gravitational attraction, which is probably the most primitive force we (as software designers) have to deal with. Gravitational attraction makes large functions or classes attract more LOCs, large components attract more classes and functions, monolithic programs keep growing as monoliths, and 1-tier or 2-tier applications fight us as we try to add one more tier. Along the same lines, a single large database will get more tables; a table with many fields will attract more fields, and so on.

We achieve low mass, and therefore smaller and balanced gravity, through careful partitioning. Partitioning is an essential step in software design, yet separation always entails a cost. It should not surprise you that the cost of [fighting] gravity has the same fractal nature as separation.

A first source of cost is performance loss:
- Hardware separation requires serialization/marshaling, network transfer, synchronization, and so on.
- Process separation requires serialization/marshaling, synchronization, context switching, and so on.
- In-process component separation requires indirect function calls or load-time fix-up, and may require some degree of marshaling (depending on the component technology you choose)
- Interface – Implementation separation requires (among other things) data to be hidden (hence more function calls), prevents function inlining (or makes it more difficult), and so on.
- In-component access protection prevents, in many cases, exploitation of the global application state. This is a complex concept that I need to defer to another time.
- Function separation requires passing parameters, jumping to a different instruction, jumping back.
- Mass storage separation prevents relational algebra and query optimization.
- Different tables require a join, which can be quite costly (here the number of records resurfaces!).
- (the overhead of in-memory separation is basically subsumed by function separation).

A second source of cost is scaffolding and plumbing:
- Hardware separation requires network services, more robust error handling, protocol design and implementation, bandwidth estimation and control, more sophisticated debugging tools, and so on.
- Process separation requires most of the same.
- And so on (useful exercise!)

A third source of cost is human understanding:
Unfortunately, many people don’t have the ability to reason at different abstraction levels, yet this is exactly what we need to work effectively with a distributed, component-based, multi-database, fine-grained architecture with polymorphic behavior. The average programmer will find a monolithic architecture built around a single (albeit large) database, with a few large classes, much easier to deal with. This is only partially related to education, experience, and tools.

The ugly side of gravity is that it’s a natural, incremental, attractive, self-sustaining force.
It starts with a single line of code. The next line is attracted to the same function, and so on. It takes some work to create yet another function; yet another class; yet another component (here technology can help or hurt a lot); yet another process.
Without conscious appreciation of other forces, gravity makes sure that the minimum resistance path is followed, and that’s always to keep things together. This is why so much software is just a big ball of mud.

Enough for today. Still, there is more to say about mass, gravity and inertia, and a lot more about other (balancing) forces, so see you guys soon...

Breadcrumb trail: instance/record count cannot be ignored at design time. Remember to discuss the underlying forces.

Sunday, November 30, 2008

Notes on Software Design, Chapter 1: Partitioning

In a previous post, I discussed Alexander’s theory of Centers from a software design perspective. My (current) theory is that a Center is (in software) a locus of highly cohesive information.

It is worth noting that in order to create highly cohesive units, we must be able to separate things. This may seem odd at first, since cohesion (as a force) is about keeping things together, not apart, but it is easily explained.
Without some way to partition knowledge, we would have to keep everything together. In the end, conceptual cohesion would be low, because a multitude of concepts, abstractions, etc., would all mash up into an incoherent mess.

Let’s focus on "executable knowledge", and therefore set some artifacts (like requirements documents) aside for a while. We can easily see that we have many ways to separate executable knowledge, and that those ways apply at different granularity levels.

- Hardware separation (as in distributed computing).
- Process separation (a lightweight form of distributed computing, with co-located processes).
- In-process component separation (e.g. DLLs).
- Interface – Implementation separation (e.g. interface inheritance in OO languages).
- In-component access protection, like public/private class members, or other visibility mechanisms, like modules in Modula-2.
- Function separation (simply different functions).

Knowledge is not necessarily encoded in code – it can be encoded in data too. We have several ways to partition data as well, and they apply to the entire hierarchy of storage.

- Mass storage separation (that is, using different databases).
- Different tables (or equivalent concept) within the same mass storage.
- Module or class static data (inaccessible outside the module).
- Data member (inaccessible outside the instance).
- Local / stack based variables (inaccessible outside the function).

It is interesting to see how poor data separation can harm code separation. Sharing tables works against hardware separation. Shared memory works against process separation. Global data with extern visibility works against module separation. Get/Set functions work against in-component access protection.
Code and data separation are not orthogonal concepts, and therefore they can interfere with each other.
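
A tiny sketch of that last point, in plain C# with hypothetical names: a private field wrapped by a Get/Set pair is protected only on paper, since the knowledge of how to manipulate it leaks to every caller; keeping the behavior next to the data preserves the separation.

// Access protection defeated: balance is private, but the Get/Set pair
// republishes it, so the deposit logic ends up scattered among callers,
// e.g. a.SetBalance(a.GetBalance() + amount);
class LeakyAccount
{
    private decimal balance;
    public decimal GetBalance() { return balance; }
    public void SetBalance(decimal value) { balance = value; }
}

// Access protection preserved: the behavior stays with the data.
class Account
{
    private decimal balance;
    public void Deposit(decimal amount) { balance += amount; }
    public decimal Balance { get { return balance; } }
}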

There is more to say about separation and its relationship with old concepts like coupling (straight from the '70s). More on this another time; right now, I need to set things up for Chapter 2.

In the same post above, I mentioned the idea that centers have fractal nature, that is, they appear at different abstraction and granularity levels. If there are primordial forces in software, it seems reasonable that they follow the same fractal nature: in other words, they should apply at all abstraction levels, perhaps with a different slant.

The first force we have to deal with is Gravity. Gravity works against separation, and as such, is a force we cannot ignore. Gravity, as in physics, has to do with Mass, and another manifestation of Mass is Inertia. Gravity, like in the physical world, is a pervasive force, and therefore, separation always entails a cost. Surrendering to gravity, however, won't make your software fly :-). I’ll talk about all this very soon.

On a more personal note, I haven’t said much about running lately. I didn’t give up; I just have nothing big to tell :-). Anyway: there is still a little snow around here, but I was beginning to feel like a couch potato today, so I geared up and went for a 10Km (slow :-) run. At Km 4 it started raining :-)), but not so much as to require an about-face. At Km 8 the rain stopped, and I ran my last 2 Km slightly faster. It feels so great to be alive :-).

Sunday, September 21, 2008

... and Found?

What is a center in software? Although I gave the subject some serious thinking, my answer is quite simple; in fact, we knew it all along.

Let's recap Alexander's definition:
Centers are those particular identified sets, or systems, which appear within the larger whole as distinct and noticeable parts. They appear because they have noticeable distinctness, which makes them separate out from their surroundings and makes them cohere, and it is from the arrangements of these coherent parts that other coherent parts appear.

We might be tempted to define centers as the main decomposition mechanism in some paradigm. For instance, we could say that centers are classes. In fact, if you replace "centers" with "classes" in the text above, it still makes sense. Of course, that would be the wrong answer. Is it functions, then? This is the path Jim took, until he got down to the spatial properties of code and so on. I'll try the road less traveled by.

I've often quoted Philip Armour saying that software development is a knowledge acquisition activity. In Listen to your Tools and Materials, I went one small step further and said "Our material is knowledge, or information. We acquire, understand, filter, and structure information".

Information is often equated with data, as in data structures, databases, and so on. That's a limiting view, as it could only be applied to finite sets, defined extensionally. Information can also be captured intensionally, by defining predicates. Information can be captured procedurally, by defining processes. This is where I first depart from Jim. I see no need to transform procedures into spatial data structures. Procedures are fine just as they are: information encoded through a process.

It is interesting to see that the act of acquiring and encoding information permeates all phases (or activities) of software development. We analyze requirements by understanding, classifying, encoding information. It doesn't really matter if the result is a use case, a class diagram, a piece of code, some natural text, a set of test cases. We structure information during design, while coding, even while indenting text. It may be turtles :-) all the way down in meatspace, but it's information all the way down in cyberspace.

This is exactly the kind of fractal nature we were looking for. However, the magic X is not simply information: it's highly cohesive information. OK, that was it :-).

The concept scales very well across a number of paradigms, artifacts, scales. A few examples:
- a good class is a set of cohesive methods and data
- a good module is a set of cohesive classes or functions
- a good function (or method) manipulates cohesive input through a cohesive process (separation of concerns) giving out a cohesive output (information hiding)
- a good aspect brings scattered concerns together into a single, cohesive point of development, maintenance, and reuse.
- empty lines in source code are used to separate highly cohesive portions of code.
- proper layout in UML class/component diagrams is used to aggregate highly cohesive portions.
- and so on.

So what is a center? A center is a locus of highly cohesive information. The form of a center is influenced by our paradigm and our material. But as I contended a couple of years ago in a post I prophetically :-) titled Unifying Concepts, paradigms are all about one single principle. Partitioning knowledge, I said then. Partitioning information in highly cohesive sets (or loci), I should say now.

What about Jim's question? What kind of x is there that makes it true to say that every successful program is an x of x's? Highly cohesive information, of course! From subsystems to components, down to grouping and indentation of source code.

I began this post by saying we knew it all along. In a sense, we did: in another prophetic post (More synchronicity, do I need to say more? :-), I quoted Yourdon saying that he got the concept of cohesion while reading Alexander.
That kinda closes the circle, doesn't it?

Saturday, September 13, 2008

Lost

I’ve been facing some small, tough design problems lately: relatively simple cases where finding a good solution is surprisingly hard. As usual, it’s trivial to come up with something that “works”; it’s also quite simple to come up with a reasonably good solution. It’s hard to come up with a great solution, where all forces are properly balanced and something beautiful takes shape.

I like to think visually, and since standard notations weren’t particularly helpful, I tried to represent the problem using a richer, non-standard notation, somewhat resembling Christopher Alexander’s sketches. I wish I could say it made a huge difference, but it didn’t. Still, it was quite helpful in highlighting some forces in the problem domain, like an unbalanced multiplicity between three main concepts, and a precious-yet-fragile information hiding barrier. The same forces are not so visible in (e.g.) a standard UML class diagram.

Alexander, even in his early works, strongly emphasized the role of sketches while documenting a pattern. Sketches should convey the problem, the process to generate or build a solution, and the solution itself. Software patterns are usually represented using a class diagram and/or a sequence diagram, which can’t really convey all that information at once.

Of course, I’m not the first to spend some time pondering on the issue of [generative] diagrams. Most notably, in the late ‘90s Jim Coplien wrote four visionary articles dealing with sketches, the geometrical properties of code, alternative notations for object diagrams, and some (truly) imponderable questions. Those papers appeared on the long-dead C++ Report, but they are now available online:

Space-The final frontier (March 1998)
Worth a thousand words (May 1998)
To Iterate is Human, To Recurse, Divine (July/August 1998)
The Geometry of C++ Objects (October 1998)

Now, good ol’ Cope has always been one of my favorite authors. I’ve learnt a lot from him, and I’m still reading most of his works. Yet, ten years ago, when I read that stuff, I couldn’t help thinking that he lost it. He was on a very difficult quest, trying to define what software is really about, what beauty in software is really about, trying to adapt theories firmly grounded in physical space to something that is not even physical. Zen and the Art of Motorcycle Maintenance all around, some madness included :-).

I re-read those papers recently. That weird feeling is still here. Lights and shadows, nice concepts and half-baked ideas, a lot of code-centric reasoning, overall confusion, not a single strong point. Yeah, I still think he lost it, somehow :-), and as far as I know, the quest ended there.
Still, his questions, some of his intuitions, and even some of his most outrageous :-) ideas were too good to go to waste.

The idea of center, which he got from The Nature of Order (Alexander’s latest work), is particularly interesting. Here is a quote from Alexander:
Centers are those particular identified sets, or systems, which appear within the larger whole as distinct and noticeable parts. They appear because they have noticeable distinctness, which makes them separate out from their surroundings and makes them cohere, and it is from the arrangements of these coherent parts that other coherent parts appear.

Can we translate this concept into the software domain? Or, as Jim said, What kind of x is there that makes it true to say that every successful program is an x of x's? I’ll let you read what Jim had to say about it. And then (am I losing it too? :-) I’ll tell you what I think that x is.

Note: guys, I know some of you already think I lost it :-), and would rather read something about (e.g.) using variadic templates in C++ (which are quite cool, actually :-) to implement SCOOP-like concurrency in a snap. Bear with me. There is more to software design than programming languages and new technologies. Sometimes, we gotta stretch our mind a little.

Anyway, once I get past the x of x thing, I’d like to talk about one of those wicked design problems. A bit simplified, down to the essential. After all, as Alexander says in the preface of “Notes on the Synthesis of Form”: I think it’s absurd to separate the study of designing from the practice of design. Practice, practice, practice. Reminds me of another book I read recently, an unconventional translation of the Analects of Confucius. But I’ll save that for another time :-).

Tuesday, May 27, 2008

On the concept of Form (3): the Force Field

Warning: :-) this post is going to be somewhat conceptual. I'll soon move to some real-world, software-based examples, but I really need to introduce some concepts first.

The notion of force field might be unfamiliar to some, so I'll borrow a great example from Alexander himself. Consider your first "requirement" (for a system yet to be built) as a permanent magnet of some size and shape. If you place a flat sheet of glass over that magnet, and drop some iron filings on it, the filings will naturally arrange themselves along the magnetic field lines. That gives us an image of [a section of] the force field. Now add another magnet: the shape of the field will change, as the magnets are interacting, thereby shaping a more complex force field.
We can change the [shape of the] field in many ways: moving magnets around, changing their shape, their magnetization, or even adding some shields around magnets.

The great thing about the magnetic field is that we can somehow observe its shape. Indeed, if our goal was to create a form that can be put into effortless contact with the field, we'll just have to replicate the same form that the magnetic field is giving to the iron filings. As Alexander says (NoTSoF, page 21), "once we have a diagram of forces [...] this will in essence also describe the form as a complementary diagram of forces".
In the real world, and even more so in the software world, we are never so lucky: the force field is invisible and tends also to be highly unstable.

Usually, the force field of a software project starts with Requirements. Requirements are often categorized in some way, like "functional" and "nonfunctional", or "user requirements" and "system requirements". However, requirements of any kind are just like magnets: they contribute to shaping the overall field.

Requirements are just one kind of force, that is, they are not alone in shaping the field. Many technological choices we make, sometimes very (or too) early, are also shaping the force field.
Consider a simple business application. Once you decide that you'll build a web application, you have added quite a few powerful magnets. If you're familiar with JSP and EJB, you are naturally tempted to choose those technologies early on. That's like adding quite a few powerful magnets again. Or maybe it's like adding a magnetic shield: it really depends on context.

Sometimes, technology makes the field simpler: the right infrastructure should simplify the field, that is, it should act more like a shield than like a magnet. In this sense, infrastructure should be chosen when the dominant forces are known, unlike what happens in many projects, where infrastructure (usually a superstructure in disguise) is chosen too early, thereby making the overall field even more complex.

We also shape the field, so to speak, by choosing what to ignore and what to postpone in any given release. Anything we ignore, like anything we postpone, won't be allowed to shape the field right now.
This is fine, as long as the corresponding magnets will be placed somehow distant from the others (good modularity), possibly with some kind of magnetic shielding in between (stable interfaces). It's also fine if we can ignore it forever. Any attempt to temporarily ignore a strictly interacting force will wreak havoc later on, as our form will no longer match the resulting force field. Refactoring can accommodate minor misfits with the ideal form, but won't help much when the force field changes radically (see also my notes on refactoring here).

Here lies one of the architect's fundamental abilities: the intuitive understanding that something can be beneficially postponed, while something else must be dealt with immediately, because its influence on the force field is so strong that doing otherwise will shift us toward the wrong kind of form.

It is important to understand the role of choice in exploiting instability. Too often, software developers tend to see requirements as "fixed". They don't like to negotiate: it's much easier to fight the compiler than the marketing guys.
A good architect, however, can't miss the opportunity to simplify the field by moving some magnets around. That requires the ability to see the overall picture and the fine details at the same time. Here is Alexander again (page 18): "this ability to deal with several layers of form-context boundaries in concert is an important part of what we often refer to as the designer's sense of organization. The internal coherence of an ensemble depends on a whole net of such adaptations".
That ain't an easy feat. It requires an understanding of the business, the users, and the technology. And even more important, it requires a willingness to act on that knowledge. The power of choice extends to the infrastructure: sometimes, by willingly postponing a technological choice until the force field takes shape, we can make a better, more "natural" choice.

This can be hard for some developers: they want certainty, and they want it now. In my experience, that goes hand in hand with the willingness to adopt a sub-optimal, but repetitive and context-free, solution for a wide class of problems, instead of adopting several optimal, but reasoned and context-dependent, solutions for smaller classes of problems.

Unfortunately, choosing the "wrong" technology is very much like choosing the wrong shape or orientation for a building. To quote Alexander once again (page 29): "Instead of orienting the house carefully for sun and wind, the builder conceives its organization without concern for orientation, and light, heat, and ventilation are taken care of by fans, lamps, and other kinds of peripheral devices. Bedrooms are not separated from living rooms in plan, but are placed next to one another and the walls between them stuffed with acoustic insulation".
I think we can easily see a parallel with software here: a misfit technology is chosen early on. As a consequence, you find yourself adding more and more technology (fans, lamps, insulation) to satisfy the end-user needs. "Modern" web applications seem to have taken this path: faced with a difficult field, they're layering one technology on top of the other, desperately trying to overcome the problems of the previous layer.

Next time, in no particular order: agility, unstable requirements, early coding, TDD, "seeing" the field, internal and external representations, is UML any useful, order within chaos (dominant forces), constructive force field and systematic techniques, and whatever else will come to my mind :-).

Friday, April 25, 2008

Can AOP inform OOP (toward SOA, too? :-) [part 2]

Aspect-oriented programming is still largely code-centric. This is not surprising, as OOP went through the same process: early emphasis was on coding, and it took quite a few years before OOD was ready for prime time. The truth about OOA is that it never really got its share (guess use cases just killed it).

This is not to say that nobody is thinking about the so-called early aspects. A notable work is the Theme Approach (there is also a good book about Theme). Please stay away from the depressing idea that use cases are aspects; as I said a long time ago, it's just too lame.

My personal view on early aspects is quite simple: right now, I mostly look for cross-cutting business rules as candidate aspects. I guess it's quite obvious that the whole "friend gift" concept is a business rule cutting through User and Subscription, and therefore a candidate aspect. Although I'm not saying that all early aspects are cross-cutting business rules (or vice-versa), so far this simple guideline has served me well in a number of cases.

It is interesting to see how early aspects tend to be cross-cutting (that is, they hook into more than one class) but not pervasive. An example of pervasive concern is the ubiquitous logging. Early aspects tend to cut through a few selected classes, and tend to be non-reusable (while a logging aspect can be made highly reusable).
This seems at odds with the idea that "AOP is not for singleton", but I've already expressed my doubts about the validity of this suggestion a long time ago. It seems to me that AOP is still in its infancy when it comes to good principles.

Which brings me to obliviousness. Obliviousness is an interesting concept, but just as it happened with inheritance in the early OOP days, people tend to get carried away.
Remember when white-box inheritance was applied without understanding (for instance) the fragile base class problem?
People may view inheritance as a way to "patch" a base class and change its behaviour in unexpected ways. But truth is, a base class must be designed to be extended, and extension can take place only through well-defined extension hot-points. It is not a rare occurrence to refactor a base class to make extension safer.
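
As a rough sketch of what "designed to be extended" can look like in plain C# (hypothetical names, not from any specific codebase): the base class fixes the overall workflow and exposes well-defined hot-points, instead of inviting subclasses to patch whatever happens to be overridable.

abstract class ReportGenerator
{
    // The workflow is fixed here; subclasses cannot disrupt the sequence.
    public string Generate()
    {
        var data = LoadData();
        return FormatBody(data) + "\n-- end of report --";
    }

    // Optional hot-point, with a safe default.
    protected virtual string LoadData()
    {
        return "no data";
    }

    // Mandatory hot-point: the one place where extension is expected.
    protected abstract string FormatBody(string data);
}

class CsvReportGenerator : ReportGenerator
{
    protected override string FormatBody(string data)
    {
        return data.Replace(' ', ',');
    }
}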

Aspects are not really different. People may view aspects as a way to "patch" existing code and change its behaviour in unexpected ways. But truth is, when you move outside the safe realm of spectators (see my post above for more), your code needs to be designed for interception.
Consider, for instance, the initial idea of patching the User class through aspects, adding a data member, and adding a corresponding data into the database. Can your persistence logic be patched through an aspect? Well, it depends!
Existing AOP languages can't advise any given line: there is a fixed grammar for pointcuts, like method call, data member access, and so on. So if your persistence code was (trivially)

class User
{
    public void Save()
    {
        // open a transaction
        // do your SQL stuff
        // close the transaction
    }
}
there would be no way to participate in the transaction from an aspect. You would have to refactor your code, e.g. by moving the SQL part into a separate method, taking the transaction as a parameter. Can we still call this obliviousness? That's highly debatable! I may not know the details of the advice, but I damn sure know I'm being advised, as I refactored my code to be pointcut-friendly.
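
As a sketch of that refactoring in plain C# (the ADO.NET-style types and the connection-string parameter are my additions, just to make the example concrete): the SQL work moves into a separate method that receives the transaction, so a method-execution pointcut finally has something to latch onto, and an advice can take part in the same transaction.

using System.Data;
using System.Data.SqlClient;

class User
{
    public void Save(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            {
                // the SQL part now lives in a separate, advisable method,
                // and the transaction travels with it as a parameter
                SaveCore(transaction);
                transaction.Commit();
            }
        }
    }

    protected virtual void SaveCore(IDbTransaction transaction)
    {
        // do your SQL stuff, enlisted in the given transaction
    }
}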

Is this really different from exposing a CreateSubscription event? Yeah, well, it's more code to write. But in many data-oriented applications, a well-administered dose of events in the CRUD methods can take you a long way toward a more flexible architecture.

A closing remark on the SOA part. SOA is still much of a buzzword, and many good [design] ideas are still in the decontextualized stage, where people are expected to blindly follow some rule without understanding the impact of what they're doing.

In my view, a crucial step toward SOA is modularity. Modularity has to take place at all levels, even (this will sound like heresy to some) at the database level. Ideally (this is not a constraint to be forced, but a force to be considered) every service will own its own tables. No more huge SQL statements traversing every tidbit in the database.

Therefore, if you consider the "friend gift" as a separate service, it is only natural to avoid tangling the User class, the Subscription class, and the User table with information that just doesn't belong there. In a nutshell, separating a cross-cutting business rule into an aspect-like class will bring you to a more modular architecture, and modularity is one of the keys to true SOA.

Thursday, April 24, 2008

Can AOP inform OOP (toward SOA, too? :-) [part 1]

Note: I began tinkering with this idea several months ago, inspired by some design work I was doing on a real-world project. Initially, I thought I could pimp up this stuff into a full-fledged article. Months came by and I didn't, so (in the spirit of my old concept of Blogging as Destructuring) I thought I could just as well say something here instead.

Consider a web site offering some kind of service to users. Users have to register (yeah :-), and they can then buy a subscription to several services. A simple class diagram can model this easily:
[class diagram omitted: User and Subscription]
In most cases, the underlying database schema wouldn't be much different.

In a real case, there might be different kinds of services, each requiring a derived class, but we can ignore this issue right now. Also, there might be several kinds of subscriptions (time-based, pay-per-use, and so on), but again, let's ignore that issue right now, and just concentrate on time-based subscriptions.

A subscription can be renewed at any time. In the object model, this would translate into a Renew method in class Subscription. Renew could take a parameter, like the extension of the renewal. Most likely, it would add a new record to the Subscription table (to keep track of the whole subscription history for that user), and possibly create a new Subscription object. So far, so good.

Now the marketing guys come up with a nice idea: whenever you register, you can provide the email address of a friend who has already registered. If you do, every time you renew a subscription your friend will get some kind of gift, like a free extension or whatever. This may generate a few more leads, meaning a little more business.

From a purely OO mindset, this may lead us to perform a little maintenance on the existing model. We can add a relationship from User to User to model the "friend" relationship, or we could just add a field like friendEmail (possibly left empty). At the database level, adding a field is probably easier.
We can also modify the Renew method to check for the presence of a friend in the subscribing user, and if so, invoke some Gift logic. I won't draw a modified diagram for this scenario: I guess it's just too obvious.

Now, this approach obviously works. However, you may recognize that the whole "friend gift" concept is a cross-cutting concern: it cuts through User (requiring new data) and through Subscription (requiring new logic). More on detecting cross-cutting concerns during analysis and design (and on the difference between cross-cutting and pervasive concerns) next time.

In the AOP world, we could approach the problem differently. We could define a FriendGift aspect. The aspect may add a new data member to the User class (the friendEmail), and intercept the persistence logic to save / load that data from the database. The aspect may also intercept Renew and perform the Gift logic if required.
Actually the aspect doesn't have to modify User (class and table); indeed, it would be better not to, as the persistence logic might not be easy to intercept (more on this next time). The aspect could just use a different database table to store the friend email.

Using an UML-like notation, we could model this approach as:
[diagram omitted: FriendGift aspect advising User and Subscription]
Interestingly, the database is probably different from the previous scenario: Friend would map to its own table.

What if we are not using an AOP-enabled language? For instance, we might be using plain old C#. Can we still borrow some ideas from the above? I believe so. In the same sense as OO thinking can inform traditional structured programming, AOP thinking can inform traditional object oriented programming.

If we don't have pointcuts, join points and aspects, we may still have events (in C#/.NET, or callbacks in other languages, and so on). Sure, we have to forego obliviousness (which might not be so bad: more on this next time). But we can come up with a more modular and decoupled design by reasoning along the AOP lines:

[class diagram omitted: event-based design, with the friend-gift rule kept outside User and Subscription]
No rocket science here :-). Just a different form, same function. Different database, a more general Subscription class, unaffected by a business rule which may change at any time. Also the User class and table are left unaffected.
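
Here is a minimal sketch of that form in plain C# (hypothetical names, persistence details omitted): Subscription knows nothing about gifts; it just raises a Renewed event, and the friend-gift rule lives in its own class, mapped to its own table, subscribing to that event.

using System;

class Subscription
{
    public string UserEmail { get; private set; }
    public DateTime ExpiresOn { get; private set; }

    public Subscription(string userEmail, DateTime expiresOn)
    {
        UserEmail = userEmail;
        ExpiresOn = expiresOn;
    }

    // the only concession to the cross-cutting rule: a generic notification
    public event Action<Subscription> Renewed;

    public void Renew(TimeSpan extension)
    {
        ExpiresOn = ExpiresOn.Add(extension);
        // persist the renewal here...
        if (Renewed != null) Renewed(this);
    }
}

// The friend-gift business rule, kept in its own center,
// mapped to its own table (user -> friend email).
class FriendGift
{
    public void Attach(Subscription subscription)
    {
        subscription.Renewed += OnRenewed;
    }

    private void OnRenewed(Subscription subscription)
    {
        // look up the friend's email for subscription.UserEmail,
        // then grant the gift (e.g. a free extension)
    }
}

Wiring FriendGift to subscriptions at startup (or not wiring it at all) becomes a deployment decision: neither User nor Subscription has to change when the rule does.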

More on this, and a few comments on the SOA part, in just a few days (I hope :-).