Tuesday, July 27, 2010

On Kent Beck's Responsive Design

I try to keep an eye on software design literature. I subscribe to relevant publications from IEEE and ACM; I get a copy of conference proceedings; I read blogs and, just like everybody else, I follow links and suggestions. Being aware of the potential risk of filtering out information that is not aligned with the way I think, I'm also following a few blogs that are definitely not aligned with my belief system, just to make sure I don't miss too much. I'm bound to miss something, anyway, but I can live with that :-).

A few weeks ago I realized that I wasn't following Kent Beck's blog. I don't know why: I'm not a fan of XP, but I'm rather fond of Kent. He's an experienced designer with many good ideas. So, I took a tour of his posts and discovered that I had completely missed the concept of Responsive Design. That's weird, because I'm reading/scanning quite a few agile-oriented blogs, and I've never seen any mention of it. Oh, well; time to catch up.

I did my homework and spent some time reading all posts in the Responsive Design category and watching Kent's interesting QCon presentation. Looking at blog dates, it seems like activity peaked in April 2009, but rapidly declined during 2009 to almost nothing in 2010. So, apparently, Responsive Design hasn't taken the world by storm yet. Well, design wasn't so popular in the pre-agile era, and it's not going to be popular in the post-agile era either :-).

Anyway, the presentation sparked a stream of reflections that I'd like to share with you guys. I would recommend that you watch the presentation first (I know, it takes about 1 hour, but I think it's worth it). What follows is not intended as a critique. I'm mostly trying to relate Kent's perspective on "what software design is about" with mine. It's a rather long post, so I've split it into three parts. The first two are about concepts in the presentation itself. The third is about an interesting difference in perspective, and how it affects our thinking.

Reflections on the introductory part
The idea of taking notes as we design, to uncover our own strategies and tactics, sounded so familiar. I guess at some point some of us feel an urge to understand what we're really doing. Although I'm looking for something different from Kent, I share some of his worries: trivial or extremely complicated insights might be just around the corner :-)

The talk about the meaning of "responsive" around 0:13 resonates very well with the concepts of forcefield, design as "shaping a material", and backtalk. From my perspective, Kent is mostly asking: am I going to look at the forcefield, and design accordingly, or am I going to force a preconceived set of design decisions upon the problem? (see also my empty cup post).

Steady flow of features. We all love that, of course, and in a sense it's the most agile, XP-ish concept in the entire talk. The part about "the right time to design" seems rather connected with the Last Responsible Moment, so I can't really spare you a link to my own little idea of Invariant Decisions.
Again, I have a feeling that there is never enough context when talking about timing. I've certainly designed systems with a large separation (in time) between design and implementation. In many cases, it worked out quite well (of course, we always considered design as something fluid, not cast in stone). I guess it has a lot to do with how much domain knowledge you can leverage. Not every project is about doing something nobody has done before. Many are "just" about doing something in a much better way. Context, context, context...

Starting with values: it's something I have to improve in my Physics of Software work. Although I've said several times that all new methods should start with a declaration of values and beliefs, so far I haven't been thorough on this front. For instance, I value honest, unbiased, free, creative communication and thinking about software design issues, where the best ideas, and not the best talkers, get to win (just because I'm a good talker doesn't mean I want to win that way :-)). That's why I'm trying to come up with a less ambiguous, less wishy-washy way to think and talk about design.

"Most of the decisions I make while designing have nothing to do with the problem domain [… but ...] are shaped by the fact that I'm instructing a computer. Weird. This is not my experience. I surely reuse solutions across domains, but the problem domain is heavily shaping the forcefield. When you go down to small-scale decisions, yeah, it gets more domain-independent, but high-level design is heavily problem-dependent. For instance, while you can usually (but not always) implement a chosen data structure in a rather domain-independent way, choosing the right data structure is usually a domain-dependent choice.
Don't get me wrong: I know I can safely ignore some portions of the problem domain when I design - I do that all the time. My view is that I can ignore the (possibly very complex) functional issues that have little impact on the forcefield, and therefore on the form. This is a complex subject that would require an in-depth study.

Principles. I don't quite like the term "principle" because it is so wide in meaning that you can use it for just about everything. So, while "don't repeat yourself" is a commonly accepted design principle, Kent's examples of principles from the insurance world or from Dynamo look much more like pre-made [meta] decisions to me.
"You should never to lose a write". Fine. It's a decision, you can basically consider that a requirement.
"We're there to aid human decision making". Fine. It's a meta-decision, meaning, it's not just a design decision: it will influence everything, from your value proposition down to functional requirements and so on. It's not really a design principle, not even a project-specific design principle: it's more akin to a metaphor or to a goal.
Aside from that distinction, I agree that sometimes constraints, whatever form they take, are actually helpful during design, because they prune out a lot of the decision space (again, see my post above on the empty cup). Of course, the wrong constraints can prevent you from finding the sweet spot, so timing is an issue here as well - some constraints should be considered fluid early on, because you may learn that they're not the right constraints for your project.

The short part about patterns and principles reminded me of my early work on SysOOD, some of which got published in IEEE Software exactly as Principles Vs. Patterns. So, while Kent wants to understand principles better, I've always been trying to eliminate principles altogether (universal principles, that is). Still, I'd like to see the "non-universal" or "project-specific" principles recast under some other name and further explored, because I agree that having a shared set of values / goals / constraints can help tremendously in any significant project. Oh, I still have the Byte Smalltalk balloon issue somewhere : ).

Reflections on the 4 strategies
I see the "meat" of the talk as being about [meta] strategies to move in the decision space. Your software is at some point A in the multi-dimensional decision space; for instance, you decided early on that you would use filenames and not streams. Now you want to move your software to another point B in the decision space (where, for instance, streams are used instead of filenames). How do you go from A to B?
That's an interesting issue, where experienced designers/programmers routinely adopt different approaches compared to novices (see the empty cup post again), so it's both interesting and promising to see Kent articulate his reasoning. In a sense, it's completely orthogonal to both patterns/antipatterns (which may point you to B / away from A) and to my work (which is more concerned with understanding and articulating why B is a more desirable point than A, or to find B in the first place).
Actually, strictly speaking I'm not even sure this stuff is really about "design". In my view, design is more about finding B. The execution/implementation of design, however, is about moving to B. Therefore, Kent's strategies look more like "design execution strategies" than "design strategies".

That said, there is a subtle, yet striking difference between Kent's perspective and mine. Kent is talking about / looking at software design from a "first-person perspective", while I'm looking through the "I'm shaping a material" metaphor. He says things like: "I'm here; how do I get there?", and he's literally mimicking small steps. I'm saying things like: "my software is here; how do I move it there?". It may seem like a small, irrelevant distinction, but I think it's shaping our thoughts rather deeply. It surely shapes the names we use: I would have never used "stepping stone", because I'm not thinking of myself going anywhere : ). More on this in the third part below.

Although this is probably the most "practical" part of the talk, the concepts are a little fuzzy. Leap can be seen as a strategy, actually the simplest strategy of all: you just move there. Safe Steps is more like a meta-strategy to move inside the decision space (indeed, around 42:00 Kent talks about Safe/Small Steps as a "principle"). Once you choose Safe Steps, you have many options left, Parallel, Stepping Stones and Simplification being three of them. Still, the granularity of those strategies is so coarse that it's rather surprising Kent found only three.

For instance, I probably wouldn't have used Parallel to deal with the filenames/streams issue. I can't really say without looking at the code, but assuming I wanted to take small steps, I could have created a new class which IS-A stream and HAS-A filename, moved the existing code to use that class (a very simple and safe change, because both the stream and the filename are there), gradually moved all the filename-dependent code inside that class, removed the filename from the interface (as it's no longer used), and moved the filename-dependent code into a factory (so that I could make a choice on the stream type). At that point I could have killed the temporary class, using a plain stream again. This strategy (I could call it wrap/extract/unwrap) is quite simple and effective when dealing with legacy code, and doesn't really fit any of the strategies Kent is proposing.
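Just to make the idea a little more concrete, here is a minimal C++ sketch of the temporary wrapper and the final factory. All the names (NamedFileStream, openInput) are made up for illustration; they are not taken from Kent's example or from any real codebase.

```cpp
#include <fstream>
#include <istream>
#include <memory>
#include <string>

// Temporary wrapper used during the migration: it IS-A stream, so code can
// be moved to the stream interface right away, and it HAS-A filename, so the
// legacy code that still needs the name keeps working in the meantime.
class NamedFileStream : public std::ifstream
{
public:
    explicit NamedFileStream(const std::string& name)
        : std::ifstream(name.c_str()), name_(name) {}

    // Callers are migrated away from this, one by one; when nobody calls it
    // anymore, the member and the whole class can be removed (the "unwrap").
    const std::string& filename() const { return name_; }

private:
    std::string name_;
};

// End state: the only filename-dependent code left is the factory, which is
// also the natural place to choose the concrete stream type; the rest of the
// code sees plain streams again.
std::unique_ptr<std::istream> openInput(const std::string& name)
{
    return std::unique_ptr<std::istream>(new std::ifstream(name.c_str()));
}
```

The wrapper is explicitly a throwaway: it exists only to keep every intermediate step small and safe, and it disappears once the migration is complete.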

Stepping Stone. While Leap and Parallel are more like strategies to implement a decision you have already taken, it seems to me that Stepping Stone is different. This might be my biased understanding of it, but it's also the only interpretation I can give that makes Stepping Stone different from good old top-down decomposition (and Kent makes clear, at about 50:30, that it is different).
I would say that Stepping Stone is about making progress where you can and, while doing so, shedding some light on the surroundings as well. This is not very clear in the beginning, but as Kent moves forward, it's more and more obvious that he's thinking about novel situations, where you don't know exactly what you want to build.
Indeed, sometimes the forcefield is just too blurry. Still, we often can see a portion of it rather clearly, and we may have a sensible idea about the form we want to shape locally. Stepping Stone is a good strategy here: build (or design) what you understand, get feedback (or backtalk), get progress, etc. I usually keep exploring the dark areas while I'm building stepping stones. I'm also rather careful not to overcommit on stepping stones, as they may turn out not to fit perfectly with what's coming next. That's fine, inasmuch as we understand that design is a process, not a phase with a beginning and an end.
I can also build "stepping stones" when I know where I'm going, but as I said, at that point it's hard to tell the difference between building stepping stones and executing top-down design. Oh, by the way, for the agile guys who hated me so much for saying that TDD is a limited design strategy: move to around 50:20. Play. Rewind. Play again. Repeat as necessary :-). But back to Stepping Stones: I think the key phrase here is around 55:40, when Kent says that by standing on a stepping stone he can see where to go next, which I read as: a previously blurry portion of the forcefield is now visible and clear, and I can move further.

Simplification. This is more akin to a design strategy than to an execution strategy. I could see Simplification as a way to create small steps (perhaps even stepping stones). Indeed, the difference between Stepping Stones and Simplification is not really obvious (around 1:02:00 a participant asks about the similarity, and it doesn't seem like Kent can explain the difference very well either). I guess Simplification is about solving a single problem by solving a simpler version of the same problem first, whereas Stepping Stones are about solving a complex problem by solving another problem first (one that, however, brings us closer to solving the initial problem). In this sense, Simplification can be used where you can see where you want to go (but it's too big / risky / ambitious), and also where you can't see exactly where you want to go, but you can clearly see a smaller version of it. Or, from my "I'm shaping a material" perspective, you can't really see the form of the real thing, but you can see the form of a smaller version of it. As I said, this stuff is interesting but rather fuzzy, and honestly, I'm not really sold on those 4 strategies.

Overall, I'd like to see Responsive Design further developed. It would probably benefit from a better distinction between design, design execution, principles, values, goals, and so on, but it's interesting stuff in the almost flat landscape of contemporary software design literature.

Perspective Matters
As I said, Kent is looking at the decision space from a first-person perspective, while I see myself moving other things around. There is an interesting implication: when you think about moving yourself, you mostly consider distance. You are who you are. You want to go elsewhere. Distance, and therefore Leaps, Small Steps, and Stepping Stones, is all that matters.

When you think about moving something else, you have to consider two more factors. That, in my opinion, allows better reasoning.

Sure, distance is an important factor. But, as I explained in my post about Inertia, another important factor is Mass. You cannot ignore mass, but it's unnatural to consider a varying mass from a first-person perspective.

A simple example: I use Visual Studio (yeah yeah I know, some of you don't like it : ). Sometimes, as I'm creating a new project, I click in the wrong place (yeah yeah I'm kinda dumb :-) and I get the wrong project type. I want a C# project and I get a VB.NET project (yikes!! :-)). That's quite a distance in the decision space. Yet there is a very simple, safe step (Leap): delete the project and start from scratch. The distance is not a problem, because mass is basically null. I can use Leap not because the distance is small, but because the mass is small. I wouldn't do that with an existing, large project (I may consider going through an IL -> C# transformation though :-).

There is more: when I discussed Inertia, I talked about software being in a state of rest or motion in the decision space. This is the third factor, and again, it's hard to think about it from a first-person perspective. You want to go somewhere, but you're already going elsewhere. It doesn't feel natural. Deflecting a moving object, however, is quite ordinary (most real-life ball games, for instance, are about deflecting a moving object).

Consider a large project. Maybe a team is busy moving a subsystem S1 to another place in the decision space. Now you want to move subsystem S2 from A to B. It's not enough to consider distance and mass. You have to consider whether or not S2 is already moving, perhaps by attraction from S1. So S2 may be small (in mass) and the distance A-B may be small too, but if S2 is already moving in a different direction, your job is harder, riskier, and in need of a careful execution strategy.

A real-world example: you have a legacy C++/MFC application. Currently, the UI talks with the application through a small, homebrew messaging component (MC), of which you are in charge. You just found a better messaging component (BMC) on the internet - say, something more efficient, portable, and actively developed by a community. You want to move your subsystem to another place in the decision space, from MC to BMC. Part of BMC being "portable" is based on the reasonable assumption that messages are strictly C++ objects.

Meanwhile, another team is moving the UI toward .NET (that's a big mass - big distance issue, but it's their problem, not yours :-). Now, there is an attraction between your messaging component and the UI. A .NET UI may have some issues dealing with native C++ messages. So, like it or not, there is another force at play, trying to move your component to a different place (say, NMC). Now, the mass of the messaging component might be small. The distance between MC and BMC may be small too - perhaps something you can deal with by using an adapter. But BMC is probably in a very different direction than NMC, the place where the UI team is pushing your component. So it's not just a matter of distance. It's not even enough to consider mass. You need to consider distance + full-blown Inertia. Or, to cut it short, work.
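To make the MC -> BMC step a bit more concrete, here is a tiny, hypothetical C++ sketch of that adapter. The names are mine, invented purely for illustration: MessageBus stands for the homebrew MC interface, bmc::Broker for the imaginary BMC API.

```cpp
#include <iostream>
#include <string>

// Hypothetical interface of the homebrew messaging component (MC),
// which the rest of the application already depends on.
class MessageBus
{
public:
    virtual ~MessageBus() {}
    virtual void publish(const std::string& topic, const std::string& payload) = 0;
};

// Stand-in for the imaginary "better messaging component" (BMC) API.
namespace bmc
{
    class Broker
    {
    public:
        void send(const std::string& channel, const std::string& data)
        {
            std::cout << "[bmc] " << channel << ": " << data << "\n"; // placeholder
        }
    };
}

// Adapter covering the (small) MC -> BMC distance: existing callers keep
// talking to MessageBus, while BMC does the actual work. Note that it does
// nothing about the pull toward NMC coming from the .NET UI migration.
class BmcMessageBus : public MessageBus
{
public:
    explicit BmcMessageBus(bmc::Broker& broker) : broker_(broker) {}

    virtual void publish(const std::string& topic, const std::string& payload)
    {
        broker_.send(topic, payload);
    }

private:
    bmc::Broker& broker_;
};
```

The adapter keeps the distance small, but it says nothing about the direction your component is already being pushed in - which is exactly the part that distance-only reasoning misses.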

Of course, depending on your project, some factor may dominate over the others, and a simpler model might be enough, but usually, distance alone won't cut it.

There is more...
At the end of the presentation, Kent mentions power law distributions. This is something that has gotten me interested too in the past few months (I mentioned it on Facebook), but it's too early to say anything about it, and I want to check a few ideas with the author of an interesting paper first, anyway.

See you guys soon with a post on Distance (not in the decision space, but in the artifact space!).