Sunday, December 16, 2007

On the concept of Form (2)

In the Notes, Alexander defines form in a rather unique and interesting way: "The form is the part of the world over which we have control, and which we decide to shape while leaving the rest of the world as it is. The context is that part of the world which puts demands on this form; anything in the world that makes demands on the form is context. [...] We want to put the context and the form into effortless contact or frictionless coexistence" (pg. 18-19).

Alexander's definition goes well beyond the classical dichotomy between form and function, and also beyond the familiar dichotomy between functional and non-functional requirements. In fact, function intended as purpose is part of the context: functional requirements obviously put demands on the form. Non-functional requirements are part of the context as well, and they should be considered first-class citizens too.

It is important to understand the human role in the design process. It is first an act of selection, and then an act of shaping.
We decide to shape a part of the world and leave the rest as it is. These are powerful words. I've long contended that in most cases we have a much higher degree of control over requirements (context) than we're inclined to believe.
I've also contended for a very long time (mostly inspired by Tom Peters, I guess) that efficiency demands that requirements (that is, context) be developed jointly by developers and marketing.

Move away from this, and you get an inefficient context: a set of requirements (mostly functional) which may put strenuous demands on the form, without a corresponding premium in the resulting value. If you wanna win the game, you have to set the right rules.
In software development, more than in any other discipline, we can choose the right context before we step up to build our product. Of course, it is much easier to get a "fixed" set of requirements, complain about marketing's lack of understanding, and start coding. But that's just not efficient. Also, keeping the context fixed while we shape the form is inefficient, as we wouldn't be profiting from newly discovered opportunities.

Building form, therefore, is an incremental process of discovery, as we don't know the exact context, and even if we do, we don't know if that's the context we wanna play with (or against, if you're the competitive kind ;-).
Indeed, Alexander reminds us that "In a real design project [...] we are searching for some kind of harmony between two intangibles: a form which we have not yet designed, and a context which we cannot properly describe" (pg. 26). It is not hard to see a deep connection with the notion of "software development as knowledge acquisition" popularized by Armour.

It is also important to understand a simple, yet deep consequence of Alexander's definition of form. As we create a product, even a software product, we are shaping its form. Even if we ignore design altogether, or even if we just concentrate on making it work, we are shaping some kind of form.
In most piecemeal development cases, that form will be determined by a small subset of forces: functional requirements, and sometimes the viscosity of previously developed software as well. The resulting structure won't be in frictionless contact with the true context, and the resulting friction will make our product brittle.

There is still a lot to be said about the subtle relationship between the above and agility, about the concept of shaping software, the [constructive] force field, and about the ubiquitous real options as well. Some more software-oriented examples are probably overdue too. And I should really say something about inventing requirements! So many things, so little time :-). Stay tuned!

Wednesday, November 28, 2007

Architecture as Tradition in the Unselfconscious Process

In my previous post, On the concept of Form (1), I mentioned how Architecture provides viscosity, and therefore plays the role Alexander ascribed to tradition.

I've also proposed that the unselfconscious design process, which is very similar to the emergent design concept held so dearly by many agilists, requires some degree of tradition, and therefore, an underlying architecture. I've also gone so far as to propose the idea that many agile projects begin with a "traditional" architecture in mind:
Now, although some people in the XP/agile camp might disagree, refactoring is a viable solution only when the desired rate of change is slow, and only when the gap to fill is small. In other words, only when the overall architecture (or plain structure) is not challenged: maybe it's dictated by the J2EE way of doing things, or by the Company One True Way of doing things, or by the Model View Controller police, and so on. Truth is, without an overall architecture resisting change, a neverending sequence of small-scale refactoring may even have a negative large-scale impact.

In the past few days, I've been reading "The Economics of Architecture-First," by Grady Booch, IEEE Software, Sept/Oct, 2007. Here is an interesting excerpt:
Now, strict agilists might counter that an architecture-first approach is undesirable because we should allow a system's architecture to emerge over time. On the one hand, they're absolutely correct: a system's architecture is simply the name we give to the artifact that results from the many local design decisions made over a software-intensive system's lifetime. On the other hand, they're wrong: agile projects often start out assuming a given platform and environmental context together with a set of proven design patterns for that domain, all of which represent architectural decisions in a very real sense.

I could almost call this synchronicity :-).

For more on emergent architecture (or structure), see my now-old post Infrastructure and Superstructure.

Wednesday, November 14, 2007

Process as the company's homeostatic system

I was learning about EPOC (Excess Post-Exercise Oxygen Consumption) a few weeks ago, for no particular reason (as I often do :-). If you've ever exercised at medium-to-high intensity, you've probably experienced EPOC: after you stop training, your oxygen intake stays higher than your usual resting intake for quite a while, declining over several hours.
The reason for EPOC "is the general disturbance to homeostasis brought on by exercise" (Brooks and Fahey, "Exercise physiology. Human bioenergetics and its applications", Macmillan Publishing Company, 1984).

Over the next few days, inspired by different readings, I began thinking of process as the company's homeostatic system.
I've often claimed that companies have their own immune system, actively killing ideas, concepts and practices misaligned with the true nature of the company ("true" as opposed to, for instance, a theoretical statement of the company's values).
However, the homeostatic system has a different purpose, as it is basically designed to (wikipedia quote) "allow an organism to function effectively in a broad range of environmental conditions". Once you replace "organism" with "team", this becomes the best definition of a good development process that I've ever (or never :-) been able to formulate.

It is interesting to understand that the homeostatic system is very sophisticated. It works smoothly, but can leverage several different strategies to adapt to different external conditions. In fact, when good old Gerald Weinberg classified companies' cultural patterns on a 1-to-5 scale, he called level-2 companies "Routine" (as they always use the same approach, without much care for context), level-3 "Steering" (dynamically picking a routine from a larger repertoire) and level-4 "Anticipating" (you figure :-). Of course, any company using the same process every time, no matter the process (Waterfall, RUP-inspired, XP, whatever), is at level 2.

For instance, if we find ourselves working with stakeholders with highly conflicting requirements and we don't apply a more "formal" process based on the initial assessment of viewpoints and concerns (see, for instance, my posts Value and Priorities and Requirements and Viewpoints for a few references) because "we usually gather user stories, implement, and get feedback quickly", then we're stuck in a Routine culture. Of course, if we always apply the viewpoints, we're stuck in a Routine culture too.

Well, there would be much more to say, especially about contingent methodologies and about congruency (another of Weinberg's favorite concepts) between a company's process and its strategy, but time is up again, so I'll leave you with a question. We all know that some physical activity is good for our health. Although high-intensity exercise brings disturbance to homeostasis, in the end we get a better homeostatic system. So the question is, if process is the company's homeostatic system, what is the equivalent of good training? How do we bring a dose of periodic, healthy disturbance to make our process stronger?

Monday, October 29, 2007

On the concept of Form (1)

I've postponed writing on Form (vs. Function) for a while now, mostly because I was struggling to impose a rational structure on my thoughts. Form is a complex subject, barely discussed in the software design literature, with a few exceptions I'll quote when relevant, so I'm sort of charting new waters here.

The point is, doing so would take too much [free] time, delaying my writings for several more weeks or even months. This is exactly the opposite of what I want to do with this blog (see my post blogging as destructuring), so I'll commit the ultimate sin :-) and talk about form in a rather unstructured way. I'll start a random sequence of postings on form, writing down what I think is relevant, but without any particular order. I won't even define the concept of form in this first instalment.

Throughout these posts, I'll borrow extensively from a very interesting (and warmly recommended to any software designer) book by Christopher Alexander. Alexander is better known for his work on patterns, which ultimately inspired software designers and created the pattern movement. However, his most relevant work on form is Notes on the Synthesis of Form. When quoting, I'll try to remember page numbers; when I do, I'll refer to the paperback edition, which is easier to find.

Alexander defines two processes through which form can be obtained: the unselfconscious and the selfconscious process. I'll get back to these two concepts, but in a few words, the unselfconscious process is the way some ancient cultures proceeded, without an explicit set of principles, but relying instead on rigid tradition mediated by immediate, small scale adaptation upon failure. It's more complex than that, but let's keep it simple right now.

Tradition provides viscosity. Without tradition, and without explicit design principles, the byproducts of the unselfconscious process would quickly degenerate. Merely repeating a form, however, won't keep up with a changing environment: change happens, maybe shortly after the form has been built. Here is where we need immediate action, correcting any failure using the materials at hand. Of course, small scale changes are sustainable only when the rate of change (or rate of failure) is slow.

Drawing parallels to software is easy, although subjective. Think of quick, continuous, small scale adaptation. The immediate software counterpart is, very naturally, refactoring. As soon as a bad smell emerges, you fix it. Refactoring is usually small scale, using the "materials at hand" (which I could roughly translate into "changing only a small fraction of code"). Refactoring, by definition, does not change the function of code. Therefore, it can only change its form.

Now, although some people in the XP/agile camp might disagree, refactoring is a viable solution only when the desired rate of change is slow, and only when the gap to fill is small. In other words, only when the overall architecture (or plain structure) is not challenged: maybe it's dictated by the J2EE way of doing things, or by the Company One True Way of doing things, or by the Model View Controller police, and so on. Truth is, without an overall architecture resisting change, a neverending sequence of small-scale refactoring may even have a negative large-scale impact.

I've recently said that we can't reasonably turn an old-style, client-server application into a modern web application by applying a sequence of small-scale changes. It would be, if not unfeasible, hardly economic, and the final architecture might be far from optimal. The gap is too big. We're expecting to complete a big change in a comparatively short time, hence the rate of change is too big. The viscosity of the previous solution will fight that change and prevent it from happening. We need to apply change at a higher granularity level, as the dynamics in the small are not the dynamics in the large.

Curiously enough (or maybe not :-), I'll be talking about refactoring for the next two days. As usual, I'll try to strike a balance, and often get back to good design principles. After all, as we'll see, when the rate of change grows, and/or when the solution space grows, the unselfconscious process must be replaced by the selfconscious process.

Saturday, August 04, 2007

Get the ball rolling, part 4 (of 4, told ya :-)

The past two weeks have been hectic to say the least, and I had no time to conclude this quadrilogy. Fear not :-), however, because here I am. Anyway, I hope you used the extra time to read comments and answers to the previous posts, especially Zibibbo's comments; his contribution has been extremely valuable.
Indeed, I've left quite a few interesting topics hanging. I'll deal with the small stuff first.

Code [and quality]:
I already mentioned that I'm not always so line-conscious when I write code. Among other things, in real life I would have used assertions more liberally.
For instance, there is an implicit contract between Frame and Game. Frame is assuming that Game won't call its Throw method if the frame IsComplete. Of course, Game is currently respecting the contract. Still, it would be good to make the contract explicit in code. An assertion would do just fine.
I remember someone saying that with TDD, you don't need assertions anymore. I find this concept naive. Assertions like the above would help to locate defects (not just detect them) during development, or to highlight maintenance changes that have violated some internal assumption of existing code. Test cases aren't always the most efficient way to document the system. We have several tools, we should use them wisely, not religiously.
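For instance, here is a minimal sketch of what I mean (Debug.Assert comes from System.Diagnostics; the rest of Throw stays exactly as in the code of part 3):

  public virtual void Throw( Ball b )
    {
    // contract made explicit: Game must not Throw on a completed frame
    Debug.Assert( !IsComplete(), "Throw called on a completed frame" );
    // ... the rest of Throw, unchanged ...
    }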

When is visual modeling not helpful?
In my experience, there are times when a model just won't cut it.
The most frequent case is when you're dealing with untried (or even unstable) technology. In this context, the wise thing to do is write code first. Code that is intended to familiarize yourself with the technology, even to break it, so that you know what you can safely do, and what you shouldn't do at all.
I consider this learning phase part of the design itself, because the ultimate result (unless you're just hacking your way through it) is just abstract knowledge, not executable knowledge: the code is probably too crappy to be kept, at least by my standards.
Still, the knowledge you acquired will shape the ultimate design. Indeed, once you possess enough knowledge, you can start a more intentional design process. Here modeling may have a role, or not, depending on the scale of what you're doing.
There are certainly many other cases where starting with a UML model is not the right thing to do. For instance, if you're under severe resource constraints (memory, timing, etc), you better do your homework and understand what you need at the code level before you move up to abstract modeling. If you are unsure about how your system will scale under severe loading, you can still use some modeling techniques (see, for instance, my More on Quantitative Design Methods post), but you need some realistic code to start with, and I'm not talking about UML models anyway.
So, again, the process you choose must be a good fit for your problem, your knowledge, your people. Restraining yourself to a small set of code-centric practices doesn't look that smart.

What about a Bowling Framework?
This is not really something that can be discussed briefly, so I'll have to postpone some more elaborate thoughts for another post. Guess I'll merge them with all the "Form Vs. Function" stuff I've been hinting at all this time.
Leaving the Ball and Lane aside for a while, a small but significant improvement would be to use Factory Method on Game, basically making Game an abstract class. That would allow (e.g.) regular bowling to create 9 Frames and 1 LastFrame, 3-6-9 bowling to change the 3rd, 6th and 9th frame to instances of a (new) StrikeFrame class, and so on.
Note that this is a simple refactoring of the existing code/design (in this sense, the existing design has been consciously underengineered, to the point where making it better is simple). Indeed, there is an important, trivial and at the same time deep reason why this refactoring is easy. I'll cover that while talking about options.
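Here is a minimal sketch of that refactoring (TenPinGame is a hypothetical name; Throw and Score stay just as in the code of part 3):

public abstract class Game
  {
  protected Game()
    {
    frames = CreateFrames();
    // MaxBalls would also move from constant to virtual function,
    // so the base class could still size the thrownBalls array (see below)
    }

  protected abstract Frame[] CreateFrames();   // the Factory Method

  // ... Throw and Score unchanged ...

  protected Frame[] frames;
  }

public class TenPinGame : Game
  {
  protected override Frame[] CreateFrames()
    {
    Frame[] f = new Frame[ 10 ];
    for( int i = 0; i < f.Length - 1; ++i )
      f[ i ] = new Frame();
    f[ f.Length - 1 ] = new LastFrame();
    return f;
    }
  }

A 3-6-9 game would just override CreateFrames, putting a StrikeFrame in the 3rd, 6th and 9th slot.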
If you try to implement 3-6-9 bowling, you might also find it useful to change
if( frames[ currentFrame ].IsComplete() )
   ++currentFrame;
into
while( frames[ currentFrame ].IsComplete() )
   ++currentFrame;

but this is not strictly necessary, depending on your specific implementation (with auto-strike frames, more than one consecutive frame may already be complete when a ball is thrown, so the current frame may have to advance by more than one).
There are also a few more changes that would make the framework better: for instance, as I mentioned in my previous post, moving MaxBalls from a constant to a virtual function would allow the base class to instantiate the thrownBalls array even if the creational responsibility for frames were moved to the derived class (using Factory Method).
Finally, the good Zibibbo also posted a solution based on a state machine concept. Now, it's pretty obvious that bowling is indeed based on a state machine, but curiously enough, I wouldn't try to base the framework on a generalized state machine. I'll have to save this for another post (again, I suspect, Form Vs. Function), but in short, in this specific case (and in quite a few more) I'd try to deal with variations by providing a structure where variations can be plugged in, instead of playing the functional abstraction card. More on this another time.

Procedural complexity Vs. Structural complexity
Several people have been (and still are) taught to program procedurally. In some disciplines, like scientific and engineering computing, this is often still considered the way to go. You get your matrix code right :-), and above that, it's naked algorithms all around.
When you program this way (been there, done that) you develop the ability to pore over long functions, calling other functions, calling other functions, and overall "see" what is going on. You learn to understand the dynamics of mutually recursive functions. You learn to keep track of side effects on global variables. You learn to pay attention to side effects on the parameters you're passing around. And so on.
In short, you learn to deal with procedural complexity. Procedural complexity is best dealt with at the code level, as you need to master several tiny details that can only be faithfully represented in code.
People who have truly absorbed what objects are about tend to move some of this complexity onto the structural side. They use shape to simplify procedures.
While the only shape you can give to procedural code is basically a tree (with the exception of [mutually] recursive functions, and ignoring function pointers), OO software is a different material, which can be shaped into a more complex collaboration graph.
This graph is best dealt with visually, because you're not interested in the tiny details of the small methods, but in getting the collaboration right, the shape right (where "right" is, again, highly contextual), even the holes right (that is, what is not in the diagram can be important as well).
Dealing with structural complexity requires a different set of skills and tools. Note that people can be good at dealing with procedural and with structural complexity; it's just a matter of learning.
As usual, good design is about balance between these two forms of complexity. More on this another time :-).

Balls, Real Options, and some Big Stuff.
Real Options are still cool in project management, and in the past few years software development has taken notice. Unfortunately, most papers on software and real options tend to fall in one of these two categories:
- the mathematically heavy with little practical advice.
- the agile advocacy with the rather narrow view that options are just about waiting.
Now, in a paper I've referenced elsewhere, Avi Kamara put it right in a few words. The problem with the old-fashioned economic theory is the assumption that the investment is an all or nothing, now or never, project. Real options challenge that view by becoming aware of the costs and benefits of doing or not doing things, building flexibility, and taking advantage of opportunities over time (italics are quotes from Kamara).
Now, this seems to be like a recipe for agile development. And indeed it is, once we break the (wrong) equation agile = code centric, or agile = YAGNI. Let's look a little deeper.
Options don't come out of nowhere. They come either from the outside, or from the inside. Options coming from the outside are new market opportunities, emerging users' requests or feedback, new technologies, and so on. Of course, sticking to a plan (or a Big Upfront Design) and ignoring those options wouldn't be smart (it's not really a matter of being agile, but of being economically competent or not).
Options come also from the inside. If you build your software upon platform-specific technologies, you won't have an option to expand into a different platform. If you don't build an option for growth inside your software, you just won't have the option to grow. Indeed, it is widely acknowledged in the (good) literature that options have a cost (the option premium). You pay that cost because it provides you with an option. You don't want to make the full investment now (the "all" in "all or nothing") but if you just do nothing, you won't get the option either.
The key, of course, is that the option premium should be small. Also, the exercise cost should be reasonable as well (exercise price, or strike price, is what you pay to actually exercise your option). (Note: for those interested in these ideas, the best book I've found so far on real options is "Real Options Analysis" by Johnathan Mun, Wiley Finance series.)

Now, we can begin to see the role of careful design in building the right options inside software, and why YAGNI is an oversimplified strategy.
In a previous post I mentioned how a class Lane could be a useful abstraction for a more realistic bowling scorer. The lane would have a sensor on the foul line and in the gutters. This could be useful for some variation of bowling, like Low Ball (look under "Special Games"). So, should I invest in a Lane class I don't really need right now, because it might be useful in the future? Wait, there is more :-).
If you consider 5-pin Bowling, you'll see that different pins are awarded different scores. Yeap. You can no longer assume "number of pins = score". If you think about it, there is just no natural place in the XP episode code to put this kind of knowledge. They went for the "nothing" side of the investment. Bad luck? Well guys, that's just YAGNI in the real world. Zero premium, but potentially high exercise cost.
Now, consider this: a well-placed, under-engineered class can be seen as an option with a very low premium and very low exercise cost. Consider my Ball class. As it stands now, it does precious little (therefore, the premium cost was small). However, the Ball class can easily be turned into a full-fledged calculator of the Ball score. It could easily talk to a Lane class if that's useful - the power of modularity allows the rest of my design to blissfully ignore the way Ball gets to know the score. I might even have a hierarchy of Ball classes for different games (well, most likely, I would).
Of course, there would be some small changes here and there. I would have to break the Game interface, which takes a score and builds a Ball. I used that trick to keep the interface compatible with the XP episode code and harvest their test cases, so I don't really mind :-). I would also have to turn the public hitPins member into something else, but here is the power of names: they make it easier to replace a concept, especially in a refactoring-aware IDE.
Bottom line: by introducing a seemingly useless Ball class, I paid a tiny premium cost to secure myself a very low exercise cost, in case I needed to support different kinds of bowling. I didn't go for the "all" investment (a full-blown bowling framework), but I didn't go for the "nothing" either. That's applied real option theory; no babbling about being agile will get you there. The right amount of under-engineering, just like the right amount of over-engineering, comes only from reasoning, not from blindly applying oversimplified concepts like YAGNI.
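To make that concrete, here is a minimal sketch of how the option could be exercised for 5-pin bowling (FivePinBall is a hypothetical class; as said above, hitPins would be renamed to something like score, and Frame would need its own small touches, e.g. the total pin count):

internal class FivePinBall : Ball
  {
  // in 5-pin bowling each pin has its own value
  private static readonly int[] pinValue = { 2, 3, 5, 3, 2 };

  public FivePinBall( bool[] knockedDown, int index )
    : base( ValueOf( knockedDown ), index )
    {
    }

  // the score is no longer the count of knocked-down pins
  private static int ValueOf( bool[] knockedDown )
    {
    int total = 0;
    for( int i = 0; i < pinValue.Length; ++i )
      if( knockedDown[ i ] )
        total += pinValue[ i ];
    return total;
    }
  }

The rest of the design keeps asking the Ball for its score, blissfully unaware of how it was computed.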
One more thing about options: in the Bowling Framework section, I mentioned that refactoring my design by applying Factory Method to Game was a trivial refactoring. The reason it was trivial, of course, is that Game is a named entity. You find it, refactor it, and it's done. Unnamed entities (like literals) are more troublesome. Here is the scoop: primitive types are almost like unnamed entities. If you keep your score in an integer, and you have integers everywhere (indexes, counters, whatever), you can't easily refactor your code, because not every integer is a score. That was the idea behind Information Hiding. Some people got it, some didn't. Choose your teacher wisely :-).
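A tiny illustration of the point (hypothetical, not part of the bowling code):

// a named type can be found and refactored in one step;
// a bare int cannot, because not every integer is a score
public struct Score
  {
  public Score( int value ) { this.value = value; }
  public int Value { get { return value; } }
  private int value;
  }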

A few conclusions
As I said, there are definitely times when modeling makes little sense. There are also times when it's really useful. Along the same lines, there are times when requirements are relatively unknown, or when nobody can define what "success" really means before they see it. There are also times when requirements are largely well-known and (dare I say it!) largely stable.
If you were asked to implement an automatic scoring system for 10 pin bowling and 3-6-9 bowling and 9 pin bowling, would you really start coding the way they did in the XP episode, and then slowly change it to something better?
Would you go YAGNI, even though you know perfectly well that you ARE gonna need extensibility? Or would you rather spend a few more minutes on the drawing board, and come up with something like I did?
Do you really believe you can release a working 3-6-9 and a working 9-pin faster using the XP episode code than mine? Is it smart to behave as if requirements were unknown when they're, in fact, largely known? Is it smart not to exploit knowledge you already have? What do you consider more agile, that is, adaptive to the environment?
Now, just to provoke a reaction from a friend of mine (who, most likely, is somewhere having fun and not reading my blog anyway :-). You might remember Jurassic Park, and how Malcolm explained chaos theory to Ellie by dropping some water on her hand (if you don't, here is the script, look for "38" inside; and no, I'm not a fan :-).
The drops took completely different paths, because of small scale changes (although not really small scale for a drop, but anyway). Some people contend that requirements are just as chaotic, and that minor changes in the environment will have a gigantic impact on your code anyway, so why bother with upfront design.
Well, I could contend that my upfront design led to code better equipped to deal with change than the XP episode did, but I won't. Because you know, the drops did take a completely different path because of small scale changes. But a very predictable thing happened: they both went down. Gravity didn't behave chaotically.
Good designers learn to see the underlying order inside chaos, or more exactly, predominant forces that won't be affected by environmental conditions. It's not a matter of reading a crystal ball. It's a matter of learning, experience, and well, talent. Of course, I'm not saying that there are always predominant, stable forces; just that, in several cases, there are. Context is the key. Just as ignoring gravity ain't safe, ignoring context ain't agile.

Tuesday, July 17, 2007

Get the ball rolling, part 3 (of 4, pretty sure)

In the last few months I've been repeating a mantra (also in this blog): design is always highly contextual.
So, before taking a look at what I did, it's important to consider why I did it. It's also important to consider how I did it, so as to put everything in perspective.

Why
- I use models quite often, but I'm not a visual modeling zealot. I know, from experience, that visual modeling can be extremely valuable in some cases, and totally useless in others (more on this next time). I can usually tell when visual modeling can be useful, using a mixture of experience, intuition, and probably some systematic rules I never took the time to write down.
When I first read the XP episode, I was negatively impressed by the final results, and intuitively felt that I could get something better by not rushing into coding. So a first reason was to prove that point (to myself first :-).

- I aimed to remove all the bad smells I've identified in their code, without introducing others. I also wanted to prove a point: that a carefully designed solution didn't have to be a code monster, as implied elsewhere. I wanted the final result to have basically the same number of lines as theirs. Maybe less. Therefore, I wrote the code in a line-conscious way.

- I wanted the comparison to be fair. Therefore, I did not aim for the super-duper, ultimate bowling framework. I just aimed for a better design and better code, through a different process. I'll talk a little about what it would take to turn my design into a true "bowling framework" in my next post.

- Actually, I found the idea of not writing the ultimate bowling framework extremely useful. There has been a lot of babbling about Real Options in software development in the last few years, and I even mention a few concepts in my project management course.
However, I lacked a simple example where I could explain a few concepts better. By (consciously :-) under-engineering some parts of the design, I got just the example I needed (more on this next time).

- I also wanted to check how many bugs I could possibly put in my code without using TDD. I would just write the code, run all their tests, and see how many bugs I had to fix before I got a clean run.

How
I must confess I procrastinated on this stuff for quite a while. In some way, it didn't feel totally right to criticize other people's work the way I was about to. But after all, by writing this I'll open my work to criticism as well. So one evening, at about 10 PM (yeap :-), I printed out the wikipedia page on bowling, fired up a CASE tool (just for drawing), and got busy.

Unfortunately, I can't offer you a realistic transcript of what I thought all along. When you really use visual modeling, you're engaged in a reflective conversation with your material (see my paper, "Listen to your tools and materials"; I should really get a pre-publication version online). You're in the flow, and some reasoning that would take a while to explain here can take just a few seconds in real time.

Anyway, I can offer a few distinct reflections that took place as I was reading the wikipedia page and (simultaneously) designing my solution. Note that I entirely skipped domain modeling; the domain was quite simple, and the wikipedia page precise enough. Also, note that I felt more comfortable (coming from 0-knowledge of bowling) reading wikipedia, not reverse engineering requirements from the XP episode. That might be my own personal bias.

I immediately saw quite a few candidate classes: the game itself, the player, the pins, the lane, the frame, the ball. Bertrand Meyer said that classes "are here for picking", and in this case he was quite right.
I never considered the throw as a class though. Actually, "throw" can be seen as a noun or as a verb. In this context, it looked more like a verb: you throw the ball and hit pins. Hmmm, guess what, they didn't look at it this way in the XP episode; they sketched a diagram with a Throw class and rushed into coding.

Rejecting classes is not easy without requirements, even informal ones. For instance, if I had to design a real automatic scoring system, there would be a sensor in the lane, to detect the player crossing the foul line. There would also be the concept of a foul ball, which was ignored in the XP episode. So (see above, "fair comparison") I decided that my "requirements" were to create a system with the same capabilities as theirs. No connection to sensors, no foul ball. But, I wanted my design to be easily extended to include that stuff. So I left out the Lane class (which could also encapsulate any additional sensor, like one in the gutter). But I wanted a Ball class; more on this later.

Game is an obvious class: we need a starting point, and also a sort of container for the Frames (see below) and for Balls. The Game needs to know when the current frame is completed (all allowed balls thrown, or strike), so it can move to the next frame. But while advancing to the next frame is clearly its own responsibility, knowledge of frame completion is not (see below).

The Frame looked like a very promising abstraction. If you look at the throwing and scoring rules, it's pretty obvious that in 10-pin bowling you have 9 "normal" frames and a 10th "non standard" frame, where up to 3 balls can be thrown, as there is the concept of bonus ball (hence the imprecise domain model in the XP episode, where they just wrote 0..3 throws in a frame). Also in 3-6-9 bowling you have "special" frames, and so on.
Does this distinction between standard and nonstandard frames ring the bell of inheritance? Good. Now if you want the frame to be polymorphic, you gotta put some behaviour in it. Of course you can do that by reasoning upon the diagram: you don't need to write test code to discover that hey, you dunno what method to put in a Frame class.
Anyway, they may not know, but I do :-). The Frame can then tell the Game if it is completed, shielding the Game from the difference between a standard or last frame (or "strike frames" in 3-6-9 bowling, for instance). To do so, the Frame must know about any Ball that has been thrown in that frame, so it may also have a Throw method. The Frame should also calculate its own score. Wait... that's something the Frame can't do on its own.

In fact, the two authors have been smart enough to pick a problem where trivial encapsulation won't do it. The Frame needs to know about balls that might have been thrown in other frames to calculate its score. Now, there are several ways to let it know about them, and I did consider a few, but for brevity, here is what I did (the simplest thing): the Score method in class Frame takes a list of Balls as a parameter; the Game keeps a list of Balls and passes it to Frame. The Frame just keeps track of the first Ball thrown in the frame itself. From that knowledge, it's trivial to calculate the score. The LastFrame class is just slightly different, as the completion criterion changes to account for bonus balls; I also have to redefine the Throw method, again to account for bonus balls.

So what is the real need for a Ball class? Well, right now, it keeps track of how many pins were hit, and its own index in the Balls list. However, it's also a cheap option for growth. More on this next time, where I'll talk about the difference between having even a simple, non-encapsulated class like Ball, and using primitive types like integers.

What
In the end I got something like this (except I had no Status: in the beginning, I put a couple of booleans for Strike and Spare).

[class diagram omitted: Game, Frame (with LastFrame derived from it), and Ball]

At that point, I decided to move to coding. In the XP episode the selected language is Java, but I don't like the letter J much :-) so I used C#. Given the similarity of the languages, the difference is irrelevant anyway. I didn't even want to wait for Visual Studio to come up, so I just started my friendly Notepad and typed the code without much help from the editor :-). Classes and methods were just a few lines long anyway. At that point it was 11 PM, and I called it a day. (So all this stuff took about 1 hour, between BUFD :->> and coding).

Next morning I was traveling, so I started Visual Studio, imported the code, corrected quite a few typos to make it compile, then imported the test cases from the XP episode, converted them to the NUnit syntax, and hit the run button. Quite a few failed. I had a stupid bug in LastFrame.Throw. I fixed the bug. All tests succeeded. While playing with the code, I saw an opportunity to make it shorter by adding a class (an enum actually) that could model the frame state. I did so in the code, as the change was simple and neat. I also brought the change back into the diagram, which took me like 10 seconds (too much for the XP guys, I guess). I ran the tests, and they failed, exactly like before :-).
I could blame the editor and the autocompletion, but in LastFrame.Throw I had something like

if( status != Status.Spare && status != Status.Spare )

instead of

if( status != Status.Strike && status != Status.Spare )

Ok, I'm pretty dumb :-), but anyway, I fixed the bug and got a green bar again.

That was it. Well, actually it wasn't. In my initial design I had a MaxBalls virtual method in Frame. That allowed for some more polymorphism in Game. In the end I opted for a constant to make the code a few lines shorter, since the agile guys seem so concerned about LOC (again, more on this next time). I'm not really so line-conscious in my everyday programming, so I would usually have left the virtual method there (no YAGNI here; or maybe I could restate it as You ARE Gonna Need It :-)).

So, here is the code. Guess what, if you remove empty lines, it's a few lines shorter than their code. If you also remove brace-only lines, it's a few lines longer. So we could call it even. Except it's just much better :-).

Ok, this is like the longest post ever. Next time I'll talk a little about the readability of the code, give some examples of how this design/code could be easily changed to (e.g.) 3-6-9 bowling, digress on structural Vs. procedural complexity, tinker a little with real options and under/over engineering, and overall draw some conclusions.


internal class Ball
  {
  public Ball( int pins, int index )
    {
    hitPins = pins;
    indexInGame = index;
    }

  public int hitPins;
  public int indexInGame;
  }

internal class Frame
  {
  public const int MaxBalls = 2;

  public virtual bool IsComplete()
    {
    return status != Status.Incomplete ;
    }

  public int Score( Ball[] thrownBalls )
    {
    int score = 0;
    if( IsComplete() )
      {
      score = firstBall.hitPins + thrownBalls[ firstBall.indexInGame + 1 ].hitPins;
      if( status == Status.Strike || status == Status.Spare )
        score += thrownBalls[ firstBall.indexInGame + 2 ].hitPins;
      }
    return score;
    }

  public virtual void Throw( Ball b )
    {
    const int totalPins = 10;
    if( firstBall == null )
      firstBall = b;
    if( b == firstBall && b.hitPins == totalPins )
      status = Status.Strike;
    else if( b != firstBall && firstBall.hitPins + b.hitPins == totalPins )
      status = Status.Spare;
    else if( b != firstBall )
      status = Status.Complete;
    }

  private Ball firstBall;
  protected Status status;
  protected enum Status { Incomplete, Spare, Strike, Complete } ;
  }

internal class LastFrame : Frame
  {
  public new const int MaxBalls = Frame.MaxBalls + 1;

  public override bool IsComplete()
    {
      return ( status == Status.Strike && bonusBalls == 2 ) ||
             ( status == Status.Spare && bonusBalls == 1 ) ||
             ( status == Status.Complete );
    }

  public override void Throw( Ball b )
    {
    if( status != Status.Strike && status != Status.Spare )
      base.Throw( b );
    else
      ++bonusBalls;
    }

  private int bonusBalls;
  }

public class Game
  {
  public Game()
    {
    const int MaxFrames = 10;
    frames = new Frame[ MaxFrames ];
    for( int i = 0; i < MaxFrames - 1; ++i )
      frames[ i ] = new Frame();
    frames[ MaxFrames - 1 ] = new LastFrame();
    int maxBalls = (MaxFrames-1)*Frame.MaxBalls + LastFrame.MaxBalls;
    thrownBalls = new Ball[ maxBalls ];
    }

  public void Throw( int pins )
    {
    if( frames[ currentFrame ].IsComplete() )
      ++currentFrame;
    Ball b = new Ball( pins, currentBall );
    frames[ currentFrame ].Throw( b );
    thrownBalls[ currentBall++ ] = b;
    }

  public int Score()
    {
    int score = 0;
    foreach( Frame f in frames )
      score += f.Score( thrownBalls );
    return score;
    }

  private Ball[] thrownBalls;
  private Frame[] frames;
  private int currentFrame;
  private int currentBall;
  }

Saturday, July 14, 2007

Get the ball rolling, part 2 (of 4, most likely)

I hope you had some time to read the XP episode paper and ponder a little on what they managed to get in the end. As I've anticipated, I do have a few issues with the results. It's not a matter of correctness: they pass all the tests. It's a matter of quality.
The design is questionable, and even the code is questionable. No big deal: after all, it's a little more than a toy example. It gets slightly worse when you consider all this stuff in perspective (at the end of the XP game, code is all you get; I'll get back to this later). Right now, here are the main flaws I see:

- overabundance of numerical literals.
This is programming 101: literals are bad (although they keep your code a few lines shorter). Go over the code. You'll see numbers like 10 just about everywhere. Unfortunately, in bowling you got 10 pins and also 10 frames. You always have to read carefully to understand which is which. Now go back to the wikipedia page on bowling. See the mention of 5-pin bowling, 9-pin bowling, etc? Just go ahead and change the code to use 9 pins and 10 frames (9-pin bowling ain't that simple, but anyway). Guess what, none of the test cases will help, and it's not a one-line change as it ought to be. There is even a 21 in that code, being 10 * 2 + 1. I'll let you figure what that 10, 2, and 1 are :-).

- programming against an implementation, not against the problem
Some portion of the code are indeed quite good:

 public int scoreForFrame(int theFrame)
  {
    ball = 0;
    int score=0;
    for (int currentFrame = 0; 
         currentFrame < theFrame; 
         currentFrame++)
    {
      if (strike())
        score += 10 + nextTwoBalls();
      else if (spare())
        score += 10 + nextBall();
      else
        score += twoBallsInFrame();
    }
    return score;
  }

You can basically read it in plain English. Unfortunately, most of the rest is not of the same quality, for instance:

 public void add(int pins)
  {
    itsScorer.addThrow(pins);
    adjustCurrentFrame(pins);
  }
  private void adjustCurrentFrame(int pins)
  {
    if (firstThrowInFrame == true)
    {
      if (adjustFrameForStrike(pins) == false)
        firstThrowInFrame = false;
    }
    else
    {
      firstThrowInFrame=true;
      advanceFrame();
    }
  }
...
  private void advanceFrame()
  {
    itsCurrentFrame = Math.min(10, itsCurrentFrame + 1);
  }

"Adjust" a frame? Couldn't we get it right on the first place :-)? Even a Math.min call?? C'mon. There isn't any resemblance with the problem domain. It's just programming against the implementation, until things seems to work. Now, I've seen a lot of code like this over the years; it's the typical code people get after years of tweaking, except here it didn't take years, just hours.
Now, tweaking code till it works is already bad per se, but it's much worse when you consider that in XP, at the end of the game, the code is the design, and the test cases are the requirements. Again, everything is fine when the domain is trivial, largely known, well documented elsewhere (like in this case). But if we tackle a difficult new problem in an uncharted domain, and all we leave behind is code like that, I pity the souls that will join the team.

- Feature Envy: Game over Frame
According to Fowler, a method (or worse, a class) is exhibiting Feature Envy
when it "seems more interested in a class other than the one it actually is in". A variation of Feature Envy is when the other class does not even exist, but it is continually talked about in a method (or another class). Take Game. Excluding empty lines. Game is 45 lines long; if you exclude brace-only lines, it's 25 lines long. Of those, 16 contain the word "Frame". Is that suggesting you a missing abstraction?

- Feature Envy: Scorer over Ball
Just like above, Scorer is 57 lines long excluding empty lines (35 if you also exclude braces). Of those, 15 contain the word Ball (and 6, the word Frame). C'mon. We're talking about bad smells here, you agile guys are supposed to fix this stuff.

(note: the two previous issues may come from hindsight, as in my design, I do have a Frame and a Ball class).

- "Scorer"?
Good ol' Peter Coad once talked about the "er-er Principle": Challenge any class name that ends in "-er" (e.g. Manager or Controller). If it has no parts, change the name of the class to what each object is managing. If it has parts, put as much work in the parts that the parts know enough to do themselves.
That's OOP 101, because if you need a manager, it's because you got passive objects, and objects are about behaviour, not just data. You need a scorer because you haven't got the right classes.

Overall, this is not code you may want to maintain. Wanna give it a try? Ok, try to change the rules to, e.g., 3-6-9 bowling or low-ball bowling. Or make up your own random rule, like "the 7th frame is only allowed one ball". See how easy it is. Hey, don't give me that YAGNI look :-), you are supposed to embrace change :-)).
It's worth repeating that in a real case, you'll be much worse off: you won't have a lengthy transcript of the session, telling you how they came to put together that code. You'll have just the code; except that, in real-world projects, the code will be 3 or 4 orders of magnitude bigger. You'll have to figure out everything else just from the code. If that's the average quality, well, good luck.

Of course, having only code to maintain means cheaper maintenance. Well, more exactly, it means that the mechanical act of maintenance (modifying stuff that needs to be changed) is cheaper. Of course, maintenance is not just about the mechanics: it is (mainly) about understanding the existing, understanding the new, understanding how to get from here to there.
Some redundancy (between wordy requirements, high-level design, and code) can help. Indeed, when I decided to design & implement my version of the above, I found it more convenient to read the wikipedia page, not to reverse engineer requirements from code. Of course, I also used the test cases they wrote to corroborate my understanding of the rules. Some redundancy can be helpful. Just ask your favorite aerospace engineer :-)).
This is why I always teach people to look for the bright and the dark side of everything, redundancy included (which of course, has well known shortcomings as well). That's why I don't like "extreme" approaches that pretend there is no downside in making "extreme" choices, only benefits.

So, on the bright side :-)), the code above is short (excluding blank lines and excluding test code, it totals 102 lines). Indeed, in a usenet post RCM reports that people who have "generated code" from the devilish overdesigned :-) UML diagram and added behavior got a whopping 400 lines.
Big deal. That was (in the authors' intention) a sketchy domain model, not a sensible design model. And code generation is usually a scam. That has nothing to do with diagrammatic reasoning (quite the opposite, I would say: it's diagrammatic nonreasoning: more on this next time).
Back to the bright side of the above, that code uses only 2 classes, which could be good if you don't like diagrams (which would help you understand the collaboration between multiple classes). Appallingly enough, considering that the two authors together have something like 70 years of programming experience, it looks like beginners' code. Which could be good in some environments. I'll get back to this another time.

Ok, enough bad cop for today. I'll show you my stuff in a few days (meanwhile, you can grind your teeth :-). I'll also describe my "process" and then draw some conclusions.

Wednesday, July 11, 2007

Get the ball rolling, part 1 (of 4, I guess)

A few weeks ago, I got a sad email from a reader. It was about some people around him bashing UML, and more generally, diagrammatic reasoning.
This is not big news: some people just don't like modeling (for various reasons), and in the last few years they've also found fertile ground in several "agile" precepts.
Indeed, his colleagues just sent him a link to Engineer Notebook: An Extreme Programming Episode, by Robert C. Martin and Robert S. Koss. Somewhere around the end, RCM says "We were bedevilled by the daemons of diagramatic overdesign. My God, three little boxes drawn on the back of a napkin, Game, Frame, and Throw, and it was still too complicated and just plain wrong." His colleagues sort of wanted this carved on his tombstone :-).

Now, I've been reading Robert Martin's works since the mid 90s. I've enjoyed several of his papers, and I've even published a couple of papers in C++ Report when he was the magazine editor. I must admit, however, that although I own the book from which that paper has been derived, I just skimmed through it. So, before saying anything, I did the right thing™ and read the paper carefully. Truth be told, I found that paper laudable, because by providing a realistic example, Robert gives us an opportunity to tackle the same problem in a different way, and compare the results.

Of course, the problem is on a very small scale, so someone could complain that it's not the kind of setting where diagrammatic reasoning shines. That would be a lame excuse, so I won't say that.
I won't even question the fact that they both knew bowling quite well, and that therefore the dynamics isn't really representative of what they could do in an unfamiliar domain (in fact, there is no user involvement in the XP episode). Of course that's quite relevant, but what the hell, I can learn bowling rules in no time :-).
I won't even try to assess whether they were faithful to the XP principles or not; it seems to me that they did it just fine: they kept the emphasis on working code, did some refactoring, wrote tests first, applied pair programming... seems like they're doing it by the book.

Still, I do have issues with the quality of the results, and I do have issues with the conclusions. I'll save this stuff for my next posts. Meanwhile, I really urge you to read the paper. It may put you off a little if you don't know/play bowling (I don't :-), but it's a worthy exercise. Read the code, see if you like it. I'll see you in a few days :-).

Note: this is not going to be a theoretical, wishy-washy, hand-waving blurb. I can play theories, but in this case, it's much easier to play code. In fact, one year ago or so I posted a simple challenge for the TDD crowd. I basically got back theories, insults, and the like, but no one wrote any damn code (except in the end, when a good guy did). I'm an action guy :-))), so I'll just show you what I don't like, how I would model and implement the same thing, and compare the results. Actually, maybe you wanna give it a try too!

Tuesday, June 26, 2007

Got Multicore? Think Asymmetric!

Multicore CPUs are now widely available, yet many applications are not tapping into their true potential. Sure, web applications, and more generally container-based applications, have an inherent degree of coarse parallelism (basically at the request level), and they will scale fairly well on new CPUs. However, most client-side applications don't fall into the same pattern. Also, some server-side applications (like batch processing) aren't intrinsically parallel either. Or are they?

A few months ago, I was consulting on the design of the next generation of a (server-side) banking application. One of the modules was a batch processor, basically importing huge files into a database. For several reasons (file format, business policies), the file had to be read sequentially, processed sequentially, and imported into the database. The processing time was usually dominated by a single huge file, so the obvious technique to exploit a multicore (use several instances to import different files in parallel) would not have been effective.
Note that when we think of parallelism in this way, we're looking for symmetric parallelism, where each thread performs basically the same job (process a request, or import a file, or whatever). There is only so much you can do with symmetric parallelism, especially on a client (more on this later). Sometimes (of course, not all the time), it's better to think asymmetrically, that is, to model the processing as a pipeline.

Even for the batch application, we can see at least three stages in the pipeline:
- reading from the file
- doing any relevant processing
- storing into the database
You can have up to three different threads performing these tasks in parallel: while thread 1 is reading record 3, thread 2 will process record 2, and thread 3 will store [the processed] record 1. Of course, you need some buffering in between (more on this in a short while).
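Here is a minimal sketch of the wiring (Record, ReadRecords, Process, Store and file are illustrative names, not the actual banking code; BoundedQueue is any blocking, thread-safe queue, like the monitor-protected one sketched near the end of this post; termination handling, e.g. a poison pill, is omitted for brevity):

BoundedQueue<Record> toProcess = new BoundedQueue<Record>( 1000 );
BoundedQueue<Record> toStore = new BoundedQueue<Record>( 1000 );

Thread reader = new Thread( delegate()
  {
  foreach( Record r in ReadRecords( file ) )    // stage 1: sequential read
    toProcess.Enqueue( r );
  } );

Thread processor = new Thread( delegate()
  {
  while( true )                                 // stage 2: processing
    toStore.Enqueue( Process( toProcess.Dequeue() ) );
  } );

Thread storer = new Thread( delegate()
  {
  while( true )                                 // stage 3: database import
    Store( toStore.Dequeue() );
  } );

reader.Start();
processor.Start();
storer.Start();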
Actually, in our case, it was pretty obvious that the processing wasn't taking enough CPU to justify a separate thread: it could be merged with the read file operation. What was actually funny (almost exhilarating :-) was to discover that despite the immensely powerful database server, storing into the database was much slower than reading from the file (truth be told, the file was stored on an immensely powerful file server as well). A smart guy in the bank quickly realized that it was our fault: we could have issued several parallel store operations, basically turning stage two of the pipeline into a symmetrical parallel engine. That worked like a charm, and the total time dropped by a factor of about 6 (more than I expected: we were also using the multi-processor, multi-core DB server better, not just the batch server's multicore CPU).

Just a few weeks later (meaningful coincidence?), I stumbled across a nice paper: Understand packet-processing performance when employing multicore processors by Edwin Verplanke (Embedded Systems Design Europe, April 2007). Guess what, their design is quite similar to ours, an asymmetric pipeline with a symmetric stage.

Indeed, the pipeline model is extremely useful also when dealing with legacy code which has never been designed to be thread-safe. I know that many projects aimed at squeezing some degree of parallelism out of that kind of code fail, because the programmers quickly find themselves adding locks and semaphores everywhere, thus slowing down the beast so much that there is either no gain or even a loss.
This is often due to an attempt to exploit symmetrical parallelism, which on legacy, client-side code is a recipe for resource contention. Instead, thinking of pipelined, asymmetrical parallelism often brings some good results.
For instance, I've recently overheard a discussion on how to make a graphical application faster on multicore. One of the guys contended that since the rendering stage is not thread-safe, there is basically nothing they can do (except doing some irrelevant background stuff just to keep a core busy). Of course, that's because he was thinking of symmetrical parallelism. There are actually several logical stages in the pipeline before rendering takes place: we "just" have to model the pipeline explicitly, and allocate stages to different threads.

As I've anticipated, pipelines need some kind of buffering between stages. Those buffers must be thread safe. The banking code was written in C#, and so we simply used a monitor-protected queue, and that was it. However, in high-performance C/C++ applications we may want to go a step further, and look into lock-free data structures.
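Going back to the C# case for a second, a monitor-protected queue takes just a few lines. Here is a minimal sketch (a bounded, generic rendition, not the actual banking code; it needs using System.Threading and System.Collections.Generic):

public class BoundedQueue<T>
  {
  public BoundedQueue( int capacity )
    {
    this.capacity = capacity;
    }

  public void Enqueue( T item )
    {
    lock( sync )
      {
      while( items.Count == capacity )
        Monitor.Wait( sync );       // producer waits on a full buffer
      items.Enqueue( item );
      Monitor.PulseAll( sync );     // wake consumers waiting on an empty one
      }
    }

  public T Dequeue()
    {
    lock( sync )
      {
      while( items.Count == 0 )
        Monitor.Wait( sync );       // consumer waits on an empty buffer
      T item = items.Dequeue();
      Monitor.PulseAll( sync );     // wake producers waiting on a full one
      return item;
      }
    }

  private readonly Queue<T> items = new Queue<T>();
  private readonly object sync = new object();
  private readonly int capacity;
  }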

A nice example of the latter comes from Bjarne Stroustrup himself: Lock-free Dynamically Resizable Arrays. The paper also has a great bibliography, and I must say that the concept of descriptor (by Harris) is so simple and effective that I would call it a stroke of genius. I just wish a better name than "descriptor" had been adopted :-).

For more predictable environments, like packet processing above, we should also keep in mind a simple, interesting pattern that I always teach in my "design patterns" course (actually in a version tailored for embedded / real-time programming, which does not [yet] appear on my website [enquiries welcome :-)]). You can find it in Pattern Languages of Program Design Vol. 2, under the name Resource Exchanger, and it can easily be made lock-free. I don't know of an online version of that paper, but there is a reference in the online Pattern Almanac.
If you plan to adopt the Resource Exchanger, make sure to properly tweak the published design to suit your needs (most often, you can scale it down quite a bit). Indeed, over the years I've seen quite a few hard-core C programmers slowing themselves down in endless memcpy calls where a resource exchanger would have done the job oh so nicely.

A final note: I want to highlight the fact that symmetric parallelism can still be quite effective in many cases, including some kind of batch processing or client-side applications. For instance, back in the Pentium II times, I've implemented a parallel sort algorithm for a multiprocessor (not multicore) machine. Of course, there were significant challenges, as the threads had to work on the same data structure, without locks, and (that was kinda hard) without having one processor invalidating the cache line of the other (which happens quite naturally in discrete multiprocessing if you do nothing about it). The algorithm was then retrofitted into an existing application. So, yes, of course it's often possible to go symmetrical, we just have to know when to use what, at which cost :-).