Monday, October 17, 2011

You're solving the wrong problem

It had to happen. After my post on Yahtzee, my agile friend (Marco) has come back with more code katas, adding to the pile we already discussed. His latest shot was Kata Nine, a.k.a. "Back to the CheckOut".
His code was fine, really. Nothing fancy, basically a straightforward application of the Strategy pattern. You can find many similar implementations on the net (one of which prompted my tweet about ruby on training wheels, but it was more about the development style than the final result). Of course, my friend would have been disappointed if I had just said "yeah, well done!", so I had to be honest (what is the point of being friends if you can't be honest?) and say "yeah, but you're solving the wrong problem".

A short conversation followed, centered on the need to question requirements (as articulated by users and customers) and come up with the real, hidden requirements, the value of domain knowledge and the skills one would need to develop to become an effective analyst or "requirement engineer" (not that I suggest you do :-), with the usual drift into agility and such.

I'll recommend that you read the problem statement for kata nine. It's not really necessary that you solve the problem, but it wouldn't hurt to think about it a little. I'm not going to present any code this time.

Kata Nine through the eye of a requirements engineer
The naive analyst would just take a requirement statement from the user/customer, clean it up a little, and pass it downstream. That's pretty much useless. The skilled analyst will form a coherent [mental] model of the world, and move toward a deeper understanding of the problem. The quintessential tool to get there is simple, and we all learn to use it around age 4: it's called questions :-). Of course, it's all about the quality of the questions, and the quality of the mental model we're building. In the end, it has a lot to do with the way we look at the world, as explained by the concept of framing.

The problem presented in kata nine can be framed as a set of billing rules. Rules, as formulated, are based on product type (SKU), product price (per SKU) and a discount concept based on grouping a fixed quantity of items (quantity depending on SKU) and assigning a fixed price to that group (3 items 'A' for $1.30). It might be tempting for an analyst to draw a class diagram here, and for a designer to introduce an interface over the concept of rule, so that the applicability and the effects of a rule become just an implementation detail (you wish :-).
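
To fix ideas, here is a minimal sketch of that first framing, before any of the questions below are answered: at most one grouping discount per SKU, prices in cents, one class per concept. All the names (PricingRule, Checkout, scan, totalCents) are mine, not part of the kata.

import java.util.HashMap;
import java.util.Map;

// a discount rule as literally stated: "groupSize items for groupPriceCents"
record PricingRule(int groupSize, int groupPriceCents) {}

class Checkout {
    private final Map<String, Integer> unitPriceCents;     // price of a single item, by SKU
    private final Map<String, PricingRule> discountBySku;  // at most one discount rule per SKU (questioned below)
    private final Map<String, Integer> scanned = new HashMap<>();

    Checkout(Map<String, Integer> unitPriceCents, Map<String, PricingRule> discountBySku) {
        this.unitPriceCents = unitPriceCents;
        this.discountBySku = discountBySku;
    }

    void scan(String sku) {
        scanned.merge(sku, 1, Integer::sum);
    }

    int totalCents() {
        int total = 0;
        for (var entry : scanned.entrySet()) {
            String sku = entry.getKey();
            int count = entry.getValue();
            PricingRule rule = discountBySku.get(sku);
            if (rule != null) {
                total += (count / rule.groupSize()) * rule.groupPriceCents();  // whole groups
                count %= rule.groupSize();                                     // spare items
            }
            total += count * unitPriceCents.get(sku);  // assumes every scanned SKU has a unit price
        }
        return total;
    }
}

With a unit price of 50 cents for 'A' and a (3, 130) rule, scanning three 'A' items yields 130, matching the "3 items 'A' for $1.30" example. Simple enough; the point of the questions below is how quickly this stops being the real problem.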

Note how quickly we're moving from the real-world issues of products, prices, discounts into the computing world, made of classes, interfaces, information hiding. However, the skilled analyst will recognize some underlying assumptions and ask for more details. Here are a few questions he may want to ask, and some of the reasoning that may follow. Some are trivial, some are not. Sometimes, two trivial questions conspire to uncover a nontrivial interplay.

- rules may intersect on a single SKU (3 items 'A' for $1.30, 6 for $2.50). Is that the case? Is there a simplified/simplifying assumption that grouping by larger quantities will always lead to a more convenient price per unit?

- without discounts, billing is a so-called online process, that is, you scan one item, and you immediately get to know the total (so far). You just add the SKU price, and never have to re-process the previous items.
If you have at most one grouping-based discount rule per SKU, you still have a linear process. You add one item, and you either form a new group, or add to the spare items. Forming a new group may require recalculating the billing (just for the spares).
If you allow more than one rule for the same SKU, well, it depends. Under the assumption above about convenience for larger groups, you basically have a prioritized list of rules. You add one item, and you have to reprocess the entire set of items with the same SKU. You apply the largest possible quantity rule, get a spare set, apply the next largest quantity rule to that spare set, and so on, until only the default "quantity 1" rule applies. This is a linear, deterministic process, repeated every time you add a new item.
In practice, however, you may have a rule for 3, 4, and 5 items of the same SKU. A group of 8 items could then be partitioned into 4+4 or 5+3. Maybe 5+3 (which would be chosen by a priority list) is more convenient. Maybe not. Legislation tends to favor the buyer, so you should look for the most favorable partitioning. This is no longer a linear process, and the corresponding code will increase in complexity, testing will be more difficult, etc. (a small sketch of this search follows the note below).
Perhaps we should exclude these cases. Perhaps we just discovered a new requirement for another part of the system, where rules are interactively created (we don't expect to add rules by writing code, do we :-). We need to dig deeper.
Note: understanding when a requirement is moving the problem to an entirely new level of complexity is one of the most important moments in requirement analysis. As I said before, I often use the term "nonlinear decision" here, implying that there is a choice: to include that requirement or not, to renegotiate the requirement somehow, to learn more about the problem and maybe discover that it's our lucky day and that the real problem is not what we thought. This happens quite often, especially when requirements are gathered from legacy systems. I discussed this in an old (but still relevant) paper: When Past Solutions Cause Future Problems, IEEE Software, 1997.
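
As hinted above, here is a sketch (mine, with made-up numbers) of the "most favorable partitioning" search for a single SKU, assuming the rules for that SKU map a group size to a group price in cents, plus a default unit price:

import java.util.Map;

class BestPartition {
    // best[i] = cheapest price (in cents) for i items of this SKU
    static int cheapestCents(int count, int unitPriceCents, Map<Integer, Integer> groupRules) {
        int[] best = new int[count + 1];
        for (int i = 1; i <= count; i++) {
            best[i] = best[i - 1] + unitPriceCents;  // default "quantity 1" rule
            for (var rule : groupRules.entrySet()) {
                int size = rule.getKey();
                if (size <= i) {
                    best[i] = Math.min(best[i], best[i - size] + rule.getValue());
                }
            }
        }
        return best[count];
    }
}

For instance, with a unit price of 50 and rules "4 for 180" and "5 for 220", cheapestCents(8, 50, Map.of(4, 180, 5, 220)) returns 360 (the 4+4 partition), while a greedy priority list would pick 5+3 for 370. And remember: this search has to be repeated every time a new item is scanned.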

- grouping by SKU is still a relatively simple policy. At worst, you recalculate the partitioning for all the items of the same SKU that you're scanning, and deal with a localized combinatorial problem. Is that the only kind of rule we need? What about the popular "buy 3 items, get the cheapest for $1", which is not based on SKU? What about larger categories, like "buy 3 items in the clothing department, get the cheapest for $1"? A naive analyst may ignore the issue, and a naive designer may think that dropping an interface on top of a rule will make this a simple extension. That's not the case.
Once you move beyond grouping by SKU, you have to recalculate the entire partitioning for all scanned products every time a new product is scanned. You add a product B priced at $5. Which combination of products and rules results in the most favorable price? Grouping with the other three items of the same SKU, or with two others in the clothing department? Finding the most favorable partitioning is now a rule-based combinatorial process. New rules can be introduced at any time, so it's hard to come up with an optimization scheme. Beware the large family with two full carts :-).
Your code is now two orders of magnitude harder than initially expected. Perhaps we should really renegotiate requirements, or plan a staged development within a larger timeframe than expected. What, you didn't investigate requirements, promised a working system in a week, and are now swamped? Well, you can always join the recurrent twitter riot against estimates :-).

- Oh, and of course you also have loyalty cards, don't you? Some rules apply only if you have a card. In practice, people may hand their card over only after a few items have been scanned. So you need to go back anyway, but that wouldn't be a combinatorial process per se, just a need to start over.

It gets much worse...
There is an underlying, barely visible concept of time in the problem statement: "this week we have a special offer". So, rules apply only during a time interval.

Time-dependent rules are tricky. The skilled analyst knows that it's quite simple to verify if a specific instant (event) falls within an interval, and then apply the rules that happen to be effective at that instant. However, he also understands that a checkout is not an instantaneous process. It takes time, perhaps a few seconds, perhaps a few minutes. What if the two intervals overlap, that is, a rule is effective when checkout starts, but not when it ends (or vice versa)? The notion of "correctness", here, is relatively hard to define, and the store manager must be involved in this kind of decision. The interplay with the need to recalculate everything whenever a new item is scanned should be obvious: if you don't do anything about it, the implementation is going to choose for you.
Note: traditionally, many businesses have tried to sidestep this sort of problem by applying changes to business rules "after hours". Banks, for instance, still rely a lot on nightly jobs, but even more so on the concept of night. This safe harbor, however, is being constantly challenged by a 24/7 world. Your online promotion may end at midnight in your state, but that could be morning in your customer's state.

... before it gets better :-)
Of course, it's not just about finding more and more issues. The skilled analyst will quickly see that a fixed price discount policy (3 x $1.30) could easily be changed into a proportional discount ("20% off"). He may want to look into that at some point, but that's a secondary detail, because it's not going to alter the nature of the problem in any relevant way. Happy day :-).

Are we supposed to ask those questions?
Shouldn't we just start coding? It's agile 2011, man. Implement the first user story, get feedback, improve, etc. It's about individuals and interactions, tacit knowledge, breaking new ground, going into the unknown, fail fast, pivot, respond to change. Well, at least, it's just like that on Hacker News, so it must be true, right?
I'm not going to argue with that, except by pointing out an old post of mine on self-fulfilling expectations (quite a few people thought it was just about UML vs. coding; it's the old "look at the moon, not at the finger" thing). How much of the rework (change) is actually due to changing business needs, and how much is due to a lack of understanding of the current business needs? We need to learn balance. Analysis paralysis means wasting time and opportunities. So does the opposite approach of rushing into coding.

We tried having an analyst, but it didn't work
My pre-canned, context-free :-) answer to this is: "no, your obsolete coder is not a skilled analyst". Here are some common traits of the unskilled analyst:
- ex coder, who let himself become obsolete but knows something about the problem domain and the legacy system, so the company is trying to squeeze a little more value out of him.
- writes lengthy papers called "functional specifications" that nobody wants to read.
- got a whole :-) 3 days of training on use cases, but never got the difference between those and a functional spec.
- kinda knows entity-relationship, and can sort-of read a class diagram. However, he would rather fill 10 pages of unfathomable prose than draw the simplest diagram.
- talks about web applications using CICS terminology.
- the only Michael Jackson he ever heard about was a singer.
- after becoming agile :-), he writes user stories like "as a customer, I want to pay my bill" (yeah, sure, whatever).
- actually believes he can use a set of examples (or test cases) as a specification (good luck with combinatorial problems).

More recurrent ineffective analysts: the marketer who couldn't market, the salesman who couldn't sell, the customer care rep who didn't care. Analysis is tough. Just because Moron Joe couldn't do it, however, it doesn't mean it can't be done.

How to become a skilled analyst
The answer is surprisingly simple:
- you can't. It's too late. The discipline has been killed and history has been erased.
- you shouldn't. Nobody is looking for a skilled analyst anyway. Go learn a fashionable language or technology instead.

More seriously (well, slightly more seriously), the times of functional specifications are gone for good, and nobody is mourning. Use cases stifled innovation for a long while, then faded into the background because writing effective use cases was hard, writing crappy use cases was simple, and people usually went for simple. User stories are a joke, but who cares.

What if you want to learn the way of the analyst, anyway? Remember, a skilled analyst is not an expert in a specific domain (although he tends to grow into an expert in several domains). He's not the person you pay to get answers. He's the person you pay to get better questions (and so, indirectly, better answers).
The skilled analyst will ask different questions, explore different avenues, and find out different facts. So the essential skills would be observation, knowledge of large classes of problems, classification, and description. Description. Michael Jackson vehemently advocated the need for a discipline of description (see "Defining a Discipline of Description," IEEE Software, Sep./Oct. 1998; it's behind pay walls, but I managed to find a free copy here, while it lasts; please go read it). He noted, however, that we lacked such a discipline; that was 1998, and no, we still don't have one.

Now, consider the patterns of thought I just used above to investigate potential issues:
Rules -> overlapping rules.
Incremental problem -> Priority list -> Selective combinatorial problem -> Global combinatorial problem.
Time interval -> overlapping intervals -> effective rules.
The problem is, that kind of knowledge is nowhere to be learnt. You get it from the field, when it dawns on you. Or maybe you don't. Maybe you get exposed to it, but never manage to give it enough structure, so you can't really form a proto-pattern and reuse that knowledge elsewhere. Then you won't have 20 years of experience, but 1 year of experience 20 times over.

Contrast this with more mature fields. Any half-decent mechanical engineer would immediately recognize cyclic stress and point out a potential fatigue issue. It's part of the basic training. It's not (yet) about solving the problem. It's about recognizing large families of recurrent issues.

C'mon, we got Analysis Patterns!
Sort of. I often recommend Fowler's book, and even more so the lesser known volumes from David Hay (Data Model Patterns) and a few more, similar works. But honestly, those are not analysis patterns. They are "just" models. Some better and more refined than others. But still, they are just models. A pattern is a different beast altogether, and I'm not even sure we can have analysis patterns (since a pattern includes a solution resolving forces, and analysis is not about finding a solution).

A few years ago I argued that problem frame patterns were closer to the spirit of patterns than the more widely known work from Fowler, and later used them to investigate the applicability of Coad's Domain Neutral Component (which is another useful weapon in the analyst arsenal).

However, in a maturing discipline of requirements engineering, we would need to go at a much finer granularity, and build a true catalog of recurring real-world problems, complete with questions to ask, angles to investigate, subtleties to work out. No solutions. It's not about design. It's not about process reengineering. It's about learning the problem.

Again, I don't think we will, but here is a simplified attempt at documenting the Rules/Timing stuff, in a format somewhat inspired by patterns, but still new (and therefore, tentative).

  • Name
Time-dependent Rules and Transactions

  • Context
We have a set of Rules that must be checked during a Transaction. Rules are effective from time T1 to time T2. Time can be expressed in absolute terms (e.g. December 25, 2011, 12:00 AM), recurring terms (every Friday, 12:00 to 17:00), etc. A Transaction is a set of actions, taking place over time, with a non-negligible duration (that is, a transaction is not an instantaneous event). Rules may have to be checked several times during the lifetime of a single Transaction.

  • Problem
The interval during which any given Rule is effective may overlap with the time interval required to carry out a Transaction, that is, a rule may be effective when the transaction starts, but not when it ends, or vice versa, or may be valid for an interval beginning and ending inside the transaction time span (for long-lived transactions). Still, we need to guarantee some form of coherence in the observed behavior.

  • Example
[... the checkout problem would be a perfect fit here...]

  • Issues and Questions
- Is there any kind of regulation dictating the system behavior?
- Would the effect of simply applying rules effective at different times be noticeable?
- Can the transaction be simplified into an instantaneous event?
- Can we just "freeze" the rule set when the transaction starts? (there are two facets here: the real-world implications of doing so, and the machine-side implications of doing so; see the sketch after this list).
- Can we replay the entire transaction every time a new event is processed (e.g. a new item is added to the cart), basically moving everything forward in time?
- etc.
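
To make the "freeze the rule set" option from the list above a bit more concrete, here is a minimal sketch; Rule, isEffectiveAt and the snapshot logic are my names and assumptions, not part of the pattern:

import java.time.Instant;
import java.util.List;

interface Rule {
    boolean isEffectiveAt(Instant instant);
    // applicability and effect omitted
}

class Transaction {
    private final Instant startedAt = Instant.now();
    private final List<Rule> frozenRules;

    Transaction(List<Rule> allRules) {
        // decide once, at the start of the transaction, which rules apply;
        // later events reuse this snapshot even if a rule expires meanwhile
        this.frozenRules = allRules.stream()
                .filter(r -> r.isEffectiveAt(startedAt))
                .toList();
    }

    List<Rule> effectiveRules() {
        return frozenRules;
    }
}

Note that this only settles the machine side; whether it is acceptable (or legal) to honor a rule that expired mid-transaction is still a decision for the store manager, not for the implementation.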

I don't know about you, but I would love to have a place where this kind of knowledge could be stored, cleaned up, structured, and curated.

Conclusions
It would be easy to blame it all on agility. After all, a lot of people have taken agility as an excuse for "if it's hard, don't do it". Design has been trivialized to SOLID and Beck's 4 principles. Analysis is just gone. Still, I'm not going to. I've seen companies stuck in waterfall. It ain't pretty. But it's more and more obvious that we're stuck in just another tar pit, where a bunch of gurus keep repeating the same mantras, and the discipline is not moving forward.

Coding is fascinating. A passionate programmer will stay up late to work on some interesting problem and watch his code work. I did, many times, and I'm sure I will for a long while yet. Creating models and descriptions does not seem to be on a par with that. There is some intellectual satisfaction there too, but not nearly as much as seeing something actually run. This might be the simplest, most honest explanation of why, in the end, we always come back to code.

Still, we need a place, a format, a curating community for long-term, widely useful knowledge that transcends nitty-gritty coding issues (see also my concept of Half-Life of a Knowledge Repository). Perhaps we just need to make it fun. I like the way the guys at The Fun Theory can turn everything into something fun (take a look at the piano stairs). Perhaps we should give requirements engineering another shot :-).


If you read this far, you should follow me on twitter.

Acknowledgement
The image on top is a copy of this picture from Johnny Jet, released under a Creative Commons license with permission to share for commercial use, with attribution.

Friday, September 02, 2011

Notes on Software Design, Chapter 15: Run-Time Entanglement

So, here I am, back to my unpopular :-) series on the Physics of Software. It's time to explore the run-time side of entanglement, or more exactly, begin to explore, as I'm trying to write shorter posts, hoping to increase frequency as well.
A lot of time has gone by, so here is a recap from Chapter 12: Two clusters of information are entangled when performing a change on one immediately requires a change on the other (to maintain overall consistency). If you're new to this, reading chapters 12, 13 and 14 (dealing with the artifact side of entanglement) won't hurt.

Moving to the run-time side, some forms of entanglement are obvious (like redundant data). Some are not. Some are rather narrow in scope. Some are far-reaching. I'll cover quite a few cases in this post, although I'm sure it's not an exhaustive list (yet).

  • Redundancy
This is the most obvious form of run-time entanglement: duplicated information.
Redundancy occurs even without your intervention: swap-file vs. RAM contents, 3rd-2nd-1st level cache vs. RAM and vs. caches in other cores/processors, etc; at this level, entanglement satisfaction is usually called coherence.
In other cases, redundancy is an explicit design decision: database replicas, mem-cached contents, etc. At a smaller scale, we also have redundancy whenever we store the same value in two distinct variables. Redundancy means our system is always on the verge of violating the underlying information entanglement. That's why caches are notoriously hard to "get right" in heavily concurrent environments without massive locking (which is a form of friction). This is strictly related to the CAP Theorem, which I discussed a few months ago, but I'll come back to it in the future.
Just like we often try (through hard-won experience) to minimize redundancy in artifacts (the "keep it DRY" principle), we have learnt to minimize redundancy in data as well (through normalization, see below), but also to somehow control redundancy through design patterns like the Observer. I'll get back to patterns in the context of entanglement in a future post.
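
As a minimal illustration of that last point (my example, not taken from any library), here is redundancy controlled through the Observer pattern: the cached copy is entangled with the source, and the notification is the mechanism that restores consistency after each change.

import java.util.ArrayList;
import java.util.List;

class Source {
    private final List<Runnable> observers = new ArrayList<>();
    private int value;

    void subscribe(Runnable observer) { observers.add(observer); }

    void set(int v) {
        value = v;                          // the entangled information has changed...
        observers.forEach(Runnable::run);   // ...so every redundant copy must be told
    }

    int get() { return value; }
}

class CachedCopy {
    private int copy;

    CachedCopy(Source source) {
        source.subscribe(() -> copy = source.get());  // keep the redundant copy aligned
        copy = source.get();
    }

    int get() { return copy; }
}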

  • Referential Integrity
This is perhaps less obvious, but easy to understand. Referential integrity, as known within the relational database culture, requires (quoting Wikipedia) that any field in a table that is declared a foreign key can contain only values from a parent table's primary key or a candidate key. That means you cannot remove some records unless you update others, or that you cannot create some records unless some other is in place. That's a form of entanglement, and now we can start to grasp another concept: the idea of a transaction is born out of the need to allow intermediate micro-states that do not respect the entanglement constraint, while keeping those micro-states invisible. As far as the external observer is concerned, time is quantized, and transactions are the ticks.
Interestingly, database normalization is all (just? :-) about imposing a specific form on data entanglement. Consider a database that is not in second normal form; I'll use the same example as the Wikipedia page for simplicity. Several records contain the same information (work location). If we change the work location for Jones, we have to update three records. This is entanglement through redundancy, of course. Normalization does not remove entanglement. What it does is impose a particular form on it: through foreign keys. That makes it easier to deal with entanglement within the relational paradigm, but it is not without consequences for modularity (which would require an entire post to explore). In a sense, this "standardization" of entanglement form could be seen as a form of dampening. Views, too, are a practical dampening tool (in capable hands). More on this in a future post.
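
Going back to referential integrity for a moment, here is a toy, in-memory illustration (mine; no real database implied): an order may only reference an existing customer, so removing a customer forces a change on its orders, and the removal method plays the role of a transaction, keeping the intermediate micro-state invisible.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class Store {
    private final Set<String> customers = new HashSet<>();
    private final Map<String, String> orderToCustomer = new HashMap<>();  // orderId -> customerId

    synchronized void addCustomer(String customerId) {
        customers.add(customerId);
    }

    synchronized void addOrder(String orderId, String customerId) {
        if (!customers.contains(customerId))
            throw new IllegalStateException("no such customer");  // entanglement enforced on creation
        orderToCustomer.put(orderId, customerId);
    }

    synchronized void removeCustomer(String customerId) {
        // cascade delete: within this "transaction" the constraint is briefly at risk,
        // but no external observer can see the intermediate state
        orderToCustomer.values().removeIf(customerId::equals);
        customers.remove(customerId);
    }
}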

  • Class Invariant
This, I must confess, was not obvious at all to recognize at first, which is weird in hindsight. A class invariant is a constraint on the internal state (data) of any object of that class. That means, of course, that those data cannot change independently. Therefore, they are entangled by definition. More precisely, a class invariant can often be stated as a set of predicates, and each predicate reveals some form of entanglement between two or more data members.
Curiously enough (or perhaps not :-), the invariant must hold at the beginning/end of externally observable methods, though it can be broken during the actual execution of those methods. That is to say, public methods play the role of transactions, making time quantized once again.
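
A tiny, made-up example of such a predicate: balance must always equal credits minus debits, so the three fields are entangled, and the public method breaks and then restores the invariant before returning.

class Account {
    private long credits;
    private long debits;
    private long balance;  // entangled with credits and debits by the invariant

    void add(long credit, long debit) {
        credits += credit;            // invariant temporarily broken here...
        debits += debit;              // ...and here...
        balance = credits - debits;   // ...and restored before the method returns
        assert invariant();           // checked when assertions are enabled
    }

    private boolean invariant() {
        return balance == credits - debits;
    }
}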

  • Object Lifetime
Although this can be seen as a special case of class invariant (just like a cascade delete is ascribed to referential integrity), I'd like to make this explicit, for reasons we'll understand better in a future post. Some objects cannot outlive others. Some objects cannot be created without others. Reusing the terminology of chapter 13, this is a D-D or C-C form of entanglement. The same form of entanglement can be found underneath several DB normalization concepts, and thanks to the RT/Artifact dualism, will turn out to be a useful concept in the artifact world as well.

Toward a concept of tangling through procedural knowledge
All the above, I guess, is pretty obvious in hindsight. It borrows heavily from the existing literature, which has always considered data as something that can be subject to constraints. The risk here is to fall into the "information as data" tar pit. Information is not data. Information includes procedural knowledge (see also my old posts Lost ... and Found? for more on the concept of Information and Alexandrian centers). Is there a concept of entanglement for procedural knowledge?

Consider this simple spreadsheet:

        A       B
1       10      20
2       200     240

That may look like plain data, but behind the curtain we have two formulas (procedural knowledge):

A2=A1*B1
B2=A2*1.2

In the spreadsheet above, the system is stable, balanced, or as I will call it, in a steady state. Change a value in A1 or B1, however, and procedural knowledge will kick in. After a short interval (during which the system is unbalanced, or as I'll call it, in a transient state), balance will be restored, and all data entangled through procedural knowledge will be updated.
A spreadsheet, of course, is the quintessential dataflow system, so it's also the natural bridge between knowledge-as-data and procedural knowledge. It should be rather obvious, however, that a similar reasoning can be applied to any kind of event-driven or reactive system. That kind of system sits idle (steady state) until something happens (new information entering the system and moving it to a transient state). That unleashes a chain of reactions, until balance has been restored.
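
Here is the same spreadsheet written as explicit code (a sketch of mine): changing an input moves the system into a transient state, and recompute() restores the steady state by replaying the two formulas.

class Sheet {
    double a1 = 10, b1 = 20;
    double a2, b2;

    Sheet() { recompute(); }

    void setA1(double v) { a1 = v; recompute(); }  // transient state begins...
    void setB1(double v) { b1 = v; recompute(); }

    private void recompute() {                     // ...and ends here: steady again
        a2 = a1 * b1;   // A2 = A1*B1
        b2 = a2 * 1.2;  // B2 = A2*1.2
    }
}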

Baby steps: now, what was the old-school view of objects (before the watering down took place)? Objects were like tiny little machines, communicating through messages. Right. So invoking a method on an object is not really different than sending the object a message. We're still upsetting its steady state, and the procedural knowledge inside the method is restoring balance (preserving the invariant, etc.)

So, what about the traditional, top-down, imperative batch system? It's actually quite straightforward now: invoking a function (even the infamous main function) means pushing some information inside the system. The system reacts by moving information around, creating new information, etc, until the output is ready (the new steady state). The glue between all that information is the procedural knowledge stored inside the system. Procedural knowledge is basically encoding the entanglement path for dynamic information.

Say that again?
This is beginning to look too much like philosophy, so let's move to code once again. In any C-like language, given this code:
int a2 = a1 * b1;
int b2 = a2 * 1.2;
int a3 = a1 + b1;
we're expecting the so-called control-flow to move forward, initializing a2, then b2, then a3. In practice, the compiler could reorder the statements and calculate a3 first, or in between, and even the processor could execute the two instructions "out of order" (see the concept of "observable behavior" in the C and C++ standard). What we are actually saying is that a2 can be calculated given a1 and b1, and how; that b2 can be calculated given a2, and how; and that a3 can be calculated given a1 and b1, and how. The "can be calculated given ..." is an abstract interpretation of the code. That abstract interpretation provides the entanglement graph for run-time entities. The "how" part, as it happens, is more germane to the function than to the form, and in this context, is somehow irrelevant.
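
To make this a little less abstract: the result of that abstract interpretation could be written down as a plain dependency map (the representation below is mine, not a proposed notation), where each entity is listed with the entities it can be calculated from, and the "how" is dropped.

import java.util.Map;
import java.util.Set;

class EntanglementGraph {
    // entity -> the entities it can be calculated from (its inputs)
    static final Map<String, Set<String>> DEPENDS_ON = Map.of(
        "a2", Set.of("a1", "b1"),   // int a2 = a1 * b1;
        "b2", Set.of("a2"),         // int b2 = a2 * 1.2;
        "a3", Set.of("a1", "b1")    // int a3 = a1 + b1;
    );
}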

What about the control-flow primitives, like iteration, choice, call, etc? Iteration and choice are nothing special: at run-time, a specific branch will be taken, for a specific number of times, resulting in a specific run-time entanglement between run-time entities. Call (procedure call, function call) is somewhat different, because it reveals the fractal nature of software: whatever happens inside the function is hidden (except via side effects) from the caller. The function will carry out its job, that is, will restore the fine-grained input-output balance upon invocation.

This is the general idea of procedural knowledge: given an input, control-flow goes on, until the program reaches a steady state, where output has been produced. As noted above, due to the fractal nature of information, this process takes place at different levels. A component instance may still be in a transient state while some object contained in that instance is already back to a steady state.

In this sense, a procedure (as an artifact) is how we teach a machine to calculate an output given an input, through the stepwise creation and destruction of entangled run-time information. Phew :-).

Next steps
The beauty of the Run-Time/Artifact dualism is that whatever we learn on one side, we can often apply on the other side (with the added benefit that many things are easier to "get" on one side or another).
Here, for instance, I've introduced the concept of steady and transient state for run-time information. The same reasoning can be applied to the artifact side. Whenever you create, update or delete an artifact A1, if there is any other artifact that is entangled with A1, the system goes into an unbalanced, transient state. You have to apply work (energy) to bring it back to the steady state, where all the entangled information is properly updated (balanced). A nice quote from Leonardo da Vinci could apply here: "Motion is created by the destruction of balance" (unfortunately, I can't find an authoritative source for the quote :-). Motion in the information space is created by unsettling the system (moving to a transient state) until entanglement is finally satisfied again, and a steady state is reached. This is as true in the artifact space as in the run-time space.
Overall, this has been more like a trampoline chapter, where a few notions are introduced for the benefit of forthcoming chapters. Here is a short list of topics that I'll try to explore at some point in the future:

- The truth about multiplicity, frequency, and attraction/rejection in the run-time world.
- As above, in the artifact world.
- Moving entanglement between run-time and artifacts (we do this a lot while we design).
- The Entanglement Diagram.
- A few examples from well-known Design Patterns.
- Cross-cutting concerns and Delocalized plans: relationships with entanglement.
- More on distance, entanglement, probability of failure, and the CAP theorem.
- Dampening (isolation). This will open a new macro-chapter in the Physics of Software series.

If you read this far, you should follow me on twitter, or perhaps even share this post (I dare you :-)

Wednesday, June 29, 2011

Cut the red wire!

We're all familiar with this scene: the good guys trying to save the day (or the world) by defusing a bomb. Quickly running out of time, they have to cut a wire. Failure is not an option. The red bar won't turn into a green bar.

What if you had to write code that works the very first time? What if you had no chance to test your code, let alone fix bugs? You've got only one chance to run your code and save the world. Would that change the way you write? The way you think? How? What can you learn from that?

Lest you think I'm kinda crazy: I'm not actually suggesting that you shouldn't test your code. I'm not actually suggesting that you stop doing your test-driven thing (if you do) or that you forget your test plan (if you have one), etc etc.
Consider this like a stretching exercise. We may learn something useful just by going outside our comfort zone. Give it a try.

Saturday, May 21, 2011

The CAP Theorem, the Memristor, and the Physics of Software

Where I present seemingly unrelated facts that are, in fact, not unrelated at all :-).

The CAP Theorem
If you keep current on technology, you're probably familiar with the proliferation of NoSQL, a large family of non-relational data stores. Most NoSQL stores have been designed for internet-scale applications, where large data stores are preferably deployed using an array of loosely connected low-cost servers. That brings a set of well-known issues to the table:

- we'd like data to be consistent (that is, to respect all the underlying constraints at any time, especially replication constraints). We call this property Consistency.

- we'd like fast response time, even when some server is down or under heavy load. We call this property Availability.

- we'd like to keep working even if the system gets partitioned (a connection between servers fails). We call this property Partition tolerance.

The CAP Theorem (formerly Brewer's Conjecture) says that we can only choose two properties out of three.

There is a lot of literature on the CAP Theorem, so I don't need to go in further details here: if you want a very readable paper covering the basics, you can go for Brewer's CAP Theorem by Julian Browne, while if you're more interested in a formal paper, you can refer to Brewer’s Conjecture and the Feasibility of Consistent, Available, Partition-Tolerant Web Services by Seth Gilbert and Nancy Lynch.

There is, however, a relatively subtle point that is often brought up and seldom clarified, so I'll give it a shot. Most people realize that there is some kind of "connection" between the concept of availability and the concept of partition. If a server is partitioned away, it is by definition not available. In that sense, it may seem like there is some kind of overlap between the two concepts, and overlapping concepts are bad (orthogonal concepts should be preferred). However, it is not so:

- Availability is an external property, something the clients of the data store can see. They either can get data when they need it, or they can't.

- Partitioning is an internal property, something clients are totally unaware of. The data store has to deal with that on the inside.

Of course, given the fractal nature of software, in a system of systems we may see lack of availability at one level, because of the lack of partition tolerance at a lower level (or vice-versa, as an unavailable node may not be distinguishable from a partitioned node).

To sum up, while the RDBMS world has usually approached the problem by choosing Consistency and Availability over Partition tolerance, the NoSQL world has often chosen Availability and Partition tolerance, leading to the now famous Eventually Consistent model.

The Memristor, or the Value of a Theory
As a kid, I spent quite a bit of time hacking electronic stuff. I used to scavenge old TV sets and the like, and recover resistors, capacitors, and inductors to use in very simple projects. Little did I know that, theoretically, there was a missing passive element: the memristor.

Leon Chua developed the theory in 1971. It took until 2008 before someone could finally create a working memristor, using nanoscale techniques that would have looked like science fiction in 1971. Still, the theory was there, long before, pointing toward the future. At the bottom of the theory was the realization that each of the three known passive elements could be defined by a relationship between two of four fundamental quantities (current, voltage, charge, and flux-linkage). By investigating all possible relationships, he found a missing element, one that was theoretically possible, but yet undiscovered. He used the term completeness to indicate this line of reasoning, although we often call this kind of property symmetry.

Software Entanglement
In Chapter 12 of my ongoing series on the nature of software, I introduced the concept of Entanglement, further discussed in Chapter 13 and Chapter 14 (for the artifact world).

Here, I'll reuse my early definition from Chapter 12: Two clusters of information are entangled when performing a change on one immediately requires a change on the other.

Entanglement is constantly at work in the artifact world: change a class name, and you'll have to fix the name everywhere; add a member to an enumeration, and you'll have to update any switch/case statement over the enumeration; etc. It's also constantly at work in the run-time world.

Although I haven't formally introduced entanglement for the run-time world (that would/will be the subject of the forthcoming Chapter 15), you can easily see that maintaining a database replica creates a run-time entanglement between data: update one, and the other must be immediately (atomically, as we usually say) updated. Perhaps slightly more subtle is the fact that referential integrity is strongly related to run-time entanglement as well. Consistency in the CAP theorem, therefore, is basically Entanglement satisfaction.

Once we understand that, it's perhaps easier to see that the CAP theorem applies well beyond our traditional definition of distributed system. Consider a multi-core CPU with independent L1 caches. Cache contents become entangled whenever they map to the same address. CPU designers usually go with CA, forgetting P. That makes a lot of sense, of course, because we're talking about in-chip connections.

That's sort of obvious, though. Things get more interesting when we start to consider the run-time/artifact symmetry. That's part of the value of a [good] theory.

A CAP Theorem for Artifacts
My perspective on the Physics of Software is strongly based on the run-time/artifact dualism. Most forces apply in both worlds, so it is just natural to bring ideas from one world to another, once we know how to translate concepts. Just like symmetry allowed Chua to conceive the memristor, symmetry may allow us to extend concepts from the run-time world to the artifact world (and vice-versa). Let's give the CAP theorem a shot, by moving each run-time concept to the corresponding notion in the artifact world:

Consistency: just like in the run-time world, it's about satisfying Entanglement. While change in the run-time world is a C/U/D action on data, here it is a C/U/D action on artifacts. If you add a member to an enumerated type, your artifacts are consistent when you have updated every portion of code that was enumerating over the extension of that type, etc.

Availability: just like in the run-time world, it's the ability to access a working, up-to-date system. If you can't deploy your new artifacts, your system is not available (as far as the artifact world is concerned). The old system may be up and running, but your new system is not available to the user.

Partition tolerance: just like in the run-time world, it means you want your system to be usable even if some changes can't be propagated. That is, some artifacts have been partitioned away, and cannot be reached. Therefore, some sort of partial deployment will take place.

Well, indeed, you can only have two :-). Consider the artifacts involved in the usual distributed system:

- one or more servers
- one or more clients
- a contract in between

The clients and the server (as artifacts – think source code not processes) are U/D entangled over the contract: change a method signature (in RPC speak), or the WSDL, or whatever represents your contract, and you have to change both to maintain consistency.

What happens if you update [the source code of] a server (say, to fix a serious bug), and by doing so you need to update the contract, but cannot update [the source code of] a client [timely]? This is just like a partition. Think of a distributed development team: the team working on the client, for one reason or another, has been partitioned away.

You only have two choices:

- let go of Availability: you won't deploy your new server until you can update the client. This is the artifact side of not having your database available until all the replicas are connected (in the run-time world).

- let go of Consistency: you'll have to live with an inconsistent client, using a different contract.

Consequences
Chances are that you:
- Shiver at the idea of letting go of Consistency.
- Think versioning may solve the problem.

Of course, versioning is a coping strategy, not a solution. In fact, versioning is usually proposed in the run-time world of data stores as well. However, versioning your contract is like pretending that the server has never been updated. It may or may not work (what if your previous contract had major security flaws that cannot be plugged unless you change the contract? What if you changed your server in a way that makes the old contract impossible to keep? etc). It also puts a significant burden on the development team (maintaining more source code than strictly needed) and may significantly slow down development, which is another way to say it's impacting availability (of artifacts).

It is important, however, that we thoroughly understand context. After all, any given function call is a contract, and we surely want to maintain consistency as much as we can. So, when does the CAP Theorem apply to Artifacts? Partition tolerance and Availability must be part of the equation. That requires one of the following conditions to apply:

- Independent development teams. The Facebook ecosystem, for instance, is definitely prone to the artifact side of the CAP Theorem, just like any other system offering a public API to the world at large. You just can't control the clients.

- A large system that cannot be updated in every single part (unless you accept to slow down deployment/forgo availability). A proliferation of clients (web based, rich, mobile, etc) may also bring you into partitioning problems.

If you work on a relatively small system, and you have control of all the clients, you're basically on safe ground – you can go with Consistency and Availability, because you just don't have significant partitions. Your development team is more like a multi-core CPU than a large distributed system.

Once we understand that versioning is basically a way to pretend changes never happened on the server side, is there something similar to Eventual Consistency for the artifact side?

TolerantReader may help. If you put extra effort into your clients, you can actually have a working system that will eventually be consistent (when the source code for the clients is finally updated) but still be functional during transition times, without delaying the release of server updates. Of course, anything that requires discipline on the client side is rather useless in a Facebook-like ecosystem, but it might be an excellent strategy for a large system where you control the clients.
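
For the sake of concreteness, here is a minimal sketch of what a TolerantReader-style client could look like, assuming the payload has already been parsed into a generic map (no specific wire format or library implied; the field names are made up): unknown fields are ignored, missing ones get a default, so the client keeps working while the contract drifts.

import java.util.Map;

record OrderView(String id, String status) {}

class TolerantOrderReader {
    OrderView read(Map<String, Object> payload) {
        // take only what we need, ignore anything we don't recognize
        String id = asString(payload.get("id"), "unknown");
        String status = asString(payload.get("status"), "pending");  // default if the field is missing or renamed
        return new OrderView(id, status);
    }

    private String asString(Object value, String fallback) {
        return value instanceof String s ? s : fallback;
    }
}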

Interestingly, Fowler talks about the virtues of TolerantReader as if it were always warranted for a distributed system. I tend to disagree: it makes sense only when some form of development-side partitioning can be expected. In many other cases, an enforced contract through XSD and code generation is exactly what we need to guarantee Consistency and Availability (because code generation helps to quickly align the syntactical bits – you still have to work on the semantic parts on both sides). Actually, the extra time required to create a TolerantReader will impact Availability in the short term (the server is ready, the client is not). Context, context, context. This is one more reason why I think we need a Physics of Software: without a clear understanding of forces and materials, it's too easy to slip into the fallacy that just because something worked very well for me [in some cases], it should work very well for you as well.

In fact, once you recognize the real forces, it's easy to understand that the problem is not limited to service-oriented software, XML contracts, and the like. You have the same problem between a database schema and a number of different clients. Whenever you have artifact entanglement and the potential for development partitioning (see above) you have to deal with the artifact-side CAP Theorem.

Moving beyond TolerantReader (which is not helping much when your client is sending the wrong data anyway :-), when development partitioning is expected, you need to think about a flexible contract from the very beginning. Facebook, for instance, allows clients to specify which fields they want to receive, but is still pretty rigid in the fields they have to send.

This is one more interesting R&D challenge that is not easily solved with today's technology. Curiously enough, in the physical world we have long understood the need to use soft materials at the interface level to compensate for imperfections and unexpected variations of contact surfaces. Look at any recent, thermally insulated window, and most likely you're gonna find a seal between the sash and the frame. Why are seals still needed in a world of high-precision manufacturing? ;-)

Conclusions
Time for a confession: when I first thought about all the above, I had already read quite a bit on the CAP Theorem, but not the original presentation from Eric Brewer where the conjecture (it wasn't a theorem back then) was first introduced. The presentation is about a larger topic: challenges in the development of large-scale distributed systems.

As I started to write this post, I decided to do my homework and go through Brewer's slides. He identified three issues: state distribution, consistency vs. availability, and boundaries. The first two are about the run-time world, and ultimately lead to the CAP theorem. While talking about boundaries, however, Brewer begins with run-time issues (independent failure) and then moves into the artifact world, talking (guess what!) about contracts, boundary evolution, XML, etc. Interestingly, nothing in the presentation suggests any relationship between the problems. Here, I think, is the value of a good theory: as with the memristor, it provides us with leverage, moving what we know to a higher level.

In fact, another pillar of the Physics of Software is the Decision Space. I'm pretty sure the CAP theorem can be applied in the decision space as well, but that will have to wait.

Well, if you liked this, stay tuned for Chapter 15 of my NOSD series, share this post, follow me on twitter, etc.

Wednesday, April 20, 2011

Your coding conventions are hurting you

If you take a walk in my hometown, and head for the center, you may stumble upon a building like this:



from a distance, it's a relatively pompous edifice, but as you get closer, you realize that the columns, the ornaments, everything is fake, carefully painted to look like the real thing. Even the shadows have been painted to deceive the eye. Here is another picture from the same neighborhood: from this angle, it's easier to see that everything has just been painted on a flat surface:



Curiously enough, as I wander through the architecture and code of many systems and libraries, I sometimes get the same feeling. From a distance, everything is object oriented, extra-cool, modern-flexible-etc, but as you get closer, you realize it's just a thin veneer over procedural thinking (and don't even get me started about being "modern").

Objects, as they were meant to be
Unlike other disciplines, software development shows little interest in its classics. Most people are more attracted by recent works. Who cares about some 20-year-old paper when you can play with Node.js? Still, if you never took the time to read The Early History of Smalltalk, I'd urge you to. Besides the chronicles of some of the most interesting times in hw/sw design ever, you'll get little gems like this: "The basic principal of recursive design is to make the parts have the same power as the whole. For the first time I thought of the whole as the entire computer and wondered why anyone would want to divide it up into weaker things called data structures and procedures. Why not divide it up into little computers, as time sharing was starting to? But not in dozens. Why not thousands of them, each simulating a useful structure?"

That's brilliant. It's a vision of objects like little virtual machines, offering specialized services. Objects were meant to be smart. Hide data, expose behavior. It's more than that: Alan is very explicit about the idea of methods as goals, something you want to happen, unconcerned about how it is going to happen.

Now, I'd be very tempted to write: "unfortunately, most so-called object-oriented code is not written that way", but I don't have to :-), because I can just quote Alan, from the same paper: "The last thing you wanted any programmer to do is mess with internal state even if presented figuratively. Instead, the objects should be presented as sites of higher level behaviors more appropriate for use as dynamic components. [...] It is unfortunate that much of what is called “object-oriented programming” today is simply old style programming with fancier constructs. Many programs are loaded with “assignment-style” operations now done by more expensive attached procedures."

That was 1993. Things haven't changed much since then, if not for the worse :-). Lots of programmers have learnt the mechanics of objects, and forgotten (or ignored) the underlying concepts. As you get closer to their code, you'll see procedural thinking oozing out. In many cases, you can see that just by looking at class names.


Names are a thinking device
Software development is about discovering and encoding knowledge. Now, humans have relatively few ways to encode knowledge: a fundamental strategy is to name things and concepts. Coming up with a good name is hard, yet programming requires us to devise names for:
- components
- namespaces / packages
- classes
- members (data and functions)
- parameters
- local variables
- etc
People are basically lazy, and in the end the compiler/interpreter doesn't care about our beautiful names, so why bother? Because finding good names is a journey of discovery. The names we choose shape the dictionary we use to talk and think about our software. If we can't find a good name, we obviously don't know enough about either the problem domain or the solution domain. Our code (or our model) is telling us something is wrong. Perhaps the metaphor we choose is not properly aligned with the problem we're trying to solve. Perhaps there are just a few misleading abstractions, leading us astray. Still, we better listen, because we are doing it wrong.

As usual, balance is key, and focus is a necessity, because the clock is ticking as we're thinking. I would suggest that you focus on class/interface names first. If you can't find a proper name for the class, try naming functions. Look at those functions. What is keeping them together? You can apply them to... that's the class name :-). Can't find it? Are you sure those functions belong together? Are you thinking in concepts or just slapping an implementation together? What is that code really doing? Think method-as-a-goal, class-as-a-virtual-machine. Sometimes, I've found it useful to think about the opposite concept, and move from there.

Fake OO names and harmful conventions
That's rather bread-and-butter, yet it's way more difficult than it seems, so people often tend to give up, think procedurally, and use procedural names as well. Unfortunately, procedural names have been institutionalized in patterns, libraries and coding conventions, therefore turning them into a major issue.

In practice, a few widely used conventions can seriously stifle object thinking:
- the -er suffix
- the -able suffix
- the -Object suffix
- the I- prefix

Of these, the I- prefix could be the most harmless in theory, except that in practice, it's not :-). Tons of ink have been wasted on the -er suffix, so I'll cover that part quickly, and move to the rest, with a few examples from widely used libraries.

Manager, Helper, Handler...
Good ol' Peter Coad used to say: Challenge any class name that ends in "-er" (e.g. Manager or Controller). If it has no parts, change the name of the class to what each object is managing. If it has parts, put as much work in the parts that the parts know enough to do themselves (that was the "er-er Principle"). That's central to object thinking, because when you need a Manager, it's often a sign that the Managed are just plain old data structures, and that the Manager is the smart procedure doing the real work.

When Peter wrote that (1993), the idea of a Helper class was mostly unheard of. But as more people jumped on the OOP bandwagon, they started creating larger and larger, uncohesive classes. The proper OO thing to do, of course, is to find the right cooperating, cohesive concepts. The lazy, fake OO thing to do is to take a bunch of methods, move them outside the overblown class X, and group them in XHelper. While doing so, you often have to weaken encapsulation in some way, because XHelper needs privileged access to X. Ouch. That's just painting an OO picture of classes over old-style coding. Sadly enough, in a Wikipedia article that I won't honor with a link, you'll read that "Helper Class is one of the basic programming techniques in object-oriented programming". My hope for humanity is only restored by the fact that the article is an orphan :-).

I'm not going to say much about Controller, because it's so popular today (MVC rulez :-) that it would take forever to clean this mess. Sad sad sad.

Handler, again, is an obvious resurrection of procedural thinking. What is a handler if not a damn procedure? Why does something need to be "handled" in the first place? Oh, I know, you're thinking of events, but even in that case, EventTarget, or even plain Target, is a much better abstraction than EventHandler.

Something-able
Set your time machine to 1995, and witness the (imaginary) conversation between naïve OO Developer #1, who for some reason has been assigned to design the library of the new wonderful-language-to-be, and naïve OO Developer #2, who's pairing with him along the way.
N1: So, I've got this Thread class, and it has a run() function that gets executed in that thread... you just have to override run()...
N2: So to execute a function in a new thread you have to extend Thread? That's bad design! Remember we can only extend one class!
N1: Right, so I'll use the Strategy Pattern here... move the run() to the strategy, and execute the strategy in the new thread.
N2: That's cool... how do you wanna call the strategy interface?
N1: Let's see... there is only one method... run()... hmmm
N2: Let's call it Runnable then!
N1: Yes! Runnable it is!

And so it began (no, I'm not serious; I don't know how it all began with the -able suffix; I'm making this up). Still, at some point people thought it was fine to look at an interface (or at a class), see if there was some kind of "main method" or "main responsibility" (which is kinda obvious if you only have one) and name the class after that. Which is a very simple way to avoid thinking, but it's hardly a good idea. It's like calling a nail "Hammerable", because you know, that's what you do with a nail, you hammer it. It encourages procedural thinking, and leads to ineffective abstractions.

Let's pair our naïve OO Developer #1 with an Object Thinker and replay the conversation:

N1: So, I've got this Thread class, and it has a run() function that gets executed in that thread... you just have to override run()...
OT: So to execute a function in a new thread you have to extend Thread? That's bad design! Remember we can only extend one class!
N1: Right, so I'll use the Strategy Pattern here... move the run() to the strategy, and execute the strategy in the new thread.
OT: OK, so what is the real abstraction behind the strategy? Don't just think about the mechanics of the pattern, think of what it really represents...
N1: Hmm, it's something that can be run...
OT: Or executed, or performed independently...
N1: Yes
OT: Like an Activity, with an Execute method, what do you think? [was our OT aware of the UML 0.8 draft? I still have a paper version :-)]
N1: But I don't see the relationship with a thread...
OT: And rightly so! Thread depends on Activity, but Activity is independent of Thread. It just represents something that can be executed. I may even have an ActivitySequence and it would just execute them all, sequentially. We could even add concepts like entering/exiting the Activity...
(rest of the conversation elided – it would point toward a better timeline :-)

Admittedly, some interfaces are hard to name. That's usually a sign that we don't really know what we're doing, and we're just coding our way out of a problem. Still, some others (like Runnable) are improperly named just because of bad habits and conventions. Watch out.
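
For the record, here is roughly where that better timeline could lead (the code is mine; only the names come from the conversation above): Activity is named after the concept, the thread-related code depends on it, and the same abstraction composes naturally outside any threading context.

interface Activity {
    void execute();
}

// the thread-related code depends on Activity; Activity knows nothing about threads
class Worker {
    private final Activity activity;

    Worker(Activity activity) { this.activity = activity; }

    void start() { new Thread(activity::execute).start(); }
}

// the same abstraction composes without any thread in sight
class ActivitySequence implements Activity {
    private final Activity[] steps;

    ActivitySequence(Activity... steps) { this.steps = steps; }

    public void execute() {
        for (Activity step : steps) step.execute();
    }
}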


Something-Object
This is similar to the above: when you don't know how to name something, pick some dominant trait and add Object to the end. Again, the problem is that the "dominant trait" is moving us away from the concept of an object as a virtual machine, and toward the object as a procedure. In other cases, Object is dropped in just to avoid more careful thinking about the underlying concept.

Just like the -able suffix, sometimes it's easy to fix, sometimes it's not. Let's try something not trivial, where the real concept gets obscured by adopting a bad naming convention. There are quite a few cases in the .NET framework, so I'll pick MarshalByRefObject.

If you don't use .NET, here is what the documentation has to say:
Enables access to objects across application domain boundaries in applications that support remoting
What?? Well, if you go down to the Remarks section, you get a better explanation:
Objects that do not inherit from MarshalByRefObject are implicitly marshal by value. […] The first time [...] a remote application domain accesses a MarshalByRefObject, a proxy is passed to the remote application
Whoa, that's a little better, except that it should be "are marshaled by value", not "are marshal by value"; but then again, the name should be MarshaledByRefObject, not MarshalByRefObject. Well, all your base are belong to us :-)

Now, we could just drop the Object part, fix the grammar, and call it MarshaledByReference, which is readable enough. A reasonable alternative could be MarshaledByProxy (Vs. MarshaledByCopy, which would be the default).
Still, we're talking more about implementation than about concepts. It's not that I want my object marshaled by proxy; actually, I don't care about marshaling at all. What I want is to keep a single object identity across appdomains, whereas with copy I would end up with distinct objects. So, a proper one-sentence definition could be:

Preserve object identity when passed between appdomains [by being proxied instead of copied]
or
Guarantees that methods invoked in remote appdomains are served in the original appdomain

Because if you pass such an instance across appdomains, any method call would be served by the original object, in the original appdomain. Hmm, guess what, we already have similar concepts. For instance, we have objects that, after being created in a thread, must have their methods executed only inside that thread. A process can be set to run only on one CPU/core. We can configure a load balancer so that if you land on a specific server first, you'll stay on that server for the rest of your session. We call that concept affinity.

So, a MarshalByRefObject is something with an appdomain affinity. The marshaling thing is just an implementation detail that makes that happen. AppDomainAffine, therefore, would be a more appropriate name. Unusual perhaps, but that's because of the common drift toward the mechanics of things and away from concepts (because the mechanics are usually much easier to get for techies). And yes, it takes more clarity of thought to come up with the notion of appdomain affinity than to just slap an Object at the end of an implementation detail. However, clarity of thought is exactly what I would expect from framework designers. While we are at it, I could also add that AppDomainAffine should be an attribute, not a concrete class with no fields and hardly any methods (!) like MarshalByRefObject. Perhaps I'm asking too much from those guys.
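Just to show what I mean, here is a tiny, purely hypothetical sketch (nothing like this exists in the framework; the runtime support is left to imagination):

using System;

// Hypothetical: the concept expressed as an attribute, not as a base class.
// A (nonexistent) runtime would look for this attribute and marshal
// instances by proxy, preserving identity across appdomains.
[AttributeUsage( AttributeTargets.Class | AttributeTargets.Interface )]
class AppDomainAffineAttribute : Attribute
{
}

// Usage: the class states a property of its instances, without inheriting
// any remoting mechanics and without burning its single base class.
[AppDomainAffine]
class SomeService
{
    // ...
}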


ISomething
I think this convention was somewhat concocted during the early COM days, when Hungarian was widely adopted inside Microsoft, and having ugly names was therefore the norm. Somehow, it was the only convention that survived the general clean up from COM to .NET. Pity :-).

Now, the problem is not that you have to type an I in front of things. It's not even that it makes names harder to read. And yes, I'll even concede that after a while, you'll find it useful, because it's easy to spot an interface just by looking at its name (which, of course, is relevant information when your platform doesn't allow multiple inheritance). The problem is that it's too easy to fall into the trap, and just take a concrete class name, put an I in front of it, and lo and behold, you've got an interface name. Sort of like calling a concept IDollar instead of Currency.

Case in point: say that you are looking for an abstraction of a container. It's not just a container, it's a special container. What makes it special is that you can access items by index (a more limited container would only allow sequential access). Well, here is the (imaginary :-) conversation between naïve OO Developer #1, who for unknown reasons has been assigned to the design of the base library for the newfangled language of the largest software vendor on the planet, and himself, because even naïve OO Developer #2 would have made things better:

N1: So I have this class, it's called List... it's pretty cool, because I just made it a generic, it's List<T> now!
N1: Hmm, the other container classes all derive from an interface... with wonderful names like IEnumerable (back to that in a moment). I need an interface for my list too! How do I call it?
N1: IListable is too long (thanks, really :-)))). What about IList? That's cool!
N1: Let me add an XML comment so that we can generate the help file...

IList<T> Interface
Represents a collection of objects that can be individually accessed by index.


So, say that you have another class, let's call it Array. Perhaps a SparseArray too. They both can be accessed by index. So Array IS-A IList, right? C'mon.

Replay the conversation, drop in our Object Thinker:

N1: So I have this class, it's called List... it's pretty cool, because I just made it a generic, it's List<T> now!
N1: Hmm, the other container classes all derive from an interface... with wonderful names like IEnumerable. I need an interface for my list too! How do I call it?
OT: What's special about List? What does it really add to the concept of enumeration? (I'll keep the illusion that "enumeration" is the right name, so as not to blow N1's mind)
N1: Well, the fundamental idea is that you can access items by index... or ask about the index of an element... or remove the element at a given index...
OT: so instead of sequential access you now have random access, as it's usually called in computer science?
N1: Yes...
OT: How about we call it RandomAccessContainer? It reads pretty well, like: a List IS-A RandomAccessContainer, an Array IS-A RandomAccessContainer, etc.
N1: Cool... except... can we put an I in front of it?
OT: Over my dead body.... hmm I mean, OK, but you know, in computer science, a List is usually thought of as a sequential access container, not a random access container. So I'll settle for the I prefix if you change List to something else.
N1: yeah, it used to be called ArrayList in the non-generic version...
OT: kiddo, do you think there was a reason for that?
N1: Oh... (spark of light)
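For the record, here is roughly where that conversation is heading (my own names, of course, not the actual .NET interfaces):

// The defining trait is access by index, so say it in the name.
interface RandomAccessContainer<T>
{
    T this[ int index ] { get; set; }
    int IndexOf( T item );
    void RemoveAt( int index );
    int Count { get; }
}

// Intended reading at the point of use (no I prefix needed):
//   an ArrayList IS-A RandomAccessContainer
//   an Array IS-A RandomAccessContainer
//   a SparseArray IS-A RandomAccessContainer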


Yes, give me the worst of both worlds!
Of course, given enough time, people will combine those two brilliant ideas, turn off their brain entirely, and create wonderful names like IEnumerable. Most .NET collections implement IEnumerable, or its generic descendant IEnumerable<T>: when a class implements IEnumerable, its instances can be used inside a foreach statement.

Indeed, IEnumerable is a perfect example of how bad naming habits thwart object thinking and, more generally, abstraction. Here is what the official documentation says about IEnumerable<T>:
Exposes the enumerator, which supports a simple iteration over a collection of a specified type.
So, if you subscribe to the idea that the main responsibility gives the -able part, and that the I prefix is mandatory, it's pretty obvious that IEnumerable is the name to choose.

Except that's just wrong. The right abstraction, the one that is completely hidden under the IEnumerable/IEnumerator pair, is the sequential access collection, or even better, a Sequence (a Sequence is more abstract than a Collection, as a Sequence can be calculated and not stored, see yield, continuations, etc). Think about what makes more sense when you read it out loud:

A List is an IEnumerable (what??)
A List is a Sequence (well, all right!)

Now, a Sequence (IEnumerable) in .NET is traversed ("enumerated") through an iterator-like interface called IEnumerator ("iterator" as in the iterator pattern, not like that thing they called iterator in .NET). Simple exercise: what is a better name than IEnumerator for something that can only move forward over a Sequence?
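If you want it spelled out, here is a rough sketch of the renaming I'm arguing for (hypothetical, not the real .NET interfaces; consider the name of the traversal type just my own attempt at the exercise above):

// A Sequence can be traversed forward; it may be stored or calculated.
interface Sequence<T>
{
    Traversal<T> StartTraversal();
}

// Forward-only traversal over a Sequence (a better name may well exist).
interface Traversal<T>
{
    bool MoveNext();
    T Current { get; }
}

// Reads well: a List IS-A Sequence, a computed range IS-A Sequence, etc.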

Is that all you got?
No, that's not enough! Given a little more time, someone is bound to come up with the worst of all worlds! What about an interface with:
an I prefix
an -able suffix
an Object somewhere in between

That's a challenging task, and strictly speaking, they failed it. They had to put -able in between, and Object at the end. But it's a pretty amazing name, fresh from the .NET Framework 4.0: IValidatableObject (I wouldn't be surprised to discover, inside their code, an IValidatableObjectManager to, you know, manage :-> those damn stupid validatable objects; that would really close the circle :-).

We can read the documentation for some hilarious time:
IValidatableObject Interface - Provides a way for an object to be invalidated.

Yes! That's what my objects really want! To be invalidated! C'mon :-)). I'll spare you the imaginary conversation and go straight to the point. Objects don't want to be invalidated. That's procedural thinking: "validation". Objects may be subject to constraints. Yahoo! Constraints. What about a Constraint class (to replace the Validation/Validator stuff, which is so damn procedural)? What about a Constrained (or, if you can't help it, IConstrained) interface, to replace IValidatableObject?
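Here is a back-of-the-envelope sketch of that idea (my own naming and shape, not something you'll find in the framework):

using System.Collections.Generic;

// An object may be subject to constraints; nobody gets "invalidated".
interface Constraint<T>
{
    bool IsSatisfiedBy( T subject );
    string Description { get; }
}

// The object declares its own constraints; any layer (UI, service, batch)
// can ask for them and check them, wherever checking makes sense.
interface Constrained<T>
{
    IEnumerable<Constraint<T>> Constraints { get; }
}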

Microsoft (and everyone else, for that matter): what about having a few more Object Thinkers in your class library teams? Oh, while we are at it, why don't you guys consider that we may want to check constraints in any given place, not just in the front end? Why not move all the constraint checking away from the UI or Service layers and make the whole thing available everywhere? It's pretty simple, trust me :-).

Bonus exercise: once you have Constraint and IConstrained, you need to check all those constraints when some event happens (like you receive a message on your service layer). Come up with a better name than ConstraintChecker, that is, something that is not ending in -er.

And miles to go...
There would be so much more to say about programming practices that hinder object thinking. Properties, for instance, are usually misused to expose internal state ("expressed figuratively" or not), instead of being just zero-th methods (henceforth, goals!). Maybe I'll cover that in another post.

An interesting question, for which I don't really have a good answer, is: could some coding convention promote object thinking? Saying "don't use the -er suffix" is not quite the same as saying "do this and that". Are your conventions moving you toward object thinking?

Note: for more on the -er suffix, see: Life without a controller, case 1.


If you read this far, you should follow me on twitter.

Sunday, February 27, 2011

Notes on Software Design, Chapter 14: the Enumeration Law

In my previous post on this series, I used the well-known Shape problem to explain polymorphism from the entanglement perspective. We've seen that moving from a C-like approach (with a ShapeType enum and a union of structures) to an OO approach (with a Shape interface implemented by concrete shape classes) creates new centers, altering the entanglement between previous centers.
This time, I'll explore the landscape of entanglement a little more, using simple, well-known problems. My aim is to provide an intuitive grasp of the entanglement concept before moving to a more formal definition. In the end, I'll come up with a simple Software Law (one of many). This chapter borrows extensively from the previous one, so if you didn't read chapter 13, it's probably better to do so before reading further.

Shapes, again
Forget about the Shape interface for a while. We now have two concrete classes: Triangle and Circle. They represent geometric shapes, this time without graphical responsibilities. A Triangle is defined by 3 points, a Circle by its center and radius. Given a Triangle and a Circle, we want to know the area of their intersection.
As usual, we have two problems to solve: the actual mathematical problem and the software design problem. If you're a mathematician, you may rightly think that solving the first (the function!) is important, and the latter is sort of secondary, if not irrelevant. As a software designer, however, I'll allow myself the luxury of ignoring the gory mathematical details, and tinker with form instead.

If you're working in a language (like C++) where functions exist [also] outside classes, a very reasonable form would be this:
double Intersection( const Triangle& t, const Circle& c )
{
// something here
}
Note that I'm not trying to solve a more general problem. First, I said "forget the Shape interface"; second, I may not know how to solve the general problem of shapes intersection. I just want to deal with a circle and a triangle, so this signature is simple and effective.

If you're working in a class-only language, you have several choices.

1) Make that an instance method, possibly changing an existing class.

2) Make that a static method, possibly changing an existing class.

3) If your language has open classes, or extension methods, etc, you can add that method to an existing class without changing it (as an artifact).

So, let's sort this out first. I hope you can feel the ugliness of placing that function inside Triangle or Circle, either as an instance or static method (that feeling could be made more formal by appealing to coupling, to the open/closed principle, or to entanglement itself).
So, let's say that we introduce a new class, and go for a static method. At that point it's kinda hard to find nice names for the class and the method (if you're thinking about ShapeHelper or TriangleHelper, you've been living far too long in ClassNameWasteLand, sorry :-). Having asked this question a few times in real life (while teaching basic OO thinking), I'll show you a relatively decent proposal:
class Intersection
{
    public static double Area( Triangle t, Circle c )
    {
        // something here
    }
}
It's quite readable on the calling site:
double a = Intersection.Area( t, c ) ;
Also, the class name is ok: a real noun, not something ending with -er / -or that would make Peter Coad (and me :-) shiver. An arguably better alternative would be to pass the parameters in a constructor and have a parameterless Area() function, but let's keep it like this for now.
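For completeness, this is the alternative I just mentioned (same placeholder body; whether it's actually better is debatable):

// Parameters in the constructor, parameterless Area().
class Intersection
{
    private readonly Triangle t;
    private readonly Circle c;

    public Intersection( Triangle t, Circle c )
    {
        this.t = t;
        this.c = c;
    }

    public double Area()
    {
        // something here
        return 0;
    }
}

// calling site: double a = new Intersection( t, c ).Area();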

Whatever you do, you end up with a method/function body that is deeply coupled with both Triangle and Circle. You need to get all their data to calculate the area of the intersection. Still, this doesn't feel wrong, does it? That's the method purpose. It's ok to manipulate center, radius, and vertexes. Weird, coupling is supposed to be bad, but it doesn't feel bad here.

Interlude
Let me change the problem setting for a short while, before I come back to our beloved shapes. You're now working on yet another payroll system. You got (guess what :-) a Person class:
class Person
{
    Place address;
    String firstName;
    String lastName;
    TelephoneNumber homePhone;
    // …
}
(yeah, I know, some would have written "Address address" :-)

You also have a Validate() member function (I'll simplify the problem to returning a bool, not an error message or set of error messages).
bool Validate()
{
    return
        address.IsValid() &&
        ValidateName( firstName ) &&
        ValidateName( lastName ) &&
        homePhone.IsValid() ;
}
Yikes! That code sucks! It sucks because it's asymmetric (due to the two String-typed members), but also because it's fragile. If I get in there and add "Email personalEmail ;" as a field, I'll have to update Validate as well (and more member functions too, and possibly a few callers). Hmmm. That's because I'm using my data members. Well, I'm using Triangle and Circle data members as well in the function above, which is supposed to be worse; here I'm only using my own data members. Why was that right, while Validate "feels" wrong? Don't use hand-waving explanations :-).

Back to Shapes
A very reasonable request would be to calculate the intersection of two triangles or two circles as well, so why don't we add those functions too:
class Intersection
{
    public static double Area( Triangle t, Circle c )
    { /* something here */ }
    public static double Area( Triangle t1, Triangle t2 )
    { /* something here */ }
    public static double Area( Circle c1, Circle c2 )
    { /* something here */ }
}
We need 3 distinct algorithms anyway, so anything more abstract than that may look suspicious. It makes sense to keep the three functions in the same class: the first function is already coupled with Triangle and Circle. Adding the other two doesn't make it any worse from the coupling perspective. Yet, this is wrong. It's a step toward the abyss. Add a Rectangle class, and you'll have to add 3 more member functions to Intersection.

Interestingly, we came to this point without introducing a single switch/case, actually without even a single "if" in our code. Even more interestingly, our former Intersection.Area was structurally similar to Person.Validate, yet relatively good. Our latter Intersection is not structurally similar to Person, yet it's facing a similar problem of instability.

Let's stop and think :-)
A few possible reactions to the above:

- I'm too demanding. You can do that and nothing terrible will happen.
Probably true: a small-scale software mess rarely kills. The same kind of problem, however, is often at the core of a large-scale mess.

- I'm throwing you a curve.
True, sort of. The intersection problem is a well-known double-dispatch issue that is not easily solved in most languages, but that's only part of the problem.

- It's just a matter of Open/Closed Principle.
Yeah, well, again, sort of. The O/C is more concerned with extension. In practice, you won't create a new Person class to add email. You'll change the existing one.

- I know a pattern / solution for that!
Me too :-). Sure, some acyclic variant of Visitor (or perhaps more sophisticated solutions) may help with shape intersection. A reflection+attribute-based approach to validation may help with Person (there's a minimal sketch of that idea right after this list). As usual, we should look at the moon, not at the finger. It's not the problem per se; it's not about finding an ad-hoc solution. It's more about understanding the common cause of a large set of problems.

- Ok, I want to know what is really going on here :-)
Great! Keep reading :-)
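Since I mentioned it above: here is the kind of reflection+attribute approach I'm alluding to for Person (a minimal sketch with my own names, glossing over the two raw String members, which would need a small wrapper type of their own; again, the point is the cause of the problem, not this particular cure):

using System;
using System.Linq;
using System.Reflection;

// Hypothetical marker: fields tagged with this are expected to expose IsValid().
[AttributeUsage( AttributeTargets.Field )]
class ValidatedAttribute : Attribute
{
}

static class Validator
{
    // Checks every [Validated] field through reflection: adding a field to
    // Person no longer requires touching this code (no enumeration by name).
    public static bool Validate( object subject )
    {
        return subject.GetType()
            .GetFields( BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic )
            .Where( f => f.IsDefined( typeof( ValidatedAttribute ), false ) )
            .All( f => IsValid( f.GetValue( subject ) ) );
    }

    private static bool IsValid( object member )
    {
        // naive: ask the member itself, e.g. Place.IsValid(), TelephoneNumber.IsValid()
        var method = member.GetType().GetMethod( "IsValid", Type.EmptyTypes );
        return method != null && (bool)method.Invoke( member, null );
    }
}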

C/D-U-U Entanglement
Everything that felt wrong here, including the problem from Chapter 13, that is:

- the ShapeType-based Draw function with a switch/case inside

- Person.Validate

- the latter Intersection class, dealing with more than one case

was indeed suffering from the same problem, namely an entanglement chain: C/D-U-U. In plain English: Creation or Deletion of something, leading to an Update of something else, leading to an Update of something else. I think that understanding the full chain, and not simply the U-U part, makes it easier to recognize the problem. Interestingly, the former Intersection class, with just one responsibility, was immune from C/D-U-U entanglement.

The chain is easier to follow on the Draw function in Chapter 13, so I'll start from there. ShapeType makes it easy to understand (and talk about) the problem, because it's both Nominative and Extensional. Every concept is named: the individual shape types and the set of those shape types (ShapeType itself). The set is also provided through its full extension, that is, by enumerating its values (that's actually why it's called an enumerated type). Also, every concrete shape is given a name (a struct where the concrete shape data are stored).

So, for the old C-like solutions, entanglement works (unsurprisingly) like this:

- There is a C/D-C/D entanglement between members of ShapeType and the corresponding structs. Creation or deletion of a ShapeType member requires a corresponding action on a struct.

- There is a C/D-U entanglement between members of ShapeType and ShapeType (this is obvious: any enumeration is C/D-U entangled with its constituents).

- There is U-U entanglement between ShapeType and Draw, or every other function that is enumerating over ShapeType (a switch/case being just a particular case of enumerating over names).
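If you don't have Chapter 13 fresh in mind, the C-like structure being discussed looks roughly like this (a compressed reconstruction, transliterated to C#; the original uses a union of structs, approximated here with a tagged class, and a free Draw function, wrapped here in a class only because the language demands it):

enum ShapeType { Circle, Triangle }

struct CircleData   { public double X, Y, Radius; }
struct TriangleData { public double X1, Y1, X2, Y2, X3, Y3; }

// Poor man's union: only one of the two members is meaningful, depending on Type.
class ShapeData
{
    public ShapeType Type;
    public CircleData Circle;
    public TriangleData Triangle;
}

static class Graphics
{
    // U-U entangled with ShapeType: the switch enumerates its members by name.
    public static void Draw( ShapeData s )
    {
        switch( s.Type )
        {
            case ShapeType.Circle:   /* draw using s.Circle */   break;
            case ShapeType.Triangle: /* draw using s.Triangle */ break;
        }
    }
}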

Consider now the first Intersection.Area. That function is enumerating over two sets: the set of Circle data, and the set of Triangle data. There is no switch/case involved: we're just using those data, one after another. That amounts to an enumeration of names. The function is C/D-U-U entangled with any circle and triangle data. We don't consider that to be a problem because we assume (quite reasonably) that those sets won't change. If I change my mind and represent a Circle using 3 points (I could), I'll have to change Area. It's just that I don't consider that change at all likely.

Consider now Person.Validate. Validate is enumerating over the set of Person data, using their names. It's not using a switch/case. It doesn't have to. Just using one name after another amounts to enumeration. Also, enumeration is not broken by a direct function call: moving portions of Validate into sub-functions wouldn't make any difference, which is why layering is ineffective here.
Here, Validate is U-U entangled with the (unnamed) set of Person data, or C/D-U-U entangled with any of its present or future data members. Unlike Circle or Triangle, we have all reasons to suspect the Person data to be unstable, that is, for C and/or D to occur. Therefore, Validate is unstable.

Finally, consider the latter Intersection class. While the individual Area functions are ok, the Intersection class is not. Once we bring in more than one intersection algorithm, we entangle the class with the set of shape types. Now, while in the trivial C-like design we had a visible, named entity representing the set of shapes (ShapeType), in the intersection problem I didn't provide such an artifact (on purpose). The fact that we're not naming it, however, does not make it any less real. It's more like we're dealing with an Intensional definition of the set itself (this would be a long story; I wrote an entire post about it, then decided to scrap it). Intersection is C/D-U-U entangled with individual shape types, and that makes it unstable.

A Software Law
One of the slides in my Physics of Software reads "software has a nature" (yeah, I need to update those slides with tons of new material). In natural sciences, we have the concept of a Physical law, which despite the name doesn't have to be about physics :-). In physics, we do have quite a few interesting laws, not necessarily expressed through complex formulas. The Laws of thermodynamics, for instance, can be stated in plain English.

When it comes to software, you can find several "laws of software" around. However, most of them are about process (like Conway's law, Brooks' law, etc), and just a few (like Amdahl's law) are really about software. Some are just principles (like the Law of Demeter) and not true laws. Here, however, we have an opportunity to formulate a meaningful law (one of many yet to be discovered and/or documented):

- The Enumeration Law
every artifact (a function, a class, whatever) that is enumerating over the members of a set by name is U-U entangled with that set; or, said otherwise, C/D-U-U entangled with any member of the set.

This is not a design principle. It is not something you must aim for. It's a law. It's always true. If you don't like the consequences, you have to change the shape of your software so that the law no longer applies, or so that the entanglement is harmless because the set is stable.

This law is not earth-shattering. Well, the laws of thermodynamics don't look earth-shattering either, but that doesn't make them any less important :-). Honestly, I don't know about you, but I wish someone taught me software design this way. This is basically what is keeping me going :-).

Consequences
In Chapter 12, I warned against the natural temptation to classify entanglement on a good/bad scale, as has been done for coupling and even for connascence. For instance, "connascence of name" is considered the best, or least dangerous, form. Yet we have just seen that entanglement with a set of names is at the root of many problems (or not, depending on the stability of that name set).

Forces are not good or evil. They're just there. We must be able to recognize them, understand how they're influencing our material, and deal with them accordingly. For instance, there are well-known approaches to mitigate dependency on names: indirection (through function pointers, polymorphism, etc) may help in some cases; double indirection (like in double-dispatch) is required in other cases; reflection would help in others (like Validate). All those techniques share a common effect, that we'll discuss in a future post.

Conclusions
Speaking of the future: recently, I've been exploring different ways to represent entanglement, as I wasn't satisfied with my forcefield diagram. I'll probably introduce an "entanglement diagram" in my next post.

A final note: visit counters, sharing counters and general feedback tell me that this stuff is not really popular. My previous rant on design literature had way more visitors than any of my posts on the Physics of Software, and was far easier to write :-). If you're not one of the usual suspects (the handful of good guys who are already sharing this stuff around), consider spreading the word a little. You don't have to be a hero and tell a thousand people. Telling the guy in front of you is good enough :-).

Thursday, February 03, 2011

Is Software Design Literature Dead?

Sometimes, my clients ask me what to read about software design. More often than not, they don't want a list of books – many already have the classics covered. They would like to read papers, perhaps recent works, to further advance their understanding of software design, or perhaps something inspirational, to get new ideas and concepts. I have to confess I'm often at a loss for suggestions.

The sense of discomfort gets even worse when they ask me what happened to the kind of publications they used to read in the late '90s. Stuff like the C++ Report, the Journal of Object Oriented Programming, Java Report, Object Expert, etc. Most don't even know about the Journal of Object Technology, but honestly, the JOT has taken a strong academic slant lately, and I'm not sure they would find it all that interesting. I usually suggest they keep current on design patterns: for instance, the complete EuroPLoP 2009 proceedings are available online, for free. However, mentioning patterns sometimes furthers just another question: what happened to the pattern movement? Keep that conversation going for a while, and you get the final question: so, is software design literature dead?

Is it?
Note that the question is not about software design - it's about software design literature. Interestingly, Martin Fowler wrote about the other side of the story (Is Design Dead?) back in 2004. He argued that design wasn't dead, but its nature had changed (I could argue that if you change the nature of something, then it's perhaps inappropriate to keep using the same name :-), but ok). So perhaps software design literature isn't dead either, and has just changed nature: after all, looking for "software design" on Google yields 3,240,000 results.
Trying to classify what I usually find under the "software design" chapter, I came up with this list:

- Literature on software design philosophy, hopefully with some practical applications. Early works on Information Hiding were as much about a philosophy of software design as about practical ways to structure our software. There were a lot of philosophical papers on OOP and AOP as well. Usually, you get this kind of literature when some new idea is being proposed (like my Physics of Software :-). It's natural to see less and less philosophy in a maturing discipline, so perhaps a dearth of philosophical literature is not a bad sign.

- Literature on notations, like UML or SysML. This is not really software design literature, unless the rationale for the notation is discussed.

- Academic literature on metrics and the like. This would be interesting if those metrics addressed real design questions, but in practice, everything is always observed through the rather narrow perspective of correlation with defects or cost or something appealing for manager$. In most cases, this literature is not really about software design, and is definitely not coming from people with a strong software design background (of course, they would disagree on that :-)

- Literature on software design principles and heuristics, or refactoring techniques. We still see some of this, but it's mostly a rehashing of the same old stuff from the late '90s. The SOLID principles, the Law of Demeter, etc. In most cases, papers say nothing new, are based on toy problems, and are there just because many programmers won't read anything published more than a few months ago. If this is what's keeping software design literature alive, let's pull the plug.

- Literature on methods, like Design by Contract, TDD, Domain-Driven Design, etc. Here you find the occasional must-read work (usually a book from those who actually invented the approach), but just like philosophy, you see less and less of this literature in a maturing discipline. Then you find tons of advocacy for (or against) methods, which would be more interesting if it involved real experiments (like the ones performed by Simula labs) and not just more toy projects in the capable hands of graduate students. Besides, experiments on design methods should be evaluated [mostly] by the cost of change over the following months/years, not exclusively by design principles. Advocacy may seem to keep literature alive, but it's just noise.

- Literature on platform-specific architectures. There is no dearth of that. From early EJB treatises to JBoss tutorials, you can find tons of papers on "enterprise architecture". Even Microsoft has some architectural literature on application blocks and stuff like that. Honestly, in most cases it looks more like Markitecture (marketing architecture as defined by Hohmann), promoting canned solutions and proposing that you adapt your problem to the architecture, which is sort of the opposite of real software design. The best works usually fall under the "design patterns" chapter (see below).

- Literature for architecture astronauts. This is a nice venue for both academics and vendors. You usually see heavily layered architectures using any possible standards and still proposing a few more, with all the right (and wrong) acronyms inside, and after a while you learn to turn quickly to the next page. It's not unusual to find papers appealing to money-saving managers who don't "get" software, proposing yet another combination of blueprint architectures and MDA tools so that you can "generate all your code" with a mouse click. Yeah, sure, whatever.

- Literature on design patterns. Most likely, this is what's keeping software design literature alive. A well-written pattern is about a real problem, the forces shaping the context, an effective solution, its consequences, etc. This is what software design is about, in practice. On the other hand, a couple of conferences every year can't keep design literature in perfect health.

- Literature on design in the context of agility (mostly about TDD). There is a lot of this on the net. Unfortunately, it's mostly about trivial problems, and even more unfortunately, there is rarely any discussion about the quality of the design itself (as if having tests were enough to declare the design "good"). The largest issue here is that it's basically impossible to talk about the design of anything non-trivial strictly from a code-based perspective. Note: I'm not saying that you can't design using only code as your material. I'm saying that when the problem scales slightly beyond the toy level, the amount of code you would have to write and show to talk about design and design alternatives grows beyond the manageable. So, while code-centric practices are not killing design, they are nailing the coffin shut on design literature.

Geez, I am usually an optimist :-)), but this picture is bleak. I could almost paraphrase Richard Gabriel (of "Objects have failed" fame) and say that evidently, software design has failed to speak the truth, and therefore, software design narrative (literature) is dying. But that would not be me. I'm more like the opportunity guy. Perhaps we just need a different kind of literature on software design.

Something's missing
If you walk through the aisles of a large bookstore, and go to the "design & architecture" shelves, you'll find all the literary kinds above, except about the design of physical objects (from chairs to buildings) rather than software. Interestingly, you can also find a few more kinds, like:

- Anthologies (presenting the work of several designers) and monographs on the work of famous designers. We are at a loss here, because software design is not immediately visible, is not interesting for the general public, is often kept as a trade secret, etc.
I know only one attempt to come up with something similar for software: "Beautiful Architecture" by Diomidis Spinellis. It's a nice book, but it suffers from a lack of depth. I enjoyed reading it, but didn't come back with a new perspective on something, which is what I would be looking for in this kind of literature.
It would be interesting, for instance, to read more about the architecture of successful open-source projects, not from a fanboy perspective but through the eyes of experienced software designers. Any takers?

- Idea books. These may look like anthologies, but the focus is different. While an anthology is often associated with a discussion or critique of the designer's style, an idea book presents several different objects, often out of context, as a sort of creative stimulus. I don't know of anything similar for software design, though I've seen many similar books for "web design" (basically fancy web pages). In a sense, some literature on patterns comes close to being inspirational; at least, good domain-specific patterns sometimes do. But an idea book (or idea paper) would look different.

I guess anthologies and monographs are at odds with the software culture at large, with its focus on the whizz-bang technology of the day, its scant interest in the past, and its often religious fervor about certain products. But idea books (or most likely idea papers) could work, somehow.

Indeed, I would like to see more software design literature organized as follows:

- A real-world, or at least realistic problem is presented.

- Forces are discussed. Ideally, real-world forces.

- Possibly, a subset of the whole problem is selected. Real-world problems are too big to be dissected in a paper (or two, or three). You can't discuss the detailed design of a business system in a paper (lest you appeal only to architecture astronauts). You could, however, discuss a selected, challenging portion. Ideally, the problem (or the subset), or perhaps just the approach / solution, should be of some interest even outside the specific domain. Just because your problem arose in a deeply embedded environment doesn't mean I can't learn something useful for an enterprise web application (assuming I'm open minded, of course :-). Idea books / papers should trigger some lateral thinking in the reader, therefore unusual, provocative solutions would be great (unlike literature on patterns, where you are expected to report on well-known, sort of "traditional" solutions).

- Practical solutions are presented and scrutinized. I don't really care if you use code, UML, words, gestures, whatever. Still, I want some depth of scrutiny. I want to see more than one option discussed. I don't need working code. I'm probably working on a different language or platform, a different domain, with different constraints. I want fresh design ideas and new perspectives.

- In practice, a good design aims at keeping the cost of change low. This is why a real-world problem is preferable. Talking about the most likely changes and how they could be addressed in different scenarios beats babbling about tests and SOLID and the like. However, "most likely changes" is meaningless if the problem is not real.

Funny enough, there would be no natural place to publish something like that, except your own personal page. Sure, maybe JOT, maybe not. Maybe IEEE Software, maybe not. Maybe some conference, maybe not. But we don't have a software design journal with a large pool of readers. This is part of the problem, of course, but also a consequence. Wrong feedback loop :-).

Interestingly, I proposed something similar years ago, when I was part of the editorial board of a software magazine. It never made a dent. I remember that a well-known author argued against the idea, on the basis that people were not interested in all this talking about ins-and-outs, design alternatives and stuff. They would rather have a single, comprehensive design presented that they could immediately use, preferably with some source code. Well, that's entirely possible; indeed, I don't really know how many software practitioners would be interested in this kind of literature. Sure, I can bet a few of you guys would be, but overall, perhaps just a small minority is looking for inspiration, and most are just looking for canned solutions.

Or maybe something else...
On the other hand, perhaps this style is just too stuck in the '90s to be appealing in 2011. It's unidirectional (author to readers), it's not "social", it's not fun. Maybe a design challenge would be more appealing. Or perhaps the media are obsolete, and we should move away from text and graphics and toward (e.g.) videos. I actually tried to watch some code kata videos; the guys thought they were like this, but to an experienced designer they looked more like this (and not even that funny). Maybe the next generation of software designers will come up with a better narrative style or media.

Do something!
I'm usually the "let's do something" kind of guy, and I've entertained the idea of starting a Software Design Gallery, or a Software Design Idea Book, or something, but honestly, it's a damn lot of work, and I'm a bit skeptical about the market size (even for a free book/paper/website/whatever). As usual, any feedback is welcome, and any action on your part even more :-)