Friday, March 12, 2010

Where is your Knowledge?

Software development is a process of knowledge acquisition and knowledge encoding (see Phillip Armour, copiously quoted in this blog). Where, and how, do we store that knowledge? In several places, in several ways:

- In source code: that's executable knowledge
- In models: that's formal knowledge
- In other kinds of documents: that's written knowledge
- In our brain, consciously: that's explicit knowledge
- In our brain, unconsciously: that's tacit knowledge

Knowledge stored in source code has the extremely useful property of being executable, but we can't store all of our development knowledge in executable statements. Design Rationale, for instance, is not present in code (and not even in most UML diagrams, for that matter), and is basically stored at the conscious/unconscious level. My forcefield diagram is much better at formally capturing rationale.

Explicit knowledge is often passed on as oral tradition, while tacit knowledge is often passed on as "a way of doing things", just by working together. Pair programming, reviews, joint design sessions (and so on) help distribute both explicit and tacit knowledge.

Knowledge has value, but that value is not constant over time. In 1962, Fritz Machlup came up with the concept of the half-life of knowledge: the amount of time that has to elapse before half of the knowledge in a particular area is superseded or shown to be untrue.

Moreover, the initial value of a particular piece of knowledge can be very high, like a new algorithm that took you years to get right, or very low, like a trivial validation rule.

Recently, I began to think about the half-life of our knowledge repositories as well. With my usual boldness, I'll go ahead and define the Half-Life of a Knowledge Repository: the amount of time that has to elapse before half of the knowledge in a repository is unrecoverable or just too costly to recover. I could even define "too costly" as "higher than the discounted value of that knowledge when lookup is attempted".
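Just to make the tradeoff concrete, here is one possible formalization; the exponential decay model and the symbols are my own simplifying assumptions, not something you'll find in Machlup:

```latex
% V_0: initial value of a piece of knowledge; h: its half-life;
% C(t): cost of recovering it at time t.
% The exponential decay model is an assumption, used for illustration.
V(t) = V_0 \cdot 2^{-t/h}
\qquad \text{worth recovering at time } t \iff C(t) < V(t)
```

Under this reading, "too costly to recover" simply means that C(t) has grown past the decayed value V(t).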

The concept of recoverable knowledge is slightly deeper than it may seem. Sure, it covers the obvious problems of losing knowledge for lack of backup procedures, or because it's stored in a proprietary format no longer supported, and so on. But it also covers several interesting cases:

- the knowledge is in the brain of an employee, who leaves the company
- the knowledge is in source code, but it's in an obsolete language
- the knowledge is in source code, but it's extremely hard to read
- etc.

I'll leave it up to you to define the half-life of source code, models, documents, and the brain (conscious and unconscious). Of course, more details are needed: niche languages, for instance, tend to have a shorter half-life.

Now, here is the real boon :-). We can combine the concept of Knowledge Half-Life, Knowledge Value, and Knowledge Repository Half-Life to map the risk of storing a particular piece of knowledge in a particular repository (only). Here is my first-cut map:

| Knowledge Half-Life | Knowledge (initial) Value | Repository Half-Life | Result |
|---------------------|---------------------------|----------------------|--------------|
| Long                | High                      | Long                 | OK           |
| Long                | High                      | Short                | Risk         |
| Long                | Low                       | Long                 | Little Waste |
| Long                | Low                       | Short                | Little Risk  |
| Short               | High                      | Long                 | Little Waste |
| Short               | High                      | Short                | Little Risk  |
| Short               | Low                       | Long                 | Waste        |
| Short               | Low                       | Short                | OK           |

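Since the map itself is a small piece of knowledge, we might as well store it as executable knowledge :-). A minimal sketch follows; the enum and function names are mine, invented for illustration:

```cpp
// A sketch of the map above as executable knowledge.
// All names here are invented for illustration.
#include <iostream>
#include <string>

enum class HalfLife { Short, Long };
enum class Value { Low, High };

// Returns the Result column for a given combination of
// knowledge half-life, initial value, and repository half-life.
std::string storageOutcome(HalfLife knowledge, Value value, HalfLife repository) {
    if (knowledge == HalfLife::Long && repository == HalfLife::Short)
        return value == Value::High ? "Risk" : "Little Risk";
    if (knowledge == HalfLife::Long && repository == HalfLife::Long)
        return value == Value::High ? "OK" : "Little Waste";
    if (knowledge == HalfLife::Short && repository == HalfLife::Long)
        return value == Value::High ? "Little Waste" : "Waste";
    // short knowledge half-life, short repository half-life
    return value == Value::High ? "Little Risk" : "OK";
}

int main() {
    // Long-lived, valuable knowledge in a short-lived repository: Risk
    std::cout << storageOutcome(HalfLife::Long, Value::High, HalfLife::Short) << "\n";
    return 0;
}
```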

It's interesting to review one of the values in the Agile Manifesto (working software over comprehensive documentation) from this perspective.

Let's say we have a piece of knowledge, and that knowledge can indeed be stored in code (as I said, you can't store everything in code).

If the half-life of the knowledge is short, storing it in code only is probably the most economical choice. If the half-life of the knowledge is long, we have to worry a little more. If we add relevant unit tests to that piece of code, we increase the repository half-life, as they make it easier to recover knowledge from code. If we use a mainstream language, we can also increase the repository half-life.
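Here is a minimal sketch of what I mean, assuming a trivial validation rule (the rule, the names, and the rationale in the comments are all invented for illustration): the tests preserve not only the expected behavior, but a little of the reasoning behind it as well.

```cpp
// A hedged sketch: the order-code format and the rationale in the comments
// are invented for illustration, not taken from a real system.
#include <cassert>
#include <cctype>
#include <string>

// The rule: order codes are 8 characters, "XX-NNNNN"
// (two uppercase letters, a dash, five digits).
bool isValidOrderCode(const std::string& code) {
    if (code.size() != 8 || code[2] != '-') return false;
    if (!std::isupper(static_cast<unsigned char>(code[0])) ||
        !std::isupper(static_cast<unsigned char>(code[1]))) return false;
    for (std::size_t i = 3; i < 8; ++i)
        if (!std::isdigit(static_cast<unsigned char>(code[i]))) return false;
    return true;
}

int main() {
    assert(isValidOrderCode("AB-12345"));
    assert(!isValidOrderCode("ab-12345")); // lowercase prefixes: a retired legacy format
    assert(!isValidOrderCode("AB-1234"));  // length matters: leading zeros are significant
    assert(!isValidOrderCode("AB_12345")); // the separator must be a dash
    return 0;
}
```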

This may still not be enough. If you had to recover the entire knowledge stored in a non-trivial piece of code (say, an MP4 codec) having only the source code, and no (comprehensive) documentation on what that piece of code is doing, why, and how, it would take you far too long. The half-life of code is shorter than the half-life of code + documents.
Actually, depending on context, given the choice to have just the code and nothing else, or just comprehensive documentation and nothing else, we had better be careful about what we choose (when knowledge half-life is long, of course).

Of course, the opposite is also true: if you store knowledge with short half-life outside code, you seriously risk wasting your time.

I've often been critical of teaching and applying principles and techniques without the necessary context. I hope that somehow, the table above and the underlying concepts can move our understanding of when to use what a little further.

Monday, March 08, 2010

Why you should learn AOP

A few days ago, I spent some time reading a critique of AOP (The Paradoxical Success of Aspect-Oriented Programming by Friedrich Steimann). As often happens, I felt compelled to read some of the bibliographical references too, which took me a little more (weekend) time.

Overall, in the last few years I've devoted quite some time to learning, thinking, and even writing a little about AOP. I'm well aware of the problems Steimann describes, and I share some skepticism about the viability of the AOP paradigm as we know it.

Too much literature, for instance, is focused on a small set of pervasive concerns like logging. I believe that as we move toward higher-level concerns, we must make a clear distinction between pervasive concerns and cross-cutting concerns. A concern can be cross-cutting without being pervasive, and in this sense, for instance, I don't really agree that AOP is not for singletons (see my old post Some notes on AOP).
Also, I wouldn't dismiss the distinction between spectators and assistants so easily, especially because many pervasive concerns can be modeled as spectators. Overall, the paradigm seems indeed a little immature when you look at the long-term maintenance effects of aspects as they're known today.

Still, I think the time I've spent pondering on AOP was truly well spent. Actually, I would suggest that you spend some time learning about AOP too, even if you're not planning to use AOP in the foreseeable future.

I don't really mean learning a specific language - unless you want/need to try out a few things. I mean learning the concepts, the AOP perspective, the AOP terminology, the effects and side-effects of an Aspect Oriented solution.

I'm suggesting that you learn all that despite the obvious (or perhaps not so obvious) deficiencies in the current approaches and languages, the excessive hype and the underdeveloped concepts. I'm suggesting that you learn all that because it will make you a better designer.

Why? Because it will expand your mind. It will add a new, alternative perspective through which you can look at your problems. New questions to ask. New concepts. New names. Sometimes, all we need is a name. A beacon in the brainstorm, and a steady hand.

As I've said many times now, as designers we're shaping software. We can choose many shapes, and ideally, we will find a shape that is in frictionless contact with the forcefield. Any given paradigm will suggest a set of privileged shapes, at macro and micro-level. Including the aspect-oriented paradigm in your thinking will expand the set of shapes you can apply and conceive.

Time for a short war story :-). In the past months I've been thinking a lot about some issues in a large CAD system. While shaping a solution, I'm constantly getting back to what I could call aspect-thinking. There are many cross-cutting concerns to be resolved. Not programming-level concerns (like the usual, boring logging stuff). Full-fledged application-domain concerns, that tend to cross-cut the principal decomposition.

Now, you see, even thinking "principal decomposition" and "cross-cutting" is making your first step into aspect-thinking. Then you can think about ways to bring those concerns inside the principal decomposition (if appropriate and/or possible and/or convenient) or think about the best way to keep them outside without code-level tangling. Tangling. Another interesting name, another interesting concept.

Sure, if you ain't using true AOP (for instance, we're using plain old C++), you'll have to give up some obliviousness (another name, another concept!), but it can be done, and it works fine (for a small-scale example, see part 1 and part 2 of my "Can AOP inform OOP?").
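Just to give a flavor of it, here is a minimal sketch (not necessarily the technique from those posts; all the names are invented): the principal decomposition exposes an explicit, aspect-like hook, and the cross-cutting concern stays untangled, at the price of some obliviousness.

```cpp
// A hedged sketch of aspect-thinking in plain C++: Document (the principal
// decomposition) knows only that hooks exist, not what they do; the
// cross-cutting concern (change tracking) lives entirely in ChangeTracker.
#include <cstddef>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

class Document {
public:
    using Hook = std::function<void(const std::string&)>;
    // Explicit join point registration: this is where obliviousness is given up.
    void attach(Hook h) { hooks_.push_back(std::move(h)); }
    void rename(const std::string& name) {
        name_ = name;
        notify("rename");
    }
private:
    void notify(const std::string& event) {
        for (auto& h : hooks_) h(event);
    }
    std::string name_;
    std::vector<Hook> hooks_;
};

// The concern is kept outside the principal decomposition, without tangling.
class ChangeTracker {
public:
    void watch(Document& d) {
        d.attach([this](const std::string& e) { events_.push_back(e); });
    }
    std::size_t changes() const { return events_.size(); }
private:
    std::vector<std::string> events_;
};

int main() {
    Document doc;
    ChangeTracker tracker;
    tracker.watch(doc);
    doc.rename("draft-2");
    std::cout << tracker.changes() << " change(s) tracked\n";
    return 0;
}
```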

So far, the candidate shape is causing some discomfort. That's reasonable. It's not a "traditional" solution. Which is fine, because so far, tradition hasn't worked so well :-). Somehow, I hope the team will get out of this experience with a new mindset. Nobody used to talk about "principal decomposition" or "cross-cutting concern" in the company. And you can't control what you can't name.

I hope they will gradually internalize the new concepts, as well as the tactics we can use inside traditional languages. That would be a major accomplishment. Much more important than the design we're creating, or the tons of code we'll be writing. Well, we'll see...