I can barely remember the days when objects were seen as a new, shiny, promising technology. Today, objects are often positioned somewhere between mainstream and retro, while the functional paradigm is enjoying an interesting renaissance. Still, in the last few months I stumbled on a couple of blog posts asking the quintessential question, reminiscent of those dark old days: “what is an object?”
The most recent (September 2012) is a pointer to a stripped-down definition provided by Brian Marick: “a clump of name->value mappings, some functions that take such clumps as their first arguments, and a dispatch function that decides which function the programmer meant to call”.
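Just to make that definition concrete, here is a tiny sketch in Python (all the names, like make_point and send, are mine, purely for illustration):

```python
def point_move(p, dx, dy):
    # a function taking the "clump" as its first argument
    return {**p, "x": p["x"] + dx, "y": p["y"] + dy}

def point_repr(p):
    return f"({p['x']}, {p['y']})"

# the "clump of name->value mappings"
def make_point(x, y):
    return {"x": x, "y": y,
            "_methods": {"move": point_move, "repr": point_repr}}

# the dispatch function that decides which function the programmer meant
def send(obj, message, *args):
    return obj["_methods"][message](obj, *args)

p = make_point(1, 2)
p = send(p, "move", 3, 4)
print(send(p, "repr"))  # prints (4, 6)
```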
Well, honestly, this is more about a specific implementation of objects, one that fits rather poorly with, for instance, the C++ implementation. It makes sense when you’re describing one way to implement objects (which is what Marick did), but it’s not a particularly far-reaching definition.
The slightly older one (July 2012) is much more ambitious and comprehensive. Cook aims to provide a “modern” definition of objects, unrestricted by specific languages and implementations. It’s an interesting post indeed, and I suggest that you take some time reading it, but in the end it’s still very much about the mechanics of objects (“An object is a first-class, dynamically dispatched behavior”).
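To see what a “first-class, dynamically dispatched behavior” can look like when stripped to the bone, here is a little sketch of my own (not Cook’s code): an object as a record of closures, where callers can only invoke behavior, never inspect state:

```python
def make_counter(start=0):
    state = {"n": start}   # hidden state, reachable only through behavior
    def inc():
        state["n"] += 1
    def value():
        return state["n"]
    # the "object" is just this bundle of behaviors; callers dispatch
    # dynamically by selecting a closure, never by touching the state
    return {"inc": inc, "value": value}

c = make_counter()
c["inc"]()
c["inc"]()
print(c["value"]())  # prints 2
```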
While this may seem OK from a language design perspective, defining objects through their mechanics leaves a vacuum in our collective knowledge: how do we design a proper object?
We used to be taught that, by spending enough time thinking about a problem, we would come up with a "perfect" model, one embodying many interesting properties (often disguised as principles). One of those properties was stability: most individual abstractions didn't need to change as requirements evolved. In other words, change was local, or even better, change could be dealt with by adding new, small things (like new classes), not by patching old things.
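As a toy illustration of that ideal (my example, not from any of the posts above): a new requirement is absorbed by adding a class, while the existing code stays untouched:

```python
from abc import ABC, abstractmethod
import math

class Shape(ABC):
    @abstractmethod
    def area(self) -> float: ...

class Circle(Shape):
    def __init__(self, r): self.r = r
    def area(self): return math.pi * self.r ** 2

class Square(Shape):
    def __init__(self, side): self.side = side
    def area(self): return self.side ** 2

# New requirement: triangles. The change is local, one new class.
class Triangle(Shape):
    def __init__(self, base, height): self.base, self.height = base, height
    def area(self): return self.base * self.height / 2

def total_area(shapes):   # untouched by the change
    return sum(s.area() for s in shapes)

print(total_area([Circle(1), Square(2), Triangle(3, 4)]))
```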
That school didn't last; some would say it failed (as in "objects have failed"). At some point, another school prevailed, claiming that thinking too far into the future was bad, that it could lead to the wrong model anyway, and that you'd better come up with something simple that solves today's problems, keeping code quality high so that you can easily evolve it later, safely protected by a net of unit tests.
As is common, one school tended to mischaracterize the other (and vice versa), usually by pushing things to the extreme through some cleverly designed argument, and then claiming generality. It's easy to do so while talking about software, as we lack sound theories. Consider this picture instead: even if you don't know squat about potential energy, local minima, and local maxima, is there any doubt about where the ball is going to fall?
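Just to make the analogy tangible, here is a tiny numerical sketch of mine (not part of the original argument): a ball following the slope of a potential settles into the nearest local minimum, exactly as intuition predicts:

```python
def U(x):                 # a potential with two valleys, at x = -1 and x = 1
    return x**4 - 2 * x**2

def dU(x):                # its derivative (the local slope)
    return 4 * x**3 - 4 * x

x, step = 0.5, 0.01       # start the ball partway up the right-hand slope
for _ in range(1000):
    x -= step * dU(x)     # roll downhill, one small step at a time
print(round(x, 3))        # ~1.0, the nearest local minimum
```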
A week ago or so, Ralf Westphal published yet another critique of my post on living without a controller. He also proposed a different design method and therefore a different design. We also exchanged a couple of emails.
Now, I'm not really interested in "defending" my solution, because the spirit of the post was not to show the "perfect solution", but simply how objects could solve a realistic "control" problem without needing a centralized controller.
However, on one hand Ralf is misrepresenting my work to the point where I have to say something, and on the other, this is an interesting chance to talk a bit more about software design.
So, if you haven't read my post on the controller, I would suggest you take some time and do so. There is also an episode 2, because that post has been criticized before, but you may want to postpone reading that and spend some time reading Ralf's post instead.
In the end, what I consider most interesting about Ralf's approach is the adoption of a rule-based approach, although he's omitting a lot of necessary details. So after as little fight as possible :-), I'll turn this into a chance to discuss rules and their role in OOD, because guess what, I'm using rules too when I see a good fit.
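To anticipate where that discussion will go, here is a bare-bones sketch of what I mean by rule-based (a toy of mine, not Ralf's design): rules as condition/action pairs, plus a tiny engine firing whichever rules match the current state:

```python
# each rule is a (condition, action) pair over a shared state
rules = [
    (lambda s: s["temperature"] > 30, lambda s: s.update(fan="on")),
    (lambda s: s["temperature"] <= 30, lambda s: s.update(fan="off")),
]

def run_rules(state, rules):
    # fire every rule whose condition holds for the current state
    for condition, action in rules:
        if condition(state):
            action(state)
    return state

print(run_rules({"temperature": 35}, rules))
# {'temperature': 35, 'fan': 'on'}
```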
I'll switch to a more conversational structure, so in what follows "you" stands for "Ralf", and when I quote him, it's in green.
So, this is not, strictly speaking, the follow-up to my previous post. Along the road, I realized that the little code I wanted to show was written in a certain style, that it wasn't the style most people use, and that it would be distracting to explain that style while trying to communicate a much more important concept. So this post is about persistence and a way of writing repositories. Or it is about avoiding objects with no methods and mappers between stupid objects. Or it is about layered architectures, what constitutes a good layer, and why we shouldn't pretend we have a layered architecture when we don't. Or it is about the applied physics of software, understanding our material (software) and what we can really do with it. Or about why we should avoid guru-driven programming. You choose; either way, I hope you'll find it interesting.
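As a first taste of that style, here is a deliberately small sketch (a toy example of mine, not the code from the post): a repository handing back a behavior-rich object, instead of a field-only record plus an external mapper:

```python
import sqlite3

class Invoice:
    def __init__(self, amount, paid):
        self.amount, self.paid = amount, paid
    def balance_due(self):            # behavior lives with the data
        return 0 if self.paid else self.amount

class InvoiceRepository:
    def __init__(self, conn):
        self.conn = conn
    def find(self, invoice_id):
        row = self.conn.execute(
            "SELECT amount, paid FROM invoices WHERE id = ?",
            (invoice_id,)).fetchone()
        # reconstitute a real object, not a bag of fields for a mapper
        return Invoice(row[0], bool(row[1])) if row else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER PRIMARY KEY, amount REAL, paid INTEGER)")
conn.execute("INSERT INTO invoices VALUES (1, 100.0, 0)")
repo = InvoiceRepository(conn)
print(repo.find(1).balance_due())  # prints 100.0
```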
When we interact with the physical world, we develop an intuitive understanding of some physical forces. It does not take a PhD in physics to guess what is going to happen (at a macroscopic level) when you apply a load at the end of a cantilever:
You can also devise a few changes (like adding a cord or a rod) to distribute forces in a different way, possibly ending up with a different structure (like a truss):
Software is not so straightforward. As I argued before, we completely lack a theory of forces (and materials). Intuitive understanding is limited by the lack of correlation between form and function (see Gabriel). Sure, many programmers can easily perceive some “technological” forces. They perceive the UI, business logic, and persistence to be somehow “kept apart” by different concerns, hence the popularity of layered architectures. Beyond that, however, there is a gaping void which is only partially filled by tradition, transmitted through principles and patterns.
Still, I believe the modern designer should develop the ability to see the force field, that is, understand the real forces pulling things together or far apart, moving responsibilities around, clustering them around new concepts (centers). Part of my work on the Physics of Software is to make those forces more visible and well-defined. Here is an example, inspired by a recurring problem. This post starts easy, but may end up with something unexpected.
This post should have been about the power-law distribution of class and method sizes, the organic growth of software and living organisms, Alexandrian levels of scale, and a few more things.
Then the unthinkable happened. Somebody actually left a comment on Life without a controller, case 1, saying my design was crappy and proposing an alternative based on a centralized, monolithic approach, claiming miraculous properties. I couldn’t just sit here and do nothing.
Besides, I wrote that post in a very busy week, leaving a few issues unexplored, and this is a good chance to get back to them.
I suggest that you go read the whole thing, as it’s quite interesting, and adds the necessary context to the following. Then please come back for the bloodshed. (Note: the original link is broken, as the file was removed; the "whole thing" link now points to a copy hosted on my website).
The short version
If you’re the TL;DR kind and don’t want to read Zibibbo’s post and my answer, here is the short version:
A caveman is talking to an architect.
Caveman: I don’t really like the architecture of your house.
Architect: why?