Saturday, May 21, 2011

The CAP Theorem, the Memristor, and the Physics of Software

Where I present seemingly unrelated facts that are, in fact, not unrelated at all :-).

The CAP Theorem
If you keep current on technology, you're probably familiar with the proliferation of NoSQL, a large family of non-relational data stores. Most NoSQL stores have been designed for internet-scale applications, where large data stores are preferably deployed on an array of loosely connected, low-cost servers. That brings a set of well-known issues to the table:

- we'd like data to be consistent (that is, to respect all the underlying constraints at any time, especially replication constraints). We call this property Consistency.

- we'd like fast response time, even when some server is down or under heavy load. We call this property Availability.

- we'd like to keep working even if the system gets partitioned (a connection between servers fails). We call this property Partition tolerance.

The CAP Theorem (formerly known as Brewer's Conjecture) says that we can only have two of those three properties.

There is a lot of literature on the CAP Theorem, so I don't need to go into further detail here: if you want a very readable paper covering the basics, you can go for Brewer's CAP Theorem by Julian Browne, while if you're more interested in a formal treatment, you can refer to Brewer's Conjecture and the Feasibility of Consistent, Available, Partition-Tolerant Web Services by Seth Gilbert and Nancy Lynch.

There is, however, a relatively subtle point that is often brought up and seldom clarified, so I'll give it a shot. Most people realize that there is some kind of "connection" between the concept of availability and the concept of partition. If a server is partitioned away, it is by definition not available. In that sense, it may seem like there is some overlap between the two concepts, and overlapping concepts are bad (orthogonal concepts should be preferred). However, it is not so:

- Availability is an external property, something the clients of the data store can see. They either can get data when they need it, or they can't.

- Partitioning is an internal property, something clients are totally unaware of. The data store has to deal with that on the inside.

Of course, given the fractal nature of software, in a system of systems we may see lack of availability at one level, because of the lack of partition tolerance at a lower level (or vice-versa, as an unavailable node may not be distinguishable from a partitioned node).

To sum up, while the RDBMS world has usually approached the problem by choosing Consistency and Availability over Partition tolerance, the NoSQL world has often chosen Availability and Partition tolerance, leading to the now famous Eventually Consistent model.

The Memristor, or the Value of a Theory
As a kid, I spent quite a bit of time hacking electronic stuff. I used to scavenge old TV sets and the like, and recover resistors, capacitors, and inductors to use in very simple projects. Little did I know that, theoretically, there was a missing passive element: the memristor.

Leon Chua developed the theory in 1971. It took until 2008 before someone could finally build a working memristor, using nanoscale techniques that would have looked like science fiction in 1971. Still, the theory was there, long before, pointing toward the future. At the heart of the theory was the realization that the three known passive elements could be defined by relationships between four quantities (current, voltage, charge, and flux-linkage). By investigating all possible relationships, Chua found a missing element, one that was theoretically possible, but as yet undiscovered. He used the term completeness to indicate this line of reasoning, although we often call this kind of property symmetry.

Software Entanglement
In Chapter 12 of my ongoing series on the nature of software, I introduced the concept of Entanglement, further discussed in Chapter 13 and Chapter 14 (for the artifact world).

Here, I'll reuse my earlier definition from Chapter 12: two clusters of information are entangled when performing a change on one immediately requires a change on the other.

Entanglement is constantly at work in the artifact world: change a class name, and you'll have to fix the name everywhere; add a member to an enumeration, and you'll have to update any switch/case statement over the enumeration; etc. It's also constantly at work in the run-time world.
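Just to make that concrete, here is a tiny Java sketch (the names are invented for the example): the enum and the switch below are entangled, so adding a value to one immediately requires a change in the other.

```java
// Hypothetical example: an enum and a switch entangled over its set of values.
enum OrderState { CREATED, PAID, SHIPPED }   // add CANCELLED here...

class OrderReport {
    static String describe(OrderState s) {
        switch (s) {                          // ...and this switch must change too,
            case CREATED: return "created";   // or the two artifacts are no longer consistent
            case PAID:    return "paid";
            case SHIPPED: return "shipped";
            default: throw new IllegalStateException("unhandled state: " + s);
        }
    }
}
```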

Although I haven't formally introduced entanglement for the run-time world (that will be the subject of the forthcoming Chapter 15), you can easily see that maintaining a database replica creates a run-time entanglement between data: update one, and the other must be immediately (atomically, as we usually say) updated. Perhaps slightly more subtle is the fact that referential integrity is strongly related to run-time entanglement as well. Consistency in the CAP theorem, therefore, is basically Entanglement satisfaction.
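A toy sketch of that run-time entanglement, assuming a naive in-memory primary/replica pair (obviously not production code): the two copies are entangled, so a write cannot be considered complete until both have been updated.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of run-time entanglement between a primary and its replica:
// a write must update both copies "at the same time" to preserve Consistency.
class ReplicatedStore {
    private final Map<String, String> primary = new HashMap<>();
    private final Map<String, String> replica = new HashMap<>();

    synchronized void put(String key, String value) {
        primary.put(key, value);   // updating only this copy would leave the two
        replica.put(key, value);   // entangled copies inconsistent
    }

    synchronized String get(String key) {
        return replica.get(key);   // reads can be served by either copy
    }
}
```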

Once we understand that, it's perhaps easier to see that the CAP theorem applies well beyond our traditional definition of distributed system. Consider a multi-core CPU with independent L1 caches. Cache contents become entangled whenever they map to the same address. CPU designers usually go with CA, forgetting P. That makes a lot of sense, of course, because we're talking about in-chip connections.
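If you want to picture that, here is a deliberately hand-wavy sketch of two per-core caches entangled over the same address, with the write-invalidate idea reduced to a couple of maps; the "CA, forget P" choice is hidden in the assumption that the invalidation always gets through.

```java
import java.util.HashMap;
import java.util.Map;

// Hand-wavy model of two L1 caches entangled over the same address.
// Choosing Consistency: a write invalidates the other cache's copy before completing.
// This is reasonable only because the on-chip interconnect is assumed never to partition.
class CorePair {
    private final Map<Integer, Integer> l1a = new HashMap<>();
    private final Map<Integer, Integer> l1b = new HashMap<>();

    void writeFromCoreA(int address, int value) {
        l1b.remove(address);      // invalidate the entangled copy in the other cache
        l1a.put(address, value);  // then update the local cache
    }

    Integer readFromCoreB(int address) {
        return l1b.get(address);  // null would mean "miss, go to memory" in a real CPU
    }
}
```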

That's sort of obvious, though. Things get more interesting when we start to consider the run-time/artifact symmetry. That's part of the value of a [good] theory.

A CAP Theorem for Artifacts
My perspective on the Physics of Software is strongly based on the run-time/artifact dualism. Most forces apply in both worlds, so it is just natural to bring ideas from one world to another, once we know how to translate concepts. Just like symmetry allowed Chua to conceive the memristor, symmetry may allow us to extend concepts from the run-time world to the artifact world (and vice-versa). Let's give the CAP theorem a shot, by moving each run-time concept to the corresponding notion in the artifact world:

Consistency: just like in the run-time world, it's about satisfying Entanglement. While a change in the run-time world is a C/U/D action on data, here it is a C/U/D action on artifacts. If you add a member to an enumerated type, your artifacts are consistent when you have updated every portion of code that was enumerating over the extension of that type, and so on.

Availability: just like in the run-time world, it is the ability to access a working, up-to-date system. If you can't deploy your new artifacts, your system is not available (as far as the artifact world is concerned). The old system may be up and running, but your new system is not available to the user.

Partition tolerance: just like in the run-time world, it means you want your system to be usable even if some changes can't be propagated. That is, some artifacts have been partitioned away, and cannot be reached. Therefore, some sort of partial deployment will take place.

Well, indeed, you can only have two :-). Consider the artifacts involved in the usual distributed system:

- one or more servers
- one or more clients
- a contract in between

The clients and the server (as artifacts – think source code, not processes) are U/D entangled over the contract: change a method signature (in RPC speak), or the WSDL, or whatever represents your contract, and you have to change both to maintain consistency.
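Here is a minimal Java sketch of that entanglement (names invented for the example): the contract is just an interface, and both the server-side and the client-side artifacts are entangled over its signature.

```java
// Hypothetical contract shared by server and client artifacts.
interface OrderService {
    // Changing this signature (say, adding a currency parameter)
    // requires changing BOTH artifacts below to stay consistent.
    double getTotal(String orderId);
}

// Server-side artifact: implements the contract.
class OrderServiceImpl implements OrderService {
    public double getTotal(String orderId) { return 42.0; /* stub */ }
}

// Client-side artifact: written against the same contract.
class BillingClient {
    String printInvoice(OrderService service, String orderId) {
        return "Total: " + service.getTotal(orderId);
    }
}
```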

What happens if you update [the source code of] a server (say, to fix a serious bug), and by doing so you need to update the contract, but cannot update [the source code of] a client in a timely fashion? This is just like a partition. Think of a distributed development team: the team working on the client has, for one reason or another, been partitioned away.

You only have two choices:

- let go of Availability: you won't deploy your new server until you can update the client. This is the artifact side of not having your database available until all the replicas are connected (in the run-time world).

- let go of Consistency: you'll have to live with an inconsistent client, using a different contract.

Consequences
Chances are that you:
- Shiver at the idea of letting go of Consistency.
- Think versioning may solve the problem.

Of course, versioning is a coping strategy, not a solution. In fact, versioning is usually proposed in the run-time world of data stores as well. However, versioning your contract is like pretending that the server has never been updated. It may or may not work (what if your previous contract had major security flaws that cannot be plugged unless you change the contract? What if you changed your server in a way that makes the old contract impossible to keep? etc.). It also puts a significant burden on the development team (maintaining more source code than strictly needed) and may significantly slow down development, which is another way of saying it impacts availability (of artifacts).
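To see what versioning costs in the artifact world, picture the kind of code it forces you to keep around (again, a made-up sketch): the old contract lives on next to the new one, and the server team now has to maintain both.

```java
// Made-up sketch of contract versioning: the old contract is kept alive
// so that partitioned-away clients can pretend the server never changed.
interface OrderServiceV1 {
    double getTotal(String orderId);                   // the original contract
}

interface OrderServiceV2 {
    double getTotal(String orderId, String currency);  // the contract we actually need now
}

// The server artifact must now satisfy both, doubling the surface to maintain.
class VersionedOrderService implements OrderServiceV1, OrderServiceV2 {
    public double getTotal(String orderId) {
        return getTotal(orderId, "EUR");               // guess a default and hope for the best
    }
    public double getTotal(String orderId, String currency) {
        return 42.0; /* stub */
    }
}
```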

It is important, however, that we thoroughly understand context. After all, any given function call is a contract, and we surely want to maintain consistency as much as we can. So, when does the CAP Theorem apply to Artifacts? Partition tolerance and Availability must be part of the equation. That requires one of the following conditions to apply:

- Independent development teams. The Facebook ecosystem, for instance, is definitely prone to the artifact side of the CAP Theorem, just like any other system offering a public API to the world at large. You just can't control the clients.

- A large system that cannot be updated in every single part (unless you accept slowing down deployment / forgoing availability). A proliferation of clients (web based, rich, mobile, etc.) may also bring you into partitioning problems.

If you work on a relatively small system, and you have control of all clients, you're basically on safe ground – you can go with Consistency and Availability, because you just don't have significant partitions. Your development team is more like a multi-core CPU than a large distributed system.

Once we understand that versioning is basically a way to pretend changes never happened on the server side, is there something similar to Eventual Consistency for the artifact side?

TolerantReader may help. If you put extra effort into your clients, you can actually have a working system that will eventually be consistent (when the source code for the clients is finally updated) but stays functional during the transition, without delaying the release of server updates. Of course, anything that requires discipline on the client side is rather useless in a Facebook-like ecosystem, but it might be an excellent strategy for a large system where you control the clients.
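Here is a minimal sketch of what I mean by TolerantReader, with invented field names: the client ignores fields it doesn't know and provides defaults for fields it doesn't get, so it keeps working while the contract drifts, until its own source code catches up.

```java
import java.util.Map;

// Minimal TolerantReader sketch: unknown fields are ignored, missing fields get defaults,
// so the client stays functional while the contract drifts and its source is not yet updated.
class OrderReader {
    String customer;
    double total;

    static OrderReader fromFields(Map<String, Object> fields) {
        OrderReader r = new OrderReader();
        Object customer = fields.get("customer");
        r.customer = (customer instanceof String) ? (String) customer : "unknown";
        Object total = fields.get("total");
        r.total = (total instanceof Number) ? ((Number) total).doubleValue() : 0.0;
        // any extra field the server starts sending is silently ignored here
        return r;
    }
}
```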

Interestingly, Fowler talks about the virtues of TolerantReader as if it were always warranted for a distributed system. I tend to disagree: it makes sense only when some form of development-side partitioning can be expected. In many other cases, an enforced contract through XSD and code generation is exactly what we need to guarantee Consistency and Availability (because code generation helps to quickly align the syntactical bits – you still have to work on the semantic parts on both sides). Actually, the extra time required to create a TolerantReader will impact Availability in the short term (the server is ready, the client is not). Context, context, context. This is one more reason why I think we need a Physics of Software: without a clear understanding of forces and materials, it's too easy to slip into the fallacy that just because something worked very well for me [in some cases], it should work very well for you as well.

In fact, once you recognize the real forces, it's easy to understand that the problem is not limited to service-oriented software, XML contracts, and the like. You have the same problem between a database schema and a number of different clients. Whenever you have artifact entanglement and the potential for development partitioning (see above) you have to deal with the artifact-side CAP Theorem.

Moving beyond TolerantReader (which is not helping much when your client is sending the wrong data anyway :-), when development partitioning is expected, you need to think about a flexible contract from the very beginning. Facebook, for instance, allows clients to specify which fields they want to receive, but is still pretty rigid about the fields they have to send.
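Here is a rough sketch of that kind of flexibility (invented names again): the client states which fields it wants back, and the server only ships those; what the client must send, however, remains rigidly fixed.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Rough sketch of a partially flexible contract: the client chooses which
// fields it wants back, while the fields it must send remain rigidly fixed.
class FlexibleOrderService {
    Map<String, Object> getOrder(String orderId, Set<String> requestedFields) {
        Map<String, Object> full = new HashMap<>();
        full.put("id", orderId);
        full.put("customer", "ACME");
        full.put("total", 99.5);

        Map<String, Object> response = new HashMap<>();
        for (String field : requestedFields) {
            if (full.containsKey(field)) {
                response.put(field, full.get(field));   // only ship what the client asked for
            }
        }
        return response;
    }
}
```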

This is one more interesting R&D challenge that is not easily solved with today's technology. Curiously enough, in the physical world we have long understood the need to use soft materials at the interface level to compensate for imperfections and unexpected variations of contact surfaces. Look at any recent, thermally insulated window, and most likely you're gonna find a seal between the sash and the frame. Why are seals still needed in a world of high-precision manufacturing? ;-)

Conclusions
Time for a confession: when I first thought about all the above, I had already read quite a bit on the CAP Theorem, but not the original presentation from Eric Brewer where the conjecture (it wasn't a theorem back then) was first introduced. The presentation is about a larger topic: challenges in the development of large-scale distributed systems.

As I started to write this post, I decided to do my homework and go through Brewer's slides. He identified three issues: state distribution, consistency vs. availability, and boundaries. The first two are about the run-time world, and ultimately lead to the CAP theorem. While talking about boundaries, however, Brewer begins with run-time issues (independent failure) and then moves into the artifact world, talking (guess what!) about contracts, boundary evolution, XML, etc. Interestingly, nothing in the presentation suggests any relationship between the problems. Here, I think, is the value of a good theory: as with the memristor, it provides us with leverage, moving what we know to a higher level.

In fact, another pillar of the Physics of Software is the Decision Space. I'm pretty sure the CAP theorem can be applied in the decision space as well, but that will have to wait.

Well, if you liked this, stay tuned for Chapter 15 of my NOSD series, share this post, follow me on twitter, etc.