This is a transplant from the original Irrational Exuberance, and was written in mid 2007: nearly two years ago.
Learning to program in this era is a thorough indoctrination into object orientation (unless you started with Perl, Scheme, or Haskell, but we’re talking about human beings here). The first--and only--programming paradigm I was taught at college was OO, and it’s easy to see why--if you only have the resources to teach one paradigm well--that OO was taught. It excels at creating the layers of abstraction that large projects require, and it also facilitates compartmentalizing portions of code such that they can be implemented by relatively autonomous groups (if they conform to the agreed-upon API... ha..haha... I should have been a poet).

If we look at the OO trend, one language has ridden that wave harder than any other: Java. Admittedly Java is the spawn of a four-way tryst between C++, garbage collection, object orientation, and mediocrity, but there are precious few recent languages that are not doe-eyed for object orientation. Python is almost completely object oriented (some weird implementation stuff once you get low level), Ruby is completely object oriented, Smalltalk (sire of OO) inspired Objective-C... it’s hard to find any language being used to develop large-scale projects that hasn’t chosen OO as its paradigm of choice.<!--more-->
OO makes intuitive sense
The single most important asset that I see driving object orientation’s acceptance is that it just makes sense. OO allows us to map the real world onto our programs. Shaking someone’s hand becomes a.shake_hand(b). Opening a file becomes file = File.open("x.txt", "r"). Writing to the file becomes file.write("some output").

A Person has a Head and a Heart. A Heart has an Aorta. We are used to thinking in connection hierarchies already: my car has four wheels, and an engine; my engine has 100 horsepower. These connections lay themselves out and scream “this is the right way to implement me.”
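In Ruby those has-a hierarchies fall out almost without thinking; here is a toy sketch of the car example (the class and attribute names are just mine, for illustration):

```ruby
# A toy has-a hierarchy: a Car has wheels and an Engine,
# an Engine has horsepower.
class Engine
  attr_reader :horsepower

  def initialize(horsepower)
    @horsepower = horsepower
  end
end

class Car
  attr_reader :wheels, :engine

  def initialize
    @wheels = 4
    @engine = Engine.new(100)
  end
end

car = Car.new
car.engine.horsepower  # => 100
```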
Perhaps a bit too intuitive?
One of the great dangers of object orientation is the casual ease with which we map the behavior of the real world onto the behaviors of our programs. It’s easy to forget that object orientation is only a tool we use to reduce the complexity of our solutions, and the most intuitive choices are not necessarily the most effective ones.

This is the idea discussed in the fantastic paper Collaborative Diffusion: Programming Antiobjects. The paper is very approachable, and a great read (if nothing else it has some pretty pictures of the Pac-Man game they created using their anti-object design concept).

If, as 37signals argue in their excellent book Getting Real, a program is the result of thousands of small decisions, then we need to make those decisions deliberately, and <em>not</em> subconsciously. The subconscious is a fantastic thing (while I’m flailing wildly with book recommendations, Malcolm Gladwell’s Blink is another great book; it discusses the powers and limitations of the subconscious mind), but it usually does what seems right--and this is a situation where the most natural fit is not always the best.
Getting more concrete
In the Programming Antiobjects paper they discuss implementing a game of Pac-Man, but instead of the ghosts using complicated algorithms to decide where to move, the majority of the decision making is transferred to the floor tiles. By doing so they create a sophisticated enemy who moves intelligently, but they bypass actually writing any artificial intelligence code to guide the ghosts; instead, ghosts simply move to the accessible square with the highest diffusion score (the square containing Pac-Man has a very high diffusion score, and a recursive algorithm lets each square receive a fraction of the surrounding squares’ scores, so the score drops off as you get further from Pac-Man. There are a few more details in the article, but this is the gist of it).

Instead of the ghosts making one complex decision, two simple decisions are made: one by the floor tiles, and one by the ghost. Thus they create something advanced using a very simple system (isn’t emergent behavior great?).

The past several days I have run into a (superficially) similar situation where I am able to create simpler and shorter code by reconsidering where to store the decision-making code.
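To make the idea concrete, here is a minimal Ruby sketch of diffusion on a one-dimensional corridor--my own toy version, not the paper’s actual code, with the diffusion constant and scores invented for illustration:

```ruby
# Minimal 1-D sketch of collaborative diffusion: the tiles carry the
# intelligence, and the ghost just climbs the gradient.
class Corridor
  DIFFUSION = 0.5 # fraction of a neighbor's score a tile absorbs

  def initialize(length, pacman_at)
    @scores = Array.new(length, 0.0)
    @pacman_at = pacman_at
  end

  # One relaxation pass: Pac-Man's tile is pinned high, and every
  # other tile takes a fraction of its best neighbor's score.
  def diffuse!
    @scores[@pacman_at] = 1000.0
    (0...@scores.size).each do |i|
      next if i == @pacman_at
      neighbors = [i - 1, i + 1].select { |j| j >= 0 && j < @scores.size }
      @scores[i] = DIFFUSION * neighbors.map { |j| @scores[j] }.max
    end
  end

  # The ghost's entire "AI": step to the adjacent tile with the
  # highest diffusion score.
  def ghost_step(from)
    neighbors = [from - 1, from + 1].select { |j| j >= 0 && j < @scores.size }
    neighbors.max_by { |j| @scores[j] }
  end
end

corridor = Corridor.new(10, 8) # Pac-Man sits on tile 8
5.times { corridor.diffuse! }  # let the "scent" spread down the corridor
corridor.ghost_step(2)         # => 3 (the ghost drifts toward Pac-Man)
```

Neither the tiles nor the ghost contain anything you would call pathfinding, yet the ghost reliably closes in on Pac-Man.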
At this point I haven’t done any real programming with Ruby, but I have been meaning to give it a try (Wow Will, you’ve been meaning to try? You’re so fantastic! Let me scrounge up a cookie...). Fortunately I--blinded by a fit of irrational exuberance--bought the Pragmatic Programmer’s Pickaxe, which is the (self-proclaimed) definitive guide to Ruby. So I opened the glorious tome and started reading. After a few chapters I realized that I needed a real project to work on instead of passively reading through the language specification and contemplating the strategy for my next game of Desktop Tower Defense.

Well, an old (and moderately humiliating) hobby of mine was mudding, as in “Multi-User-Dungeon”ing--as in telnet with some debatably entertaining game-play mixed in (MUD : Game ; Java : Programming Language). I have always wanted to write a mud server from scratch, and this seemed like as good a project as any to start using Ruby with.

If this example is a bit too geeked out for you (and I profusely apologize for it), just remember that it is illustrating a concept, and that the actual purpose of the code is immaterial.
Who does what?
One of the first implementation questions that comes up with this project is deciding which class each method should belong to. There are players interacting with items, areas interacting with players, players interacting with classes, items interacting with items, skills interacting with items... it’s a mess.

My initial thought was to organize the methods as belonging to players (this is, as far as I can tell, the standard approach taken by mud implementations thus far). Following in that fine tradition, players would contain all the functionality, and items would contain only data about themselves. The implementation would look something like this:
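Something along these lines--a condensed sketch of the player-centric pattern, with invented item attributes (wearable, wieldable, and so on) standing in for the real data:

```ruby
# Player-centric design: the Player owns every verb, and items are
# dumb bags of data about themselves.
class Item
  attr_reader :name, :wearable, :slot, :wieldable

  def initialize(name, wearable: false, slot: nil, wieldable: false)
    @name = name
    @wearable = wearable
    @slot = slot
    @wieldable = wieldable
  end
end

class Player
  def initialize
    @body = {}    # slot => item currently worn there
    @weapon = nil
  end

  # Every verb has to interrogate the item's data and branch on it.
  def wear(item)
    return "You can't wear #{item.name}." unless item.wearable
    return "You're already wearing something there." if @body[item.slot]
    @body[item.slot] = item
    "You wear #{item.name}."
  end

  def wield(item)
    return "You can't wield #{item.name}." unless item.wieldable
    @weapon = item
    "You wield #{item.name}."
  end

  # ...and so on, for hundreds more verbs.
end

player = Player.new
player.wear(Item.new("chest plate", wearable: true, slot: :chest))
player.wield(Item.new("rusty sword", wieldable: true))
```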
However, as I sat and considered implementing hundreds more methods in that manner, I realized that this was an awfully complex approach. Sure, it was completely intuitive, but it was going to be about as entertaining as rereading Prelude to Foundation a fifth time. As a bonus, it was going to be a real pain to make changes in, because the behaviors and the data the behaviors acted upon were being stored separately.

At about this time the anti-object article drifted up into my conscious mind, and I decided that I would consider a different implementation: the methods that acted upon items ought to be contained by the items themselves. I would see how well things might work out if I moved all the complexity into the items. Here is my implementation using this approach:
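In sketch form--again with invented names, condensed to the shape of the design rather than the full listing:

```ruby
# Item-centric design: each item knows how it is worn or wielded;
# the Player just carries the data the items act upon.
class Item
  attr_reader :name

  # The default initialization suffices for every subclass; all other
  # knowledge about an item lives in its methods.
  def initialize(name)
    @name = name
  end
end

class ChestArmor < Item
  def wear(player)
    player.body[:chest] = self
    "You strap #{name} across your chest."
  end
end

class LegArmor < Item
  # Same method name, different behavior: polymorphism does the
  # branching the player-centric version did by hand.
  def wear(player)
    player.body[:legs] = self
    "You buckle #{name} around your legs."
  end
end

class Weapon < Item
  def wield(player)
    player.body[:hands] = self
    "You wield #{name}."
  end
end

class Player
  attr_reader :body

  def initialize
    @body = {}
  end
end

player = Player.new
ChestArmor.new("chest plate").wear(player)
Weapon.new("rusty sword").wield(player)
```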
There are a variety of benefits over the previous implementation: we are now keeping track of whether something can be worn/wielded/got and how to wear/wield/get it in the same spot. This means changes only require looking at one section of code, instead of the two required by the earlier implementation. It also allows us to take advantage of polymorphism (ChestArmor and LegArmor respond differently to the method of the same name). In addition, the default item initialization is sufficient for all the subclasses, because all other data about the object will be stored within its methods (nice and Lispy). Not having to chain together a handful of super calls strikes me as a pleasant improvement (Random question: do you think if you had a sufficiently deep class hierarchy, you could overflow the Ruby stack? I am thinking yes.).

I have some concerns about how the second design appears to depend too much upon the implementation of the Player class; the Java programmer in me wants to build a copious API that completely encapsulates the implementation details. My slightly saner half thinks it is cleaner to leave it as it is. If necessary I can alter the implementation by creating a hashmap-like API over whatever data structure I replace the @body hashmap with.

Although I’m not one to consider lines of code as a metric for quality, the second version is 41 lines to the first version’s 68 (I am, however, apparently one to have my cake and eat it too). In addition to being shorter, it also strikes me as simpler and more understandable: there is no flow control, and no need to explicitly raise any exceptions; we simply use Ruby’s reflection capabilities to ask item.respond_to?("wield") and return a simple error message to the player if the item has no method corresponding to their command.
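The reflection-based dispatch might look something like this (a sketch; the command parsing and the item classes here are hypothetical):

```ruby
class Item
  attr_reader :name

  def initialize(name)
    @name = name
  end
end

class Sword < Item
  def wield(player)
    "You wield the #{name}."
  end
end

# Dispatch a player's command via reflection: if the item responds to
# the verb, call it; otherwise fall back to one generic error message.
def perform(player, verb, item)
  if item.respond_to?(verb)
    item.public_send(verb, player)
  else
    "You can't #{verb} the #{item.name}."
  end
end

perform(nil, :wield, Sword.new("sword")) # => "You wield the sword."
perform(nil, :eat, Sword.new("sword"))   # => "You can't eat the sword."
```

No case statement grows as new verbs are added; defining a method on an item is all it takes to teach the game a new interaction.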
Take Home Message
After operating within the object-oriented paradigm for a while, grouping functionality into classes often becomes more reflexive than intentional, and this is a danger we have to be aware of. The most obvious solution is usually sufficient, but that doesn’t mean it is good. Hopefully in the future I will do a better job of remembering that programs are built of dream-stuff, and that I don’t have to succumb to reality’s preconceptions.