Monday, December 17, 2007

Creatively Common

If you look closely at any of my presentations, you'll see the Creative Commons logo at the end. I'm using the "Attribution-Noncommercial-Share Alike" license, which means that you can use this presentation for whatever purposes you like as long as:
  • you say that it came from me originally
  • you don't make money directly from the work
  • you apply at least as liberal a license to your derived works (meaning that you have to share what you create as well)
For those not familiar with Creative Commons, it's an alternative to copyright that gives consumers options. The problem with copyright law is that it is very restrictive, based on old models of interaction. It is in fact based on what Lawrence Lessig refers to as a "read-only" view of works. This made sense before the technology we have today. Copyright prevents anyone from taking a book and going to the neighborhood Kinkos, making copies of it, and selling it as their own. And it prevents anyone from taking a copyrighted presentation and saying that they created it. The problem with copyright is that it is binary: a work is either copyrighted or not (although I'm sure that intellectual property lawyers can haggle endlessly over the nuances). That means that, through copyright alone, I cannot allow people to base their derived works directly on mine.

Creative Commons is an attempt to create what Lessig calls a "read-write" culture, allowing specific rights for derived works. We live in much more of a mash-up kind of world. Musicians frequently use samples and pieces of other music to create something genuinely new, not just a low-fidelity copy. The place where this is playing out in a fascinating way is in Japan, around manga (see the article entitled Japan, Ink in the November issue of Wired magazine). In Japan, there are well-established serial manga stories (graphic novels) with original characters. But there is a huge market for derived works, where new (generally amateur) artists take the characters from a well-known series and create new stories, frequently in directions that the original author wouldn't go (these are called dojinshi: non-professional, self-published manga). Imagine creating a Star Wars variant where Jar Jar Binks and C3PO ended up in a gay relationship. It is doubtful that you could sell many copies of this before a platoon of IP lawyers from Lucasfilm were crawling up, well, you get the picture. Yet that's exactly what's happening with manga in Japan.

But Japan's copyright law is about as strict as ours. Here's the interesting part: the publishers tacitly agree to allow this to continue. Why would they do this? The market for manga is huge and fickle. The dojinshi show them which series are waxing or waning in popularity. And it provides a breeding ground for new authors: the very best of the dojinshi authors can become the next generation of legitimate manga authors. Because there is no formal licensing structure in place in Japan, the publishers allow millions of dollars of dojinshi sales, with the looming threat of a lawsuit if the dojinshi authors ever become too aggressive. The publishers are doing the smart thing: allowing a "read-write" culture that provides them lots of benefits: real-time market research based on actual sales, new authors, and a vibrant culture around their works.

Clearly, with the abilities created by digital media, copyright is going to have to change. I'm doing my part by allowing anyone to use my stuff with fewer encumbrances than copyright imposes, because I think this points to a new attitude about all kinds of intellectual property. For an eloquent presentation about this (and an incredible presentation in its own right), check out this captured keynote by Lawrence Lessig.

Tuesday, December 11, 2007

The 2G Experience


Is Groovy the future of Java? It may well be. The worst thing about Groovy these days is its name. Want to terrify your boss? Go up to him and tell him you want to switch all your development to something called "Groovy". And don't let the door hit you in the ass on the way out. I'm lobbying the Groovy community to make a subtle name change, and it only applies when you are standing near some manager/adult. When they are around, always refer to Groovy as "The Enterprise Business Execution Language" (or even better, ebXl). That just sounds like something that a manager would go for (especially with that sexy capital "X" in the name -- what manager could resist?)

You too can have your chance to lobby the Groovy world at the upcoming 2G Experience, the first major North American Groovy/Grails conference. All the big names are going to be there, including folks from over the ocean. I'm going to be there as well, talking about Design Patterns in Groovy, Groovyizing Your Day Job (or how to start using it without frightening your boss), and a JRuby/Groovy Comparison, which should raise some hackles on both sides of the aisle. If you care at all about Groovy, this is the place to be in February.

Tuesday, December 04, 2007

ThoughtWorks DSL Podcast

ThoughtWorks is starting an interesting experiment: because there are so many passionate people here, it spills over to the outside world a lot. The only problem is that it mostly just spills onto the people who are standing near ThoughtWorkers when they start talking passionately about subjects that only software geeks get passionate about. To control the spillover, ThoughtWorks has started a series of podcasts, combining subjects and those who love to talk about them. First out of the gate: me, Martin Fowler, Rebecca Parsons (the ThoughtWorks CTO), and Jay Fields, in a two-part podcast about Domain Specific Languages. Part 1 is now live.

Monday, November 26, 2007

Virtual Neighbors

I was talking to my friend and fellow roaming-around-the-world speaker Jason Hunter a little while ago. He made the observation that the definition of distance has changed. He lives in San Francisco and I live in Atlanta. Yet, we see each other on a fairly regular basis. It's almost like we're neighbors, except that the common element is that we travel and work in similar places, not that we live next to one another. This is really true with guys like Scott Davis and Venkat Subramaniam, whom I see about half the weekends of the year, always in a different city (driven by the No Fluff, Just Stuff schedule). I consider them my virtual neighbors. During the busy No Fluff, Just Stuff times, I see them a lot more (and more reliably) than my physical neighbors.

And it gets even more like living in the same physical neighborhood. The other day, I walked into the Red Carpet Club at O'Hare airport and heard someone call my name. Brian Sletten was waiting for a flight and we sat and chatted a while. What's funny is that Chicago is home for neither of us, and we were not in Chicago for the same reason. We both happened to be in Chicago at the same time. Just like you bump into your down the street neighbor at the hardware store, I bump into my virtual neighbors in random airports.

If you travel as much as I do, this is inevitable, I guess. As a company, ThoughtWorks travels a lot. I've bumped into co-workers in airports twice when the city was home for neither of us and we weren't going to the same place. The world is indeed smaller.

Wednesday, November 21, 2007

JRuby Podcast on JavaWorld

My good friend Andy Glover interviewed me for a podcast for the JavaWorld site recently, and it has magically appeared. Here is the site blurbage:

Neal Ford and Andrew Glover are both well respected Java developers, as well as big fans of Ruby. In this in-depth discussion, Ford talks about why he believes Ruby is the most powerful language you could be paid to program with today, and explains the particular benefits of programming with JRuby. Ford also reveals why he believes Java developers will continue to migrate to languages other than Java, even as many continue to call the Java platform home. This is an essential, engaging discussion for those interested in learning more about JRuby and the trend toward what Ford calls polyglot programming.


It was a lively conversation, and Andy asked me about lots of stuff I've been thinking about a lot lately. As in all good conversations, the time flew by, and before I knew it, the guy recording it was shutting us down.

Thursday, November 15, 2007

Ruby Matters: Frameworks, DSLs, and Dietzler's Rule

As an industry, we've been engaged in an experiment for the last decade or so. This experiment started back in the mid to late 90's, largely driven by the fact that the demand for software vastly outstripped the supply of those who could write it (this wasn't a new problem then -- we've had it almost since the idea of business software started). The goal: create tools and environments that would allow average and/or mediocre developers to be productive, regardless of the messy facts already known by people like Fred Brooks (see The Mythical Man-Month). The reasoning goes that if we create languages that keep people out of trouble by restricting what damage they can do, we can produce software without having to pay those annoying software craftsmen ridiculous amounts of money (and you'd probably never be able to find enough of them even then). This thinking gave us tools like dBASE, PowerBuilder, Clipper, and Access: the rise of the 4GLs.

But the problem was that you couldn't get enough done in those environments. They created what my colleague Terry Dietzler at the time called the "80-10-10 Rule" for Access: you can get 80% of what the customer wants in a remarkably short time. The next 10% of what they want is possible, but takes a lot of effort. The last 10% is flat out impossible because you can't get "underneath" all the tooling and frameworks. And users want 100% of what they want, so 4GLs gave way to general purpose languages (Visual BASIC, Java, Delphi, and eventually C#). Java and C# in particular were designed to make C++ easier and less error prone, so they built in some fairly serious restrictions, in the interest of keeping average developers out of trouble. The problem is that they created their own version of the "80-10-10 Rule", only this time the stuff you couldn't do was much more subtle. Because they are general purpose languages, you can get pretty much anything done...with enough effort. Java kept bumping into stuff that would be nice to do but was way too much work, so frameworks were built. And built. And built. Aspects were added. More frameworks were built. It got so bad that meta-frameworks were built: the Avalon framework was a framework for...building other frameworks!

We can see what this trend has done to productivity when building complex software. What we really want is the productivity of 4GLs with the generality and flexibility of powerful general purpose languages. Enter frameworks built with Domain Specific Languages, the current exemplar being Ruby on Rails. When writing a Rails application, you don't write that much "pure" Ruby code (and most of that is in models, for business rules). Mostly, you are writing code in the DSL part of Rails. That means that you get major bang for the buck:

validates_presence_of :name, :sales_description, :logo_image_url
validates_numericality_of :account_balance
validates_uniqueness_of :name
validates_format_of :logo_image_url,
                    :with => %r{\.(gif|jpg|png)}i


You get a huge bunch of functionality with this little bit of code: 4GL levels of productivity, but with a critical difference. In a 4GL (and the current mainstream statically typed languages), it is cumbersome or impossible to do really powerful stuff (like meta-programming). In a DSL written on top of a super powerful language, you can drop one level of abstraction to the underlying language to get done whatever you need to get done.
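To make "dropping a level" concrete, here is a minimal, framework-free sketch in plain Ruby (the `TinyValidations` module, the `Account` class, and the balance rule are all hypothetical illustrations, not Rails itself): a tiny declarative `validates_presence_of` macro lives at the DSL layer, while an odd one-off rule is written directly in the underlying language.

```ruby
# A toy DSL layer: a class-level macro that records declarative rules.
module TinyValidations
  def validates_presence_of(*attrs)
    @required_attrs = attrs
  end

  def required_attrs
    @required_attrs || []
  end
end

class Account
  extend TinyValidations
  attr_accessor :name, :balance

  validates_presence_of :name          # DSL level: declarative

  def valid?
    errors = self.class.required_attrs.reject { |attr| send(attr) }
    # Dropping below the DSL: an arbitrary one-off rule in plain Ruby
    errors << :balance if balance && balance < 0
    errors.empty?
  end
end

account = Account.new
account.name = "Tim"
account.balance = 10
puts account.valid?   # => true
account.balance = -5
puts account.valid?   # => false
```

The declarative line reads like Rails; the hand-written rule shows that the full language is always one step below when the macros run out.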

This is the best approach currently available. The productivity comes from working close to the problem domain in the DSL; the power comes from the abstraction layer simmering just below the surface. Expressive DSLs on top of powerful languages will become the new standard. Frameworks will be written using DSLs, not on top of statically typed languages with restrictive syntax. Note that this isn't necessarily a dynamic language or even a Ruby tirade: a strong potential exists for statically typed, type-inferred languages with a suitable syntax to take advantage of this style of programming as well. For an example of this, check out Jaskell and in particular the build DSL written on top of it called Neptune.

Saturday, November 10, 2007

My Horse Scale of SOA

I've been giving some SOA talks over the last few years, and I struggled for a while to find a good metaphor to describe the evolution from most people's existing enterprise architecture to the magical, mysterious enterprise architecture described in most of the marketing material around SOA. Then, during one of my talks, I stumbled upon it, and later created an image that sums it up: Neal's Horse Scale of SOA:


Neal's Horse Scale of SOA



You see, the marketing literature describes something that doesn't exist in the real world: they are describing a unicorn. You've seen paintings, drawings, and movies featuring unicorns. If you came from another planet, you would assume that unicorns lived here because there are so many representations of them. The problem is that most companies' enterprise architecture looks more like a broken-down donkey. The SOA experiment is to see how close you can get to a unicorn before you run out of money. Maybe you'll get to a Shetland pony and stop. Or perhaps you'll make it all the way to a thoroughbred racehorse. There are even a few who'll create unicorns, but they are exceedingly rare.

The point is that you can't trust the magical vision marketed by pundits and (especially) vendors. Building unicorns is expensive, and the more donkeys you have around, the more it will cost. SOA isn't a zero-sum game. It should be a spectrum of improvement in the communication and interoperability among all your disparate equines (i.e., applications and services).

Tuesday, November 06, 2007

Language Spectrum

It came up the other day in a conversation as to which programming language I would use absent the messy constraints like "Must make money to continue to eat". I think it would look something like this, from most preferred to least:

  • Ruby (I'm quite fortunate that I'm getting to use this language for money right now)

  • Lisp (I've never gotten paid to write Lisp, but would like to)

  • Smalltalk (note that I've never done "real" Smalltalk development, but I know about its cool features)

  • Groovy

  • JavaScript

  • Python

  • Scala

  • Java or C# (or any other mainstream statically typed language)

    Interestingly enough, I think C# has the edge on language features (the new stuff they're adding for LINQ, and not doing stupid stuff like type erasure for generics) but the libraries are awful. Java the language is getting really crusty, but they have the best libraries and frameworks in the world (and the most of them too). If you could write C# code with Java libraries, you'd really have something. Of course, they are still statically typed, so you have to pay the static language productivity tax.

  • boo

  • Haskell

  • O'Caml

  • Perl

  • Language_whose_name_I_cant_write_here_because_all_filters_in_the_world_will_block_it

  • Cobol (I've never done any real development here either, and don't plan to)

  • assembler

  • Jacquard Loom (whatever that language looks like)

  • Flipping switches for 0's and 1's

  • Universal Turing machine (infinite paper strip with a read/write head that moves forwards and backwards). It's just hard to find infinitely long paper strips these days.

Clearly, this represents my relatively recent evolution towards dynamically typed languages. They are simply much more productive if you assume that you write tests for everything, which I always do. Notably absent from the list is Delphi, which is so yesterday's news to me. It became deprecated as soon as C# grew all of its good features and left it behind.

This doesn't mean that I think that Ruby embodies the perfect language (haven't seen one of those yet). But, given the landscape, it feels pretty good, and I keep learning cool new stuff about it.

Thursday, November 01, 2007

Building Bridges without Engineering

One of the themes of my "Software Engineering" & Polyglot Programming keynote is the comparison between traditional engineering and "software" engineering. The genesis for this part of the talk came from the essay What is Software Design? by Jack Reeves from the C++ Journal in 1992 (reprinted here), a fissile meme that Glenn Vanderburg tossed into the middle of a newsgroup conversation about that very topic. Even though the essay is quite old, it is every bit as pertinent today as when it was written. The update that Glenn and I have given this topic is the addition of testing, which gives us professional tools for designing software. We don't have the kinds of mathematical approaches that other engineering disciplines do. For example, we can't perform structural analysis on a class hierarchy to see how resilient to change it will be in a year. It could be that those types of approaches will just never exist for software: much of the ability of "regular" engineers to do analysis has to do with economies of scale. When you build the Golden Gate Bridge, you have over one million rivets in it. You can bet that the civil engineers who designed it know the structural characteristics of those rivets. But there are a million identical parts, which allows you to ultimately treat them as a single derived value. If you tried to build a bridge like software, with a million unique parts, it would take you too long to do any kind of analysis on it because you can't take advantage of the scale.

Or it may just be that software will always resist traditional engineering kinds of analysis. We'll know in a few thousand years, when we've been building software as long as we've been building bridges. We're currently at the level in software where bridge builders were when they built a bridge, ran a heavy cart across it, and it collapsed. "Well, that wasn't a very good bridge. Let's try again". There was a massive attempt at component-based development a few years ago, but it has largely fallen by the wayside for everything except simple cases like user interface components. The IBM San Francisco project tried to create business components and found (to the non-surprise of software developers everywhere) that you can't build generic business components because there are far too many nuances.

Manufacturing is the one advantage we have over traditional engineers: it is easy and cheap to manufacture software parts. So why not take advantage of that ability? Manufacture the parts of software, both the small atomic pieces and the larger interacting ones, and then test them to make sure they do what we think they do. That's unit, functional, integration, and user acceptance testing. Testing is the engineering rigor of software development.
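As a sketch of what "manufacture a part, then verify it does what we think" looks like at the smallest scale, here is a frameworkless unit check in Ruby (the payment formula and the numbers are just an illustration, not anything from the talk):

```ruby
# Manufacture a small part: a standard amortized-loan payment formula.
def monthly_payment(principal, annual_rate, months)
  monthly_rate = annual_rate / 12.0
  (principal * monthly_rate) / (1 - (1 + monthly_rate)**-months)
end

# ...then test that the part does what we think it does.
payment = monthly_payment(100_000, 0.06, 360)
raise "unexpected payment: #{payment}" unless (payment - 599.55).abs < 0.01
puts "part verified: #{payment.round(2)}"
```

Scale that habit up through functional, integration, and acceptance tests and you get the engineering rigor the essay is talking about.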

Here's the interesting part. If you told an engineer that you needed a large bridge and that you needed it so quickly that he doesn't have time to apply any of the best practices of bridge building (e.g., structural analysis), he would refuse. In fact, he would be liable for the bad things that would happen if he was foolish enough to proceed. We have none of that liability in the software world.

Responsible software developers test, just as responsible engineers use the tools of their trade to create robust, well designed artifacts. But we still have too much stuff that is untestable, along with pressure to write code that isn't tested because testing takes time. One of my litmus tests for deciding how to spend my time looking at new things (frameworks, languages, user interface approaches) is the question "is it testable?" If the answer is no (or even "not yet"), then I know that I needn't bother looking at it. It is professionally irresponsible to write code without tests, so I won't do it.

Wednesday, October 31, 2007

Impending QCon

I'm speaking at QCon next week in San Francisco. Martin Fowler and I are pairing on an all-day tutorial on Building Domain Specific Languages, a topic in which he and I share much interest. The tutorial takes an interesting format. The first part of the day will be a normal lecture/Q&A kind of affair. However, at some point in the afternoon, we're going to switch into workshop mode and actually build some DSLs based on suggestions from the audience. I think this is a great way to get people to not only understand what DSLs look like but get a bit dirty designing some from scratch, working through the inevitable headaches that come up.

The other talk I'm giving is a "regular" presentation on Thursday about (you guessed it) Building DSLs. Looking forward to it.

Sunday, October 28, 2007

What High Coupling hath Wrought

A long time ago, I wrote a blog entry about the high price of coupling as it pertains to Internet Explorer and Windows. In it, I lambast Microsoft for the business decision to tie Internet Explorer so tightly into the underlying operating system. Developers all know that highly coupled systems are bad. And here's a stellar example.

Microsoft is finally going to release a "headless" version (called "server core") of their premier server operating system (currently called Longhorn Server). Coincident with this, they have built a new batch language for Windows called Windows Power Shell (née Monad), which is simply brilliant. They have raised the bar on what a shell language can be. I can't heap enough praise on the coolness of WPS. What a great thing to have just as they are going to release a headless version of the OS.

Only one small problem. They want to make the footprint of the headless version of Windows Server smaller than the footprint of Vista (currently about 8 GB, and that's just for the OS). The problem is that the headless version won't include .NET 2, because the .NET framework and libraries are coupled to just about every nook and cranny of Windows. If you include .NET 2, you pretty much include all of Windows. And there's the rub: Windows Power Shell is written on top of .NET 2. Which means that this brilliant tool for managing tasks, services, and general scripting of the operating system won't be available for the one system that could benefit from it the most.

I leave it to the reader to form their own conclusions.

Thursday, October 25, 2007

Developer Productivity Mean vs. Median


10 .
9 . .
8 . . . .
7 . . . . . .
6 . . . . . . . .
5 . . . . . . . . . .
4 . . . . . . . . . . . .
3 . . . . . . . . . . . . . .
2 . . . . . . . . . . . . . . . .
1 . . . . . . . . . . . . . . . . . .
0 . . . . . . . . . . . . . . . . . . . .
___________________+_+___________________
1 2 3 4 5 6 7 8 9 1 1 1 1 1 1 1 1 1 1 2
0 1 2 3 4 5 6 7 8 9 0



OK, first a little remedial math: the mean is the mathematical average of a series of numbers. For the set of numbers above, the mean = 4.55. The median is the number lying at the midpoint of a sorted number series, which is 5.5 in this example.
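In Ruby, the two statistics are only a few lines apart (the sample values here are arbitrary, not the exact data behind the charts; note how a few high outliers pull the mean well above the median, which is the whole point of the second chart):

```ruby
# A quick check of mean vs. median on a skewed sample
scores = [1, 1, 2, 2, 3, 5, 9, 10]

mean = scores.sum.to_f / scores.size

sorted = scores.sort
mid = sorted.size / 2
median = sorted.size.even? ? (sorted[mid - 1] + sorted[mid]) / 2.0 : sorted[mid]

puts mean    # => 4.125
puts median  # => 2.5
```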

Why in the world am I talking about this? Well, it pertains to developer skills. You see, really good developers are much more productive than average ones (this has been documented in several places, including the Mythical Man Month and Joel). In fact, some statistics say that really good developers are orders of magnitude better than poor ones.

Consider this version of the graph:

10 .
9 . .
8 . .
7 . .
6 . . .
5 . . .
4 . . .
3 . . . .
2 . . . . . .
1 . . . . . . . . . . . . .
0 . . . . . . . . . . . . . . . . .
___________________+_+___________________
5 1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 1
0 5 0 5 0 5 0 5 0 5 0 5 0 5 0 5 0 5 0
0


For this one, the mean = 2.7 and the median = 2. If the height of the graph line represents programmer productivity, the second one is closer to reality.

So what does this mean for development? Managers used to think that productivity was a nice straight 45-degree line: even if you can't get the best and brightest, you still get usefulness out of mediocre developers. But building software isn't like digging a ditch, where even poor diggers can make a hole. In software, what you (and others) write today becomes the foundation for tomorrow. If you have bad developers building your foundation, the really good developers have to go back and fix it before they can build more useful stuff. Hiring mediocre developers (or just average ones) slows project velocity. Couple this with the fact that adding people to a late project makes it later, and you can understand why most enterprise development moves at a glacial pace. Building software in the large with lots of average developers combines two negative effects: trying to scale by adding people, and the fact that average developers produce average code. The truth is more like the lower graph: overall productivity is dragged down to the median level, not the mean.

An entire industry has been built to solve this problem, on the theory that you can fix it in two ways. First, build restrictive languages that keep bad developers out of trouble. But Glenn Vanderburg has a dead-on quote that reflects the reality that he (and I) have seen on projects: bad developers will move heaven and earth to do the wrong thing. Command-and-control computer languages aren't like the governor on a car, forcing you to drive more slowly and thus more safely. At the end of the day, weak languages slow down your best developers and don't prevent the bad and mediocre ones from writing horrific code.

The other attempt at fixing this was to take restrictive languages and build really smart tools to help developers generate code faster. This is the classic implicit sales pitch for Visual Basic: no one can hurt themselves, and you can hire cheaper developers to write your code, without paying a good salary to those annoying software craftsmen. It is widely known that Microsoft internally calls mediocre developers "Morts" and targets them with some of their tools. Of course, this doesn't apply to all Visual Basic developers and not to good developers in the .NET space. But it is the reality that this dual strategy is the way vendors have sold tools for the last couple of decades.

And now we see that it doesn't work. Part of the ascension of dynamic languages is the realization that the experiment with command-and-control languages doesn't make average developers' code any better and hampers your really good developers. Why not figure out a way to unleash the good developers and let them go as fast as they can? Ruby on Rails is a good example of using domain specific languages to create a simpler language on top of Ruby for things like ActiveRecord relationships. For everyday coding, you can stay at the high, simple abstraction layer. If you really need it, you can dip below that abstraction level and get real work done in Ruby. For many of the Ruby projects I've been involved in, open classes are used strategically, accounting for between 1.5% and 4% of the total lines of code. Yet that surgical use of one kind of meta-programming that Ruby allows eliminates hundreds of lines of code.
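Open classes, the kind of meta-programming mentioned above, let you reopen any class, even a core one, and add exactly the method a project needs. A minimal sketch (the `words` helper is a hypothetical example, not from any project):

```ruby
# Reopening the core String class to add one small project-specific method
class String
  def words
    split(/\s+/)
  end
end

puts "the quick brown fox".words.inspect  # => ["the", "quick", "brown", "fox"]
```

A handful of such additions, used surgically, is what replaces the hundreds of lines of adapter and utility-class boilerplate a closed language would require.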

So what does all this mean? First, give good developers powerful tools and you'll get higher quality software faster. Second, as an industry we would become more productive if 30% of the existing developers were laid off tomorrow. Having warm bodies doesn't help projects, and having to baby-sit poor developers cuts the productivity of your good developers. Software is too complex to turn into an assembly line manufacturing process, at least for now. Good software developers are still craftsmen, no matter how much that annoys companies that wish that wasn't true.

Sunday, October 21, 2007

Ruby Matters: Unleash the Brainiacs!

ThoughtWorks was early in the game in the Java world. Lots of the standard parts of the Java stack (like CruiseControl, Selenium, etc.) were built by clever ThoughtWorkers from project work (there is a long list here). Working with smart, passionate people who have damn good ideas, coupled with the corporate culture that nurtures open source, is one of the advantages of working at ThoughtWorks.

Fast forward a decade, and the same thing is playing out again. Lots of ThoughtWorkers really like Ruby. And one of the things that I suspected early on was that the combination of ThoughtWorkers and a super powerful language would be a good mix. And I was right.

As we've been doing Ruby projects, ThoughtWorkers have been generating great ideas like mad. Because Ruby is such a rich language, really smart people build really cool things. And, of course, ThoughtWorks likes to open source stuff, which is a good combination. The Ruby part of this equation means that you can build stuff that, while sometimes possible in other languages, is too much trouble, so you just don't do it. Imagine that you were traveling to Chicago a decade ago, and you wondered about the weather forecast. How would you find it? You could call the hotel and trust that someone there knows it. Or you could watch the weather channel and wait for the Chicago forecast to come on. Or you could go to the library and find a Chicago newspaper and look there. All of these were so much trouble that you were likely just to skip it and guess. Now, you can find out in about 10 seconds by using the Internet. Convenience opens possibilities.

Here is a perfect example. Jay Fields, Dan Manges, and David Vollbracht have released a Ruby gem called dust, which makes it easy to write better test names (and other stuff). Or what about deeptest, which allows you to run unit tests in parallel, written by Manges, Vollbracht, and anonymous z. Or what about Mixology, a gem created by Pat Farley, anonymous z, Dan Manges, and Clint Bishop in the Something Nimble crowd that allows you to dynamically change mixins. If you aren't sure what that means, think about the ability to change what interfaces a class implements at runtime, based on some criteria: not just add new ones, but remove them as well. While not something you do every day, it does come up, and there are good uses for this capability.
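Core Ruby can already attach a mixin to a single object at runtime with `Object#extend`; what Mixology adds is the ability to remove it again. A sketch of the core-Ruby half (the module and method names are hypothetical):

```ruby
# A module mixed into one object at runtime, not into its whole class
module Discountable
  def price_with_discount(price)
    price * 0.9
  end
end

order = Object.new
order.extend(Discountable)            # mix in at runtime
puts order.price_with_discount(100)   # => 90.0
```

Other instances of the same class are unaffected, which is what makes runtime mixin changes so surgical.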

As ThoughtWorkers are building Ruby projects, we're helping spawn lots of cool infrastructure (like CruiseControl.rb). Of course, ThoughtWorkers aren't the only ones. But you get a bunch of smart people together combined with a powerful language and good stuff happens.

Check out Something Nimble, a blog maintained by ThoughtWorkers working on Ruby projects, spawning cool stuff left and right.

Wednesday, October 17, 2007

Mocking JRuby

Testing is one of the easy ways to sneak JRuby into your organization because it is easier to write tests in dynamic languages (especially mock object tests) and it isn't code that is deployed, so it eases the minds of the furniture police somewhat. Here is an example, adapted from Martin Fowler's essay on mock object testing.

In Martin's example, he creates Order and Warehouse classes (and a Warehouse interface), then writes tests with them in jMock. Here is one of those tests, rewritten in JRuby, using the Mocha mocking library:

require 'test/unit'
require 'rubygems'
require 'mocha'

require "java"
require "Warehouse.jar"
['OrderImpl', 'Order', 'Warehouse', 'WarehouseImpl'].each { |f|
  eval "#{f} = com.nealford.conf.jmock.warehouse.#{f}"
}

class OrderInteractionTest < Test::Unit::TestCase
  TALISKER = "Talisker"

  def test_filling_removes_inventory_if_in_stock
    order = OrderImpl.new(TALISKER, 50)
    warehouse = Warehouse.new
    warehouse.stubs(:hasInventory).with(TALISKER, 50).returns(true)
    warehouse.stubs(:remove).with(TALISKER, 50)

    order.fill(warehouse)
    assert order.is_filled
  end

  def test_filling_does_not_remove_if_not_enough_in_stock
    order = OrderImpl.new(TALISKER, 51)
    warehouse = Warehouse.new
    warehouse.stubs(:hasInventory).returns(false)
    order.fill(warehouse)

    assert !order.is_filled
  end
end

The order object is a pure Java object, tested with a mocked out version of the Warehouse interface. Now, this isn't exactly what Martin wrote in his excellent essay about mocks vs. stubs, but the intent is preserved in this example.

Most of the really crufty stuff in mock objects in Java is getting the compiler to agree with you that the mock object type is what you want it to be, and of course that problem goes away in JRuby. Also notice that I can create the mock object directly off the interface. Because JRuby creates proxy objects for the Java classes, you can create a mock from the interface directly by calling the new method on it. The other cool thing about this example is the require "Warehouse.jar" line at the top. JRuby allows you to require a JAR file, which gives you access to the Java classes within it.
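The duck typing that makes this painless is easy to see without any mocking library at all. Here is a hand-rolled stub in plain Ruby (the class and method names just echo Martin's warehouse example; this is a sketch, not Mocha's implementation):

```ruby
# Any object with the right methods will do -- no interface declaration,
# no proxy generation, no compiler to appease.
class StubWarehouse
  def initialize(has_inventory)
    @has_inventory = has_inventory
    @removed = []
  end

  # Answers the canned response configured in the constructor
  def hasInventory(item, quantity)
    @has_inventory
  end

  # Records what was removed so a test can inspect it later
  def remove(item, quantity)
    @removed << [item, quantity]
  end

  attr_reader :removed
end

warehouse = StubWarehouse.new(true)
warehouse.hasInventory("Talisker", 50)  # => true
warehouse.remove("Talisker", 50)
warehouse.removed                       # => [["Talisker", 50]]
```

Mocha gives you the same effect with less ceremony, plus expectation verification, but the underlying trick is just Ruby's message-based dispatch.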

Saturday, October 13, 2007

Ruby Matters: Contracts vs. Promises

In teaching Ruby to Java developers, two things seem to shock and annoy them: open classes and the lack of interfaces. Because Java developers are inundated with interfaces, it's hard for them to imagine a language whose libraries aren't strictly defined in this manner.

Interfaces in Java and .NET are exactly like legal contracts (where the compiler is the all-powerful judge). If you implement an interface yet don't meet the terms of the contract, the judge refuses to allow you to proceed.

Mixins in Ruby are more like promises than contracts. When you agree to pick up a friend at the airport, you don't sign a contract, you promise you'll be there. The punishment is purely social: if you break a promise, your friends won't do stuff for you. Here's an example.

In Ruby, a common Mixin is Comparable, which gives you the same behaviors as the Comparable interface in Java. The promise in Comparable is that you will implement the "spaceship" operator (<=>), which returns negative if the left-hand side is less than the right, 0 if they are equal, and positive if the LHS is greater than the RHS (exactly like the compareTo() method in Java).

class Employee
  include Comparable

  attr_accessor :name, :salary

  def initialize(name, salary)
    @name, @salary = name, salary
  end

  def <=>(other)
    name <=> other.name
  end
end

list = Array.new
list << Employee.new("Monty", 10000)
list << Employee.new("Homer", 50000)
list << Employee.new("Bart", 5000)

list.sort!   # now ordered: Bart, Homer, Monty

# Bart vs. Homer
list[0] < list[1] # => true

# Bart vs. Homer
list[0] > list[1] # => false

# Homer is between Bart and Monty?
list[1].between?(list[0], list[2]) # => true

If you violate the promise by mixing in Comparable and yet don't implement the spaceship, nothing bad happens...until you try to ask your friend Comparable to perform a service for you (like compare two things with between? or call the sort! method). Then, you get an error message complaining "undefined method `<=>' for Employee". And this happens at runtime, not at compile time.
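A minimal sketch of breaking the promise (the class name is invented for illustration):

```ruby
class Promiser
  include Comparable   # the promise is made here...
  # ...but <=> is never implemented
end

# Nothing complains until Comparable actually needs the spaceship.
# 1.8-era Ruby raised NoMethodError here; modern Ruby raises ArgumentError,
# because Object now supplies a default <=> that returns nil for unrelated
# objects. Either way, the promise is only enforced at runtime.
begin
  Promiser.new < Promiser.new
rescue StandardError => e
  e   # the broken promise surfaces only now
end
```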

This is a fundamentally different mindset than the legalistic strongly contracted languages. It demonstrates one of the reasons that you can't slack off on writing your unit tests in Ruby: you don't find out important things until runtime. It is certainly a different way of thinking about implementing APIs. While lots of Java developers think this leads to chaos in large development teams, I have not found that to be true (and I think that most Ruby developers would agree with my assessment). The trade-off is strict control vs. flexibility. If there is one thing that my experience has taught, it is that flexibility is the most important thing for strong programmers, echoed by many of Paul Graham's essays.

If you must have stronger contracts in Ruby, there is a framework called Handshake that allows you to create pre- and post-conditional invariants à la Eiffel. While I understand the appeal of Handshake, I would be loath to use it on a project because I prefer the power of flexibility rather than the strictures of constraints.

Monday, October 08, 2007

Ruby Matters: Language Beauty Part 2


A dead chrysanthemum
and yet - isn't there still something
remaining in it?
Takahama, Kyoshi

Let me not to the marriage of true minds
Admit impediments. Love is not love
Which alters when it alteration finds,
Or bends with the remover to remove:
O no! it is an ever-fixed mark
That looks on tempests and is never shaken;
It is the star to every wandering bark,
Whose worth's unknown, although his height be taken.
Love's not Time's fool, though rosy lips and cheeks
Within his bending sickle's compass come:
Love alters not with his brief hours and weeks,
But bears it out even to the edge of doom.
If this be error and upon me proved,
I never writ, nor no man ever loved.
William Shakespeare

Can computer languages be beautiful? Is it possible to create a beautiful COBOL application? I think it is possible, but you can't compare different languages as if they all had the same constraints. For me, these comparisons are like comparing poetry styles. Within a particular poetry style, you can clearly have beauty. But some poetry styles have serious constraints. For example, a haiku has exactly seventeen syllables, in three lines of five, seven, and five. A sonnet has fourteen lines of ten syllables each, following a formal rhyme scheme. Shakespeare wrote some stunning poetry within these pretty serious restrictions!

Computer languages are closer to poetry than prose. The syntax of the language imposes order, just as the conventions of different poetry styles constrain what you can create. Within those constraints, though, you can create wonderful things. But the strictures of a poetry style necessarily prevent you from creating more ambitious works. William Faulkner famously said that he wanted to be a poet and failed, then wanted to be a short story writer and failed, and eventually became a novelist because he couldn't say what he wanted to say within the boundaries of the other styles.

A lot of developers who find themselves "at home" in a particular language can relate to this. For many developers, Java (the sonnet) feels just right. Yes, there are restrictions, but they are well known and seem right. Other developers chafe at the imposed limitations and seek more expressive media, like Perl or Ruby. And if it were just about aesthetics, we could all live happily, each developer writing code in whatever medium he or she liked.

But with computer languages, expressiveness == power. Being able to say what you mean succinctly and effectively matters. The only reader who you must satisfy completely is the execution environment. Thus, when making a language choice, it makes sense to pick the most powerful language you can stand. Of course, different developers rate power differently. For me, meta-programming is extremely valuable. It's one of those features that, once you've used a language that supports it extensively, it's hard to go back to a weaker one.

Which brings me around to Paul Graham again. When I first read Hackers and Painters, I remember thinking that about 80% of what he said was dead on, and the other 20% was either just crazy or intentionally inflammatory. Over the last few years, though, as I've learned more about languages myself, I go back to his writings (most of which are online at paulgraham.com) and realize that he's exactly right about something I used to disagree with him about. Now, I'm down to thinking only 5% is crazy or inflammatory, and that number is shrinking all the time. Anyway, one of those things was his language power scale, which I discounted at first but now believe to be an outstanding yardstick by which to measure language power.

For me, Ruby has a great combination of succinct expressiveness and super powerful meta-programming support. I think that it is currently the most powerful language you can get paid to write. Until Smalltalk makes a come back or Lisp eventually takes over the world, it's the best game in town. But that doesn't mean that I can't still write sonnets in Java and C# from time to time, when the circumstances warrant.

Thursday, October 04, 2007

Ruby Matters: Language Beauty Part 1

Beauty without expression is boring.

Ralph Waldo Emerson


I was talking to a colleague (named Vivek) recently who is originally from India. He was telling me about "Hinglish", the merging of English and Hindi. I am already familiar with "Singlish", the flavor of English tinged with Malay, Hindi, Chinese, and several other languages spoken in Singapore. Vivek was telling me that it is quite common, when talking to someone else who knows both English and Hindi, to flow freely back and forth between the languages. That got me thinking about expressiveness in language. We do this all the time in English without specific foreign influences: we use phrases like "joie de vivre" and "cul de sac" all the time. Expressions in other languages sometimes just make more sense; some languages have found a useful, concise label for common human experiences, and other languages don't bother creating their own term for it. Phrases like this encompass much more than the literal translation. They manage to encapsulate tons of connotation, not just the denotation of the words. (For a supreme distinction between connotation and denotation (and none too shabby language either), check out Robert Graves's poem The Naked and the Nude.)

Expressiveness of language makes a big difference. Having concise, nuanced representations of common concepts means less context establishment. This is one of the compelling arguments for domain specific languages: if you encapsulate high-level meaning in code, you spend less time talking about it. This is the vocabulary that describes your business, and everyone talks the same language so that you don't have to start from scratch every time.
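As a sketch of that idea, here is a tiny internal DSL in Ruby (all names hypothetical) where the class body reads like the domain vocabulary rather than a series of setter calls:

```ruby
# A minimal internal DSL: class methods that run as the class body is read,
# accumulating domain vocabulary declaratively.
class Recipe
  def self.ingredient(name, quantity)
    (@ingredients ||= []) << [name, quantity]
  end

  def self.ingredients
    @ingredients || []
  end
end

class Marinara < Recipe
  ingredient :tomatoes, "2 lbs"    # reads like the domain, not like code
  ingredient :garlic,   "4 cloves"
end

Marinara.ingredients.length  # => 2
```

Someone who knows the cooking domain can read (and even extend) the Marinara class without knowing how `ingredient` works underneath.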

Talking about beauty in programming languages sounds silly, because they are after all just ways to drive Turing machines. But people still talk about it. O'Reilly has a popular recent book named "Beautiful Code", which has essays by famous programmers. Reading this book clearly indicates that this opinion is purely subjective. My friend Brian made a great observation about this book: the essays written about other people's code really do illustrate beautiful code, while the ones written by developers about their own code are mostly about cool hacks. Most of those examples I would not consider beautiful by any measure, even if clearly highly functional.

Glenn Vanderburg has a great talk entitled "The Beauty of Ruby", discussing the elegant language design entailed in Ruby. For example, it is necessary in object-oriented languages to distinguish member variables from local variables. Java does this either via scope (which can be ambiguous) or the this keyword. Python does this with self (which is required in some places). Ruby uses the @ symbol. It is unobtrusive, unambiguous, and small. And it conveys exactly the meaning you need it to convey without being too chatty. Similarly, class-level artifacts in Ruby are identified with two @ symbols.
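A quick illustration of how the sigils carry scope (the class here is invented for the example):

```ruby
# @ marks instance state, @@ marks class-level state -- no keyword,
# no declaration, just the sigil.
class Counter
  @@created = 0          # class-level: shared across all instances

  def initialize(name)
    @name = name         # instance-level: one per object
    @@created += 1
  end

  def self.created
    @@created
  end

  attr_reader :name
end

Counter.new("a")
Counter.new("b")
Counter.created  # => 2
```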

For me, Ruby does in fact have beautiful syntax. It is easy to write, easy to read, just succinct enough to keep you from typing too much but not so succinct to become cryptic. Many Perl developers think that Perl is beautiful (and, Perl was after all developed by a linguist). But I think it goes too far, bordering on cryptic (and I'm not the only one: many people refer to Perl as a "write only" language because it is so hard to read later).

Interestingly enough, even when I was primarily using Java, I never considered it beautiful. Utilitarian, but lacking beauty. But most of the post-Java developers I know still talk about the beauty and elegance of Ruby. I've never heard those phrases so much about other languages.

Language beauty will always be subjective. I explore this idea further in part two of this topic, coming up soon.

Saturday, September 22, 2007

Travel: Biking Adventures Through the Woods Outside Frankfurt

Warning: this post is entirely about travel stuff: there is no software or technology here, so feel free to avoid this if that's all you care about.
Terry Dietzler and I have spoken at the Entwickler conference for 9 years in a row (that's how I always spend my last week of September). On the first trip here, the hotel had bicycles you could borrow for the day, and we did several bike trips through the German countryside (the conference is in a suburb of Frankfurt, named Morfelden, near a national forest). We were hooked. Biking in Europe is a blast: there are bike routes simply everywhere (you can buy maps of just bike routes in bike stores). Our most elaborate trip to date was in 2003, when we biked about 200 miles in 2 days, from Frankfurt (actually, Morfelden) to Metz, France. In 2002, we made a deal with the conference organizer: in lieu of payment for pre-conference tutorials, buy us bikes and keep them in Frankfurt for us, in the basement of their offices. They kindly agreed, so Terry and I have bikes that live year-round in Germany.

This is all prelude to the adventure we had yesterday (we tend to have lots of adventures when we are here in Germany). We took the train (the S-bahn, or regional train) into the city and picked up the bikes. We had thought about bringing them back to Morfelden on the train, but decided instead to ride them back: follow the Main river trail until it leaves the city, then find our way back to Morfelden. The biggest problem: the Frankfurt airport is between us and Morfelden, but we had maps, so we could find our way. Here is the route we ended up taking (Terry has a GPS, and he turned it on, so we got a record of the entire route):



It was 9 miles (according to the GPS) from where we picked up the bikes to our destination...as the crow flies. We estimated more like 20 miles because of the major obstacle, the airport. About 2 hours should do it; we started out at about 5.

As you can tell, we made good progress along the river, and we got minorly lost a few times as we lost the trail, then reacquired it later. Very typical. Then, as we got near the airport, we started having to cross more major autobahns and train tracks. There are abundant bike bridges and crossings; you just have to find them. And, notice the backtracking as we neared the airport. We got on one trail that literally dead-ended at the junction of two sets of train tracks. As we were thrashing around, the sun was setting. Many of the trails are through the woods, so it got darker and darker. Our bikes have headlights, but mine was the only one working. So, picture this: we're biking through dense woods in pitch blackness following one headlight. Fortunately, the trails are very good, but it was still spooky and hard to do, because the light only reached a little way ahead of us, and it looked like we were biking into a black wall, which made me claustrophobic.

Fortunately, we've biked around Morfelden a lot, and as we got close, we recognized a landmark from a former trip. We followed the trail through a really dark and increasingly cold section until, to the right, we spotted our hotel. Four hours after we started, we had cycled 23 miles (including all the switchbacks). What fun; we're going out again tomorrow.

Monday, September 10, 2007

Ruby Matters: "Design Patterns" in Dynamic Languages

As I mentioned in my last post (Ruby Matters: Meta-programming, Synthesis, and Generation), the Gang of Four design patterns book should have been named "Palliatives for C++". One of the authors finally admitted as much in public at a roast. So, why would design patterns be any different in a dynamic language (like Smalltalk or Ruby)?

In the GoF book, design patterns are 2 things: nomenclature and recipes. The nomenclature part is useful. It's a way of cutting down on repetition of similar code across projects, and it gives developers a way to talk to one another, using shorthand. Instead of saying "I need to create an object that can only be instantiated once", you say "singleton" (we'll leave aside for the moment why Singleton is evil -- the subject of another blog entry). Nomenclature good.

But the fatal flaw in the GoF book was that they included recipes. And many people thought those were the best part. Even now, you see books on Java design patterns that blindly mimic the structure of the examples in the GoF book (even though Java has some better mechanisms, like interfaces vs. pure virtual classes). Recipes bad. Because they suggest more than just a way to name common things. They imply (and put in your face) implementation details.

Because of meta-programming, many of the design patterns in the GoF book (especially the structural ones) have much simpler, cleaner implementations. Yet if you come from a weaker language, your first impulse is to implement solutions just as you would from the recipe.

Here's an example. Let's say you have a simple Employee class, with a few fields (which could be properties in the Java sense, but it makes no difference for this example).


public class Employee {
    public String name;
    public int hireYear;
    public double salary;

    public Employee(String name, int hireYear, double salary) {
        this.name = name;
        this.hireYear = hireYear;
        this.salary = salary;
    }

    public String getName() { return this.name; }
    public int getHireYear() { return this.hireYear; }
    public double getSalary() { return this.salary; }
}


Now, you need to be able to sort employees by any one of their fields. This is a flavor of the Strategy Design Pattern: extracting an algorithm into separate classes so that you can have different flavors. Java already includes the basic mechanism for this, the Comparator interface. So, here's what the code to be able to compare on any field looks like in Java:


public class EmployeeSorter {
    private String _selectionCriteria;

    public EmployeeSorter(String selectionCriteria) {
        _selectionCriteria = selectionCriteria;
    }

    public void sort(List<Employee> employees) {
        Collections.sort(employees, getComparatorFor(_selectionCriteria));
    }

    public Comparator<Employee> getComparatorFor(String field) {
        if (field.equals("name"))
            return new Comparator<Employee>() {
                public int compare(Employee p1, Employee p2) {
                    return p1.name.compareTo(p2.name);
                }
            };
        else if (field.equals("hireYear"))
            return new Comparator<Employee>() {
                public int compare(Employee p1, Employee p2) {
                    return p1.hireYear - p2.hireYear;
                }
            };
        else if (field.equals("salary")) {
            return new Comparator<Employee>() {
                public int compare(Employee p1, Employee p2) {
                    return (int) (p1.salary - p2.salary);
                }
            };
        }
        return null;
    }
}


You might protest that this is overly complicated, and that this will do the job:

public Comparator<Employee> getComparatorFor(final String field) {
    return new Comparator<Employee>() {
        public int compare(Employee p1, Employee p2) {
            if (field.equals("name"))
                return p1.name.compareTo(p2.name);
            else if (field.equals("hireYear"))
                return p1.hireYear - p2.hireYear;
            else if (field.equals("salary"))
                return (int) (p1.salary - p2.salary);
            else
                // return what? everything is a legal value!
        }
    };
}

but this one won't work because you must have a return, and returning any value is misleading (every possible integer here means something: positive for greater than, 0 for equal, negative for less than). So, you are left with the bigger version. I've actually attempted to optimize this several ways (with more generics, reflection, etc.), but I always get defeated by the requirement to return a meaningful int from the comparison method. You could find some kind of minor optimization, but you are still building structure to solve the problem: a class per comparison strategy. This is the typical structural approach to solving this problem.

Here's the same solution in Ruby, using a similar Employee class:

class Array
  def sort_by_attribute(sym)
    sort { |x, y| x.send(sym) <=> y.send(sym) }
  end
end

In this code, I'm taking advantage of several Ruby features. First, I'm applying this to the Array class, not a separate ComparatorFactory: open classes allow me to add methods to built-in collections. Then, I'm taking advantage of the fact that Ruby's method calling semantics are message based. A method call in Ruby is basically just sending a message to an object, so when you see x.send(sym), if sym has a value of :age, we're calling x.age, which is Ruby's version of the accessor for that property. I'm also taking advantage of the Ruby "spaceship" operator, which does the same thing the Java compare method does: return a negative if the left argument is less than the right argument, 0 if they are equal, and positive otherwise. Wow, it's nice to have that as an operator. I should add that to Java...oh, wait, I can't. No operator overloading.

Perhaps it makes you squeamish to add this method to the Array class (open classes seem to terrify Java developers a lot). Instead, in Ruby, you could add this method to a particular instance of array:

employees = []
def employees.sort_by_attribute(sym)
  sort { |x, y| x.send(sym) <=> y.send(sym) }
end

Now, the improved employees array (and only this instance of array) has the new method. A place to put your stuff, indeed.
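For illustration, here is that per-instance method in use, with a minimal stand-in Employee (a Struct, just for this sketch):

```ruby
# A minimal Employee for demonstration purposes
Employee = Struct.new(:name, :salary)

employees = []
# Define the method on this one array instance only
def employees.sort_by_attribute(sym)
  sort { |x, y| x.send(sym) <=> y.send(sym) }
end

employees << Employee.new("Monty", 10000)
employees << Employee.new("Homer", 50000)
employees << Employee.new("Bart",  5000)

employees.sort_by_attribute(:name).map(&:name)    # => ["Bart", "Homer", "Monty"]
employees.sort_by_attribute(:salary).map(&:name)  # => ["Bart", "Monty", "Homer"]
```

Any other array in the program is untouched; only this `employees` instance understands `sort_by_attribute`.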

When you have a powerful enough language, many of the design patterns melt away into just a few lines of code. The patterns themselves are still useful as nomenclature, but not as implementation details. And that's the real value. Unfortunately, the implementation seems to be the focus for many developers. Strong languages show you that the implementation details can be so trivial that you hardly even need the nomenclature (but it's still useful as a conversation shortener).

Next, I'll talk about language beauty.

Wednesday, September 05, 2007

Ruby Matters: Meta-programming, Synthesis, and Generation

In my last Ruby Matters blog post, I talked about meta-programming in Ruby, contending that Ruby gives you "places to put your stuff". I always wondered about meta-programming in Smalltalk, how it compares to Ruby, and where you put stuff in Smalltalk. The final piece of the puzzle came after I talked to Glenn Vanderburg (the Chief Scientist of Relevance). I was puzzled as to why the Gang of Four book (which had examples in both C++ and Smalltalk) didn't have more meta-programming. Lots of the design patterns are almost trivially easy to implement with meta-programming, but they didn't do that in the Smalltalk examples. They used the same structural approach as C++. It looks more and more to me that the Design Patterns book was really just a way to solve problems in C++ that should have been easier to solve in a more powerful language like Smalltalk. Which is why I was puzzled about the lack of more meta-programming solutions in Smalltalk. Glenn enlightened me. One of the overriding characteristics of Smalltalk is the way code is stored, in an image file, which allows for really smart tools. The program and the environment all reside in the binary image file. There are no source files as we know them today, just the image.

A comparison about implementation details is in order, on how Ruby differs from Smalltalk. Let's talk about has_many in Ruby on Rails. Typical Rails code looks like this:

class Order < ActiveRecord::Base
  has_many :lineitems
end

For those not familiar with Ruby, has_many is a method call, executed in the body of the class definition (loosely like an instance initializer block in Java, a chunk of curly-brace code in the middle of a class definition). So, ultimately, this is the Ruby equivalent of a static method call, which gets invoked as the class is loaded.
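To make the mechanism concrete, here is a toy sketch (nothing like the real Rails internals) of a class method that runs at class-definition time and synthesizes instance methods:

```ruby
# A toy has_many: the class method executes as the class body is read
# and defines instance methods on the spot.
module ToyAssociations
  def has_many(name)
    # synthesize an accessor that lazily creates the collection
    define_method(name) do
      @associations ||= Hash.new { |h, k| h[k] = [] }
      @associations[name]
    end
    # synthesize a count_* helper, echoing the kind of methods Rails adds
    define_method("count_#{name}") do
      send(name).size
    end
  end
end

class Order
  extend ToyAssociations
  has_many :lineitems     # executes as the class definition is read
end

order = Order.new
order.lineitems << "widget"
order.count_lineitems  # => 1
```

The declaration `has_many :lineitems` stays in the source, readable six months later, while the synthesized methods exist only at runtime.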

Let's talk about Smalltalk, which has first-class meta-programming. You could easily build has_many in Smalltalk, implemented as a button you click in the browser which launches a dialog with properties that allow you to set all the characteristics embodied in the Ruby version. When you are done with the dialog, it would go do exactly what Ruby does in Rails: generate a bunch of methods, add them to the class (stuff like the find_* and count_* methods). When you are done, all the methods would be there, as instance methods of your class.

OK, so at this point, the behavior is the same in Smalltalk as in Rails. But there is one key difference: the Smalltalk version uses code generation. It's a sophisticated version of a code wizard, generating the code using meta-programming techniques. The Ruby version uses code synthesis: it generates the code at runtime, not at build time. Building stuff at runtime means more flexibility. But that is a minor point compared to this one: in the Smalltalk version, you use the dialog and properties to generate all the methods you need. The original impetus for the has_many intent lives only while you are running the dialog. Once you are finished, you are left with lots of imperative code. In the Ruby version, the intent stays right where you put it. When you read the class again, six months from now, you can clearly see that you still mean has_many. Smalltalk has the same meta-programming support, but in Ruby the intent behind the synthesized code remains visible forever. That's why it's important to have a place to put your stuff. It isn't accidental in Rails that many of the DSL characteristics appear as class methods rather than code you call in the initialize method. Placing them as part of the class declaration declares intent in a big way, and keeps the code very declarative.

To summarize the similarities and differences:

  1. Both Ruby and Smalltalk give you a place to put your meta-programmed stuff.

  2. Both give you a place to put the declaration of intent (the tool in Smalltalk, the shadow meta-class in Ruby).

  3. Both pick a time in the lifecycle of the class when things happen. In the Smalltalk version, it's a one-time deal, as you use the tool to generate the code. In Ruby, the synthesis takes place at class load time. This leaves the clean, declarative code right where you put it, rather than generating a bunch of much less clear imperative code.


Glenn made an excellent point here: the Smalltalk version is a great example of accidental complexity, not essential complexity. Software is full of essential complexity: writing software is hard. But we end up subjecting ourselves to lots of accidental complexity in our tools and languages. And it should be stamped out. The Ruby version eliminates accidental complexity by providing a great abbreviation for the intent of has_many. This blog does a great job of illustrating the differences between abbreviations and abstractions.

Smalltalk had (and has) an awesome environment, including incredible tool support. Because the tool is pervasive (literally part of the project itself), Smalltalkers generally shied away from the kind of meta-programming described above because you have to build tool support along with the meta-programmed code. This is a conscious trade off. One of the best things about the Ruby world (up until now) is the lack of tool support, meaning that the tools never constrained (either literally or influentially) what you can do. Fortunately, this level of power in Ruby is pretty ingrained (look at Rails for lots of examples), so even when the tools finally come out, they need to support the incredible things you can do in Ruby.

Next up, "design patterns" in Ruby.

Thanks to Glenn for supplying me Smalltalk information, as a sounding board for this, and all the interesting bits, really.

Monday, September 03, 2007

Ruby Matters: A Place to Put Your Stuff

Why are so many people into Ruby? I get this question a lot because I speak at a lot of Java conferences and talk about Ruby and JRuby. In this and several more blog entries, I'm going to explore this question in depth.


For myself and many of my peers, it was a rite of passage to learn all the intricate tricky stuff in C++. Once I got over my fascination with C++ and got tired of banging my head against all of the arcane nonsense in the language (I used to know the details of all the different ways you could use the word const, but I've fortunately purged that knowledge), I asked one of my colleagues why he continued to love it so much. "Knowing C++ makes me part of the C++ priesthood. When people ask me questions, I'm purposely cagey when I give them answers because I think they should work as hard as I did to figure this stuff out. I don't want to give my advantage away." I shuddered and was glad I wandered to greener pastures.

After a brief visit to the refugee camp of Delphi, I made it to Java and thought I was in heaven. Finally, a language that made some sense. I liked the locked down nature of the language because I thought that Java helped prevent mistakes, especially for large projects. This is before I had embraced unit testing, so that seemed like a good thing. I would gladly give up flexibility for perceived safety. This was when I used to think that one of the most important duties of a computer language was to prevent mistakes first and support building interesting stuff second. Interesting stuff must be hazardous to large projects, and I was an Enterprise Developer. After more than a decade in Java, I started playing with Ruby because I'm a bit of a language geek and I was taking the advice in The Pragmatic Programmer to learn a new language every year. But, at the time, Ruby was just a cool little scripting language with nice syntax.

When Ruby on Rails came out, I started looking at it with great interest because it was so vastly different from the Java frameworks with which I was so familiar (this was just after I had written a book comparing them). And the more I looked at Ruby, the more puzzled I got. Wow, there's some funky stuff in there, and I couldn't understand what some of the code meant. So, I dug deeper. And slowly, I gained enlightenment. At first, I thought that it was just more of the same "obfuscated code" syndrome from C++. But I later understood that it wasn't.

The epiphany really came when I understood what some of the Rails code was doing with meta-programming, which made me look closer at the core Ruby stuff. Meta-programming was supposed to be scary stuff that only wild hackers did. But it made problems melt away, in very few lines of code. If you look at how attr_accessor works, it's so simple. I thought for a long time that attr_accessor was a keyword or something infrastructural that I shouldn't bother looking at (before I realized that lots of things in Ruby look like keywords but aren't).
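To see how simple it is, here is a hand-rolled version (my_attr_accessor is a made-up name; MRI implements the real attr_accessor in C, so this is just the idea):

```ruby
# attr_accessor is not a keyword -- a workalike is a few lines of
# meta-programming using open classes and define_method.
class Module
  def my_attr_accessor(*names)
    names.each do |name|
      # reader: return the instance variable of the same name
      define_method(name) { instance_variable_get("@#{name}") }
      # writer: assign the instance variable of the same name
      define_method("#{name}=") { |value| instance_variable_set("@#{name}", value) }
    end
  end
end

class Song
  my_attr_accessor :title, :artist
end

song = Song.new
song.title = "Kashmir"
song.title  # => "Kashmir"
```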

I have come to understand that meta-programability is an extremely important part of a language. So many things that we solve in languages like Java with complex structures and hierarchies are much more elegantly solved with meta-programming. This realization led me to start looking around at other languages, like Smalltalk and Python.


That's when it hit me: one of the things that makes Ruby so powerful is that it already gives you a place to put your stuff. If you don't think this is any different than Java, check out whytheluckystiff's Seeing Meta-classes Clearly article. I'll wait... Yes, there is a shadow meta-class behind every object in Ruby, giving you the syntax and location to put much of the interesting meta-programming stuff in Ruby. You can do lots of the same meta stuff in, for example, Groovy, but you have to create a Groovy builder object to do the equivalent of eval, and doing the equivalent of class_eval would require building yet another place to put it. I was talking to a Python programmer recently about the same subject, and I got the same response: "Yeah, Python can handle that, but you'll have to build a place to put it." One of the elegant features of Ruby is that it comes pre-defined with places to put your important stuff, rather than forcing you to build a place to put your stuff. Which is why Paul Graham says that Lisp is the most powerful language available; because you are writing code in the abstract syntax tree, you are living in the place where stuff gets put.
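For the curious, Ruby even gives you syntax to open an object's singleton ("shadow meta-") class directly. A trivial sketch:

```ruby
obj = Object.new

class << obj             # opens obj's singleton class
  def shout
    "HELLO!"
  end
end

obj.shout                       # => "HELLO!"
Object.new.respond_to?(:shout)  # => false -- only obj got the method
```

That `class << obj` notation is the "place to put your stuff": a per-object class that always exists, waiting for you, with no scaffolding to build first.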

In the next post, I'll talk about the special case of Smalltalk, tools, and code generation vs. synthesis.

Friday, August 24, 2007

Martin and Me and DSLs

Back in June of this year, Martin Fowler and I did a keynote at The ServerSide Symposium in Barcelona on a subject in which we both have some interest (and dare I say passion): building Domain Specific Languages. I saw cameras at the back of the room, but it didn't actually dawn on me that they were filming the whole thing...and that they would put it on the web. But, sure enough, here it is. It took them two months to post it, and I think I know why: some poor person had to go and transcribe Martin and me yammering away for an hour. I hope they got paid well!

This keynote pretty much summarizes much of the thinking that I've been doing about DSLs over the past couple of years, and it is nice to see the agreements and disagreements that Martin and I have on some nuances of this topic. Clearly, though, we both think this remains a Big Deal.

Monday, August 20, 2007

Lights, Cameras, Geeks!

At the Agile 2007 conference this year, they asked for submissions of videos featuring agile development themes, called the Agile Advert contest. ThoughtWorks submitted a couple (created in ThoughtWorks UK) and won the top prizes. You can see them on YouTube under Developer Abuse and Being Agile is our favourite thing. Hilarious!

Wednesday, August 08, 2007

Pair Programming Spellchecking

My colleague Jay Fields has a nice post about pair programming, when it's good, and when it's not so good. I wanted to add to and amplify his remarks a bit. Basically, anytime you pair two developers of vastly different skill sets (what Jay calls a "spell checking pair"), that's not pair programming: it is mentoring. That doesn't mean that it is wasted time, but it is a different activity.

I find that mentoring is tiring for both the advanced and novice developers: tiring for the advanced developer because they have to go slower than their natural pace, and tiring for the novice because they have to pedal faster to keep up. However, mentoring is a great way to increase the skill level of the novice quickly. If you are going to mentor, it should be for just part of a day. After the mentoring is done, switch to pairs with more closely matched levels.

A lot of companies seem to think that's what pair programming is: take an intern from college and pair them with an ace to build up their skill level. All that does is stress both.

We are generally careful when engaging with clients to pair program with their developers to determine whether the project is about enablement or delivery. If it is delivery focused, we tell them that the more senior developer may well leave their pair in the dust, and the pairs should be reorganized. You can't usually do both enablement and delivery at the same time except in rare circumstances.

Monday, August 06, 2007

Dependency Injection in One Sentence

Jim Weirich issued a challenge to describe dependency injection in one sentence. My friends at Relevance had a good version. Here's mine:


Dependency Injection enables a vitally important but nevertheless weak, limited, syntactically confounding, and dauntingly complex form of one of the kinds of meta-programming that should exist in the language.


This is fun -- can we do annotations(C#)/attributes(Java) next?

No Fluff, Just Stuff eXchange: London 2007

Way back in the late nineties, I was working for The DSW Group, which was a Borland partner. We did Borland training classes worldwide, and that's ultimately how I got into the business of speaking at conferences. At the time, I was all geeked up about Delphi, and at the first few BorCons I spoke exclusively about it and C++ (in the form of C++Builder). However, in the late 90s, I stopped having that lovin' feeling for Delphi and started dedicating my efforts to the rising tide of Java. But what was interesting to me was the adoption level in different countries. I was still traveling to BorCons around the world, talking about JBuilder. In some countries I would have huge audiences; in others, you could see tumbleweeds blowing through the aisles. I remember that at the first few Java talks I did at the Entwickler Conference in Frankfurt, no one even knew what "Java" was.

It was interesting because the UK seemed to be one of the last holdouts for Java. Even into the early 2000s, I would go to Borland conferences in London and be in the small room, while the Delphi groups were packed. Of course, these were Borland conferences, so the crowds were self selecting, but it seemed to me like Java was a slow burn in the UK. Then, of course, overnight, it was the cool thing to do.

London is now a first-class Java city, with lots of Java work going on (Java is very popular in the financial sector, and London sure has lots of that). The only thing missing from the London Java scene finally appears this August: No Fluff, Just Stuff. What better place to make the cross-ocean debut of the premier Java conference? In the past, people have literally flown from the UK and mainland Europe to see No Fluff, Just Stuff shows in the US, in random cities like Columbus, OH. Now, you can ride the tube to see a No Fluff, Just Stuff show!

The traveling circus that is No Fluff, Just Stuff invades London on August 29th, for 3 days. It's the normal cast of speakers from the US (including Venkat Subramaniam, Ted Neward, Brian Sletten, David Geary, and I've managed to stow away as well) mixed with some great guys from the UK (my ThoughtWorks colleague Erik Doernenburg, Graeme Rocher of Groovy/Grails fame, and some wild cards).

To help pump up the enthusiasm for this event, No Fluff, Just Stuff and Skills Matter have a cool promotion. Every attendee who registers before August 17th with the special code of NFJS-NEF666 will receive a Nintendo Wii. Not "entered into a drawing for a Wii" -- "will receive a Wii". Wow.

Come see us in London -- it should be a blast.

Wednesday, July 11, 2007

Polyglot Programming at erubycon

In less than a week, I'll be doing a special version of my Polyglot Programming keynote at erubycon in Columbus, OH. In it, I talk about how to make Ruby palatable to conservative enterprises, and some arguments against "conventional wisdom".

If you are into Enterprise Ruby, there are still slots available at this conference. This conference will have a great combination of speakers who are actively pushing Ruby and Rails into the Enterprise. Lots of us see bright futures for dynamic languages and their potential in traditional IT; come see a bunch of like minded speakers (and attendees) who are on the cutting edge of making it happen.

Sunday, July 08, 2007

The Rich Web Experience

The web is changing. Just a couple of years ago, we had reconciled ourselves to treating the browsers as a graphical dumb terminal. Ajax and related technologies have changed all that. And anytime there are seismic changes in the technology landscape, developers have to learn the whole new world. The preferred way to do this quickly is to attend a highly focused conference.

That's what The Rich Web Experience is all about: rich application development that runs in the browser. This conference goes beyond just a single technology's APIs; it also covers security, usability, and other topics. I'm speaking on testing and debugging the web tier, including a bunch of best practices with Selenium.

Come join us in San Jose for what should be a rocking good gathering of cool web based technologies.

Saturday, July 07, 2007

Travel Broadens You

Not too long ago, I was in India to speak at JAX India. The conference was held at a science center in downtown Bangalore, and the speakers were staying in a hotel several miles away. One of the interesting things about India is that you can hire car services for cheap, so the conference organizers had a platoon of cars at our disposal, for trips to and from the conference center, which took about 20 minutes (it would be about 5 minutes if there was no traffic, but apparently that state never exists in Bangalore).

While driving to the conference center a couple of times, we passed a part of town where there were several cows lounging in the road, in amongst the traffic. No one even noticed: the traffic flowed right around them as if they were any old standard barrier in the road. Of course, it struck me as very odd, because I almost never see cows on the main roads of Atlanta.

It made me realize that people can get used to, and habituate to, almost anything. During the conference, I talked to Java developers about some of the odd things in Java. Java has its share of strange behaviors (generics, anonymous inner class syntax, the recursive Enum definition) that look to strangers as odd as a cow standing in the road. Because Java developers are accustomed to seeing these cows all the time, they don't even notice anymore. But it strikes tourists as very odd and surprising.

Here is a concrete example. Groovy makes it really easy to interact with Java objects, including those that follow the standard JavaBean specification for get and set methods to form properties. In Groovy, when you create a new class, you can create your properties using the get and set style of coding, yet when you call the property, you can leave the get and set off.

class Foo {
    def name

    def getName() {
        return name.toUpperCase()
    }

    def setName(value) {
        name = value
    }
}

foo = new Foo()
foo.name = "Homer"
println foo.name

Now, when I refer to what looks like a regular public field (to a tourist), I get the uppercase version because of the automatic mapping to a getter. To a Java developer, this auto-magic mapping from the property methods to actual properties seems as normal as can be. However, to a Java tourist, it looks as strange as a cow in the road.

This suggests two things to me. First, while every language has its own bizarre quirks, it is the seemingly irrational stuff that turns off tourists to the language. Do you really want to go into a history of the JavaBean specification with the poor Python programmer who's vacationing in JavaLand, so that they can understand this weird but expected behavior? And will they possibly stay awake for it? Probably not, so they'll just say something like "Wow, Java's broken" and move on. I think this hampers attempts to expand Groovy's meme outside the Java world. I know there are efforts to port Groovy to the .NET platform, but will it possibly attract any .NET developers, given its unabashed Java-ness?

The other thing this suggests to me is the advice in The Pragmatic Programmer to learn a new language every year. Just as physical travel broadens you, traveling to a strange language broadens you too, so that when you see cows in the road back home, you understand that it's not normal for everyone. Stranger in a strange language, anyone?

Thursday, July 05, 2007

Fair and Balanced Essays?

An interesting review of the 2007 No Fluff, Just Stuff Anthology appeared on Amazon the other day. Generally, I don't bother to reply to Amazon reviews (because it is, after all, someone's opinion, and I can't dispute someone else's opinion), but this one has some interesting points. First and foremost, I would like to thank the reviewer for giving us 4 out of 5 stars, so his complaints should be kept in perspective (he did, at the end of the day, enjoy the book, and I don't want to gloss over how much we appreciate that).

The reviewer mostly takes me to task for not forcing the authors to be more balanced against what he calls "agitation for the new age software movement". If you look closely at the cover of the book, you'll notice I'm not listed as the editor, but as the compiler (as in "Compiled by Neal Ford"). This distinction is important and planned. I did not edit this book in the normal sense (of vetting what the authors want to write), I merely compiled the essays. Frankly, it would be an awful job trying to get this group of authors to bend to my will! All the authors of this anthology are quite passionate about their subjects (which is what makes the book interesting, in my opinion).

The reviewer takes Brian to task for not providing the WS-* case against REST, but I don't think that was the purpose of the essay. Where in the vast ocean of information about WS-* do you even see a mention of REST? Unless you are specifically comparing two technologies, you frequently don't, well, compare them. I notice that David Geary (whose excellent essay was praised by the reviewer) didn't compare and contrast it with Struts, which is the de-facto market leader.

But the more interesting issue for me is the clear disdain the reviewer has for what he calls "new age dogma": REST vs. SOAP, dynamic languages, and Agile development. It's no secret that many of the No Fluff, Just Stuff speakers do prefer agile development and looser contracts. While not suitable for all applications, this is the cutting edge of software development right now. It is incumbent on "thought leaders" to point out the latest trends in software development, so that the attendees and readers know what's on the horizon. Software and software development continue to evolve at a furious pace. While no one can predict the future (except maybe Bruce Tate), it is interesting to see where the people who were really into Java in 1996 are spending their time now (a hint: mostly with loose contracty type stuff).

Oh, and the reason there is more information about IntelliJ than Eclipse? Because most of the authors use IntelliJ (we think it's the best tool available), I got inundated with cool IntelliJ tips and tricks (and ended up cutting a bunch of them). We had to ask over and over to even get tips for Eclipse, which is not to say that Eclipse is bad; it just doesn't generate as much passion as IntelliJ. There's that word again: passion. All the writers of this volume are passionate about technology (which is extraordinary in and of itself), and want to write about it for very little remuneration.

Given the amount of other open source in the book, why wouldn't we prefer Eclipse over IntelliJ if they were essentially the same? As a group, we tend to choose things that we think are best of breed, whether web framework or IDE. Hopefully, that's at least some of the appeal of both the anthology and the No Fluff, Just Stuff tour.

Monday, July 02, 2007

PragMactic-OSXer

New Mac OS X users are like recently reformed smokers: they can't stand to see it when someone indulges in their old vice (in this case, Windows). I made the total plunge about 1 1/2 years ago, and more and more of my friends are doing it too. People who attend No Fluff, Just Stuff often note that most of the speakers are using Macs: the answer I frequently give is "because we can". It is crystal clear to me that I am simply more productive on the combination of Mac hardware and Mac OS X. I'm paid for how much I can produce in a given amount of time, so it makes sense for me (even if I have to buy it myself) to pick the sharpest tool I can find.

However, just picking up a Mac won't instantly make you more productive. In fact, if you just scratch the surface, you have no idea that Mac OS X is a deep ocean. There is always a learning curve when you switch something as important as an operating system. To that end, several of the No Fluff, Just Stuff speakers and I have started a blog devoted to making the best use of the Mac. It is called "PragMactic-OSXer". If you are switching, or are just curious to see how the other half lives, come over for a visit (or attach your blog reader to it).

Thursday, June 28, 2007

TSS DSL Keynote Recap

Martin Fowler and I just finished delivering the keynote on Language-Oriented Programming at The ServerSide Symposium in Barcelona, Spain. He and I have never presented together before, but we are both passionate about this subject, so it sounded like fun. And it was. I created a skeleton slide show in Keynote, to which he added and made suggestions, and we paired for about half an hour the night before to make sure that it represented not only a consensus view but also good leaping-off points for the presentation. We approached it like a public conversation: we had pretty minimal slides, and we took turns talking about the subject (along with minor disagreements) along the way.

There was a little bit of a frenzy just before it started. For some reason, some projector switching boxes just don't seem to like the Mac. My laptop worked fine in all the other breakout rooms (which only had one projector each). In the main room, my machine could see that it was attached to an external monitor (I could see the resolution, the refresh rate, and every other detail about the external projectors), but for some reason they refused to show the image. I worked on this with an increasingly frantic IT guy from the hotel. We tried a Dell laptop and it worked fine. He kept insisting that there was something wrong with my machine, and I kept demonstrating to him that I could see his equipment just fine. I've encountered this problem before (see this entry), and it always turns out to be the switching box for the projectors, which for some bizarre reason has an affinity for Windows. Finally, as a last resort (time was running out), I exported the presentation from Keynote to a PDF file, copied it to a thumb drive, and moved it to the Windows laptop. I lost all the animations and other effects, but the slides looked OK. By this time, it was time to start speaking, so I opened the slide show full screen on the laptop. Unbeknownst to me, someone had set the machine to operate in kiosk mode when Acrobat is full screen, so the slide show started playing without me doing anything. Finally, rather than try to figure out how to turn that off in front of 300 people, I just left it with the Acrobat Reader border showing and launched into the talk. As much as I love the Mac, it seems that the rest of the world still harbors ill-will toward anything non-Windows. And it's not major stuff anymore -- it's the strange little things like switching boxes for projectors.

For anyone who saw the presentation, that was why it was in non-full screen Acrobat Reader view. Despite this hiccough, I thought the talk went reasonably well. Martin and I both have lots to say on this topic. When we started, I feared that we wouldn't have enough material, but he and I are both prodigious talkers, so we ended up having to rush a little to get to the end. I feel like we presented a pretty good summary view of how we both feel about this important subject.

Saturday, June 23, 2007

Speaking at TSS Barcelona Next Week

I spoke at the The ServerSide Symposium in Las Vegas earlier this year. Next week, I'm traveling to Barcelona (for the first time) to speak at the European edition. Martin Fowler and I are pairing on the keynote, about Domain Specific Languages (a topic for which he and I share considerable enthusiasm). I'm also doing a regular talk on Selenium.

Friday, June 08, 2007

Coalmine Canary Tests

I was pairing the other day and we started talking about the scope of tests. I'm a huge advocate of writing the simplest, just slightly more than trivial tests when doing test-driven development, especially in a dynamic language (we were using Ruby). My pair, Kristy, and I got into a discussion around two areas: just how simple should the tests be, and should you keep them around after you've written the code? Lots of developers use these ultra-simple tests, then make the test more robust once they've gotten the first incarnation to pass. I'm a fan of what I call "Coalmine Canary" tests. Back in the old days, coal miners used to take caged canaries down into the mines with them. The birds were much more sensitive to the gas that can build up in mines; if the bird died or started struggling for breath, the coal miners knew it was time to high-tail it out of the mine. In the same way, very simple tests help you judge the safety and semantics of the code under test. We created a very simple test (basically, verify that a method that included a semi-complicated ActiveRecord query returned a non-zero number of rows). We decided to keep it around, because it verified that we were calling the infrastructure correctly (in other words, that the semantics of the method were correct). Sure enough, as we started adding more code to the class, we introduced bugs. The first test we ran after making changes was the Canary test. If it failed, we knew that we had a fundamental problem with our infrastructure, and didn't even bother looking at the more complex, nuanced behavior we had added until the Canary test passed again.
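To make that concrete, here's a minimal sketch of the idea in plain Ruby -- Order and recent are made-up stand-ins for the real class and its semi-complicated ActiveRecord query:

```ruby
# Stand-in for the class under test; in the real code, recent wrapped
# a semi-complicated ActiveRecord query.
class Order
  def self.recent
    [{:id => 1}, {:id => 2}]  # pretend these rows came from the database
  end
end

# The canary: don't inspect the rows' contents yet -- just verify that
# calling the infrastructure returns a non-zero number of them.
def canary_passes?
  Order.recent.size > 0
end

puts canary_passes?  # => true
```

The assertion is deliberately crude; its whole job is to die loudly the moment the plumbing breaks.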

Having (and keeping) the simple Coalmine Canary test is a great sanity check on your code. If it fails, you've got serious problems, and you'd better fix those before you do any more mining in your code.

Tuesday, June 05, 2007

Neal and David Talkin' 'Bout Agile

As a sneak peek at the workshop David and I are doing at The Agile Experience, No Fluff, Just Stuff caught us late last year in a video podcast about agility, how companies react to it, and its future. It's a pretty accurate representation of how I feel about agility (minus the faux pas of saying that the Agile Manifesto was signed in Colorado, not Utah).

Tuesday, May 29, 2007

Are Open Classes Evil?

When doing my JRuby talk at No Fluff, Just Stuff, one of the consistent questions I get is the morality question about open classes: are they evil? Open classes in dynamic languages allow you to crack open a class and add your own methods to it. In Groovy, it's done with either a Category or the Expando Meta-class (which I think is a great name). JRuby allows you to do this to Java classes as well. For example, you can add methods to the ArrayList class thusly:
require "java"
include_class "java.util.ArrayList"
list = ArrayList.new
%w(Red Green Blue).each { |color| list.add(color) }

# Add "first" method to the proxy of the Java ArrayList class.
class ArrayList
  def first
    size == 0 ? nil : get(0)
  end
end
puts "first item is #{list.first}"

Here, I just crack open the ArrayList class and add a first method (which probably should have been there anyway, no?). When you define a class, Ruby checks whether a class of the same name has already been loaded. If it has, it adds the new behavior to the existing class.

If it's too frightening to add a method to the entire class, Ruby gives you the option of adding it to an object instance instead. Consider this:
# Add "last" method only to the list object ... a singleton method.
def list.last
  size == 0 ? nil : get(size - 1)
end
puts "last item is #{list.last}"

Here, I add the last method just to this one instance (the list object). That way, you don't add it to the entire class.

Many Java and C# developers are shuddering in horror right now. The consensus seems to be that this is just too dangerous. And, like all advanced language features, it can be abused. But here is a counter argument. How many Java and C# developers have a StringUtils class in their project? Pretty much everyone. Why? Because the language designers for both languages not only won't allow you to add your own methods to String, they won't even allow you to subclass it to create your own String class. Thus, you are forced by the language design to switch from object-oriented coding to procedural coding, passing Strings around like you are writing in C again.
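For contrast, here is what that looks like in Ruby -- no StringUtils needed. (vowel_count is a made-up example method, added straight onto String.)

```ruby
# Instead of a procedural StringUtils.vowelCount(s), put the method
# where it belongs: on String itself.
class String
  def vowel_count
    count("aeiouAEIOU")
  end
end

puts "Hello World".vowel_count  # => 3
```

The call site stays object-oriented: the string answers the question itself, rather than being handed off to a utility class.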

Open classes allow you to make your code much better, when used responsibly. One common argument against this feature from paranoid developers is that they don't trust "junior" developers with this kind of power. So, tell the junior developers on the project not to do it! And, if you really hate the additions that have been made, you can also reopen a class and remove methods at runtime, using either remove_method (which removes it from a class) or undef_method (which removes it from the entire inheritance tree).
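A quick sketch of the difference between those two (Parent, Child, and greet are made-up names for this example):

```ruby
class Parent
  def greet; "hi from Parent"; end
end

class Child < Parent
  def greet; "hi from Child"; end
end

kid = Child.new
puts kid.greet            # => hi from Child

# remove_method deletes only Child's version; method lookup now
# falls through to Parent.
class Child
  remove_method :greet
end
puts kid.greet            # => hi from Parent

# undef_method goes further: it stops lookup entirely, even up the
# inheritance tree.
class Child
  undef_method :greet
end
puts kid.respond_to?(:greet)  # => false
```

So remove_method is a scalpel and undef_method is a wall: one uncovers the inherited behavior, the other blocks it completely.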

You can add new methods to classes in Java if you want to use Aspects, but:
  • aspects use an entirely different syntax from Java

  • you give up tool support in your IDE for the new methods (in fact, it probably won't even compile properly without some plugins and such)

  • it's so much trouble that you just suffer through the ugly, non-OOP StringUtils class


In JRuby, you can add methods to existing classes:
  • using the natural syntax of the language

  • with as much tool support as anything else (we don't have IntelliJ for Ruby...yet, but nothing chokes either)

  • so easily that it feels natural


I think this level of power makes developers used to non-dynamic languages queasy. But in the right hands, it can make your code much more expressive, and keep you from polluting your namespaces with lots of made-up class names with Util and Helper tagged on the end. Like all advanced features, when used correctly, it makes your code much better. Are open classes evil? No more evil than any advanced language feature.