“If you were plowing a field, which would you rather use: two strong oxen or 1024 chickens?” I’m the first to admit that was one of the more bizarre conversations I’ve had with developer and long-time friend Chris Cogdon. If memory serves it was 1989, and he was paraphrasing Seymour Cray on the tradeoffs inherent in scale-up vs scale-out. Scale-up (aka the oxen) won that round, but in 2012 I believe the chickens might come home to roost (pun very much intended), with real-world planning implications for IT leaders not only in terms of improved price/performance and energy efficiency but also application development and team skills. First, let me shed some light on Chris’ analogy in order to set some context for the next couple of posts, all of which trace their roots back to this idea.

Back when I was a lad (please, no abacus jokes), the doomsday crowd were predicting that we’d very soon be bumping up against the upper limits of physics, and that the progress observed by Moore’s “law” would soon grind to a halt (sound familiar?). While we now know that didn’t happen, at the time computer scientists and vendors were scurrying around trying to determine how they could harness microprocessors to create mainframe-style “grunt” by scaling workloads across multiple CPUs.

As a developer, Chris’ point was that each architecture represented a choice, a choice bounded by the type of work you wanted to accomplish (for those inclined to get their geek on, see this Wiki article). That simple choice then drove everything else – operating systems, programming languages and storage architectures.

The two competing camps were Seymour’s “two strong oxen” model, championed by the mainframe/midrange system types (powerful RISC and CISC CPUs well suited to a handful of big, batch tasks), versus the “1024 chickens” camp (thousands of lower-power CPUs performing large numbers of simple tasks repeatedly). Enterprises largely chose the former and the academic/science community chose the latter, the most famous example being Danny Hillis’ now-defunct Thinking Machines and its 65,536-processor Connection Machine, a beast beloved of biochemists and nuclear weapons researchers everywhere. Virtually no-one except their creators could program the 64K-CPU beasts, and they all but died out – almost…

The relevance to today’s CIO is that engineers at “.com” companies such as Facebook, Google and their progenitors recognized that lots of users performing the same basic task (like search, or updating friends on what they just had for breakfast) was architecturally similar to the use-case of a physicist simulating thousands of chemical reactions, and as a result they borrowed liberally from those architectural and programming-language toolkits – toolkits which are now migrating into the enterprise.

Programming scaled-out architectures no longer requires a PhD. The great news is the arrival of modern development languages and frameworks, in particular Erlang, along with Ruby (including Ruby on Rails), Struts, PHP and Python. Developed in, or derived from, the days of massively parallel systems, these languages are behind the rapid growth of scale-out applications capable of tapping the power of the 1024 chickens without requiring an army of resident computer scientists. Like COBOL and Fortran before them, these newcomers don’t spell the end of Visual BASIC, C and Java, but they do mean that CIOs should be assessing their organization’s readiness with HR and their leadership team and preparing plans to hire or develop these skills in both developers and operations staff.
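To make that concrete, here’s a minimal sketch of the “1024 chickens” pattern in Erlang – the module and function names are mine, purely illustrative, not from any particular codebase. It spawns 1024 lightweight processes, hands each one a small unit of work, and gathers the results; the same handful of primitives (spawn, send, receive) is all it takes:

```erlang
-module(chickens).
-export([plow/0]).

%% Spawn 1024 cheap, lightweight processes, give each a small
%% unit of work, then collect one result per process.
plow() ->
    Parent = self(),
    Pids = [spawn(fun() -> Parent ! {self(), furrow(N)} end)
            || N <- lists:seq(1, 1024)],
    [receive {Pid, Result} -> Result end || Pid <- Pids].

%% Stand-in for the real task (a search query, a status update,
%% one simulated reaction); here it just squares its index.
furrow(N) -> N * N.
```

Swap furrow/1 for a search shard or a simulated reaction and you have the skeleton of the scale-out workloads described above – no resident computer scientist required.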

What’s true of software is also true of hardware. With Web 2.0 applications and cloud architectures now designed to exploit distributed, scale-out arrays of cheaper compute, network and storage, I believe CIOs should be using early 2012 to re-evaluate their datacenter plans and standards to ensure that they’re building on a modernized, converged architecture capable of accommodating both scale-up and scale-out models without requiring a rip-and-replace between the two (not one to plug product, I’ll break with policy here and recommend you look at HP’s 3PAR storage as a great proof point of what “2.0” hardware architectures can bring to the table).

Granted, most of these skills are in highest demand at “.com” startups everywhere, but is anyone in the enterprise having trouble ramping up their skills fast enough, or is everyone getting by with the IT equivalent of Latin (i.e. dead, but stable, languages)?

P.S. Both of these topics naturally lead us to the big-data and ARM vs. Intel discussions, each of which I’ll cover in separate posts – stay tuned.