Book Review: The Goal

The Goal: A Process of Ongoing Improvement by Eliyahu M. Goldratt

My rating: 4 of 5 stars

I wanted to love this book. I very nearly loved this book. Unfortunately, I read “The Phoenix Project” first.

I keep flipping between 3 and 4 stars for this. The book deserves 5 for its place in business history, and I settle on 4 because it will communicate on a general-purpose level far better than a book like “Phoenix.”

But having been around people who understood bottlenecks and the Theory of Constraints (if you don’t know what those are, put down this review and go read the book) for some time, the book seems less revelatory to me. It’s hard for me to gauge what the impact of this must have been on mid-’80s American manufacturers, let alone what its impact should be on our industry. The book essentially introduces the reader to TOC and many of the practices that were later encoded in the fabric of the lean and agile movements through a Socratic dialogue – posing a series of challenges to its characters and then asking them (and you, the reader) to extrapolate from past lessons and determine the next appropriate course of action just ahead of the characters.

If you’re in IT, “Phoenix” will speak more clearly to your situation and will translate more directly to your work and world. Read “The Goal” afterwards to gain a deeper/fuller understanding of the Theory of Constraints – some of the explanations and the translation of WIP to inventory will help you visualize practices you struggle to describe daily.

If you’re not in IT, just read it – it’s a breezy, light book, and is written to slightly below the level of an airplane novel. There are some really gendered and racially-insensitive notes that are likely injected to reflect the book’s imagined audience, a factory foreman. This dates the novel somewhat, but the struggles the characters are facing – both interpersonal and work-related – continue to hit home, and overall the book executes its core mission competently.




Updates: Portland and a Podcast

More updates soon, but the short version: I’ve taken on a new role as Methodology Lead at Puppet, and in concert with this I’ve moved from my longtime home base in the New York area to Portland.  Great city, awesome tech scene, and I finally get to spend face time with the people at Puppet.

I’ll be updating this blog more frequently soon, but to tide you over, here’s a podcast I did with Kelsey Hightower, Dawn Foster and Mike Stahnke as part of Puppet’s DevOps December, on bootstrapping a brownfield environment into Puppet:

http://puppetlabs.com/blog/devops-culture-podcast/

The podcast is about 30 minutes long and is worth a listen.

Don’t Lay the First Stones of a Cathedral, Make a Blueprint of the Bazaar

Recently I spent time working in a large IT organization that was undergoing an internal reorganization and attempting to re-invent its IT environment, trying to develop an elastic cloud environment for its web services.  And yet we spent the first few weeks fighting through a morass: trying to get servers ordered and provisioned, pleading for repositories to be made available, bargaining over updates to RPMs.  It was very clear that this organization could not conceivably expand into any type of elastic provisioning environment while retaining its current system – over the first six weeks of this engagement, the organization was unable to provision a single server for me to work with.  My liaison inside the company was baffled, distressed, and seemed lost – he thought that without getting internal servers provisioned, he’d never be able to get anyone to invest or believe in his project.

When introducing a new set of technologies or a new workflow to an organization, whoever is doing the introducing has to deal with a company that already has a way of doing things.  People fought hard for that way.  They were likely smart, well-intentioned people trying to develop a system that did its best to cope with the situation they were confronted with. There are many who remember this system as much better than some horrific, older alternative.  The organization has invested huge amounts of time, money, training and political capital in trying to implement and enforce the system.  And now you want to break it.  Don’t be surprised that you aren’t greeted with flowers and cheers.

What’s worse, by attempting to modify or eliminate the existing system, you are aligning yourself with the shadow ops folks – people who have been implementing their own in-house workarounds for years and who bedevil every IT organization.  We’ve all had our own encounters with variations of The Excel Worm, and hopefully we have learned from them that there are reasons, not always immediately apparent, why certain technologies or procedures are in use in other parts of our organization.  To some extent, people who have worked within a system for several years have become institutionalized – they are so used to things working in a particular way that they have lost their ability to see outside it.  This is not true just of large organizations.  I have seen very small companies that are sufficiently wrapped up in their own work environment that they’ve lost sight of developments happening right in front of them.

So if you don’t want to alienate the people fighting for the status quo, and you don’t want to join the shadow ops rogue squadron, you are left with the question: how do you radically change the slow, bureaucratic system in which you are entrenched?

The answer is that you need to remember what the goal was in the first place – radical change.  What will you need to show your colleagues to get them to buy into your radical change?  In my experience system design is much like product design – people aren’t called to action by a sweeping, optimistic description of a new system.  They need a mockup, a Proof of Concept, that enables them to visualize a new workflow and how they can map existing processes to it.  Unless you have a phenomenally accurate history of process engineering within your organization, producing a Minimum Viable Product of your own will speak much more loudly than promises that your new, radically redesigned system will meet everyone’s needs.  In this, as in development, Code Wins Arguments.

In the case of the large IT organization I discussed, I eventually set up a parallel environment in EC2. This was not done to encourage them to move sensitive internal data to the cloud, or to sidestep procedures. No network tunnel was established between the organization’s IT network and the EC2 VPC. Rather, the cloud environment was set up to demonstrate to a disbelieving, institutionalized and jaded workforce that these weren’t just words – THIS STUFF IS POSSIBLE.

Once my liaison within the large organization got his hands on this environment, I could watch the change play over his face.  Rather than an endless series of “but how will we….” questions, he was able to play with a model, to ask much more specific and pointed questions, to try out new ways of thinking.  Rather than spending my time at his company fighting the political battles to provision servers in a new way, I was able to provide him with an end goal and a series of steps to get there.  He can now provide realistic answers or even his own Proof of Concept implementations for colleagues in his organization, based on an understanding of a system he would not have been able to create purely from a  conceptual model of where he was going.

On a smaller scale, when I develop solutions for clients, I set up micro-simulations of their environment, usually in VMs on my laptop.  When we’re trying to achieve a deliverable, rather than proceeding down a linear path toward what we think is the goal, it is often far more beneficial to quickly mock up a solution and then come back to the problem from the endpoint – are we in fact on the road to this solution?
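To make this concrete, below is a minimal sketch of what one of those micro-simulations can look like – a single Puppet manifest standing in for a client’s web tier.  Every name in it (the “acme_app” package, the vhost path) is a placeholder invented for illustration, not anything from a real engagement:

    # site.pp -- a throwaway stand-in for a client's web tier.
    # 'acme_app' is a hypothetical package name used only for this sketch.

    package { 'httpd':
      ensure => installed,
    }

    # The client's application, assumed to be available as an RPM in
    # whatever repository the mockup VM is pointed at.
    package { 'acme_app':
      ensure  => latest,
      require => Package['httpd'],
    }

    # A minimal vhost fragment so there is something visible to poke at.
    file { '/etc/httpd/conf.d/acme_app.conf':
      ensure  => file,
      content => "# stand-in vhost for the acme_app mockup\n",
      notify  => Service['httpd'],
    }

    service { 'httpd':
      ensure  => running,
      enable  => true,
      require => Package['httpd'],
    }

Apply it to a scratch VM (or an EC2 instance, as in the engagement above) with puppet apply site.pp, break it, re-run it, and you have something tangible to argue about instead of a diagram.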

If you are that lone sysadmin or team lead trying to wage a battle to introduce DevOps practices or tools to your environment, remember that one foot in front of the other is not always the best path.  Produce something small that works, and let people get excited about the tangible new product.  Disrupt your organization not by competing with it from within, but by producing the effect that a faster, smaller competitor has on a large organization.  Force it to confront what it’s really trying to achieve, and to ask itself whether or not what it’s doing will get it there.

The Ops Control Funnel

Reading Jeff Sussna’s post Ops Ignorance Isn’t Dev Bliss, I was split between agreeing heartily with his thesis and disagreeing on the remedy.  Jeff posits that all development should be done in an environment that closely, if not exactly, resembles production, ensuring that materials produced by developers are ready for installation in production environments.

The problem here is that it is important for developers to be able to work freely and to explore new environments, and that freedom does indeed require giving them greater control over their systems than we as ops managers are comfortable providing.  We can debate whether this should extend to OS flavor, but certainly developers should be able to install and test newer versions of software, experiment with new libraries, and generally act as internal disruptors.  At the same time, they must produce artifacts that can be consumed by the larger organization when it is time to move to production.

My solution to this has always been to treat environments as a funnel, where the lowest tier is not dev but another environment, which I’ll usually call “test.”  This is a sandbox environment – usually it consists of VMs, and the best SLA the developers get from me on it is “if you screw it up, I will flatten the sandbox and hand it back to you as it looked on the day it was born.  I will spend zero hours trying to debug why your attempt to force-install RHEL 4’s glibc onto Fedora 17 is causing sudo to fail.”

One step above this is dev, which is still loosely controlled but for which I now hold the reins via Puppet – the OS version is stable, login & LDAP creds are used – but the developer still has root.  Once again in this environment our SLA is low, but we do make a greater attempt to assist in troubleshooting.  The closer the environment is to a production-type system, the more energy we will expend, but it remains best-effort.
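As a rough sketch of what that baseline can look like – the class name, the “developers” group and the package choices are all hypothetical, and assume a RHEL-family dev image – it can be as thin as directory-backed logins plus an explicit grant of root:

    # A deliberately thin, hypothetical baseline for the dev tier.
    class profile::dev_baseline {

      # Directory-backed logins, so dev boxes use the same credentials
      # as the rest of the environment (package names assume RHEL 6-era boxes).
      package { ['openldap-clients', 'nss-pam-ldapd']:
        ensure => installed,
      }

      service { 'nslcd':
        ensure  => running,
        enable  => true,
        require => Package['nss-pam-ldapd'],
      }

      # Developers keep root on this tier -- granted explicitly through
      # sudo rather than by handing out a root password.
      file { '/etc/sudoers.d/developers':
        ensure  => file,
        owner   => 'root',
        group   => 'root',
        mode    => '0440',
        content => "%developers ALL=(ALL) NOPASSWD: ALL\n",
      }
    }

Everything beyond that – library versions, runtimes, experimental daemons – stays in the developers’ hands on this tier.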

The real work is done at the next phase – QA.  Historically this is where I have forced strict compliance, and doing so serves several purposes.  Dev and QA typically engage in an iterative cycle that is largely independent of Ops – in fact, needing Ops to be involved at this level is usually a sore point for all teams, causing delays in rapid deployment and putting undue stress on both Ops (who are supporting pre-alpha-grade software) and Dev (who are just trying to get code rolled out).

The key difference between QA and Production, in most environments I manage or develop, has been control of the software repository from which packages are installed.  In QA I provide that control to Dev.  In Production, control is throttled either by Ops itself, or by an automated process if the organization is implementing Continuous Integration.  I then instruct my configuration management software (always Puppet, but then you already knew I was going there) to install the latest version of the software from the repo.
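As a sketch of what that handoff looks like in a manifest – the repository URLs and package name are invented, and it assumes the QA and Production tiers map onto Puppet environments – the only thing that changes between tiers is which repository the node trusts; the package declaration itself stays identical:

    # Hypothetical repo handoff: Dev controls what lands in the QA repo,
    # Ops (or the CI pipeline) controls what lands in the production repo.
    $app_repo = $::environment ? {
      'production' => 'http://repo.example.com/prod/el6/$basearch',
      default      => 'http://repo.example.com/qa/el6/$basearch',
    }

    yumrepo { 'internal-apps':
      descr    => 'Internal application packages',
      baseurl  => $app_repo,
      enabled  => '1',
      gpgcheck => '0',
    }

    # The same declaration in QA and Production: install whatever the
    # repository currently says is newest.
    package { 'acme-webapp':
      ensure  => latest,
      require => Yumrepo['internal-apps'],
    }

Promotion then stops being a Puppet change at all: Dev pushes candidate builds into the QA repo as often as it likes, and moving to Production is an Ops-controlled (or CI-driven) copy of the vetted package into the production repo, which Puppet picks up on its next run.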

Doing things this way accomplishes several goals:

  • Development can play freely in a test sandbox without feeling constrained and unable to innovate
  • When it is time to check code into source control, however, that code has to run successfully on a platform supported by Ops.  This forces discussion about upgrades to libraries, OS version, and other external components to happen early in the development cycle
  • When Development turns to iterate with QA, it must produce a build artifact that is in the same format that will eventually be used in Production.  Since Ops is now out of the day-to-day picture in the Dev/QA release cycle, the burden is on Dev to produce packages that will successfully install in a Production-like environment.  But they can bang on this as long as they like without having to poke an Ops guy at each iteration
  • When they are done, the software they produce functions in a Change Management-controlled environment identical to Production, which means it can be run through a test suite and ideally dropped into a repository, ready for Production deployment – bringing the organization, if it isn’t using CI, one step closer.

The key is to allow each part of the organization as much freedom to innovate as you can, while letting the structure of the environment itself encourage compliance and early discussion of structural changes.  This leaves Ops in the role of curating the entire environment, ensuring that the funnel is continually updated (you can’t just say “no” to every platform change request, guys), and generally looking after the health of the system as a whole, rather than battling with tarballs built on an Ubuntu laptop.

In order for DevOps to work, both sides need to allow the other the freedom to operate in their most comfortable habitat while using tooling designed to make collaboration easier.  Locking down developer workstations isn’t going to accomplish that, and the moment an organization’s administration hears “I couldn’t develop this feature because Operations won’t let me install xyz software on my workstation,” your attempt at lockdown will fail – or the organization will stagnate.  We must be facilitators, guardians of stability and shepherds of good systems practices – not the Black Knight trying to stop everyone from passing.  Because we all know what happened to him.

Thoughts on Certification

UPDATED 2:28 pm with minor grammatical edits

Kris Buytaert, with whom I normally agree, has a post out today entitled Open Source Certification, Friend or Foe (go on, read it, I’ll wait) suggesting that certification in Open Source products isn’t worth very much in the long or short run.

This has not been my experience.  While I have not felt the need to acquire many certifications myself, I have hired lots of folks with them, and have seen the effects of their implementation both from the consumer and producer side of the industry (disclosure: my company, Puppet Labs, just announced its own certification program, scheduled for release in September.  My feelings about certification have not, however, changed since Puppet announced its certs, and most of my experience around certification was gathered before my work for Puppet).

There are many things certification can represent, and your view on whether or not it is worth pursuing is heavily influenced by the direction from which you approach it.  I’ll take a very simplistic approach today and discuss the impact certification had on my own career.

Early in my career, I was a fairly strong UNIX/Linux zealot.  I had gotten my start working on Digital UNIX, my first Linux was Slackware, I abhorred RedHat for being “too commercial” (these were the RedHat 5.x days – not RHEL, but RedHat) and I refused to touch Windows or MacOS at all.  I stubbornly ran Linux on my laptop long before doing so was easy or even fully workable.

At work, however, the environment was skewed far more toward “enterprise” software.  Debate the merit of that phrase as you like; there are companies that believe it exists, and they will only invest in products that purport to deliver it and its litany of intangible benefits.  As such, while DEC and Solaris had managed some UNIX penetration, I was having no luck at all getting Linux and Slackware traction even within my own group.  Further, Microsoft was making heavy plays for my company’s business, and while we ran a huge Sendmail infrastructure, the move to Exchange seemed more imminent by the day.  My daily arguments about freedom, about vendor lock-in, about extensibility were gaining me absolutely no traction at all.

I realized very quickly – and I was still in my late teens at this point – that I was going to have to learn everything I could about this Microsoft infrastructure in order to counter the arguments they were making, arguments frequently couched in terms like “vertical integration” and touting the benefits of the nascent Active Directory – this was ca. 2000 – for which I had no response.  And so, much to the derision of the peers in my group, I embarked on getting my MCSE.  I had access to a small VMware environment – this was pre-ESX, so GSX, which itself ran atop Windows – and using it I set up an Active Directory environment and proceeded to use Microsoft’s curriculum to teach myself Windows administration.  Say what you want about the validity of MCSEs in those days – yes, they were paper certs – but I was not attempting to memorize test answers to get a credential.  I was attempting to scope and learn a new technology for which I had no teacher and no guidebook, and to understand it as completely as the technologies I used daily.  I built networks and broke them, learned RRAS, removed Domain Controllers in completely unacceptable ways, torched DNS, and then learned to fix the entire environment without starting from scratch.

By the end of that process – which stretched over about a year – I was done: I had passed seven exams, my boss jokingly threatened to reduce my salary for having achieved MCSE status, and I began being taken seriously by the higher-ups.  I was soon invited to, and able to participate substantively in, conversations about the technological direction of my organization.  Now granted, MCSE certification, particularly in those days, was more about teaching you how to sell MS products to your boss than it was about teaching you how to use them, but I had taken the trouble to understand and not just aim to pass, and so I had something to contribute – and I understood exactly what arguments were being made by my MS-oriented peers, and how closely they aligned with reality.

But something else had happened to me, something that in the long run turned out to be far more important.  In exposing myself to a product I didn’t like through a certification process I considered ridiculous, I learned some of the strengths of MS products, and I learned to understand where they had my own systems of choice licked.  I didn’t change my views on software freedom, but I now understood what RedHat was doing – making FOSS acceptable for the “enterprise-only” club – and I began to realize that there were elements of this “vertical integration” that weren’t just sales-speak.

Ultimately, neither side won the day at that organization – both began to work together to build a hybrid solution – and I think that was very likely the best outcome.  For my own personal growth, though, something more interesting happened.

Sometime around 3 years later, after I had left that job, I was contacted by its sister organization.  They had a gigantic Windows infrastructure, their head of servers was a UNIX guy and detested it, and they needed somebody who could come in and broker a peace between both sides.  As a result, after 5 or 6 years of championing UNIX and then Linux, I was suddenly a Windows administrator with nary a Linux box under my control.  And this began a career direction for me of entering organizations which were experiencing contentious technological disagreements and brokering a peace – what would become for me, in time, a focus on DevOps.  An opportunity and a direction that never would have opened for me without those four letters, MCSE, on my resume.

I’m not even close to saying that certification will open that kind of path for most people.  Ultimately what made me successful was my initial willingness to pursue the MCSE and to better understand my adversaries, and then to embrace the stronger technologies on both sides at the end of that process.  But certification was a means to an end, and more importantly, with years of Linux/UNIX experience behind me, it was a way to signify to potential employers that I had knowledge about a subject that didn’t show up directly in the experience section of my resume.

Certs are not a panacea.  But they aren’t worthless for the aspiring SA either.  They are an element of an ecosystem…and they can be worth pursuing.

Another Book Review: Hackers by Steven Levy

More Puppet and Ruby content very soon, but in the meantime, yet another review…

Hackers by Steven Levy

My rating: 4 of 5 stars

Hackers is a classic work of computer journalism. In 1982-83, Steven Levy discovered what he termed the “Hacker Ethic” – a code of conduct around information sharing that is still entirely recognizable in today’s IT circles – and began to delve deeply into its subculture, beginning with the TMRC at MIT in the late ’50s and progressing up to RMS and the dawn of the FSF. Along the way we spend long stretches with the early MIT hackers, follow their move to Stanford, meet the California hardware hackers of the ’60s and early ’70s and a young Steve Jobs and Woz, and witness the founding of Sierra On-Line by Ken and Roberta Williams in Oakhurst.

The latter story was in fact what drew me to the book. I was a longtime fan of the Sierra games as a kid, and loved the homespun feel of the letters from Ken and Roberta in each box. As I got older and Sierra was sold and atrophied, it was an occasional relief to see Ken’s leaked emails decrying the day the Oakhurst office was closed, and when he later opened a fan forum to answer community questions, I was overjoyed. I knew that both Williamses appeared prominently in Hackers and that Ken seemed embarrassed by his behavior as described in it, so I was curious to read some of the less positive material around Sierra’s early days.

As it turned out, the Williams material was some of the least offensive in the book. In general, Ken was treated as a shrewd, money-conscious businessman. He may have been portrayed as a bit of a sellout, but on the whole, it seemed that Levy empathized a great deal more with Williams than he did with the other hackers.

Throughout the work, Levy takes a less-than-complimentary tone toward the profession of hacking. This is more than journalistic detachment – he penetrates deeply into the subculture, but never takes that last step to ask “why?” Why do these hackers feel the need to control this machine? What inspires them to stay up all night for weeks on end? Is it really just that they are poorly socialized and find comfort there? This is as deep as Levy’s analysis gets. He prefers instead to wrap quotation marks around hacker explanations that don’t fit within his model – “all information should be free” – without really getting down to the difficult questions of what this means to hackers, what the distinction between the physical and electronic world is (he deals with this to some degree in the MIT story, but even then describes actions rather than interviewing subjects about the reasoning), and why they are willing to go to such lengths to free that information.

In short, I felt that Levy never got to the heart of what makes a hacker a hacker. He identifies, describes and characterizes the drive behind hackerdom, but misses a golden opportunity to ask the founders of the ethic, its key early practitioners, and its later disciples what it was in their lives that caused this ethic to develop.

Why four stars then?

Even with this material missing, Hackers is an irreplaceable chronicle of the stories behind the people who made computing what it is today, launching a massive industry despite enormous resistance along the way. It is in many ways a chronicle of how the key ethics of hackerdom were passed down to today’s hackers, and many of the phrases and habits woven into our DNA as a culture find their origins chronicled here.

Whatever its shortcomings, the book manages to be an invaluable historical document. Worth a read, but if you already know many of the stories, expect no deep insight. This is the Cliff’s Notes version of early computers and is essential reading if you don’t know anyone who was there.




Book Review: The Design of Everyday Things

The Design of Everyday Things by Donald A. Norman

My rating: 3 of 5 stars

It’s very difficult for me to give this book anything less than five stars, because the subject matter is sublime. Norman delves into the nature of design by first addressing how we as humans take in and process information. By examining how it is that we learn and recall information, he is able to dissect why seemingly simple designs are counterintuitive, and why what we classify as “human error” is really attributable to poorly-designed interfaces that do not map reality (or our understanding of it) to the underlying system.

Norman is very sure of his point, and has many years’ experience researching and learning about it. Somewhere, however, the material begins to become pedantic. The examples and illustrations are all fine, but I can’t help but feel that some of them are entirely redundant. By the time he’s apologizing for “someone must have won an award for it” (based on the premise found throughout the book that award-winning designs seem to be the least accessible and user-friendly) and talking about his frustration with the installation of his living room lights, I felt I’d had enough. We’d already been through how Norman reorganized the lights in his lab – a valuable example – and we didn’t need a second or third.

Nonetheless, the book is short enough and its content strong enough to merit a read. Most of the meat is in the first half, but he draws strong conclusions and recommends a litmus test for designers with which to determine whether or not their work is likely to be usable.

Overall, a recommendation with caveats on this one.




Review: Pattern Recognition by William Gibson

Pattern Recognition (Blue Ant, #1) by William Gibson

My rating: 4 of 5 stars

Close, so close…

I have mixed feelings on this one. Some of this may be due to reading it after Girl With the Dragon Tattoo, which is thematically similar – a comparison that is also desperately unfair, as Pattern Recognition was written first. I also believe Pattern Recognition to be the superior novel – while Girl often plays out as adolescent fantasy, Gibson is careful to keep his book as soundly grounded in realism as it can be given its constraints – and considering that he wraps in bizarre viral marketing, a woman with an allergy to bad design, the Russian mafia, Chinese hackers, a Belgian mogul named Bigend and Italian thugs attempting to accost the lead character in Japan, it’s fair to say that he does quite an impressive job.

What keeps you moving through PR is the protagonist Cayce’s total disbelief that any of this is happening (it’s subtly hinted that she may or may not be an ancestor of Case, the lead in Neuromancer – Gibson introduces another character in passing who may be one as well). She’s a well-grounded character and an unusually realistic portrayal of a female lead by a male author, and she keeps us involved in the story mainly through that disbelief. When Gibson needs to move the plot in a seemingly absurd direction, he allows Cayce, and by extension her circle of friends, to voice it. This serves to quiet our own misgivings and lets us follow Gibson through the paces of the story.

So, characterization – absolutely. And as a result of Gibson’s immaculate writing style and the inventive, realistic, breathing characters he creates, the book’s an absolute page-turner. These are the good points.

The unfortunate bit is that it’s all a bit of a letdown, as the story amounts to a lot of running around in circles to get to the bottom of something that doesn’t provide much of a return. Without spoiling the story, I found the nature of the art behind the campaign to be disappointing, and although one could argue that this too is realistic, there’s the question of whether I want to go through the exercise of making it through an entire novel to arrive at such a shallow pay-off. I suppose it all comes down to your definition of a good book – is it the journey or the destination? For me it’s a bit of both, but if the journey is exceptionally good (as this one is), I will forgive a slightly disappointing destination.

There are two more books in Gibson’s quasi-trilogy that follow this one, Spook Country and Zero History, and with PR he’s managed to get both onto my immediate reading list. It’s also entirely possible that some of what I found dissatisfying in PR is really setup for a more tangible payoff over those novels.

So, qualified thumbs up.


