repeatable process

Past discussions on this blog have suggested that a process fixation such as ITIL's engenders an inflexibility and ponderousness in an organisation. Whilst I am a process fan, there is an important point to be addressed here about how ITIL relates to nimbleness and adaptability.

From the Introduction to Real ITSM:

"Repeated procedures ingrain habit, which stifles staff creativity and flexibility. The more different ways a procedure is performed, the higher the probability that one of those ways will still work after cataclysmic change. This is simple Darwinism. Repeatable processes are like the agricultural monoculture of a cloned crop: highly productive in good times but highly vulnerable to disease, pests, climatic extremes or other stresses."

This is of course tongue-in-cheek, but there is a grain of truth here. Does repeatable, managed process reduce flexibility and adaptability in the face of changing conditions? Does evolutionary theory have a warning for us not to be too rigid?


Monsanto monoculture

The level of discussion on this thread is awesome. The IT Skeptic feels a bit humble around some of you folk (yes, it is possible).

A more specific question from the original premise: for a species to survive in the face of abrupt change it relies on the diversity born of mutation in order to rapidly adapt. Monocultures of cloned identical crops fare very badly in the face of new diseases or changing environments. Do approaches such as ITIL stamp out diversity and thereby limit our options in the face of challenging change? If we only have one way of doing things and "punish" those who don't follow it, are we creating a Monsanto monoculture of process?


Yes and no.

I think it was the movie Jurassic Park where the complexity mathematician remarked, "Life will find a way." Said differently, higher complexity provides greater survivability. Let me explain:

Imagine your kitchen table has 20 legs. Call that a complex system. If we hurl a high-speed bowling ball under the table and knock out 10 legs, the table remains standing. If the table had only 3 legs (simple system) and we knocked out 1 leg, then the table falls over.
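The table analogy can be sketched as a throwaway simulation (the numbers and the `min_legs` threshold are invented for illustration, not from any of the posts' authors):

```python
import random

def stands(legs_remaining: int, min_legs: int = 3) -> bool:
    """A table keeps standing while at least `min_legs` legs remain."""
    return legs_remaining >= min_legs

# Complex system: 20 legs, lose 10 -> still standing.
print(stands(20 - 10))   # True
# Simple system: 3 legs, lose 1 -> over it goes.
print(stands(3 - 1))     # False

# Under repeated random shocks, the redundant system survives far more often.
random.seed(1)
complex_survival = sum(stands(20 - random.randint(0, 10)) for _ in range(1000))
simple_survival = sum(stands(3 - random.randint(0, 10)) for _ in range(1000))
print(complex_survival > simple_survival)  # True
```

The redundant system shrugs off every shock in this range; the simple one survives only the rare shock that removes nothing.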

Monoculture crops are like the 3-legged table. If one component of the system fails then the table falls. Take the Irish potato famine where plantings consisted of just one type of potato. When a disruption hit (disease) all the fields collapsed. The famine might have been avoided with a system that treated diversity as a virtue rather than a defect.

However, the conditions for the Irish Potato Famine were created by social and economic forces: a host of economic, demographic, and social pressures in the decades before the famine meant that the Irish peasantry had no food options when the potato crop failed. Without those conditions, the famine would simply have meant more low-carb dinners.

In other words, they had to work really long and really hard to remove the complexity from the system.

Large adoption rates for ITIL, or any framework or best practice for that matter, serve to create a monoculture of sorts. However, organizations are highly complex; it is in their essence. You've got to work really long and really hard to remove it in such a manner that the organization cannot adapt.

This is what makes change programs so risky. The harder you push a complex system, the harder it pushes back.

My point exactly


The more we standardise process and punish those who have their own variants (which may be less efficient in the current conditions), the more we remove adaptability from the system.


Again, yes and no. In any complex system, we always have to be mindful of the unintended consequences (non-linear effects).

For example, biologists have a term called “phenotypic plasticity.” It means an organism can adapt to a changing environment without changing its genetic make-up.

Snails in one pond may be genetically identical to their cousins in another pond, but they have developed thicker shells because of the presence of crabs. So one is faced with the question of whether identical expressions of a capability may lead to the emergence of a new type of beast altogether.

The analog in ITIL is "substitution." When the creators of ITIL launched their work twenty years ago, I'm sure they did not foresee outsourcing. But their work has become an enabler of it, in particular the many variations of multi-vendor sourcing. New beasts (organizational diversity) are an unintentional consequence.

But to your point, the intentional removal (or perceived removal) of diversity and complexity is ultimately self-defeating. Even when the evidence is staring you in the face, the point is easy to miss.

For example, Charlie Betz earlier wrote, "We build complexity upon foundational constraints. TCP/IP and HTTP demand total compliance, yet look at the blossoming of the Internet."

This is dead wrong and misses the point. There were many prolific network standards before TCP/IP. Remember SNA and DECNet? They stifled diversity and removed complexity by demanding total compliance. Where are they now?

TCP/IP differed because of one critical decision made by Kahn and Cerf. They intentionally split TCP and IP into two separate layers. Why? So the Internet would allow reliable and unreliable protocol services (TCP and UDP).

They called this simple construct the "End-to-End" principle. This network design philosophy allows third-party innovators to express and develop their ideas without the constraints of authorization, regulation or proprietary technologies. This “stupid” network treats all packets as equals. As a result, much of the innovation created in the past decade has come not from corporate R&D or the American government but from young innovators from around the globe--including New Zealand.
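The split can be made concrete with a small sketch using Python's standard sockets (nothing ITIL-specific is assumed here): the endpoint, not the network, chooses reliable or unreliable service, and nobody has to ask permission.

```python
import socket
import threading

# The same IP layer beneath offers two services; the endpoint picks one.
# SOCK_DGRAM = unreliable datagrams (UDP); SOCK_STREAM = reliable bytes (TCP).

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))           # the OS assigns a free port
port = server.getsockname()[1]

def echo_once() -> None:
    data, addr = server.recvfrom(1024)  # wait for one datagram
    server.sendto(data, addr)           # and send it straight back

t = threading.Thread(target=echo_once)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"no permission needed", ("127.0.0.1", port))
reply, _ = client.recvfrom(1024)
t.join()
client.close()
server.close()

print(reply)  # b'no permission needed'
```

Swapping `SOCK_DGRAM` for `SOCK_STREAM` requests the reliable service instead; the IP layer underneath is identical either way.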

Look at the phone company's network of the 1980s and 1990s. They stifled variety. You had to ask permission to use their networks and deploy an application. We saw little innovation as a result.

This ITSkeptic website doesn't exist because TCP/IP is a foundational constraint. It exists because it removed them. It removed the most important one of all: you don't have to ask permission.


I've just been re-reading my review copy of Goldratt's new book "The Choice"

Now bearing in mind that TOC is hardly the simplest of approaches (I have a little test for all the project managers I work with: I ask them if they know about TOC and if they think it is a good approach. 90% say they do and think it is. I then ask them to explain it to me; 45% pass that test. Then I ask them how they have applied it... I'm still waiting for one to pass that test.) I was rather taken by this statement:

"...the attitude of most people is that the more sophisticated something is, the more respectable it is... since complicated solutions never work, people tell themselves that they don't know enough"

My question to the panel is this:

Granting that the theories we often discuss here are useful and insightful, do any of them make an actual d**n bit of difference in successfully applying ITSM?


Have they been tried in ITSM? By who?

Charles T. Betz

In reality?


In reality I suspect they haven't, at least not on any meaningful scale.

I've applied TOC to a few work flow scenarios, but without the client being aware that is what I was doing, because it would have been too difficult to explain and get buy-in. My attempts to use it in planning ITIL implementations have always been thwarted by in-house project managers wanting to use MS Project to do everything.

Go to any ITSM conference in the UK at the moment and you get the impression that every single ongoing transformation is using Lean, but I'm not sure what is done differently as a result.

Use of maturity models is always talked about at the start and end of a project, but again I'm not sure to what extent using them has led to different decisions than if they had not been used, or to a more successful implementation. I would qualify this by saying that maturity models are a good way of making the transformation appear tangible. In my dark days I suspect "Move from CMMI level 2 to level 4" is the nearest that most organisations come to having an ITSM-driven strategy.

I would love to hear some actual war stories.



War stories have to stay in the bar. But this post was not idle speculation:

Charles T. Betz


I've worked on the basis in the past that the biggest constraint on most IT shops is the change/service transition process, and within that the biggest bottlenecks are around security and risk assessment because too often they are done just before implementation is due.

Architecture, in theory

In theory, having an architecture practice should trim some cycles there, by limiting the number of platforms needing assessment. That's if the architects are focused on value and not off playing with toys. A big if... But this might be a way to measure the value of architecture, in a high maturity shop... interesting...

Charles T. Betz

Architecture in practice

If organisation A has an effective architectural function, but poor change management how will their performance compare against organisation B that has an ineffectual architectural function but good change management? Is it even possible to have an effective architectural function without first establishing a strong change management capability?

I think you are right in that at some point the architectural practice becomes the constraint, but I don't think it is in most organisations.

Can we move to a model where the capability/capacity of change management drives the resources dedicated to design build and test and the volume of changes in the system? I had one client whose KPI was the backlog of value adding and regulatory changes in the change queue. And in the long term might we want to move to a state where we can pretty well do change on demand?

Architecture *is* change management

If one has fewer platforms to assess, then changes become less risky and hopefully quicker.

I think that one of architecture's most important responsibilities actually *is* change management writ large - adds, changes, and removals of core computing technologies.

Charles T. Betz

In the long term and at high levels of maturity


In a way yes. When I was a Service Assurance Manager I bridged the gap between architects and change, sitting on the Architecture Review Board, carrying out reviews throughout the V life cycle, and then making authoritative recommendations to the change manager. The SAM role was eventually absorbed into the architectural function when I left, but at the cost of less focus on NFRs.

it's all about the least restrictive

I don't disagree with what you are saying. Things bite back. I learned about the concept of feedback thirty years ago, while goofing around with loud guitars.

TCP/IP and HTTP still demand absolute compliance within their scope. The innovation was to reduce the scope, to make networking more modular (analogous to microkernel architecture on the OS side). The previous standards were too monolithic, they were proprietary, etc., etc. But "third party innovators" even in New Zealand are still constrained by the global DNS and standards of packet formulation and addressing.

The art is to figure out the minimal number of constraints. Like DNA and the carbon cycle. Libertarians get this.

You still have not answered my fundamental question: how do we manage performance if processes are non repeatable, in the general case? I still think we need lifecycles, value streams, value chains, some kind of invariants at the foundation. Even understanding the things that are changing more slowly relative to others might be a basis. The service lifecycle is a value chain.

Positing a value chain concept does not mean I believe it is fractal, i.e. composed only of smaller linear and rigid chains. Analyzing a repeatable business process does not mean I believe that the entire context it operates in, can also be understood as a process. These may be the bases of our disagreement. Thanks in part to you, I am thinking more and more about the possibilities of systems dynamics to fill in the "white space" problems in Rummler terms. But I am also trying to manage a toolbox for analysis and design. We don't throw out the screwdriver because it is not a drill.

I'm also trying to figure out how industrial theory intersects with system dynamics. For example, I picked up the most recent APICS certification text which talks much about constraints, production scheduling, etc. I expected some mention of Forrester but nada. What is the intersection?

On another cross-disciplinary topic, here is an interesting paper on integrating System Dynamics with ORM. I'm sure someone will try something similar with SD & the Semantic Web.


Charles T. Betz


"How do we manage performance if processes are non-repeatable, in the general case?"

There are a number of methods. Since it appears you are searching for a practical approach to bounding the measurement problem, I'll offer a basic but flexible method.

First, some assumptions:
1 - Measurements are a mechanism for creating organizational memory and an aid for answering a variety of questions associated with the enactment of a process.
2 - Measurement should be goal-driven, not metrics-driven.
3 - Goals and measures should be tailored to the organization. This requires that the organization make its goals explicit.

The method is called the GQM Framework [Basili]. Organizational goals are identified, questions are developed to determine if the goals are being met, and metrics identified to help answer the questions. It is a mechanism for formalizing the tasks of characterization, planning, construction, analysis, learning and feedback. While useful for many situations, it is particularly concerned with improvement issues.

GQM can be applied to all life-cycle products, processes, and resources. The approach was developed by Basili while at the NASA Software Engineering Laboratory (SEL). It was refined during the 1990s and serves as a foundation framework for many measurement initiatives.

The GQM framework helps solve some of the problems associated with empirical data (measurements that do not square with the reality of desired outcomes). "Dynamic GQM" is a learning-based approach to process modeling. It integrates individual GQM models from a global perspective. It is a means of integrating real-world empirical results with simulation experiments.

GQM isn't close to perfect. But it is straightforward and should get you thinking down the right path for the advanced methods.

[Basili] "Software modeling and measurement: The goal/question/metric paradigm" 1992
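The Goal → Question → Metric chain described above can be sketched in a few lines (the goal, question and metric names below are invented for illustration, not Basili's):

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str

@dataclass
class Question:
    text: str
    metrics: list = field(default_factory=list)

@dataclass
class Goal:
    statement: str
    questions: list = field(default_factory=list)

# Goals come first; questions probe whether the goal is met;
# metrics exist only to answer a question, never the other way round.
goal = Goal(
    statement="Reduce change-related incidents",
    questions=[
        Question(
            text="Are failed changes declining quarter over quarter?",
            metrics=[Metric("failed_changes_per_quarter"),
                     Metric("emergency_change_ratio")],
        )
    ],
)

# Every metric can be traced back up to the goal it serves:
for q in goal.questions:
    for m in q.metrics:
        print(f"{m.name} -> answers {q.text!r} -> serves {goal.statement!r}")
```

The point of the structure is the traceability: a metric with no question above it is a candidate for deletion.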


The ambiguity of the semantics actually provides some defense against this - there are more degrees of freedom in an imprecise discourse.

Is the entire biosphere a "monoculture" in that it all is restricted to the carbon cycle?

Charles T. Betz

Process: The Language of Work

We've done some interesting work with process over the last year which directly addresses the issues raised in the posting above.

We stepped back from process a bit and asked some very fundamental questions about what it really is. The answers we came up with were eye-opening and had highly practical implications. Basically, process behaves a lot more like language than like activity or work itself. On the other hand, the implicit assumption in virtually every treatment of process I've found is that it somehow maps directly to activity or work itself, or even that it is the activity itself. To wit, the standard ITIL definition of process is "a structured set of activities", i.e. the work itself.

Such formulations have, I think, put us in a bind inasmuch as they foster some more or less pernicious myths about what we can and cannot do with process. For example, assuming that process is basically the work itself tends to strongly imply that all activities must be executed, and in sequence, in order for outcomes to be supported. In actual fact, most real-life processes don't behave that way at all. (Most human-executed processes are asynchronous. And, of course, concerns about 'rigidity' are only germane when treating human-executed processes.) Similarly, assuming that process is the work itself also implies that there's one way to get the job done. Again, not true in the vast majority of situations.

By contrast, we take the view that process is essentially a form of language we use to facilitate or refer to actual work. It is not the work itself. Among other things, this view allows us to take significant liberties with process and still get great things done with it. The most important difference between process-as-work and process-as-language is that whereas all the work needs to get done in order to succeed (more is better), with language less is generally better. The one word on a STOP sign, for example, is highly performative and really cannot be improved upon through elaboration. Again, with language, brevity counts most of all. Process behaves in a similar fashion in that it only succeeds when actually implemented/adopted, and adoption is almost perfectly inversely proportional to the complexity of the process.

In addition to abbreviation, process understood as a language is also interestingly amenable to synonymization, syntactical variation, metaphorization, fluency, translation, etc. As we learned about language a long time ago, grammar really isn't the main show. It's all fine and good to talk about 'rules', grammar (rigidity), etc., but in actual practice a more pragmatic approach is the bottom line. As linguists such as J.L. Austin have noted, language is essentially performative. Process behaves the same way: it helps us get things done.

So, to abbreviate here (sorry, couldn't resist!): in our approach we've developed a method for radically simplifying processes without sacrificing results. We presented a workshop on the approach at this year's itSMF Fusion in San Francisco and got really excellent feedback for our trouble. The cool thing is that we've found a way to dodge the bullets of rigidity and non-adaptability, and I think solved the most basic problem of process in any organization, which is how to get it implemented. We have a manuscript in the works on the topic.

I'm interested in one poster's discussion of more 'organic' or 'biological' metaphors for process, but would offer one very basic counterpoint: no matter how you slice it, process is essentially a human artifact, not a natural object. There again, it bears more resemblance to language than to work itself. Approaches which 'naturalize' (I think 'reify' is the proper term) process are in my view fundamentally misguided.

Interested parties are welcome to contact me for more information. We're very happy to work with organizations who've run to the end of their process complexity rope and who are seeking a better and simpler way to get things done.


In order to move away from a process and seek a simpler way to get things done, it is of utmost importance to have an organisation that can handle that freedom. Many organisations have chosen a business model based on "operational excellence", "customer intimacy" or "product leadership" (Treacy & Wiersema). This strategy choice leads to a supporting organisation structure where sometimes a process approach is a better solution than a non-process approach.

The non-process approach is more suitable for highly knowledge-driven organisations, where small groups of creative people work together to create new and innovative solutions (product leadership). See Quinn's 1996 article on managing professional intellect.

There is, so to speak, not only one solution. Depending on the type of organisation, you as a service manager will need to decide to what extent you implement processes rigidly... or not.

Eppo Luppes
Getronics Consulting Netherlands

How would cyclops play darts...?

As I think David offers, processes are containers: we humans put thoughts and deeds into a container, sometimes termed a 'process', to help us organize, then break down, a decision and/or action.

Don't get me wrong here, there is nothing wrong with 'processes'. Process thinking will have its moment; I just don't believe it's the starting point. No, it's about what many called, and call again, 'systems thinking', but not from the engineering perspective: from the whole-system or holistic view.

It's a mix of systems thinking, common sense and a bit of Lean for me... By the way, for those fans of ITIL: all of the circular diagrams in Strategy, the ones like spin-wheels, are actually from the 'best practice' of systems thinking (Google 'Introduction to Systems Thinking / Gene Bellinger'). I just wish they would give the guys involved credit once in a while...

What is a problem is when the documentation and polishing of a process becomes the focus and trumps achieving the desired result, and in many cases a specific level of customer satisfaction. Customer-focused management of services mandates that we understand the customer and their 'service encounters' (also termed moments of truth). All this smacks of Lean Thinking... and yes, I admit my current bias :-)

Lean rightfully relegates process thinking to a subsequent conversation, after we understand what a customer does when they encounter our service, and what we might do as a service provider in actually servicing the customer, termed Lean Consumption and Lean Provision, respectively. As for value, it's fine for all parties concerned to want some, not just the customer!

Both views are mapped as a 'value stream'... basically a diagram of what happens from a particular point of view. It starts with a very simple listing of events. It's easy to do; try it for yourself: what was the sequence of events the last time you went to a movie (flicks/cinema)? What made you decide to see that movie? What happened on your journey and when you got there (parking, a meal prior?), the ticket-buying experience, the movie, and afterward? See how much of the experience is out of the control of the cinema manager.

Now do the same from the cinema manager's perspective. Oh, it's different, very different...
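A first jot at the exercise might look like two plain event lists (the steps here are invented; yours will differ):

```python
# The same trip to the movies, from two points of view.
customer_events = [
    "pick a film with friends",     # outside the cinema's control
    "drive, park, grab a meal",     # outside the cinema's control
    "buy a ticket",                 # the first real service encounter
    "watch the film",
    "head home and talk about it",  # outside the cinema's control
]
manager_events = [
    "schedule the screenings",
    "staff the box office",
    "sell tickets and snacks",
    "run the projection",
    "clean up for the next showing",
]

# The two streams touch at only a couple of moments of truth:
touchpoints = {"buy a ticket", "watch the film"}
print(sorted(touchpoints & set(customer_events)))
```

Most of the customer's value stream never intersects the provider's, which is exactly the point of drawing both.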

This simple exercise will begin to explain why being a service provider is like Cyclops playing darts... or cricket, basketball, baseball, or any other activity requiring excellent spatial perception skills...

As might become evident, in each case the basis for satisfaction is different, as is the view of the desired end result... herein lie the challenges of being in the service industry... After we understand the space in which we live and work, and the relevant value equations, we can then apply some process thinking to partition and scope what we can affect and improve and what we cannot. That may require us to go back and reset the mutually agreed desired end result, and our role in helping it be achieved...

Does this all sound a bit like service management theory? It should. IT's rallying to the cause is just a recent epiphany; it's been around in the enterprise world for a long, long time...

Gotta run... after all it is Sunday....


Excellent, Ian. We don't often agree but I find my head nodding at your notes.

One glaring exception, though:

"all of the circular diagrams in Strategy, the ones like spin-wheels, are actually from the 'best practice' of systems thinking - Google 'Introduction to Systems Thinking / Gene Bellinger' - I just wish they would give the guys involved credit once in a while...."

Gene's a great guy and an incisive thinker, but he didn't invent causal loop diagrams or the other insights on his web site. The key source is Jay Forrester. He applied it to some very big problems back in the 1950s and 1960s.

Many of his students became famous over the years for applying the concepts to "smaller" problems: John Sterman (business, global warming), Peter Senge (organizations), M.K. Hubbert (oil). Recent stars like John Morecroft, Nelson Repenning and Kim Warren have taken it to TQM, Lean and strategy, to name a few.

I don't have the books in front of me at the moment, but I think you'll find the ITIL references peppered with their citations.

Early this year, Raymond Madachy took these ideas and produced a marvelous book on Software Process Dynamics. It's a nice example of what can be accomplished when we let go of the "countable and repeatable" process constraints. It has become a cult hit with top application managers/developers. None other than Ed Yourdon called it the most important software engineering book of the 21st century.

Not bad for a book released only about 10 months ago. I think we need something similar for ITSM and IT operations; clearly it is needed.

[Side note: I attended a Pink Elephant conference last year where Jack Probst presented some very nice causal loop analysis of some common ITIL structures. I walked away with a whole new level of respect for Pink.]
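For a taste of what that kind of modelling looks like, here is a toy one-stock sketch in the Forrester tradition (not from Madachy or Probst; every number is invented): treat the incident backlog as a stock, with arrivals flowing in and resolution capacity eroded by firefighting as the backlog grows.

```python
def simulate(initial_backlog: float, weeks: int = 52) -> float:
    """Weekly step of a one-stock backlog model; returns the final backlog."""
    arrivals = 100.0        # incidents arriving per week
    base_capacity = 110.0   # incidents the team could resolve per week
    drag = 0.05             # firefighting: capacity lost per unit of backlog
    backlog = initial_backlog
    for _ in range(weeks):
        # Reinforcing loop: more backlog -> more firefighting -> less capacity.
        capacity = max(base_capacity - drag * backlog, 0.0)
        resolved = min(backlog + arrivals, capacity)
        backlog = backlog + arrivals - resolved
    return backlog

# Identical teams, identical nominal capacity; only the starting backlog differs.
print(simulate(180.0) < 10)    # True: this shop digs itself out
print(simulate(220.0) > 400)   # True: this one tips over and keeps growing
```

With these parameters the loop creates a tipping point near a backlog of 200: below it the shop recovers, above it the backlog compounds. A linear spreadsheet of the same numbers would predict both shops recover, which is the non-linear effect the "countable and repeatable" view misses.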



Any chance of some references to readily available books/on line articles along these lines?


There was!

I guess there are two related streams of interest to me. One continues to be non-process driven approaches to ITSM, and the other is the hot topic of lean applied specifically to ITSM.

I'm sure Madachy's book is excellent, but I don't have the time to read 601 pages whilst doing the day job.

Charles mentioned Ricketts' "Reaching the Goal". I found it interesting, but rather dense for a general readership. Going off on a tangent, I'm halfway through Goldratt's latest, "The Choice", which I think should be essential reading for management consultants.

Data availability

tangent: "hot topic of lean applied specifically to ITSM."

As I look more and more at applying various management theories to large scale ITSM, the issue of data availability keeps coming up. How can we understand the true dynamics if we don't have the full bill of materials (CMDB with effort & activity tracked by CI) and associated operational telemetry (event management, from raw signals up through incidents, changes, releases, projects, and ultimately the service lifecycle)?

On the shop floor you can walk over to the CAM device and watch it. We have thousands of CAM devices in an all too obscure cloud.

Charles T. Betz

local and global optimisation

I wonder if this is why we often struggle with balancing local and global optimisation. We have specialists who (claim to) know a lot about their specific part of the system, but struggle to bring the various point solutions together in harmony with the business strategy. We lack both the data and the tools to make the link between local actions and global results.

process as state changes on long lived entity

Exactly. The data foundation is why I periodically ask whether we are discussing process or procedure. (Not that there is a clear industry distinction.) What people criticize as "process" - brittle, overly constraining - I tend to see as lower level procedure.

One way I understand/define true process is "the most significant set of state changes on a major enterprise conceptual entity." Those state changes may be driven by any of a number of procedural variants. The mortgage may be signed with a quill pen or a digital certificate, but there is a state change of "signed" that has remained stable for centuries. Is that the sort of stability we fear leads to the monoculture problem?
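That mortgage example can be sketched in a few lines (the states and helper functions are invented for illustration): the state change is the process; the quill pen and the certificate are interchangeable procedures.

```python
from enum import Enum, auto

class MortgageState(Enum):
    DRAFTED = auto()
    SIGNED = auto()

def sign_with_quill(mortgage: dict) -> dict:
    # A centuries-old procedure...
    mortgage["state"] = MortgageState.SIGNED
    return mortgage

def sign_with_digital_certificate(mortgage: dict) -> dict:
    # ...and a modern one, driving the *same* state change.
    mortgage["state"] = MortgageState.SIGNED
    return mortgage

a = sign_with_quill({"state": MortgageState.DRAFTED})
b = sign_with_digital_certificate({"state": MortgageState.DRAFTED})
print(a["state"] == b["state"] == MortgageState.SIGNED)  # True
```

The enum of states is the stable, process-level contract; the procedures beneath it can vary freely without touching it.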

Charles T. Betz



I so, so agree with you. My working definition has always been that process is (at least) technology-agnostic. I don't care what accounting package you are using; what I care about is that x days after the end of your financial year we can produce a set of accounts that adhere to basic accounting rules. The IT world is so often driven by the "yes but" view of the world ("Yes, but SAP does it this way") when the business is driven by "how much tax do we have to pay next month?"

My experience is that people start having procedural arguments before the process has been agreed.


There was a lot of ground skimmed in the above posts. Was there something in particular?

Food for thought

The notion that processes should act as an evolutionary system is a remarkable idea, especially since it directly contradicts much of the standard thinking developed during the 1990s by folks like Hammer (RIP) and Davenport. ITILv2 was a direct descendant of this 20th century thinking.

But it is an old idea. In fact, it was a “process thinker” named Malthus who helped spark Darwin’s breakthroughs. His brilliant writings portrayed economics as a competitive struggle for humankind’s survival. He predicted humankind would lose. (These writings have been rediscovered in an attempt to deal with the current financial crisis.)

In his autobiography, Darwin wrote, “…I happened to read for my amusement ‘Malthus on Population’…Here then I had at last got a theory by which to work.”

Amazing. My jaw fell to the ground upon reading that passage.

The circle slowly turned from biology back to organizations. Over the decades, great thinkers like Thorstein Veblen, Alfred Marshall, Joseph Schumpeter and Friedrich Hayek came to the conclusion that the answer to many business problems was to be found in biology.

Unfortunately, for most of the 20th century, the basic model for solving the process challenge was not to be found in biology, but in physics, most notably the physics of equilibrium-based motion and energy. If Newton and Galileo could get away with perfect vacuums and idealized spheres, why couldn’t process engineers get away with stable requirements, linear models and perfectly rational people?

Ironically, at the same time, the field of physics began to look at the world not as a stable system, but as one that was dynamic and complex which never settled into equilibrium.

Though physics, biology and processes may focus on different things, they should all be consistent with empirical and experimental evidence. And much of 20th century process thinking is based on made up assumptions with the sole purpose of making the models work.

Much like the current financial crisis has exposed the false assumptions of free markets, the many project failures have exposed many of the false assumptions of process models. But we don't give up delusions easily.

The good news is that much of the last ten years has shown remarkable progress in moving away from the physics-based process thinking, with its “countable” and "repeatable" criteria. I look forward to what the next ten years will bring.

Meanwhile, I applaud Sharon Taylor for introducing some of these ideas into ITILv3.

[Side note: Just two weeks ago, a team of Princeton scientists discovered that chains of proteins found in most living organisms act like adaptive machines, possessing the ability to control their own evolution.

The discovery answers an age-old question: How can organisms be so exquisitely complex, if evolution is completely random? The answer appears to be that evolution is operating according to complexity engineering principles (feedback control).]

HBR and Process

The popular press seems to be picking up on this move away from the century old dogma toward rigid process standardization.

The latest Harvard Business Review contains an article which acknowledges that process standardization can, in many cases outside of manufacturing, undermine the very performance it’s meant to optimize.

They call out “professional work”, for example, where the common thread is variability in the process, its inputs and its outputs. The article is a bit basic (and contains a few conceptual errors) but worth a read for those facing process challenges.

Why I'm not a member of the Experimental Philosophy Club

What is 'physics-based process thinking'? Did you mean the systems thinking of circa the 17th century? The latest thinking in biology has returned to holistic systems thinking, accepting once again that one part of the system needs, and can affect, any and all other parts, and that diagnosis and decision-making should take this into consideration.

While we are offering theories on evolution, work structures and 'processes' (phew, felt like I was in the ITIL Expert exam again for a moment), can I sling in "The Principles of Scientific Management" by Frederick Taylor, c. 1911 (Taylorism)? It's a corker, or should I say choker. It is the cause of many of our woes and prompted some of the worst management 'best practices', favoring the employer and beating down the worker bee...

A process typically spans many 'functional work areas' (that's a process term), each containing roles that perform activities, occasionally requiring specialized knowledge. Decision-making policies or rules (governance) provide a framework or backbone for all of this to operate as a 'system' in pursuit of a goal or two... That's just plain common sense.

And... as one very famous person once said something like - "process improvement is irrelevant unless it directly affects the quality of the outputs of the process, or the cost of performing the activities within the process".

Oh, since you mentioned ITIL: we had to wait 12 years for ITIL to offer a correct definition of a process and a function, only to find that the V3 books ignore this definition when speaking to us! Unfortunately I am not sure many of us would recognize a process if we ran one over.

ITIL V3 is about 'systems' and seems to have generally abandoned the V2 theory of process... or at least parked it as a good idea now best forgotten. It's a good move that will cause tectonic-plate shifts inside the realm of infrastructure management. This is most likely the reason why many folks are struggling both to understand and to morph from V2 to V3 strategies. (My 8-ball predicted this in November 2006 in my infamous webinar that ruffled castle ITIL feathers, entitled 'ITIL V3 - Strategic Adjustment of Killer Asteroid'.)

So exactly what ideas of all those you mentioned are you attributing to Sharon and ITILv3?

Don't get me started on Descartes' reductionism and Martin Luther's school strata - that's what got me where I am today... saved only by James Burke, Jeremy Clarkson and Lynn White Jr....


Malthus is re-discovered approximately every 3.2 minutes by someone on the planet. For a short period, they marvel at how prescient he was about population and economics before someone points out that he was wrong when he wrote it and is still wrong about it today. Still, that doesn't stop people applying Malthusian principles to everything from evolution to ITIL :).

I'm being a bit facetious, in case you missed it. I actually agree wholeheartedly with a lot of what you have written, though I think you are giving too much credit to the architects of much of the world's IT processes. It would be nice to think there was a guiding rationale, based on one or another variation of the philosophy of science, behind process development, but I rather think there is not. Instead, the only guiding principle I see regularly is to implement process in order to avoid actually having to do the thing the process is meant to support.

Change control as change avoidance is the classic example in most IT departments.

I see Malthus has become

I see Malthus has become quite popular.

As someone once said, all the clever thoughts have long since been thought. What matters is to think them anew.

"It would be nice to think that there was a guiding rationale based on one or another variation of the philosophy of science behind process development..."

Don't forget that process thinking began way before BPR, BPM and ITIL. Way, way before. I'm always astounded at what can be "discovered" when we peel back the layers of history that inspired the ideas that were considered either impromptu or original.

Physics-based process thinking means this: a carefully designed process will perform as designed unless some external force disrupts it. In the language of physics (physical systems) this is called equilibrium. Every time this process is triggered, we will achieve the outcomes we expect. It is predictable because we designed it to be.

The historical details are interesting, but the real point is that this was a critical misstep. Forgivable, given the historical context. (The most complex system Darwin ever saw was the steam engine. Imagine if he had been exposed to automobile cruise control or thermostats.)

But it was based on faulty assumptions.

For example, this type of approach leads to breaking down processes into small, simplified, and repetitive tasks. By carefully sequencing and managing these activities, organizations can produce in less time and at lower cost.

With these individual tasks broken apart, managers can make them more repeatable and countable. Moreover, they can much better understand and manage the variation each time a step is performed, creating a high degree of consistency.

What's the problem, you may ask?

For one, operations involve complex interactions, even in a string of sequential tasks where each one depends on preceding operations. This brings a new complication, notably the need to deal with end-to-end variation—those overarching perturbations that span beyond individual steps.

To understand what this means, think of a car moving down an assembly line. Workers cannot begin their tasks until those ahead of them have finished. They might need to retrieve materials, stop to replace worn-out drill bits, or wait for others who are similarly delayed. If one station's output slows down, others cannot make up the lost time. Instead, their individual delays accumulate as the product moves down the line. The larger the system, the greater the disruptions (network effects).

The end result is a wave of disruption that overshadows even the greatest of process improvements. The local gains get lost in the much greater uncertainty that dominates the overall work flow. Such disruption similarly occurs when automating processes. (Now I've offended the tool vendors).
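The accumulation of delays described above can be sketched in a toy simulation (a purely illustrative model, not any particular BPA tool or real assembly-line data): in an unbuffered line, the whole system advances at the pace of its slowest station each cycle, so even though every station has the same average speed, the line as a whole slows down as it grows.

```python
import random

def average_cycle_time(n_stations, n_cycles=1000, jitter=0.5, seed=1):
    """Toy model of an unbuffered assembly line.

    Each cycle, every station takes 1.0 time unit plus random jitter.
    Because no station can start until its neighbours are ready, the
    whole line advances at the pace of the slowest station that cycle.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_cycles):
        total += max(1.0 + rng.uniform(0, jitter) for _ in range(n_stations))
    return total / n_cycles

# Each station averages the same 1.25 time units, yet the line as a
# whole slows down as it grows: local variation compounds with size.
for n in (3, 30, 300):
    print(f"{n:3d} stations -> average cycle time {average_cycle_time(n):.2f}")
```

The point of the sketch is the trend, not the numbers: a local improvement at one station is easily swamped by the end-to-end variation of the whole line.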

Second, what about those pesky external forces? Predictable or growing demand is a basic assumption of this type of process thinking. What's the point of a countable, repeatable Incident Management process if the business demands services that never break down?

The answer is increasingly coming from biology-based (non-physical systems) thinking. Rather than an approach of total control that stifles variety, variety is needed in order to compete, requiring coordinated control of decentralized operations. It includes learning algorithms that adapt to changing environments and accumulate knowledge over time (evolution).

The fitness function is how the organization moves and counter moves to external forces while achieving desired outcomes.
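The idea of a fitness function steering adaptation can be sketched with a minimal evolutionary loop (a hypothetical illustration; the names and numbers are mine, not from ITIL or any cited source): mutation supplies the variety, and selection against a fitness function moves the population toward whatever the environment currently rewards.

```python
import random

def evolve(target, generations=200, pop_size=20, mutation=0.5, seed=0):
    """Minimal evolutionary loop.

    Fitness is closeness to an external target. Mutation supplies the
    variety; selection supplies the coordination. If the target moved
    (the environment changed), the population would follow it.
    """
    rng = random.Random(seed)
    population = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Score each candidate against the current environment.
        scored = sorted(population, key=lambda x: abs(x - target))
        parents = scored[: pop_size // 4]            # selection
        population = [p + rng.gauss(0, mutation)     # mutation
                      for p in parents for _ in range(4)]
    return min(population, key=lambda x: abs(x - target))

best = evolve(target=3.0)
print(f"best candidate: {best:.2f} (target 3.0)")
```

No candidate is ever a perfect copy of its parent, and that is precisely why the population can track a moving target at all.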

That's why, for example, the ITILv3 Service Lifecycle has so much appeal. I don't think it was a coincidence given the advances in process thinking.

Exactly what "advances in process thinking"?

I am still unclear what "biology-based...advances in process thinking" you are referring to. Can you provide some citations?

I think that the process world as we are discussing it here is based less on Newton and continuous math, and more on the discrete math of Euler, Cantor, Petri, Harel, and Turing.

Accusations of "an approach of total control...stifling variety" are polemic and not useful. We build complexity upon foundational constraints. TCP/IP and HTTP demand total compliance, yet look at the blossoming of the Internet. Life does not disobey the fundamental constraints of DNA.

"individual delays accumulate as the product moves down the line. The larger the system, the greater the disruptions (network effects)."

This was well analyzed by Goldratt's theory of constraints, but he did not postulate "network effects" - rather, the issue was one of a constraining resource. I'm in the middle of a superb new book by John Ricketts, Reaching the Goal: How Managers Improve a Services Business Using Goldratt's Theory of Constraints.

"The end result is a wave of disruption that overshadows even the greatest of process improvements. The local gains get lost in the much greater uncertainty that dominates the overall work flow. Such disruption similarly occurs when automating processes. "

Most process people I know understand that local optimization does not lead to global optimization. Global optimization is more a question for the operating model, however, and not the responsibility of process engineers. And optimizing the operating model (e.g. through business performance management, perhaps based on a balanced scorecard approach) requires - in part - the kind of hard data provided by repeatable and countable processes.

"Predictable or growing demand is a basic assumption of this type of process thinking."

I think you need to support this accusation with some examples. A competent process engineer using modern BPA tooling would run a variety of scenarios, including radical changes in demand.
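What "running a variety of scenarios" might look like in the small (a hypothetical sketch, not the output of any real BPA product): stress a simple capacity model of a process under flat, growing, and spiking demand, and compare the resulting backlog.

```python
import random

def backlog_after(demand_per_day, capacity=100, days=90, seed=3):
    """Toy capacity model: each day some work arrives (demand) and up
    to `capacity` items are completed; the rest carry over as backlog."""
    rng = random.Random(seed)
    backlog = 0
    for day in range(days):
        backlog += demand_per_day(day, rng)
        backlog = max(0, backlog - capacity)
    return backlog

scenarios = {
    "flat":    lambda day, rng: 90,                     # steady demand
    "growing": lambda day, rng: 90 + day,               # linear growth
    "spiking": lambda day, rng: rng.choice([40, 200]),  # volatile demand
}
for name, demand in scenarios.items():
    print(f"{name:8s} demand -> backlog after 90 days: {backlog_after(demand)}")
```

Even this crude model shows why a competent engineer tests more than the steady-state case: a process that copes comfortably with flat demand can drown under growth or volatility.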

"What's the point of a countable, repeatable Incident Management process if the business demands services that never break down?"

We'll still be counting other things. Like the number of services in the pipeline. The number of discrete technology products supporting them. The number of assets forecast, procured, and retired. The number of changes. The number of releases. Etc.

I think the primary debate we are having is between systems dynamics vs. discrete event simulation approaches to understanding complexity. All I ask is that you present the current state of the DES art (and by extension BPA) accurately instead of a naive linear caricature.

Another issue we might more usefully discuss is that of semantics. What does it mean to have one or more "Incidents"? Are mine comparable to yours? Even if we are using the same tools and ITIL-originated processes?

Rigorously speaking, an "Incident" is nothing more than the agreement of a group of people that some state of affairs should be called an "Incident." The record in the database simply codifies that consensus. But if the semantics vary across organizations, how can we compare? An Incident means a service interruption. Since services are produced and consumed simultaneously, the Incident depends on how the Service is positioned and produced. Since the concept of Service is ambiguous across enterprises, the concept of Incident is also...

Charles T. Betz

Watch Out For Chokey...


Malthus is re-discovered approximately every 3.2 minutes by someone on the planet.

You'd better watch yourself -- Skep might sic Chokey the Chimp on you! Testy little one, that Chimp is. LOL


Google tracks Malthus

As I often blog, I am not as well read as many others - but that's because I come from Catford, London. So I had to look up Malthus. It was the first time I recall the name. We had so many smart Englishmen, it seems, before they introduced comprehensive schooling for the masses....

Anyway, since that was my first recognition, I claim my 3.2 minutes of fame!!!!

Oh - I think the clue was he married his cousin... not something we would have been allowed to discuss until we were at least 50...

Where did you get your statistic from - is Google counting all these rediscoveries?

how to measure performance?

We've had this discussion before and you chose not to answer my question. Without repeatability, how do we measure performance?

I have been wondering about related things, but the data implications are still a sticking point.

Don't know that the biology and physics metaphors are all that helpful. I think the roots of the problem lie more in first order predicate logic.

Some of the comments confuse process with procedure. Granularity issues are pervasive. Gulf between "originate mortgage" (which I think we will be counting for a few decades yet) and "provide client with pen to sign contract at closing."

Charles T. Betz
