I saw it on a computer so it must be true

This is the 21st Century. We shouldn't be falling for that old "I saw it on a computer so it must be true" stuff. A model will always be the model-builder's view of reality.

Today this comment appeared on this blog, on the post "The Emperor Has No Clothes: Where is the Evidence for ITIL?" [incidentally now the second most popular post ever on this blog]:

In an internal company workshop, an ITIL author showed us quantitative models of the effect of each ITIL entity (process, function, CMDB and so on) on the overall performance of the IT organization. Generic and real-life examples.

What was fascinating was that he did this in real time with freely available desktop software. He modeled ITIL systems within a picture of larger business systems and demonstrated sensitivity analysis and how data, training, and process affected the behavior of the business over an extended period of time. Some made it better, others made it worse. It wasn’t always obvious.

The performance graph for an organization starting with Incident management, for instance, improved for a period of time and then fell into a downward trend. After Problem management was added to the model, the curve resembled more of an s-shape.

He then took the pony example from the strategy book and modeled it. He asked for recommendations from the group on how to fix it. Many suggestions made the performance worse. The right answers turned out to be simple but surprising. Now the CIO wants our organization modeled. The week before he was skeptical of ITIL.

Modelling is based on models. Models contain equations. Equations contain constants. Constants are either based on correlations derived from research or pulled out of somebody's ... er... head.
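For illustration, here is a minimal sketch (in Python) of the kind of toy model the comment above describes. Every constant in it (the incident growth rate, the resolution capacity, the supposed effect of problem management) is a number I have simply made up, which is exactly the point: the shape of each curve, and therefore the "insight" it appears to deliver, is dictated by those assumed constants.

```python
# A toy simulation in the spirit of the workshop demo quoted above.
# Every constant here is invented for illustration; none is a measurement.

def simulate(problem_mgmt_effect=0.0, workload_growth=0.03, steps=40):
    """Return a toy 'performance' series over time.

    Resolving incidents lifts performance; unresolved backlog drags it down.
    problem_mgmt_effect stands in for removing recurring incidents at the source.
    """
    performance = 50.0      # arbitrary starting score
    incident_rate = 10.0    # arbitrary incidents per period
    capacity = 12.0         # arbitrary fixed resolution capacity
    history = []
    for _ in range(steps):
        incident_rate *= (1 + workload_growth)       # the business keeps growing
        incident_rate *= (1 - problem_mgmt_effect)   # root causes removed, if any
        resolved = min(incident_rate, capacity)
        unresolved = incident_rate - resolved
        performance += 0.5 * resolved - 0.6 * unresolved
        history.append(performance)
    return history

incident_only = simulate()                              # rises, then declines
plus_problem_mgmt = simulate(problem_mgmt_effect=0.05)  # rises and levels off
no_growth_assumed = simulate(workload_growth=0.0)       # never declines at all

for name, curve in [("incident mgmt only", incident_only),
                    ("plus problem mgmt", plus_problem_mgmt),
                    ("zero workload growth", no_growth_assumed)]:
    print("%-22s start %5.0f  peak %5.0f  end %5.0f"
          % (name, curve[0], max(curve), curve[-1]))
```

Change one assumption, say set workload growth to zero, and the dramatic downward trend never happens at all. The model does not reveal anything about the real world; it plays back the guesses that were fed into it.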

What you are seeing is a theoretical model, based, as all ITIL "data" is, on somebody's subjective experience.

The real world might work like that, or it might not. Nobody can prove it. As that "Emperor" post lays out, there is no ITIL research in the scientific sense of the word. There is as much research for ITIL as there is supporting homeopathy, i.e. anecdotal reports with no controls. Actually homeopathy is different because real science shows it doesn't work. With ITIL we just don't know. ITIL could be as much about the placebo effect as diluted water is.

Models predict whether a new airplane design is going to fly. Sometimes the plane crashes. Models are a representation of somebody's view of the world, not of the world itself. The world is, in fact, always more complex than any model.

Whatever ITIL numbers you see out there are generated by the ITIL industry (analysts, vendors) based on asking people if their multi-million dollar multi-year project was a good idea or not.

GIGO (Garbage in, garbage out) is true of the data fed to a model, and of the data on which the model's correlation constants are based.
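As a sketch of that second half, suppose (hypothetically) that the only data behind a model's "ROI per ITIL maturity level" constant is a survey that dissatisfied shops never bothered to answer. The least-squares arithmetic below is perfectly sound; the constant it produces is still garbage, because the sample is. All the numbers are invented.

```python
# GIGO applied to a model's constants: a least-squares fit to a hypothetical,
# self-selected survey. The arithmetic is fine; the input data is not.

# (maturity_level, reported_roi_percent), invented for illustration.
# Imagine that dissatisfied organisations simply never responded.
biased_sample = [(1, 5), (2, 12), (3, 22), (4, 31), (5, 41)]

n = len(biased_sample)
mean_x = sum(x for x, _ in biased_sample) / n
mean_y = sum(y for _, y in biased_sample) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in biased_sample)
         / sum((x - mean_x) ** 2 for x, _ in biased_sample))

print("Fitted constant: %.1f%% ROI per maturity level" % slope)
# Prints a precise-looking figure that says nothing about the non-respondents.
```

A model built on that constant will happily produce crisp curves; crispness is not evidence.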

If this author has some real data, let them publish it and their methodology so they can be subjected to peer review like real science.

This is the 21st Century. We shouldn't be falling for that old "I saw it on a computer so it must be true" stuff. If you want to pay someone to see your business in a movy-groovy computer model, that's nice. But it doesn't prove a thing.

A model will always be the model-builder's view of reality.

Comments

The nature of proof

This is an interesting thread. It may also be a hopeful sign of increasing maturity within the ITSM world.

It is understandable to want proof; empirical evidence based on controlled environments and observations. But, as Karl Popper taught us, searching for empirical proof is a trap. Without proper methods, empirical observations will lead you astray. For example:

- The problem of induction: how do we know that what we have observed suffices to infer other properties? No matter how many white swans we count, as Hume so cleverly put it, that does not prove the non-existence of black swans (they were later discovered in Australia).

- Confirmation bias/errors: we focus on pre-selected segments of what we observe and generalize them to the unobserved. "If org performance improves after an ITIL implementation, then ITIL raises org performance."

- The narrative fallacy: we fool ourselves with false causality because it makes sense and fills our thirst for a good story, as Taleb explained. "The ten most successful CIOs implemented ITIL. Ergo, if you want to be a successful CIO, implement ITIL."

…and there are several others.

Why don't managers effectively learn from experience? Why do they keep falling into traps like short-term thinking, firefighting and quality erosion? A key part of the answer lies in how our minds interpret data from complex systems. For example, there is the human tendency to assume cause and effect are closely related in time and space. We attribute events such as customer complaints, low org performance, or over-budget projects to poor tools, poor work habits or the attitudes of employees, rather than to the system itself; we attribute to dispositional rather than situational factors, and more often than not that attribution is wrong. [See your thread on process improvement.]

You are correct on models, though. By definition, they are all wrong; a subset of the real deal. But every good model does one thing: it predicts. Popper also taught us the difference between science and pseudo-science: Falsifiability. The idea of Falsifiability claims that a model or hypothesis is scientific if and only if it has the potential to be refuted by some possible observation. In other words, it has to leave itself open to being tested and being proven wrong. It must take a risk. This is how science, or any field for that matter, progresses. Good models are eventually proven wrong and new ones built. Bad models are never proven wrong.

If a model takes no risks at all, because it is compatible with every possible observation, then it is pseudo-science. Astronomy is science; Astrology is pseudo-science. Many (aw heck, most) of the IT and ITSM models I see floating around are pseudo-science.

When Mendeleev crafted the periodic table of elements, his model opened itself up to falsifiability by predicting the existence and properties of several new elements. His model didn't "prove" anything but when its predictions came true it passed from pseudo-science to science.

Asserting that a model is highly useful or pragmatic, or that others agree with it, is of course not proof - only another form of confirmation bias. When the assertions are particularly adamant, I sometimes call it snake oil. At one time many people agreed that the world was flat - a model which failed its falsifiability tests.

So, if someone is putting their neck on the line with models that take risks by predicting something that can be proven wrong, I'd call that science.

If instead, as so often happens in IT, models are put forth that don't assert anything more than "it's correct" or "it's good for you", then that, my friend, is pseudo-science.

if the model can be measured to fit the observable facts

Ya got me.

You are correct: if the model can be measured to fit the observable facts and then is able to predict outcomes, then it is indeed good science.

This only changes the requirement for good data though - it doesn't eliminate it: in order to validate the model we need to measure real results and correlate them to the model.
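By way of a sketch (with placeholder numbers standing in for real measurements), validating the model would mean lining up what it predicted against what was actually measured and quantifying the agreement, something like this:

```python
import math

# Hypothetical numbers only: real validation needs real, published measurements.
predicted = [55, 62, 70, 78, 85, 90]   # what the model said would happen
observed  = [52, 60, 58, 64, 61, 66]   # what was actually measured

n = len(predicted)
mean_p = sum(predicted) / n
mean_o = sum(observed) / n
cov = sum((p - mean_p) * (o - mean_o) for p, o in zip(predicted, observed))
var_p = sum((p - mean_p) ** 2 for p in predicted)
var_o = sum((o - mean_o) ** 2 for o in observed)

pearson_r = cov / math.sqrt(var_p * var_o)
mean_abs_error = sum(abs(p - o) for p, o in zip(predicted, observed)) / n

print("Correlation between model and reality: r = %.2f" % pearson_r)
print("Mean absolute error: %.1f points" % mean_abs_error)
```

Without the observed column coming from controlled, published measurement, there is nothing to hold the model against.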

I look forward to some real-world feedback on this one

I Saw It On a Screen -- So It Must Be True!

... even if the model proves to be valid I think there is a bit of "I Saw It On a Screen -- So It Must Be True!" about this: credibility is enhanced and sales advanced by anything on a glowing screen

I'm turning into a post-modernist

"A model will always be the model-builder's view of reality."??? Arrgh! Help me Mommy, I'm turning into a post-modernist!!
