While ITIL V2 CMDB was silly, ITIL V3 SKMS is totally absurd

At the request of a fellow skeptic, I am asking readers to name one, just one, example of an ITIL V3 Service Knowledge Management System, SKMS, in the wild. Not the beginnings of one or part of one, or a bastardised version of one. Just one fully formed, grown up, functioning SKMS. Name one.

This was posted as a comment by Cary

Yesterday, on the plane back to San Diego from Connecticut I reread Sharon Taylor's "official introduction to ITIL."
You [the IT Skeptic] have a large readership, I'd like to ask a question:
Has anyone seen an implementation of a Configuration Management System that spans all of (or even substantially all of) the elements described in that "introduction" book or in ITIL v.3?
A CMS that covers ALL the items (not just those related to service; all the items) IT owns or operates? Relationships among them? And any of the "related incidents, problems, known errors, change and release documentation and may also contain corporate data about employees, suppliers, locations and business units, customers and users"?
A single Configuration Management System that is part of a larger Service Knowledge Management System that is responsible for service provision and assists Service Desk, Event Management, Incident Management, Financial Management, Availability and Continuity, Service Level Management and Change Management?
If so, my sincere congratulations.
I'd like to see a case study. And, a demo.
And, I'd really like a study on the ROI of that investment.
Cary King
Minerva Enterprises
Managing Partner

So how about it readers? I too doubt that there is even one real SKMS anywhere. To be a SKMS it should:

  • provide full lifecycle management from acquisition to disposal for a 'complete' inventory of CIs ST p65
  • ...where those CIs include business cases, plans, management, organisation, knowledge, people, processes, capital, systems, apps, information, infrastructure, facilities, service models, acceptance criteria, tangible and intangible assets, software, requirements and agreements, media, spares... ST p67-68
  • contain the "experience of staff" ST p 147
  • contain data about "weather, user numbers and behaviour, organisation's performance figures" ST p 147
  • record supplier's and partners' requirements, abilities and expectations ST p 147
  • record user skill levels ST p 147
  • record and relate all RFCs, incidents, problems, known errors and releases ST p77
  • group, classify and define CIs ST p72
  • uniquely name and label all CIs ST p72
  • relate all these items with multiple types of relationships including component breakdown structure, composition of a service, ownership, dependencies, release packaging, product makeup, supporting documentation... ST p72-73 including "part of", "connected to", "uses" and "installed on" ST p77
  • integrate data from document stores, file stores, CMDB, events and alerts, legacy systems, and enterprise applications, integrated via schema mapping, reconciliation, synchronisation, ETL and/or mining ST fig4.39 p151
  • provide tools against this integrated data for query and analysis, reporting, forecasting, modelling and dashboards ST fig4.39 p151
  • take baselines and snapshots of all this data ST p77
  • perform verification and audit of all this data ST p81
  • be based on a Service Management information model ST p150
  • measure the use made of the data ST p151
  • evaluate usefulness of reports produced ST p151

...and so on and so on.

The IT Skeptic thought ITIL V2 CMDB was a silly idea but not everyone agreed. Surely a larger proportion of readers can see that ITIL V3's SKMS has gone to a new level of absurdity.

How many organisations blew a fortune trying to build a data warehouse, only to see little or no return on their investment? The idea of doing an equally ambitious exercise solely to service IT Operations is just daft.

This idealised techno-fantasy of the SKMS is so detached from practical reality as to be ridiculous. Anyone who embarks on this journey is squandering resources with an irresponsibility that is breathtaking... and destructive.

More on SKMS:
Warning don't try ITIL V3's SKMS at work



I know of one. The Mormon Church has an awesome SKMS. It includes everything the methodology requires. It was developed by a BYU student.


It's time for IT people to open their eyes to the larger world. The word "system" was in use long before computers were. Why is it that IT people interpret the term "system" to mean only a computer system? ITIL is certainly not using the term in that narrow context alone. A Service Knowledge Management System is more than technology. It is a way to organize ideas and concepts so that they can be understood in a specific context.

The Solar System is a group of specific planets as they relate to a specific star, our sun. It is not a computer system and yet it is a system. Understanding it as a system creates understanding of concepts and ideas in context and in a manner that adds value. How we think about hiring and retaining talented people with knowledge of our current environment, and how it evolved to its current state is part of the SKMS idea. It is valuable to think of hiring and turnover in this context. It helps justify the higher salaries of those of us who have been around for a while and minimizes the impulse to eliminate valuable expertise and experience for lower salaries.

ST states, "... clearly the SKMS is a broader concept that covers a much wider base of knowledge, ..."
ST Glossary "System - A number of related things that work together to achieve an overall objective. For example ... A management system, including multiple processes that are planned and managed together."

The whole idea of ITIL is to elevate our thinking from bits and bytes to systems of interrelated systems that support our technology efforts and integrate those efforts with the business. This is a non-linear activity that requires wider knowledge, greater skill sets, and non-linear thinking, e.g. System Dynamics.

Concept vs. Tools

You are on track in pointing out that the broader definition of a system (that I would define with the old laundry list of people, process, data...) is what the SKMS is referring to. It seems the original poster and many subsequent contributors are falling into the common IT trap of trying to define everything in terms of software and denying the existence of anything that can't be demonstrated with a specific tool or product. To answer the challenge of providing a real world example, I would argue that anyone providing a service has a SKMS, but in various degrees of successful implementation, integration, automation, etc. and often characterized by thoughts in people’s heads vs. schemas in a database.

The concept that there is ultimately knowledge at the core of service management, and that moving towards more mature and effective practices is dependent on effectively capturing and managing that knowledge, seems a proven (and even intuitive) conclusion.

knowledge at the core of service management

On re-reading Service Transition I see you are right that the SKMS is defined in the book as a concept, not a technology, unlike the CMS, and that is a fair criticism of my original post: it interprets the SKMS too much as a thing, not an activity. That will be a common misinterpretation, I suspect, especially as the SKMS is presented alongside the CMS and wrapped around it.

"The concept that there is ultimately knowledge at the core of service management and that moving towards more mature and effective practices is dependent on effectively capturing and managing that knowledge" is a powerful concept indeed, and KCS did it a thousand times better than ITIL V3. If ITIL represents a summation of industry best practice, the absence of input from KCS is notable.


I am prepping to take ITIL v3 Intermediate - Release, Control & Validation and am confused about the difference between the CMS and the SKMS. The ST book gives me the impression that the CMS is only the data and information layer of the SKMS, and therefore the knowledge processing layer and presentation layer are not in the scope of the CMS. Is this correct?

ITIL's blue-sky toys

ooh sounds like a question for the ITIL Wizard. Don't ask ME to sort out ITIL's blue-sky toys for you :D

Its getting there

Excellent points, Ron, but you may be a bit too hard on IT. I do observe these ideas/methods adopted more and more in IT. While the operations domain is a laggard, the use of systems-thinking in software development, for example, is accelerating.

For a long time, most traditional software development methodologies (i.e., waterfall) worked as defined, event driven processes. They were based on the assumption that the software development process is a stable system. This assumption means that every action, practice, and technique is discrete and repeatable; every activity in the process is visible and subject to suboptimization. Sound familiar?

The results have not been kind and, over time, software developers have posited theories to explain why software development is not a defined process but an empirical one (systems-thinking):
- Ziv's Uncertainty Principle in Software Engineering: uncertainty is inherent and inevitable in software development processes and products.
- Humphrey's Requirements Uncertainty Principle: for a new software system, the requirements will not be completely known until after the users have used it.
- Wegner's Lemma: it is not possible to completely specify an interactive system.

The consequence has been newer, black-box type methods such as Agile, Scrum, the Spring framework, etc. And the success stories keep coming. One of the most heavily trafficked sites on the net, www.nfl.com, was built in record time using these methods.

This shift didn't happen overnight, though. It was 1986 when developers first encountered Takeuchi and Nonaka's “The New New Product Development Game,” which offered some very unusual recommendations for product development. For example:
- Built-in instability
- Self-organizing project teams
- Subtle control
- Organizational transfer of learning
Sound familiar?

Agile methods

The success of Agile is at this point inarguable. However, Agile's roots far predate 1986, and this is the first I have seen Takeuchi and Nonaka ascribed any influence. "Waterfall" as referenced here is an uninformed caricature of Royce. No serious software engineering theorist ever proposed anything so linear; I think that the myth of deterministic software development probably originated more from an unholy alliance of PMI types with large consultancies.

A good concise overview is this article from IEEE Computer, June 2003 (not sure on what basis it appears on the UMass site - link not guaranteed to last). Fascinating quote from Gerald Weinberg:

"We were doing incremental development as early as 1957, in Los Angeles, under the direction of Bernie Dimsdale [at IBM’s Service Bureau Corporation]. He was a colleague of John von Neumann, so perhaps he learned it there, or assumed it as totally natural. I do remember Herb Jacobs (primarily, though we all participated) developing a large simulation for Motorola, where the technique used was, as far as I can tell, indistinguishable from XP."

(XP in the above quote meaning eXtreme Programming, a prominent Agile variant.)

Also not sure why the Spring framework is mentioned. It's not a "method" and can be used by practitioners of varying development philosophies.

The broader question is whether Agile and ITSM/ITIL have anything to offer each other. There is nothing preventing Agile-developed software from being originated and operated under ITIL practices. But I think the boundary between tacit and explicit is a more fundamental problem, and not just a matter of the "operations domain is a laggard."

Charles T. Betz


- “"Waterfall" as referenced here is an uninformed caricature of Royce.”

Huh? Royce didn't invent waterfall. In fact, he argued against it, not for it (except in the most trivial cases). And the article you cite quotes Jerry on the Mercury Project as saying "...waterfalling a huge project was rather stupid ..."

And this caricature is well accepted across industry and government. For a long time, DoD standards, for example, outlined identical constraints and required a "strict, document-driven, single-pass waterfall model". It has since been revised, acknowledging that “...design is rarely the smooth linear sequence of development stages many models suggest.”

- “…first I have seen Takeuchi and Nonaka ascribed any influence.”

While the ideas had been around, this paper was a major event. It presented, for the first time, empirical case studies across half a dozen industries showing the efficacy of the ideas, as well as prescriptive guidance. Its influence can be detected in the rugby terms often found in iterative development (e.g. Scrum), as the paper uses a rugby metaphor (“…tries to go the distance as a unit, passing the ball back and forth”) to explain the concepts.

- “…not sure why the Spring framework is mentioned.”

Java is not a good language for agile development. It isn’t conducive to short iterations. Nor does it really let you move from one change to another without a cumbersome compile-deploy cycle. Developers often have to jump through hoops to test components in isolation outside the containers. Not to mention the desire of agile practitioners for more productive levels of abstraction as well as a more expressive syntax. Spring came about to address these concerns.

Spring displaces complex frameworks like EJB. The abstraction of services gives simple ways for agile teams to deal with transaction management, integrating with MVC frameworks, and object relational mapping. The notion of the application, for example, is a black box “system” designed using interfaces injected at runtime. Therefore, beans and POJOs do not have their dependencies explicitly constructed in the code. Versus, say, EJB 2.X, in which all the collaborators have to be wired together.
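The injection idea described above can be sketched in plain Java, with no actual Spring API involved. This is a minimal illustration of constructor-based dependency injection; all class and method names here are invented for the example, not taken from Spring or EJB:

```java
// A collaborator is defined by an interface, not a concrete class.
interface TransactionManager {
    String commit(String work);
}

// A simple stand-in implementation; a test could inject a different one.
class InMemoryTransactionManager implements TransactionManager {
    public String commit(String work) {
        return "committed:" + work;
    }
}

// A plain old Java object: it never constructs or looks up its
// dependency itself -- the dependency arrives through the constructor.
class PaymentService {
    private final TransactionManager tx;

    PaymentService(TransactionManager tx) {
        this.tx = tx;
    }

    String pay(String order) {
        return tx.commit(order);
    }
}

public class DiSketch {
    public static void main(String[] args) {
        // This wiring step is what a container like Spring automates:
        // choosing an implementation and injecting it at runtime.
        PaymentService service = new PaymentService(new InMemoryTransactionManager());
        System.out.println(service.pay("order-42")); // prints committed:order-42
    }
}
```

Because PaymentService depends only on the interface, an agile team can swap in a fake TransactionManager to test it in isolation, outside any container, which is the point the comment above is making.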

The interesting question

Beyond historical trivia, we don't seem to have any interesting differences on waterfall vs. agile in the context of the SDLC. I was trained by Accenture (then Andersen Consulting) in their linear development methods and found them excruciatingly ineffective.

The interesting debate is the latter point I raised. How is the operations domain a "laggard"? This raises the question of Agile perspectives in the context of ITIL and ITSM.

For those not familiar, the Agile Manifesto is here. Quote:

- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan

While it is true that SDLC practices have suffered from attempting to frame development as a "stable system" with repeatable, discrete activities, is it not true that the whole point of the SDLC is to create "stable systems"? And can we not then assume that repeatable, discrete processes (BPM as applied to ITSM) can serve to at least operate those systems?

While the concepts on the left are also meaningful for operations, the concepts on the right cannot be discarded (as the Agile authors even admit for the SDLC).

To James Finister's point above, certainly ITIL implementation projects might benefit from Agile methods. But the case is not as compelling, because the major point of ITSM is to codify standard practices. Even Agile practitioners admit that solving known problems (e.g. implementing a payroll system) can be effectively done through linear techniques.

It's the truly creative development that requires iterative methods. Is implementing ITIL particularly creative? And do we want to abandon countable, measurable, event-driven processes when system stability and business operations are at stake?

Now, I am falling into the trap of associating ITIL primarily with operations. The Service Design process areas in particular are where ITIL v3 intersects with the SDLC. I interpret those processes primarily in terms of non-functional requirements for the development project. And non-functional requirements such as operability, scalability, and availability are notoriously the areas where Agile development has little to say.

Part of the problem is the focus on the functional customer, who tends to not understand non-functional engineering, debates around platforms and coding languages, etc. The customer too often can be leveraged with arguments like "See, we have it working in the development environment. It's just that those data center people (or operations, or applications support, or architecture) won't let us put it into production because they don't like the hardware and software we decided to use."

The data center people respond by saying, "OK, give us another data center because we are out of capacity because people aren't aligning with our standard infrastructure that we can scale efficiently."

I am not sure how these tensions will resolve, and am interested in all views. Certainly cloud computing has something to offer, but its potential will be long in coming. Until then, for companies that actually have to buy and run hardware and infrastructure software, the Agile challenge will probably have no grand solution, and frictions will continue.

Charles T. Betz

The myth of stable operations

- "...can we not then assume that repeatable, discrete processes (BPM as applied to ITSM) can serve to at least operate those systems?"

To Messrs. Palmer’s and Finister’s point, the above assumption contributes to the problem and should be challenged.

For a long time, organizations were thought to be stable, to be in balance. That is, if left alone, organizations will seek and find equilibrium. Further, the process model used to control them should be “defined”. Every piece of work should be completely understood so that a well-defined set of inputs generates the same outputs every time.

Many no longer adhere to this idea. Instead, there is the growing realization that organizations are in "dynamic disequilibrium" or states of multiple equilibriums. In other words, organizations are *never* stable; they are always out of balance. Improvement initiatives are not organizational disrupters because organizations are always being disrupted anyway. And the process control models should instead be empirical; they are designed for a continuous cycle of inspection and adaptation as needed.

There are many reasons why but to elaborate each would require a level of detail not appropriate for a blog. I’ll just explain the simplest:

The “stable model” describes organizations as going through a sequence of shock, equilibrium, shock, equilibrium. Shocks are generally external demands: new services, new technologies, changes in leadership, layoffs, outsourcing, business cycles, and so on. As long as the shock isn’t too big, the organization will eventually go back to equilibrium.

Each shock presents choices and requires decisions. Even operational organizations in which decisions are rigidly centralized cannot eliminate ambiguity in all areas of individual choice and initiative. Delegation must take place so that behaviour is elicited rather than controlled. Organizational arrangements must be made in order to guide staff toward consistent decisions.

During the early ‘70s, researchers from Yale figured out that the time to equilibrium for such systems scales with the number of components in the system raised to the power of four. The more services or products offered, and the more participants involved, the longer it takes to reach stability.

An organization with just a handful of services and 50 staff members would take many decades before it became a stable system. Given that few operational organizations can wait that long, clearly this presents a problem.
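The fourth-power scaling claimed above is easy to sanity-check with back-of-envelope arithmetic. The "time units" below are hypothetical and the component counts are illustrative; the point is only how quickly n^4 grows:

```java
// Illustrative only: how fast n^4 grows as the number of components
// in a system rises. No real calibration of organizational behaviour
// is implied; the growth rate is the whole point.
public class EquilibriumSketch {
    public static void main(String[] args) {
        for (long n : new long[] {10, 20, 50}) {
            long steps = n * n * n * n; // time to equilibrium ~ n^4
            System.out.println(n + " components -> " + steps + " time units");
        }
        // prints:
        // 10 components -> 10000 time units
        // 20 components -> 160000 time units
        // 50 components -> 6250000 time units
    }
}
```

Quintupling the component count from 10 to 50 multiplies the notional time to equilibrium by 625, which is why even a modest organization never actually gets there.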

What is then to be done?

Agreed, stability is a relative and context-dependent term.

So, does this mean that the ITIL and ITSM project is infeasible? That the explication of processes such as Incident, Problem, and Change is ultimately futile?

"process control models should instead be empirical; they are designed for a continuous cycle of inspection and adaptation as needed." [emphasis added]

"[I]nstead" of what? Rigid a priori processes never to be critically examined? You seem to be reacting to a caricature of BPM.

In the long run, we are all dead. I will be happy with relative stability for the next 20 years or so.

Charles T. Betz

No plan survives the first encounter

"the explication of processes such as Incident, Problem, and Change is ultimately futile?". Absolutely. The Introduction to Real ITSM says:

The modern world is entirely too dynamic for any sort of medium or long term planning to be worthwhile.
Governments, laws, executives, competitors, technologies, recessions and fads come and go like the weather. As the old military saying goes “No plan survives the first encounter”. Strategic planning is as futile as an umbrella in a cyclone. Real ITSM understands this and prohibits all strategic planning as an irresponsible waste of resources. Better to sail the unpredictable winds of change, as flexible and unencumbered as possible.

"Real ITSM understands this

"Real ITSM understands this and prohibits all strategic planning as an irresponsible waste of resources. Better to sail the unpredictable winds of change, as flexible and unencumbered as possible."

Isn't this a strategy? :-)

Good stuff. Can't wait to read my copy.

Once a system is barely adequate

One more from the book: "Once a system is barely adequate, leave it alone. Its life expectancy is short so why invest in optimising something that will be torn down soon enough?"

Short life expectancy?

We have COBOL running that was written in 1955.

Charles T. Betz


Neat trick, considering COBOL was developed in 1959.

Point still stands

OK, 1959, or assembler wrapped with COBOL a few years later. I wasn't around. The point still stands even if it was 1965. Applications are tremendously long lived.

Charles T. Betz

A matter of scale

An interesting point, and I'm sure I've seen something on this recently (New Scientist or HBR?). It also takes me back to a complaint I always had about one of the early but much-used diagrams, where you had the users' needs and the IT department's capability in equilibrium. Something I found out a few years ago is that once you meet a need you generally generate a new need in its place.

I do wonder if it is a matter of scale though, or at least of how far away you are when you view a department's performance. At some point does the variation become just noise, depending on what you measure and where? Also, does it apply in a highly dysfunctional organisation, where the user and customer have low expectations, as much as in one that is basically OK?

Non functional requirements


I think you have pointed out something that is possibly very interesting.

To those of us who know ITIL inside out (well, V2 at least), I'm sure we could all implement ITIL using a waterfall approach. In many environments, though, organisations embarking on an ITIL implementation still don't know "what will happen next." Part of my argument here, from having tried with some success to steer a multinational financial services company in this direction, is that the non-functional elements are actually difficult for the customer to define, even if after the event they seem obvious. Hence a waterfall approach is obvious in retrospect, but not at the time of execution.

I have a contract for a service desk in front of me that I think can be read two ways. A decent service provider would read it and understand the gist of the service by being creative, whereas a second rate provider would (HAS) chase the individual targets not the big picture.

I don't have a lot of time for the major consultancy I used to work for, whose approach in reality seemed to be that implementing ITSM was simply a project management issue, but two things I did agree with were that you need to assess an organisation's starting point, and you need to use a contingent approach to project management.

Or the other way around

Surely it isn't just a case of agile-developed software being operated under ITIL practices. What about the development of service management tools using agile methods that keep pace with an IT department's experiential discovery of what service management means to them? Or even using agile-type approaches to an ITIL implementation? A lot of ITIL implementations fail because they are perceived as monolithic linear projects.


One mistake we make in IT is to view everything as a predictable system operated by rational beings. Whilst bashing my head against a brick wall with a big blue service provider the other day, it struck me how absurd this is. I was telling them to do something they promote heavily themselves at a corporate level; they were nodding in agreement that they weren't doing it and agreed to start doing it... and today they didn't do it again!

On my way home from work I was reading New Scientist and two ideas from articles with an economics flavour this week struck home. The first is we should focus on building a framework that enables agents to make good choices. Don Page's internal rules for Marval are a good example of that. The other, related one, was that we should design systems that can self regulate based on simple rules. The prime example is probably flocking behaviour enabling animals to survive threat filled environments.

Back to the SKMS

Sorry to be all linear and stuff, but can we get back to the Skeptic's original topic, the SKMS? In seven days (I'm not waiting 30 days, Ian) no one has shown an example of a live SKMS, so we can scientifically conclude there isn't one. (Charles Betz did say that all the elements can be built, but the challenge is integrating them.)

Another stream of discussion, in "ITIL V3 has a bet each way", has been on whether ITIL presents "best practice" (aka good practice or just "established practice") or "leading ideas".

So (this house believes that) the SKMS is (b)leading edge. Picking up the discussion of the value of theory above, that's the question now: is there good theory behind ITIL V3's integrated SKMS? Can we regard it as proven in a management-scientific sense? Is there a case that can be made to sceptical managers and budget-holders? For doing each part - the bullets in the original post are a good list - and for doing the whole thing as an integrated system? At the very least, can we say "if you don't do the SKMS you will lose X"?

Merrill Lynch? And it is simply a data warehouse

The closest documented example of an SKMS may be this Merrill Lynch case study:


I've got Service Transition open to page 151 and am looking at that architecture for the SKMS. What they are proposing appears similar to a classic Bill Inmon massively centralized data warehouse (at least, if one assumes that the big cylinder in the middle represents one physical database).

The one caveat is that they have the word "update" as part of the presentation layer. Typically, data warehouses are read-only. If updates are allowed, then it is a transactional system, in which case the extensive data integration layer becomes confusing. (The CMDB architecture on page 68 suffers from this ambiguity as well, and much more seriously.)

Calling it a Knowledge Management system is inappropriate; you would get the wrong technical talent working on it by calling it such. To build such a beast, one would go straight to your enterprise's data warehousing/business intelligence (BI) staff, especially since the SKMS calls for integration of organizational performance data.

v3 uses the term Knowledge Management from the DIKW hierarchy (data, information, knowledge, wisdom). The question is, where does business intelligence fit in the DIKW paradigm?

Charles T. Betz

$1,000,000 prize if the authors can show a SKMS within 30 days....

No, I am not going to offer such a prize - I just put that in the headline so I could get your attention. Perhaps Skep can set up a way for us all to make a donation - it must be cheaper than actually trying it ourselves....

Anyway, for more than ten years another skeptic - James Randi - offered a $1,000,000 prize to anyone who could show "under proper observing conditions, evidence of any paranormal, supernatural, or occult power". I hereby challenge the authors of ITIL V3, especially those who conjured up the concept, to show me how it's done.....

The ITIL V3 concept of the Service Knowledge Management System (SKMS) makes me feel APMG should put another 100,000 pounds on the table so us mere mortals can better appreciate a working, comprehensive example of a SKMS.

You know what, I'll take a working CMDB that connects services, business activities and service level agreements to the extent it supports a true Service Desk... and enables automatic prioritization of a ticket based upon customer impact and agreed service level targets.... There, I'll compromise... Or escaping from a locked chest whilst handcuffed... that's sounding better all the time...

Any takers?

Unlikely - why? Because it's THEORY. Who thought of it anyway - how many years did they actually work on a SERVICE MANAGEMENT INITIATIVE DOING THIS? I'll let the authors off the hook: there is not enough of a specification in ITIL V3 to build one anyway... Perhaps the 'companion' publications will offer more information on this.

Ian M. Clayton, ITIL V3 EXPERT, yes I admit it - I am one.... damn... unmasked...

ITIL Ahead of the Market

It's not news that ITIL is ahead of the "market" in process maturity and concepts; this was true for v2, and v3 exposes it even more. Although business today can't run without IT, it's simply amazing how many organizations manage to run in an unorganized, process-dysfunctional manner.

While the SKMS introduced in v3 is very progressive and a vendor's windfall, it still tries to address the reality of how organizations have created and maintained their information and tribal knowledge. Putting forth a concept of trying to discover and organize the information and tribal knowledge in an organized schema is a step in the right direction. If organizations understand the concept and work toward it, the "knowledge" becomes more usable and greater benefit is derived. Remember, ITIL is descriptive, not prescriptive.

The definition of Best (good?) Practice

There is no debate on the fact that ITIL (or at least many practices 'described' in it) is ahead of the market.
And the background/context of that fact is the main point of debate against ITIL on many fronts.

Personally I don't have any issue with ITIL being progressive or 'ahead of the market'. That way it can be a good reference point for organizations wanting to improve ITSM practices.

However, that intent takes ITIL away from its basic definition: a documented 'best practice' (or, as the newer usage goes, 'good practice') for ITSM.

From the source, "ITIL is a public framework that describes Best Practice in IT service management".

From the overall look and feel, with every version ITIL is moving from that basic definition/objective towards being an IT Service Management Body of Knowledge (BOK). This usage can be seen in multiple places in the current ITIL publications.

If that is an accepted intent, then concepts like SKMS are valid.

But, in the same publication, it is also said that ITIL is a 'source of good practice'.

The web definition of best practice goes: "A best practice is a technique or methodology that, through experience and research, has been proven to reliably lead to a desired result". Watch the word "experience" here.
Wikipedia also says: "Best practices can also be defined as the most efficient (least amount of effort) and effective (best results) way of accomplishing a task, based on repeatable procedures that have proven themselves over time for large numbers of people."

In this context, the skeptic's point is very valid.

The definition of 'Body of Knowledge' goes: "the collection of all the available knowledge on a topic, or all the published material on a subject".

I feel it is critical, at least now, that ITIL be very clear on what it is: a "collection of best (good) practices" or an "ITSM Body of Knowledge".


Body of Knowledge - Defined

Interesting comments. Actually the concept of a body of knowledge for ITSM, and Service Management in general, has already been coined and used - by me - some years back. I actually wrote a book on it - the ITSMBOK. That book is shortly to be published with an even more generic and more business-oriented approach to service management. Starting from an IT perspective is a huge penalty. There is so much service management all around us, and that has been so for so, so many years.

ITIL is laughably behind. If anyone feels it is leading, they really do need to work in a non-IT-centric service industry for a while - perhaps McDonald's would be a good start. ITIL has always looked back at what has happened and seldom forward. What new ideas it proposes are impractical. Even the SKMS was proposed by IBM in the 1980s - I think they called it the Repository (we called it the Suppository!).

The CMDB was challenge enough, but now we have to contend with Russian dolls - a CMDB (or more) inside a CMS, inside a SKMS.... really... how much will that cost, and is anyone out there prepared to propose how I pitch that to a CFO? No, not another "trust me"....

ITIL is confused in its description of best and good practice. I blogged ages ago about the need for a practice lifecycle. We start with Common (many folks have heard of it), then Best (community or vendor support for it), then Good (someone has actually adapted it to get some benefit), then Next - a candidate for CONTINUOUS improvement.

A body of knowledge is so much more than just a bunch of best and good practices..... as I defined it way back - it is the sum of all documented knowledge for a profession; it defines what we know and what we do with that knowledge... check out the USMBOK... www.usmbok.org...

Status of SMBOK


I'm easily confused, so bear with me.

What is the actual status of SMBOK/ITSMBOK?

Is it proprietary or open source? Does it belong to you or to ITSMI?

How directly comparable to PMBOK is it, in terms of how it is governed / independent from commercial interest etc.?

SMBOK Status


Not your fault - it's been a confusing time.

It was always my intention to establish a community led, but protected body of knowledge for the service management professional at large - and that is not limited to IT folks. After all, 99% of service management know-how is actually IT agnostic.

I wished for it to be advanced by the community, but unforeseen circumstances meant that was not going to happen - so I am trying another route. The SMBOK has been replaced by the USMBOK to reflect the more holistic approach. The ITSMBOK is the IT view of the SMBOK. The SMBOK architecture/framework, ITSMBOK and USMBOK are all my IP and copyright.

It never was owned by ITSMI.

When rights are assigned it will be for what I hope are sound and respectful reasons, and formally.

PMBOK is of course proprietary and copyright protected. It was, and remains, my intention to closely mimic any governance that ensures maximum transparency and neutrality.

More great news on this very, very soon...

How protected?

Ian - when you say "It was always my intention to establish a community led, but protected body of knowledge for the service management professional at large" - what kind of, and how much, protection did you intend for this BOK?

BOK "Protection"

Hi Joe

By protected I mean it has a legitimate and peer based governance framework that enables but limits any undue influence. It also has a strategy for evolving through general consensus of the professionals it affects.

A BOK is a serious item for any professional, as it defines what knowledge a professional SHOULD have, and for the most part what they should DO with that knowledge. It should reflect roles commonly found within an industry, and those roles should be easily related to some operational or organizational model. It should also respect experience-based knowledge and not put all its credentialing eggs in the exam basket!

Some folks just do not test well.

As you can see, a BOK can have serious career ramifications. All too often (as recently as this past week), I see folks sitting the Foundation exam with some job-related review on the line. This is plain silly. The Foundation exam remains a 'work in progress' (subject for another blog), and I feel we as professionals need to have a major say in how we are credentialed.

So - protected means suitably governed by the professionals who are affected by the resulting scheme. Professional governance trumps commercial interest every time in my book...

That sounds about right -

That sounds about right - protected from undue commercial interest.

But doesn't the itSMF struggle with this too? In SA, and I believe globally, there's a rule that the committee can't contain more than one member from any organisation, for this very reason. But the SA chapter is still dominated by vendors (tools vendors, training vendors, consulting vendors) - somewhat contrary to its mission.

Why (if you accept that it's the case, and I think you do) is it so difficult for ITSM to get business interest (we want this for our own companies/careers) instead of commercial interest (we want to sell the stuff)?

Success breeds commercial interest


I believe EVERY itSMF chapter is probably "dominated" to some extent by vendors, at Board and local levels - just look at the US. Why is this? Because historically they do a lot to make things happen - they fund operations (sponsorship) and drive events, and they have a significant interest in the outcome: continued interest in all things IT Service Management, and especially ITIL. In many cases they deserve much of the credit for keeping the organizations alive!

The undue influence, when it occurs, is a natural by-product of an organization such as itSMF having a direct channel to the customer buying decision. With ITIL rumored to be a $5bn worldwide business, you would expect vendors to make sure they have some way of ensuring their share - that's only fair. Given the temptation, I would also expect every vendor representative sitting in an itSMF Board position to disclose their role within their own organization with respect to ITIL and product sales, and to sign, and post on the itSMF website, a conflict of interest agreement.

Actually, if you review the bylaws of the chapters you may find the vendor influence is EXACTLY in line with the mission. I am afraid that many of us joined the itSMF in the false hope it was already focused on advancing our individual aspirations as professionals..... As has been commented many times before on this blog, the itSMF has historically maintained an INDUSTRY-level strategy, not an individual one.

Individuals have a few simple choices - vote folks into Board positions who will enforce or adjust the bylaws to focus on the individual, or find another association that better fits their needs....

What is SA?

Hi Joe,

You used the term "SA". What is that?


Seth Effrica

Joe is from Seth Effrica, but they spell it with an "A"

Alrighty then...

So it is. Thanks for the translation!


Common - Best - Good?

Good inputs on the Body of knowledge.
I am confused about the path you mentioned:

Is it Common --> Best --> Good

or

Common --> Good --> Best?

My view is that 'Best' is a virtual and often relative term. Most practices can only be called good practice, as you are never sure (in most cases) whether it is the best, or whether even better practices exist!

These are my personal views - not supported by any research/documentation.....


The Practice Lifecycle....

The challenge we have is that ITIL has poisoned the best practice well with inconsistent definitions of what constitutes a best practice. ITIL's definition stands at: "Proven activities or processes that have been successfully used by multiple organizations. ITIL is an example of best practice." Let's decipher this offering.

Proven - by whom, where and when, and according to whose criteria?
Successfully used - how is success being defined, and from what perspective: customer, provider or supplier?
ITIL is an example - point me to the central register of success, with references and contact information. Ludicrous chest beating.

On pages 6-7 of the introductory section of the Service Strategy publication ITIL discusses "good practice", not best. At no point does ITIL offer an explanation of what is better, good or best, or whether there is any difference.

The American Society for Quality offers: "A superior method or innovative practice that contributes to the improved performance of an organization, usually recognized as 'best' by other peer organizations".

An equivalent European quality champion offers: "An outcome of benchmarking whereby measured best practice in any given area is actively promoted across organisations or industry sectors. ..."

I read today's general definition of "best practice" as being closer to part of the last definition, namely a method or concept ACTIVELY PROMOTED across organizations, since there is no central, peer-managed register of proven best practices.

To help my clients and class attendees live with the current vagueness I have conjured up the "practice lifecycle" to give folks a means of starting somewhere (with common - unproven - practices), and progressing through peer consensus to industry best (generally accepted good idea, but as yet unproven in specific circumstances), good (proven to work for you personally), and then next (stable, and a possible candidate for another cycle of improvement).

So my decipher is: Common (gene pool) --> Best (industry recognized) --> Good (delivered results for you) --> Next....

I'm open to ideas on how we can clean up the definitions of what constitutes best and good practice..... and promise to include them in the to-be-published version of the USMBOK... recognizing the source...

I still prefer Common -->Good --> Best

I agree the confusion exists.

From a logical point of view (to the extent my logic goes ;-)) - I prefer a slight fine-tuning of your view given above, as:

Common (Gene pool) --> Good (Delivered good results to me/a few I know - but not sure if it is the best across the world) --> Best (Globally proven and accepted to be the optimal best practice) --> Next (The futuristic practices - Like SKMS :-))

But in that case, I also feel, the transition from Next starts the cycle again - right?

I mean like a loop: --> Next --> Common --> Good --> Best

or at least Next --> Good--> Best

Any thoughts on this view? I am prepared to learn - thanks for all the information.


Core pre-requisites

There is a real danger in my mind that ITIL V3 can be seen as having moved away from being a practical guide into the area of theory, and in places perhaps not particularly well-thought-out theory. There is a legitimate case for ITIL to be ahead of the game in terms of suggesting novel approaches, but these should be signposted as such. Conversely, there is not enough signposting of those things you must do if you are going to deliver effective services.

It works in practice but does it work in theory?

James, I couldn't disagree more.

I once worked on a particularly thorny problem for a very talented manager and wise mentor of mine. Upon close examination of my proposed solution he asked, “It works in practice but does it work in theory?” I quickly removed the chuckle from my face when I realized he was deadly serious. It took me some time to really understand this question.

When practitioners observe a few successful or common practices and then conclude that they have a solution, they have headed down the wrong path. Imagine going to a doctor who, before you’ve described the symptoms, writes a prescription and says, “Take two of these twice a day, and call me next week.”

“How do I know this will help me?” you ask.
“Why wouldn’t it?” the doctor replies, “It worked for my last two patients.”

No rational patient would accept this kind of practice. Yet IT managers routinely accept such solutions, in the naive belief that if a particular course of action helped other organizations to succeed, it ought to help theirs, too. Surely common practice must be correct practice.

Theory helps managers understand what is happening and why. Every plan and action that managers formulate and take is based on some theory in the back of their minds that makes them expect the actions they contemplate will lead to the results they expect. Theory may sometimes be the only way managers can peer into the future with any degree of confidence. It helps a service provider sort the signals from the noise.

When early researchers visited Toyota to see its lean production methods, they observed the significant attributes: low inventories, coordination driven by kanban cards instead of computers, and so on. They leaped quickly to conclusions, writing books assuring managers that if they, too, built manufacturing systems with these attributes, they would achieve improvements in cost, quality, and speed comparable to those Toyota enjoys.

Many manufacturers copied these attributes and none came close to replicating what Toyota had done. Theory was needed to tease out the true causes of Toyota’s success. It showed Toyota's thought patterns when designing any process, whether training workers or maintaining equipment. Using these theories, organizations as diverse as hospitals, aluminum smelters, and semiconductor fabricators achieved improvements on a scale similar to Toyota’s, even though their processes often share few visible attributes with Toyota’s system.

Trying out a "common" practice and hoping to see if it works is really not an option. There is tremendous value in asking, “When *doesn’t* this work?” That can only be answered with the deeper understanding of theory.

"it works in practice but in theory?"

Dear Visitor,

The question that your mentor asked (which I summarized in my title) is, in fact, the point of contention here.
The first part of his question/phrase was "It works in practice".
With that premise, the question "will it work in theory?" becomes very relevant.

The issue here is that ITIL works the reverse way - "It works in theory". So many of us are raising the remaining part of the question: "Does it work in practice?" If yes, can we see some evidence/case studies etc.?

I don't think any of us (at least, for sure, not I) is questioning the need for theoretical practices. But the ideal objective of a 'best practice framework' should be to describe 'good ideas' which are proven to work in practice.


Let me be clearer

I think you are reading far too much into what I said, which was that there should be a clear distinction between the minimum prerequisites to provide a professional service and those aspects that are not yet proven to be necessary or beneficial. I might have added that V3 in places seems to present the thinking of individual management theorists as "the" theory, rather than putting them into a wider context.

What I am objecting to is the bits of ITIL that are pure "theory", without reference to evidence-backed studies that prove or disprove their effectiveness in the situations IT departments face in the real world. I'm very much in favour of a theory that can adequately explain verifiable facts, and I even think that blue-sky thinking has its place - I am, after all, a consultant with a philosophy degree - but I do object to things being mis-sold or mis-represented as a result of muddled thinking.

There is a need to consider the target audience here. A large number of posts on this forum aren't written from the perspective of the "average" reader/user of ITIL, and that, I would venture to suggest without being patronising, is quite different from the average reader of HBR. The rigorous theoretical level of debate needs to take place, but the end result needs to be actionable and effective in the real world, not in the lecture theatre. Take COBIT as an example: there is a strong theory behind COBIT's structure and the profession of computer audit, but the end product is inherently practical.

Incidentally, I do a lot of work in the pharma sector and also in the health sector.... I think you might want to talk to a few doctors before dismissing the idea that they subscribe to certain treatments just because they seem to have worked in the past. There is some interesting debate going on around the use of blood transfusions at the moment.

Having said that, I like the analogy. Only I imagine the ITIL doctor would say something like:

"Well I've got a mixture of treatments to give you. One was used by the Dr who taught me at college, and he said it was quite effective with his patients back in the pre-superbug era, and saved a couple of lives back then"

(Whilst failing to mention that it hasn't been effective for twenty years)

"And there's a new product a drugs company told me about at our last skiing/educational seminar trip they sponsored. It hasn't passed clinical trials yet, but we've given it a fancy name."

As for the Toyota approach - I would interpret the problem the US companies had in understanding it as precisely the result of trying to put a theory around it, rather than focusing on the cultural and attitudinal aspects. The same applies in service management: doing ITIL by the book is possibly less effective than trying to capture the spirit of ITIL.

But is this *good* theory?

Visitor: yes, repeating practices that seem to have worked somewhere is endemic in the IT industry. I think we only get away with it because few customers, or vendors or consultants, know how to measure business success and the leverage that IT may or may not have on it. We can't diagnose the patient with any consistency. (I recommend a course of leeches.)

So, yes, there's a big need for theory. But does ITIL V3 offer the right theory? Is it correct, consistent and theoretically proven?


The Skeptic will be aware that, ironically, I'm working on a piece of work that directly addresses the "it has worked once or twice so must be universally true" view of the world.

Typically my latest draft got (correctly) shot down in flames for being far too theoretical.

the SKMS was too much for Sharon too?

It is interesting to note that the Official Introduction mentions the SKMS only once in the beginning and once in a diagram, while the bulk of the book (p84-85) limits itself to the Configuration Management System, which is the rebadged CMDB. Perhaps the SKMS was too much for Sharon too?

Why ITIL V3 is like Windows Vista...

This posting makes me think that ITIL has, in similar fashion to Windows Vista, acquired bloat that makes it cumbersome and prone to difficulty. The technology required to deploy a system of this magnitude is a mammoth task, never mind populating it, which is Shangri-La.
If we take this down to a simple level, draw a line in the sand, and call this line the documentation of the known deployed state of a system. When working on a problem, I have personally found that this does not usually exist, and when it did there were inaccuracies. So I would agree that not only are there no examples of the larger SKMS, you would be hard pressed to find functional examples of even smaller subsets or components.
Personally, I think the cart is before the horse...


V3 as Vista - yep, love it!

Clearly directional for the Big 4

It's clear that the Big Four (IBM, CA, HP, BMC) are all moving aggressively in this direction with their product suites and marketing. Components such as change, incident, availability, project, vendor, contract, and skills management have of course been available for years. The next stage is to integrate them, which is why HP bought Mercury, IBM bought MRO, BMC bought Remedy, and CA bought Niku... do you think these companies are just sitting on these acquisitions? Hardly. All are engaged in a feverish race to integrate them into coherent suites for IT, and this integration is no different from what we saw when financial systems (payables, receivables, general ledger, etc.) were integrated into comprehensive ERP suites. Who would buy a GL separate from their receivables system nowadays? Yet that was how it was twenty years ago...

Whether the cart is before the horse is generally a question of scale. There's no doubt in my mind that the business case for doing all of this exists when your IT operations grow to a certain point. The integrated IT management systems are being pursued by the Fortune 50, and when the challenges of this integration are solved for the world's biggest companies, the resulting solutions will be scaled down and sold to smaller companies. Eventually, even small IT shops will have something like a "Quickbooks" that will include the basics in an integrated suite.

Charles T. Betz

all that is visible is the disappearing rump

Charles, I agree, or more precisely I accept your future scenario as feasible. In which case, I think it is fair to say that
a) the SKMS concept is driven by vendors' desire to drum up a new fad to replace the waning CMDB and
b) ITIL is so far ahead of the market with the SKMS concept that all that is visible is the disappearing rump

The disappearing rump...

There are two perceptions to a disappearing rump.
1. The horse is moving.
2. You are moving.
Either way you are in a cart without a horse.

We've got all this stuff, but...

Calling it a Knowledge Management System was off base. KM is moribund because it is a fundamentally flawed concept; I agree with those who think that "Knowledge Management" is an oxymoron. If you can "manage" it, it is not knowledge. Information, at best.

I know organizations that have systems covering all the requirements stated above (for that is what they are - requirements, like we find in any other business domain seeking automation). The current gap in the industry is lack of integration, mostly hamstrung by poor master data management of IT data. The skills system does not use the same master vendor list as the contracts system. There are competing application portfolios. Organizational performance does not easily map to underpinning IT infrastructure. The CMDB does not reference the project portfolio. Etc.

So the reality of current large IT practice is taking baby steps to incrementally align the systems: declare master systems of record for the various data topics, and slowly, politically, start getting other system owners to align with them.

The biggest danger, well known as a data warehousing antipattern, is the idea of 'build it and they will come.' But just as some data warehousing efforts avoided this pitfall, so too some internal IT integration efforts will succeed. Bear in mind that a "system" at least implies looser coupling than the V2 concept of a "database." So it can consist of modules that have some freedom, and still be considered a "system."

Master data management is the key to federation. See here.
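To make the master-data point concrete, here is a minimal sketch - all system names, IDs and fields are invented for illustration, not drawn from any real product. It shows why a master cross-reference is the key to federation: two systems hold records about the same vendor under incompatible local IDs, and only the master mapping lets a federated query join them.

```python
# Hypothetical sketch of master-data-driven federation (all names invented).
# Two IT systems keep their own vendor records under different local IDs;
# a master cross-reference maps each local ID to one canonical vendor ID,
# so a federated view can join records that would otherwise never line up.

contracts_system = {"V-001": {"vendor": "Acme Corp", "contract": "C-42"}}
skills_system = {"ACME_INC": {"vendor": "ACME Inc.", "skills": ["UNIX admin"]}}

# The master data: one canonical ID per real-world vendor.
master_xref = {
    ("contracts", "V-001"): "VENDOR-0001",
    ("skills", "ACME_INC"): "VENDOR-0001",
}

def federated_vendor_view():
    """Group records from both source systems under their canonical vendor ID."""
    view = {}
    for system, records in (("contracts", contracts_system),
                            ("skills", skills_system)):
        for local_id, record in records.items():
            canonical = master_xref[(system, local_id)]
            view.setdefault(canonical, {})[system] = record
    return view

# One canonical vendor now links a contract and a skills record that the
# two source systems stored under different keys.
print(federated_vendor_view())
```

Without the `master_xref` mapping there is no join key at all, which is exactly the "skills system does not use the same master vendor list as the contracts system" gap described above.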

Charles T. Betz

We don't need more products to integrate the products we have

Charles mentions the lack of and need for integration, and I think this hits at least one of the nails on the head.

Everyone knows this, it seems, even if they don't use the same words. All the IT-service-managing organisations are complaining about not being able to integrate products, even if they don't fully realise the impact of using different user lists, asset lists, service lists etc. in different places. All the vendors are hearing this... and thinking "we need a product that will deliver integration". Isn't there something crazy about adding products to reduce complexity?

And ITIL V3 shows (multiple) CMDBs in one layer of the SKMS. As a framework to support integration it's not necessarily bad, but it still seems there are some very simple principles of data management that organisations need to understand before going for the big framework.
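One of those simple data-management principles can be sketched in a few lines - this is a toy illustration with invented names, not any real ITSM product's API: give every module a reference to a single system of record (here, a shared user list) instead of letting each module keep its own copy that drifts out of sync.

```python
# Hypothetical sketch (names invented) of "one system of record" for users:
# both modules reference the same shared list, so one update is visible
# everywhere, with no per-product copies to reconcile later.

users = ["alice", "bob"]  # the single system of record for user identities

class IncidentModule:
    def __init__(self, user_source):
        self.user_source = user_source  # a reference, not a copy

    def open_incident(self, user):
        if user not in self.user_source:
            raise ValueError(f"unknown user: {user}")
        return {"user": user, "status": "open"}

class ChangeModule:
    def __init__(self, user_source):
        self.user_source = user_source  # same shared reference

    def raise_change(self, user):
        if user not in self.user_source:
            raise ValueError(f"unknown user: {user}")
        return {"user": user, "status": "draft"}

incidents = IncidentModule(users)
changes = ChangeModule(users)

users.append("carol")  # one update, immediately visible to every module
print(incidents.open_incident("carol")["status"])  # prints "open"
```

The point is not the trivial code but the design choice: an integration product that copies user lists between tools recreates the reconciliation problem, whereas agreeing on one source and referencing it avoids it.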
