ITIL configuration guinea pig wanted

Wanted: an IT organisation to test the hypothesis that improving configuration procedures (and the team behind them) through proper cultural change (not decree) will deliver the same benefits as implementing a CMDB, at much lower cost.

I've banged on about this often enough but I'd dearly like to see someone put their money where my mouth is.

ITIL defines Configuration Management as the delivery of information, but then spends most of its pages describing it as the maintenance of a static repository of data, not an active process of serving that data to others. Let's get it clear: Configuration Management is a process, not a thing.

Before we rush off implementing a CMDB, we first need to be optimising our practices. That's what ITSM is all about, remember? Optimising Service Impact Assessment reporting (the key purpose of configuration as compared to asset management) involves being more timely, more accurate and more efficient. "More timely" doesn't necessarily mean fast: timely means fast enough. We must look at the true business requirement for the impact data. Perhaps the need is actually measured in hours rather than minutes or seconds. Likewise, there comes a point where the information is accurate enough to reduce the risk to an acceptable level.

Further accuracy and speed are ETF (Excessive Technical Fastidiousness): the geeks’ compulsion to do everything “right”, also known as GAR (Geek Anal Retentiveness). Let it go. Pursuing more speed and accuracy than is really necessary to address business risk works against the third objective: efficiency. Put another way, you would be blowing your employer's money.

On demand

I propose the idea of an on-demand CMDB. It is what we do now anyway: we create the data ad hoc when we have to. If the data is not there or not right and management wants the report, we gather it up, clean it up and present it just in time, trying not to look hot and bothered and panting.

How much better if we had a team expert in producing on-demand configuration information? They would have formal written procedures for accessing, compiling, cleaning and verifying data, which they would practise and test. They would have tools at the ready and be trained in using them. Most of all, they would “have the CMDB in their heads”: they would know where to go and who to ask to find the answers, and they would have prior experience of how to do that and what to watch out for. Instead of ad-hoc amateurs responding to a crisis, experts would assemble on-demand data as a business-as-usual process.
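To make that concrete, here is a minimal sketch, in Python, of what one of those written procedures might look like once captured as a repeatable script rather than tribal knowledge. Every file name, column and dependency rule below is an invented placeholder for whatever exports your organisation already has; this illustrates the technique, it is not a prescription:

```python
# On-demand impact query: assemble the answer just in time from existing
# source exports instead of maintaining a CMDB. All file names, columns
# and the one-hop dependency rule are hypothetical placeholders.
import csv
from collections import defaultdict

def load_rows(path):
    """Read one existing data export as a list of dicts."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def impact_of(server_name):
    """Which services might be affected if this server fails?"""
    # Step 1: pull the raw pieces from sources we already keep anyway.
    assets = load_rows("asset_register.csv")         # from procurement
    topology = load_rows("discovered_topology.csv")  # from network discovery
    services = load_rows("service_catalogue.csv")    # documented already

    # Step 2: clean and index -- the "compiling and cleaning" procedure.
    hosts_by_service = defaultdict(set)
    for row in services:
        hosts_by_service[row["service"]].add(row["host"].strip().lower())

    # Step 3: follow dependencies one hop through the discovered topology.
    target = server_name.strip().lower()
    reachable = {target} | {
        row["downstream_host"].strip().lower()
        for row in topology
        if row["upstream_host"].strip().lower() == target
    }

    # Step 4: report which catalogue services touch any affected host.
    affected = [
        svc for svc, hosts in hosts_by_service.items() if hosts & reachable
    ]
    owner = next(
        (a["owner"] for a in assets if a["hostname"].strip().lower() == target),
        "unknown",
    )
    return {"server": server_name, "owner": owner, "affected_services": affected}

if __name__ == "__main__":
    print(impact_of("db-server-01"))
```

Nothing here is maintained between requests beyond the source exports themselves; the script is the procedure, rehearsed and version-controlled like any other document.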

Certainly we would need some basic Configuration data kept continually. This would be the stuff we discover automagically already, such as procurement-driven asset databases, or discovered network topologies and desktop inventories, or the transactional information captured by the Service Desk. Add to that the stuff we document on paper already (or ought to): the service catalogue, phone lists, contracts and so on.
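One lightweight way to keep track of that base layer is a simple source register rather than a CMDB schema: a record of where each kind of data already lives, how it is refreshed and who to ask. A sketch, with purely illustrative entries:

```python
# A source register: the lightweight substitute for a CMDB schema.
# Each entry records where base data already lives and how fresh it is.
# All entries and fields here are illustrative examples only.
BASE_SOURCES = {
    "asset_register": {
        "system": "procurement database",
        "refresh": "on purchase",        # maintained as a side effect
        "contact": "procurement team",
    },
    "network_topology": {
        "system": "discovery tool export",
        "refresh": "nightly scan",       # automagic, no manual effort
        "contact": "network team",
    },
    "desktop_inventory": {
        "system": "endpoint agent",
        "refresh": "weekly",
        "contact": "desktop support",
    },
    "service_catalogue": {
        "system": "documented on the intranet",
        "refresh": "per change",
        "contact": "service manager",
    },
}

if __name__ == "__main__":
    for name, info in BASE_SOURCES.items():
        print(f"{name}: {info['system']} (refreshed {info['refresh']})")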

The savings in not trying to go beyond that base Configuration data would be great. The price paid for those savings would be that “on-demand” does not mean “instantaneous”. It might mean hours or days or even weeks to respond to the demand. So a business analysis needs to be done to find out how current the data really needs to be (as compared to what the technical perfectionists say). In some organisations the criticality demands instant data and they need to trudge off down the CMDB path. But for the majority of organisations this just isn’t so.

Approach

So, as with any initiative, I suggest coming at it like this:

1. Determine the business risk/problem/need
2. Define the requirement/objective/outcome/deliverable
3. Capture and record how procedures work now
4. Formalise this as documented repeatable procedures
5. Get stakeholders to agree that’s how it works
6. Design and develop a new version
i. Ask those who do it how they would improve it to meet the desired business outcome
ii. Use theory as guidance (e.g. ITIL) and ask “experts” how they would improve it
iii. Decide on improvements that are SMART: Specific, Measurable, Attainable, Relevant, Time-bound
iv. Check with architects that the planned changes fit with the overall framework / architecture / design for services, processes and technology
v. Write a new procedure to include the improvements
vi. Make the technology changes needed to support it
vii. Perform a walkthrough or live trial with users, assess progress and report it to management
viii. If and only if there are efficiency or effectiveness problems that could be fixed with better technology, make technology improvements
ix. Repeat as necessary
7. Once accepted, implement: testing, training, release...

Culture

The challenge will be overcoming two things: Excessive Technical Fastidiousness and that “be prepared” ethic. Remember the fable of the grasshopper and the ant? The ant slaves away all summer storing food while the grasshopper plays around, then come winter the ant is well fed while the grasshopper starves. The fable is obviously not true: the world is full of grasshoppers. They are more resourceful than that. They work something out.

There are many organisations and societies that understand that dealing effectively and efficiently with the things that happen, and not wasting time and money preparing for the things that don't, is in fact a net saving in effort. In this case, instead of building infrastructure to continually record data we might one day need, we can quickly gather only the data we do need, in reaction to a requirement. There is some linkage here with the concepts of “Lean IT”.

Tools

We only need to store simple derivations of the data that take us steps closer to the full information, reducing the effort required when information is requested and leaving us fewer pieces to assemble. For example: network topologies, server inventories, network traffic patterns, more complete asset lists, and access to new data such as org charts and procurement records.
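As a hedged illustration of one such stored derivation (file and column names are invented), a small scheduled job could flatten the service catalogue export into a host-to-services lookup, so the next impact request starts from a half-assembled answer rather than raw data:

```python
# Precompute a derivation: flatten a raw export into a host -> services
# lookup and persist it. The next on-demand request starts from this
# partial answer. File and column names are hypothetical.
import csv
from collections import defaultdict

def snapshot_host_service_map():
    services_by_host = defaultdict(set)
    with open("service_catalogue.csv", newline="") as f:
        for row in csv.DictReader(f):
            services_by_host[row["host"].strip().lower()].add(row["service"])

    # Persist the derivation: a step closer to the full information.
    with open("host_service_map.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["host", "services"])
        for host, svcs in sorted(services_by_host.items()):
            writer.writerow([host, ";".join(sorted(svcs))])

if __name__ == "__main__":
    snapshot_host_service_map()
```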

And/or we need tools to help us perform that derivation, that piecing together, more quickly. For example: query and analysis tools, data cubes and spreadsheets, and report delivery mechanisms.
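For instance, a general-purpose query tool such as an in-memory SQLite database can do the piecing together on demand; again, the file, table and column names below are assumptions, not a prescription:

```python
# Use a general-purpose query tool (here, in-memory SQLite) to join
# sources together on demand, rather than maintaining a joined repository.
# File, table and column names are hypothetical examples.
import csv
import sqlite3

def load_table(conn, name, path, columns):
    """Load the named columns of a CSV export into an SQLite table."""
    conn.execute(f"CREATE TABLE {name} ({', '.join(columns)})")
    with open(path, newline="") as f:
        rows = [[r[c] for c in columns] for r in csv.DictReader(f)]
    placeholders = ", ".join("?" for _ in columns)
    conn.executemany(f"INSERT INTO {name} VALUES ({placeholders})", rows)

conn = sqlite3.connect(":memory:")
load_table(conn, "assets", "asset_register.csv", ["hostname", "owner"])
load_table(conn, "catalogue", "service_catalogue.csv", ["host", "service"])

# An ad-hoc question, answered with an ad-hoc join.
for row in conn.execute(
    """SELECT c.service, a.owner
       FROM catalogue c JOIN assets a ON a.hostname = c.host
       WHERE c.host = ?""",
    ("db-server-01",),
):
    print(row)
```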

In fewer than 5% of organisations we may need to go all the way to a CMDB, because its value actually exceeds the costs.



So how about it? Is there someone who can put this concept to the test? Or who has already done so? Please let us all know.
