CMDB's Dirty Little Secret

Through all the talk about the wonderful benefits of a CMDB, nobody mentions the additional drag a CMDB puts on all change to the IT environment.

(Incidentally most of those benefits ascribed to CMDB are actually the benefits of IT Asset Management, license management and Supplier Management, realisable with much simpler technology. Anyhoooo....)

There are three major costs of CMDB: implementing it, populating it, and maintaining it, and they are all of similar magnitude. Some vendors have to fiddle even the implementation costs to get an ROI. Sure as hell NONE of the vendors ever talk about the other two costs: population and maintenance.

People frequently underestimate the cost of the initial population exercise: finding, validating, massaging and loading all the data. Autodiscovery helps, but not for real-world data like vendors, contracts, warranties, business owners...

It's a huge exercise, but not what I want to look at today. This post is about the ongoing maintenance of a CMDB.

If you have a good, tight Change Management process, then the ongoing maintenance of the data is easier, as updating the CMDB should be just one more step in processing every change. But even in that idyllic world, there is still ongoing auditing and correction.

And an even bigger factor regularly overlooked is the constantly changing nature of the environment that impacts the CMDB's infrastructure: tools that are sources of data get replaced or upgraded, whole new technologies are introduced, companies merge, SaaS and Cloud roll in.... The CMDB is right in the middle of everything - it rarely escapes having to be checked, changed and tested any time anything else changes. It's CMDB's Dirty Little Secret.

IT geeks love integration, complete coverage, tidiness, holistic solutions. They sure make for pretty pictures on whiteboards and PowerPoints. They appeal to our instinct for correctness and completeness and accuracy. But it is not the real world. The real world is messy and imperfect and incomplete and estimated and iterated and falling short of the ideal. Get over it, get used to it.

More to today's point: understand that the more you glue things together with some centralised magic tool, the more you create interdependencies where there were none. In particular, if you introduce a central hub connected to everything in your infrastructure, then every time you change anything in your infrastructure you will at least have to check for impact and later retest that connection. You may well have to change the connection.

And don't buy the vendor B.S. about federation solving this. It just might. One day. Right now a useful federation standard is about as likely as an open source version of Windows. It is all vendor smoke to try to hide this very problem.

Be clear on this: any central hub such as a CMDB (or an event monitoring console or...) has a big hidden cost in that any time you change anything in your infrastructure there will be an added step to check and test and maybe fix the interfaces.

So what? So factor that cost in when you build a business case for the value returned by this wondrous new technology. You do make a business case before you buy new toys for IT, right?

If you implement better Configuration process instead of CMDB tools, the process and people change faster and more flexibly than a technology does in response to underlying change in the managed infrastructure. (And they don't need to be "populated": they go get the data on demand.)

Comments

July 12th's Dilbert

Personally I still believe that a 'CMDB' (CMS, PMDB, et al) that focuses on real-time business service impacts may be more likely to provide a better balance between the 'everything in the CMDB mentality' and 'only what we really need/when we need it' approach. I think Skep has even mentioned this as an 'On-Demand' CMDB. So I'll continue to rant on about monitoring and Event Management automation...

In any case, I agree that regardless of any approach you don't get to skip your homework.

TODAY'S DILBERT SAYS IT ALL.

John M. Worthington
MyServiceMonitor, LLC

For the record

For the future record Dilbert said: "The best way to compile inaccurate information that no one wants is to make it up"

Another way of thinking

The overhead of maintaining the CMDB is a pretty good indicator of the cost drivers in maintaining our real world infrastructure, something many organisations don't really seem to get. So if the CMDB is hard to maintain it is because our infrastructure is hard to maintain. That is when the thinking should change to "How can we simplify the maintenance of our infrastructure?"

We can design for maintenance
We can outsource parts of the infrastructure - even move towards a cloud model
We can introduce a structured policy driven approach to change
We can have planned change freezes

And, as I've said many times before, to learn how to do such sensible things well we need to look outside of IT and see what engineering does.

A bus company I knew well saved a fortune, and reduced the size of its fleet, when it stopped trying to fix a faulty engine component in situ and moved instead to a default approach of dropping the engine out of the bus and swapping in one they knew worked (a well-practised and documented procedure that they knew could be done in a fixed time), then fixing the faulty engine during normal workshop hours. It's interesting, working in a thin-client environment, that one of my clients' default reactions is now simply to swap over desktop boxes, with a new one despatched to the office while the faulty one is returned for maintenance. There is no longer any lengthy at-desk diagnosis.

Just a pity that the supplier has been known to send the faulty machines straight back out again, complete with their "D.O.A." sticker.

Constant vigilance

I would add that in the dynamic environment most CMDBs are supposed to capture, the slightest loss of focus (org change, manager indifference, etc.) means the CMDB again becomes useless and requires re-implementation.

Upkeep of a CMDB requires religious attention to accuracy. I have seen this happen in a few start-ups where the CMDB is the business enabler, where being able to accurately manage change impact and roll out new services in minutes requires this kind of devotion. It's very hard and expensive to do from a legacy position, and the cost is hefty; not impossible, but the benefits had better be worth it... if not, it's hard to keep the faith.

Alex

YAFR

Excellent point, Alex. I see the same thing with knowledge management repositories and portals. For anyone working in a large firm or department: how many times have you seen someone get all fired up about a place to store everything? We'll have a portal to all the documents. We'll store boilerplate text. We'll index and manage all useful documents. We'll...

I took to referring to them as YAFR: Yet Another Jolly Repository. A team spends countless time and money putting it all together, there is little or no cultural change to embed the thing, it doesn't actually help much anyway, and in six months' time it is a rusting hulk.

All these much-touted "successful" CMDB projects: I'd like to revisit them after 1-2 years and see just how many are used, and for what. (I'd also dearly love to take a close look and see just how close to a real CMDB they actually are, but that's another discussion.)

Touting

Agreed...

I have a sneaking suspicion that the touted successful CMDB (maybe even ITSM) projects are not quite as successful as we are being led to believe.

He who wins the war writes the history... If you are a sponsor, manager, vendor, or consultant there is no incentive to detail your failure, and as the owner of all the data able to qualify success, there is no need to.

I have seen success defined in one project by the sudden uptick in changes submitted as a result of a new change management process - brilliant!

I am actually surprised we have such a high "failure rate": I can't imagine many organizations detailing their failure unless it was spectacular!

Maybe I am just being a little too skeptical.

Doctors bury their mistakes

Doctors bury their mistakes, engineers' mistakes stand as a rusting monument to their incompetence, but IT mistakes just disappear in a puff of money... if we get to hear about them at all.

For the ones we don't get to hear about:

In the face of defeat, declare victory. This was an old British military tactic when faced with unshakeable guerrilla insurgence: walk away and hold a victory parade. No need to admit the half-million-dollar project is a failure when you can bluff your way out of it with vocal assistance from the vendor. Tell everyone how successful it was for long enough and even your own staff might start to believe it, especially if they start getting invited to conferences in exotic places.

(from Introduction to Real ITSM)
