Great article about CMDB

Here are some quotes from an excellent article "Dirty Little Secrets of Application Dependency Mapping" by Michael LaChance.

I recall back in December planning to review this article, but then I got sidetracked. I highly commend it as a cracking good read from a non-vendor.

Slick road shows demonstrate how state-of-the-art application mapping tools easily discover configuration items and relationships and automatically populate the CMDB. Only after significant due diligence do you realize how much such capabilities cost, how much staff it takes to implement and support it, how long it takes to deploy and, for some customers, how little value is achieved.
...
What these tools can’t discover are those important things like applications and business processes. You’ll have to create these logical entities and describe to the tool how to discover their components. That’s a particular challenge when there are hundreds of applications in an organization’s applications portfolio as this work requires a great deal of effort, on the part of application architects, developers and infrastructure support personnel. This can easily amount to hundreds of hours of effort per application!
...
The introduction of these tools doesn’t help with the fundamental questions of determining which CIs, attributes and relationships are important enough to place under the control of change management.
...
Because there isn’t an industry standard, federating CIs, attributes and relationships across multi-vendor environments is exceedingly difficult, especially as vendors are apt to interpret these emerging standards differently while applying their own extensions.

Wonderful skepticism.

My thanks to bushwald for linking to this on SWik, which reminded me of it.

Comments

Suitably skeptical

Nicely balanced article. I've blogged around this area a bit myself (http://www.tideway.com/community/blog/author/15/).

From what I've seen, the Systems Management tool vendors are only just coming into the world that the enterprise architects had to sort out a couple of decades ago: integrating multiple systems is tough, and it's all about the data, its lifecycle, and all of the architecture-related aspects (integrity, accuracy, consistency, currency, completeness, etc.). ITIL v3 seems to make things worse, as it ratifies the view that a CMDB is a physical database that you put all information about technology assets into. It also makes it too easy to assume that the qualities of the data that you need for, say, financial management are comparable to what you need for problem management.

The idea that you can take something like CIM and just implement it for the processes that ITIL's about is madness: you'll need more people managing the data model and the data content than you will managing your computers! You have to simplify.

For BSM, you don't need much of an application model: all you are providing is enough of an application structure to allow the BSM tool to correlate events from a piece of technology (a computer, or something providing a technical service such as a DBMS or an app server) with the rest of the business application. You don't (as I've seen in some models) need to track each of the 64 processors in your computer. But you absolutely do need some application model that represents the current state of the world, otherwise your BSM tool will only hinder your incident management.
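To make that concrete, here is a minimal sketch (Python, with purely hypothetical host and application names) of the kind of lightweight application model described above: just enough structure to resolve an event from a host or technical service back to the business applications it supports, and nothing like per-processor detail.

```python
# Minimal sketch of a lightweight application model for event correlation.
# All names here are hypothetical, not any vendor's actual schema.

from dataclasses import dataclass, field


@dataclass
class Application:
    """A business application: a name plus the technical components it relies on."""
    name: str
    components: set[str] = field(default_factory=set)  # hostnames or service identifiers


def build_index(applications: list[Application]) -> dict[str, list[str]]:
    """Invert the model so an event source can be resolved to the applications it affects."""
    index: dict[str, list[str]] = {}
    for app in applications:
        for component in app.components:
            index.setdefault(component, []).append(app.name)
    return index


def correlate(event_source: str, index: dict[str, list[str]]) -> list[str]:
    """Return the business applications affected by an event from the given host or service."""
    return index.get(event_source, [])


if __name__ == "__main__":
    apps = [
        Application("Settlement", {"db-prod-01", "app-prod-07"}),
        Application("Online Banking", {"app-prod-07", "web-prod-02"}),
    ]
    index = build_index(apps)
    print(correlate("app-prod-07", index))  # ['Settlement', 'Online Banking']
```

The point of keeping the model this thin is that it is cheap to maintain and still enough to turn "an alert on app-prod-07" into "the Settlement and Online Banking services are at risk".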

We (Tideway) have built, with our customers, application mapping technology that does, as Mike suggests, deliver maintainable models cheaply (between 0.2 and 2.0 person-days per application per year), using lower-cost resources for the bulk of the work (application experts need about 0.5 hours of input to that process). We've nearly always found that the application support teams are surprised that the application structure wasn't what they expected. These models are used to take time and cost out of the incident management process. The overall result is a very significant cost saving AND a service improvement.

Funnily enough, customers usually overestimate the number of applications that they have (and underestimate the number of servers): we modelled 74 applications for one client in a month - they'd been expecting 110.

As Mike points out, the key to doing this type of work is keeping the level of information usable and ensuring that your tools allow you to baseline and then track changes in your environment. Many tools just provide you with a whole new set of data each day to reconcile with your existing view of the world. Another issue with the discovery tools is that you need to be able to track your various landscapes (dev/prod/uat/dr), and you don't want to put the same amount of control around them all.
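As a rough illustration of what "baseline and then track changes" can mean in practice, here is a small sketch that assumes discovery output is simply a dictionary of CIs and their attributes; it diffs today's run against a stored baseline rather than handing you a whole new data set to reconcile. The record shapes are assumptions, not any particular tool's format.

```python
# Sketch: diff a day's discovery output against a baseline instead of
# reconciling a complete fresh data set every day.

def diff_discovery(baseline: dict[str, dict], todays_run: dict[str, dict]):
    """Compare CI snapshots keyed by CI identifier; return added, removed and changed CIs."""
    added = {ci: attrs for ci, attrs in todays_run.items() if ci not in baseline}
    removed = {ci: attrs for ci, attrs in baseline.items() if ci not in todays_run}
    changed = {
        ci: (baseline[ci], attrs)
        for ci, attrs in todays_run.items()
        if ci in baseline and baseline[ci] != attrs
    }
    return added, removed, changed


if __name__ == "__main__":
    baseline = {"web-prod-02": {"os": "RHEL 4", "app": "Online Banking"}}
    today = {
        "web-prod-02": {"os": "RHEL 5", "app": "Online Banking"},  # patched OS
        "web-prod-03": {"os": "RHEL 5", "app": "Online Banking"},  # new server
    }
    added, removed, changed = diff_discovery(baseline, today)
    print(added, removed, changed, sep="\n")
```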

The different business value of different parts of the IT estate can create a headache for service delivery: either you gold-plate the service to meet the needs of the most important services, wasting money, or you provide a service for the lowest common denominator and then disappoint the important service owners. I've seen this as a particular challenge for the network management teams who, without the models that we've provided, could not tell whether a given pair of IP addresses were streaming music or were part of the settlement system. I find that the technology focus of most product companies in the systems management space makes it hard for their customers to fix this issue.

Business refactoring

From a service-driven ITIL perspective, our starting point should be the services that the business recognises, and that is the number/complexity that I think most organisations over-estimate. My last "proper" job before returning to consultancy was in one of the world's biggest companies - 14th in the world at one point, though it had dropped to 17th by the time I'd finished with it ;-) - and during the combination of our Y2K and SOX work we came up with a fairly conservative but genuine estimate of 740+ applications. The number of processes we were supporting, however, was much, much smaller than even the business realised. I only half grasped the concept at the time, but in retrospect that high level was also where we should have been doing service differentiation, rather than at the application level.

"But data isn’t always information."

Best quote of the article:
"But data isn’t always information."

IMHO, discovery tools have been so slow to catch on because they capture a tremendous amount of data, but very little actionable information. The output is like one of those massive full-schema printouts that you see DBAs pasting on their cube walls... pretty to look at, but the day-to-day value is just about nil.

The only way to get meaningful data into a CMDB is to have humans do it. Unfortunately, the only way to get humans to do it is to make it something that automatically results as output of their day-to-day work (like their IDE, build tools, system configuration management tools, application deployment automation tools, etc.). Counting on a human to keep a CMDB up-to-date "because they should" is like expecting them to keep documentation up-to-date and accurate... and we all know how well that works!
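To illustrate that idea, here is a minimal sketch of a deployment step that records what it just deployed as a by-product of doing its real job, so nobody has to remember to update the CMDB by hand. The CMDB endpoint, payload shape and credential are entirely hypothetical; a real CMDB will have its own API.

```python
# Sketch: register a deployed component in a CMDB as a by-product of deployment.
# The endpoint, payload shape and token below are hypothetical placeholders.

import json
import urllib.request

CMDB_URL = "https://cmdb.example.com/api/ci"   # hypothetical endpoint
API_TOKEN = "changeme"                          # hypothetical credential


def register_deployment(app: str, version: str, host: str) -> None:
    """Record what was deployed, and where, as part of the deploy step itself."""
    payload = {"application": app, "version": version, "host": host}
    request = urllib.request.Request(
        CMDB_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        response.read()  # a real script would check the status and log failures


if __name__ == "__main__":
    # Called at the end of a deploy, not as a separate chore someone must remember.
    register_deployment("settlement", "2.4.1", "app-prod-07")
```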

-Damon Edwards
http://dev2ops.org
http://controltier.com
