Doubting that automation really reduces errors

Here's a personal reflection unsubstantiated by any research: I really doubt that automation reduces errors. And yet that is often the reason touted for automating processes. In our IT management world, we often hear automation pushed as the panacea for preventing outages, security lapses, and so on. I don't buy it.

I'm all for automation, if and only if there is a business case for it. There are good reasons to automate: speed, scale, and of course cost reduction.

But I doubt automation reduces errors. I think it reduces their frequency but greatly amplifies their magnitude, with a nett result that not much changes. Humans make small, frequent errors, many of which they or their peers catch - manual systems are self-correcting systems. Automated systems are not - in general - self-correcting. If they mess up they will continue to mess up until someone notices, by which time the mess can be considerable. And when the automated system itself fails - as they all inevitably will - the impact is major, precisely because we automated for speed and/or scale.

Every error is human.

(Once every petagazillion machine instructions, one hiccups due to a cosmic ray flipping a bit. I'm neglecting those.)

Automation does not reduce the nett effect of human error. It results in fewer bigger errors. You could say automation dams up error. If the dam breaks...

It is process improvement, not automation, that reduces the nett impact of errors.


Hi Skep, My first comment on

Hi Skep,
My first comment on your blog.
Thanks for sharing your ideas with the community.

Automation reducing errors - in what sense?

Frequency... you're right: reducing human actions reduces the potential for 'human deviation' - performing an unrequired or inappropriate action in the process.

Impact... surely not!
My understanding is that automation means configuring a 'system' to handle a (repeating) required task in a specified context. If something happens in a context not identified and addressed during the system's design, the system fails by applying inappropriate actions that can lead to a Bigger Impact.

To reduce this, we need to implement appropriate controls that identify these 'context deviations' and, if necessary, stop the process.
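A minimal sketch of that idea, in Python, with entirely illustrative names (`EXPECTED_CONTEXT`, `provision_account` are hypothetical, not from any real tool): a guard checks the observed context against what the automation was designed for, and halts the process rather than act outside it.

```python
# Hypothetical guard: halt the automated step when the observed context
# deviates from the context the automation was designed and tested for.

EXPECTED_CONTEXT = {"os": "linux", "directory": "ldap"}  # assumed design context

def check_context(observed: dict) -> None:
    """Raise rather than act when the context deviates from the design."""
    deviations = {k: v for k, v in observed.items()
                  if EXPECTED_CONTEXT.get(k) != v}
    if deviations:
        raise RuntimeError(f"Context deviation, halting: {deviations}")

def provision_account(user: str, observed_context: dict) -> str:
    check_context(observed_context)  # the control runs before every action
    return f"provisioned {user}"     # placeholder for the real work
```

The point is that the control is explicit and runs before every action, so an unanticipated context stops the process instead of amplifying the error.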

We could also consider that a 'bad system design' is still due to humans. D'oh! (Homer Simpson style)
To reduce/prevent errors, please eliminate humans (see "2001: A Space Odyssey", 1968).


It depends upon the procedure

One thing's for sure - if technical staff (even, or especially, them) are given a checklist to follow for a regularly performed operation, they will miss or skip bits, based on what they think the instructions say, what they remember from doing it previously, and so on.

An automated procedure, on the other hand, can be tested just as thoroughly as the software it manages or configures.
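A minimal sketch of the contrast, in Python, with made-up step names (`backup_config` etc. are illustrative, not a real procedure): once a checklist is encoded as code, every step runs in order every time, and a test can prove it - whereas a human following a printed checklist cannot be regression-tested.

```python
# Hypothetical checklist encoded as code: steps cannot be skipped or
# reordered, and the whole procedure can be exercised by a test.

def backup_config(log):    log.append("backup")
def apply_patch(log):      log.append("patch")
def restart_service(log):  log.append("restart")

CHECKLIST = [backup_config, apply_patch, restart_service]

def run_checklist():
    done = []
    for step in CHECKLIST:  # executes every step, in order, every time
        step(done)
    return done
```

A single assertion such as `run_checklist() == ["backup", "patch", "restart"]` pins the procedure down in a way no human checklist can be pinned down.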

Cost of testing

You raise the issue of testing. Testing software - whether it be automation or anything else - costs far more than coding it, and (often? usually? always?) more than planning and design. This is frequently overlooked by IT Operations staff automating IT systems, who think that banging another rule or script into Tivoli, Patrol or Unicenter is a few hours' work. Likewise Service Desk tool admins adding workflow.

If the business is automating a core process, the system is usually properly designed, built... and tested. If security staff want to automate access provisioning, someone bangs up a "self-documenting" Perl script, gives it a quick unit test on the bench, then releases it into the wild.
