Benchmarking – worse than cheating

Do you remember back to your school days, and the scandalous crime of cheating by copying someone else’s work?

Why was school-boy (& girl) copying seen as such a sin?

  1. The most obvious reason in traditional education is that you are cheating the ‘grading’ system such that people will think you are ‘better’ than you (currently) are;
  2. But, what’s far worse is that you haven’t actually gone through the learning and development process, for yourself…which is what education should be about.

So why am I comparing and contrasting ‘benchmarking’ with school-boy copying? Let’s first look at a definition:

“Benchmarking: Managers compare the performance of their products or processes externally with those of competitors and best-in-class companies and internally with other operations within their own firms that perform similar activities.

The objective of Benchmarking is to find examples of superior performance and to understand the processes and practices driving that performance.

Companies then improve their performance by tailoring and incorporating these best practices into their own operations.” (from the Bain & Co. website – a well-regarded management consulting organisation selling its benchmarking services)

So, essentially Benchmarking is akin to deliberately (and usually openly) finding out who the best kids in the class are and then trying to copy them…with this being seen as a logical and acceptable thing to do. Business is clearly different to Education (right?)

A number of things strike me about this ‘benchmarking’ definition:

  • It assumes that, if I find someone with excellent ‘result metrics’ (in respect of what I chose to look for) then:
    • the metrics I see are true (undistorted) and tell the whole picture (e.g. cope with differing purposes, explain variation,…); and consequently that
    • I should be doing what they are doing…which implies that I can easily, correctly and completely unpick how they arrived at these results;
  • It is about managers looking for answers externally and, essentially, telling the workers which areas will change, and to what degree (commanding and controlling);
  • It is looking at what other organisations are doing rather than what the customer requires (wrong focus)…and likely constrains true innovation;
  • It focuses on component parts of the system, rather than the system as a whole (which will likely destroy value in the system);
  • It incorporates the related, and equally flawed, idea of ‘best practice’ (rather than understanding that, setting aside the above criticisms, there may be better practices but no such thing as perfection).

Sure, we should be aware of what other organisations, including our competitors, are doing for the good of their customers, but attempting to copy them is far too simplistic (see my very first post re. ‘perspective’).

It is interesting to read what Jim Womack (et al at MIT) had to say about benchmarking after they spent many years studying the global car industry.

“…we now feel that benchmarking is a waste of time for managers that understand lean thinking. Benchmarkers who discover their ‘performance’ is superior to their competitors have a natural tendency to relax, whilst [those] discovering that their ‘performance’ is inferior often have a hard time understanding exactly why. They tend to get distracted by easy-to-measure or impossible-to-emulate differences in costs, scale or ‘culture’…

…our earnest advice…is simple: To hell with your competitors; compete against perfection…this is an absolute rather than a relative standard which can provide the essential North Star for any organisation. In its most spectacular application, it has kept the Toyota organisation in the lead for forty years.”

And to compete against perfection, you must first truly understand your own system:

“Comparing your organisation with anything is not the right place to start change. It will lead to unreliable conclusions and inappropriate or irrelevant actions. The right place to start change, if you want to improve, is to understand the ‘what and why’ of your current performance as a system.” (John Seddon)

Each organisation should have its own purpose, which attracts its own set of customers, who have their specific needs (which we need to constantly listen to)…, which then determine the absolute perfection we need to be continually aiming for.

You can see that, if we use benchmark metrics, we usually end up back with the Target/ Incentive game. We can expect distorted results and ‘wrong’ behaviours.

The real point – Experimentation and learning: Now you might respond “okay, so we won’t benchmark on result metrics…but surely we should be benchmarking on the methods being used by others?”

The trouble with this goes back to the 2nd, and most consequential, ‘sin’ of school-boy copying – if you copy another’s method, you won’t learn and you won’t develop.

“We should not spend too much time benchmarking what others – including Toyota – are doing. You yourself are the benchmark:

  • Where are you now?
  • Where do you want to be next?
  • What obstacles are preventing you from getting there?

…the ability of your company to be competitive and survive lies not so much in solutions themselves, but in the capability of the people in your organisation to understand a situation and develop solutions.” (Mike Rother)

When you ‘benchmark’ against another organisation’s methods you see their results and you (perhaps) can adequately describe what you see, but:

  • you don’t understand how they got to where they are currently at, nor where they will be able to get to next;
  • you are not utilising the brains and passion of your workers, to take you where they undeniably can if you provide the environment to allow them to do so.

…and, as a result, you will remain relatively static (and stale) despite what changes in method you copy.

“When you give an employee an answer, you rob them of the opportunity to figure it out themselves and the opportunity to grow and develop.” (John Shook)

Obsessed!

There’s a word that seems to be overly used within many organisations, almost an obsession.

That word is ‘Culture’. Indeed, they seem to have a culture of ‘being obsessed by the word culture’.

We hear the following phrases (or variants of):

  • We are measuring our culture
  • We need to change our culture
  • We have a culture committee
  • We are performing a culture-changing programme of work

So, here’s the thing – an organisation’s culture is a result, an outcome, just like its financial situation. As I wrote in one of my first posts, we shouldn’t be attempting to ‘manage by results’ (as in “let’s change our culture”), we need to manage the causes of the results…and the results will then look after themselves.

The culture of an organisation is the sum of the way people behave. The main cause of the culture is the management system in place. That management system reflects the beliefs and behaviours of the leaders of the organisation.

A reminder of a hugely important quote from John Seddon:

“People’s behaviour is a product of their system. It is only by changing [the system] that we can expect a change in behaviour.”

i.e. we can do all sorts to ‘require’ people to change how they behave (in an attempt to change the culture), but if we continue to apply the command and control management instruments ‘on’ them, such as:

  • management by hierarchical opinion rather than facts at the Gemba;
  • cascaded personal objectives;
  • setting of arbitrary numeric targets;
  • dictating methodologies and tools to use;
  • contingent rewards; and
  • the rating and ranking of people

…then we can’t expect much to really change.

No number of people-‘attitude’ targets, incentives, evidence-gathering exercises and rewards will change the system. Instead, we can expect such a system to produce distorted ‘attitude’ metrics – “I will likely tell you what you want to hear if it benefits me to do so.”

Interestingly, whenever I’ve worked in an organisation with a really good environment, the ‘culture’ (outcome) word was seldom mentioned – it didn’t need to be.

So, whilst we’re considering the ‘Culture’ word, what about the ‘Transformation’ word?

Here’s a definition to ponder:

“Transformation: In an organisational context, a process of profound and radical change that orients an organisation in a new direction and takes it to an entirely different level of effectiveness…transformation implies a basic change of character and little or no resemblance with the past configuration or structure.”

Many organisations use the word ‘transformation’ a lot, and perform major organisational change a lot…but unless that change has succeeded in delivering an entirely different level of effectiveness, then they’ve only really been ‘rearranging the deck chairs’.

Conversely, if an organisation changes its management system (which would be truly transformational!) then culture change is free.

If an organisation truly operates a ‘systems thinking’ management system then it should result in a powerful culture capable of continuously improving, through the people who work there…with no need for endless attempts at ‘transformation’.

DUMB

We are all taught at an early stage in our careers (i.e. ‘Management for dummies’) that we should cascade down S.M.A.R.T objectives. It is an idea so deeply rooted that it has been co-opted as ‘common sense’.

Sounds so good, it must be right, right?

Let’s just remind ourselves what SMART stands for:

  • Specific
  • Measurable
  • Achievable
  • Realistic
  • Time bound

Let’s then also remind ourselves about the definition of a system (taken from my earlier ‘Harmony or cacophony’ post):

“A system is a network of interdependent components that work together to try to accomplish the aim [purpose] of the system.” (W. Edwards Deming)

The cascaded objectives technique (known as Management by Objectives, or M.B.O) is used by ‘Command-and-control’ organisations in the mistaken belief that, if we all achieve our cascaded personal objectives, these will then all roll up to achieve the overall goal (whatever that actually is).

This misunderstands:

  • the over-riding need for all the parts (components) of a system to fit together; and
  • the damage caused by attempting to optimise the components…because this will harm the whole system.

A simple illustrative example (taken from Peter Scholtes’ superb book, ‘The Leader’s Handbook’):

Let’s say that we run a delivery company – our system. Fred, Amy and Dave are our drivers – our people components. If we provide them each with SMART personal objectives cascaded down (and offer performance-based pay), we might assume that they will all be ‘motivated’ to achieve them and therefore, taken together, the purpose of the whole will be achieved. Sounds great – I’ll have some of that!

…but what should we expect?

  • Each driver might compete with the others to get the best, most reliable, largest-capacity truck;
  • Each driver might compete for the easiest delivery assignments;
  • Drivers might engage in ‘creative accounting’: such as trying to get one delivery counted as two; or unloading a delivery somewhere nearby where it can be made after hours so that they can go back to the warehouse to get more jobs;
  • If we have created a competition out of it (say, for a desirable award) then we can expect to see little driver co-operation, more resentment and perhaps even subtle sabotage.

The above shows that the sum of the outcomes will not add up to what we intended for the whole system…and, in fact, will have caused much unmeasured (and likely immeasurable) damage!

This is a good point to bring out Eli Goldratt’s classic quote:

“Tell me how you will measure me and I will tell [show*] you how I will behave.”

* I prefer to use the word ‘show’ since most people won’t tell you! They know their actions aren’t good for the overall system (they aren’t stupid) and so don’t like telling you what daft practices the management system has ended up creating.

A critique of S.M.A.R.T:

“SMART doesn’t tell us how to determine what to measure, and it assumes knowledge – otherwise how do we know what is ‘achievable’ and ‘realistic’? It is only likely to promote the use of arbitrary measures that will sub-optimise the system.” (John Seddon)

If an individual (or ‘team’) is given a truly SMART objective then, by definition, it would have to have been set so that they could achieve it on their own…otherwise it would be unrealistic.

Therefore any interdependencies it has with the rest of the organisational system would have to have been removed…which, clearly, given the definition of a system means one of the following:

  • if all interdependencies had been successfully removed…then meeting the resultant SMART objective will be:
    • a very insignificant (and very possibly meaningless) achievement for the system; and/or
    • sub-optimal to the system (i.e. work against the good of the whole)

OR

  • if (most likely) it was in fact not possible to truly remove the interdependencies…despite what delicate and time-consuming ‘wordsmithing’ was arrived at…then:
    • it will be a lottery (not really under the person’s control) as to whether it can be achieved; and/or
    • it will ‘clash’ with other components (and their supposedly SMART objectives) within the system

So where did the post title ‘D.U.M.B’ come from? Here’s a thought provoking quote from John Seddon:

“We should not allow a plausible acronym to fool us into believing that there is, within it, a reliable method.”

Consider

  • SMART: Specific, Measurable, Achievable, Realistic, Time-bound

With

  • DUMB: Distorting, Undermining, Management-inspired, Blocking improvement

Does the fact that the acronym and its components ‘match’ make it any more worthy?

Cascaded personal objectives will either be ineffective, detrimental to the whole system or a lottery (outside of the person’s control) as to whether they can be achieved.

We need to move away from cascaded personal objectives and, instead:

  • see each horizontal value stream as a system, with customers and a related purpose;
  • provide those working within these systems with visible measures of the capability of the system as against its purpose; and
  • desist from attempting to judge individuals’ ‘performance’ and thereby allow and foster collaboration and a group desire to improve the system as a whole.

The trouble with targets

The front-page article in The Press for Friday 7th November 2014 says “Patients ‘forgotten’ in wait for surgery”.

It goes on to say that research published in the New Zealand Medical Journal suggests that:

“One in three people requiring elective surgery are being turned away from waiting lists to meet Government targets.”

It should be no surprise to any of us that if a numeric target is imposed on a system then the process performers will do what they can to achieve it, even when their actions are detrimental to the actual purpose of the system. The controlling influence of the targets will be even greater if contingent financial implications are involved (carrots or sticks).

If we viewed a league table of (say) hospitals and wait times, what would this tell us? Would it tell us which:

  • has the best current method as judged against the purpose of the system; or
  • is best at managing the system against the numeric targets?

…and what about quality?

This NZ research is not an isolated, or even new, incident. John Seddon has been following, and challenging, the fallout from target-setters for many years, across the whole range of UK public sector services. Many of his findings are comic and scary at the same time.

Any target-setter should not be surprised by the resultant behaviours of process performers and their managers, such as to:

  • Avoid, or pass on, difficult work;
  • Attempt to restrict work in the process, by:
    • making it hard to get into the process; or
    • throwing work back out (‘they didn’t do it correctly’); or
    • inventing new ‘outside the target’ queues earlier in the process;
  • Apply the ‘completed’ stamp as soon as possible, and often before the customer has reached the end from their point of view;
  • Earn easy points, by doing things anyway when not strictly necessary…because it will count towards the target.

The target-setter has created a ‘survival game’ of ‘how can we make the target’ which replaces ‘serve customer’.

So what to do? How about adding on layers of compliance reporting and inspections to police the process, to spot them doing ‘naughty things’ to meet target and punish bad behaviour…that should work, shouldn’t it?

Thus the battle lines are drawn, with the customer suffering in the crossfire.

Of note, The Press article goes on to explain that the Government target of 6 months is soon to be reduced to 5, and then 4…because, obviously, adding more pressure will motivate them to improve?!

What if we replaced numeric targets with capability measures (which measure the capability of the process against the purpose of the system)…and then used these measures to help us improve?
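To make that concrete, here is a minimal sketch of one such capability measure – an XmR (‘individuals’) chart over end-to-end elapsed times. This is purely my illustration (the waiting-time figures are invented, and Python is just a convenient notation – a spreadsheet or pencil works equally well):

```python
# A capability measure: what does the process predictably achieve,
# from the customer's point of view? (Illustrative data only.)
from statistics import mean

# End-to-end days from referral to surgery for consecutive patients
elapsed_days = [112, 95, 143, 160, 101, 98, 170, 155, 120, 133, 149, 107]

centre = mean(elapsed_days)

# Moving ranges between consecutive points, per the XmR chart method
moving_ranges = [abs(b - a) for a, b in zip(elapsed_days, elapsed_days[1:])]
avg_mr = mean(moving_ranges)

# Natural process limits: centre line +/- 2.66 x average moving range
upper_limit = centre + 2.66 * avg_mr
lower_limit = max(0.0, centre - 2.66 * avg_mr)

print(f"The process predictably delivers between {lower_limit:.0f} and "
      f"{upper_limit:.0f} days, centred on {centre:.0f} days.")
```

The limits describe what the system, as currently designed, will deliver; points outside them signal something worth investigating. Nothing here needs, or is distorted by, a target.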

We can laugh (or cry) at the public sector comedy…but let’s not forget what we do with targets in our own organisations.

Stating the obvious!

It is really easy for any leader to say “I want…

  • Continuous Improvement;
  • Removal of waste;
  • Reduction in failure demand*.”

(* explained in my earlier marbles post here)

All are sensible, in fact obvious! But it’s a bit like a financial advisor telling you to ‘buy low and sell high’…what have you actually learned that you didn’t already know, and how does this help?

It’s much harder to understand the system conditions (structures, policies, procedures, measurement, IT), and underlying management thinking (beliefs and behaviours) that protect the status quo, create the waste and cause the failure demand….because you have to change your thinking!

“We cannot solve our problems with the same thinking we used when we created them.” (attributed to Einstein)

If you:

  • set numeric activity targets to make improvements…
  • …and offer rewards for their achievement…
  • …and rate (and rank) people’s performance against them…

…then you haven’t understood (or accepted) what we know about systems, measurement and motivation.

To quote from John Seddon:

“Treating improvement as merely process improvements is folly; if the system conditions that caused the waste are not removed, any improvements will be marginal and unsustainable.”

The original marbles

For those of you who have attended a particular course that I run, I hope you remember the marbles!

For those of you who haven’t (yet) attended, then this post should cover the point nicely.

I try to be mindful of the source of everything that I use (no, really, I’m not making it all up…I am trying to stand on the shoulders of giants) and, with this in mind, I wanted to share the link that the marbles presentation comes from…it is well worth a quick read!

Now, before you go there, it’s worth bearing in mind that the blogger (ThinkPurpose) has a particular ‘mess with your head’ style of writing (which I really like…you’ll see what I mean the more posts you read!).

…so, here it is: https://thinkpurpose.wordpress.com/that-marbles-post/

If you look at each marble that is being listened to, you can see that they map easily onto the same/ similar types of demand we receive in our organisations.

Now, ThinkPurpose is him/herself (?) standing on the shoulders of John Seddon and his original definition of:

“Failure demand is demand caused by a failure to do something or do something right for the customer…which is created by the organisation not working properly…which is under the organisation’s control.”

“…in service organisations, failure demand is often the greatest source of waste.”

Going forward I’d love to hear about people seeing, studying, and talking about their marbles!!!

Finally, it was recently put to me that ‘isn’t failure demand just another way of explaining the waste of re-work?’. My response is ‘no, but there may very well be a relationship between the two’. My explanation to show they are different is as follows:

On the one hand: You might spot an error, perform some re-work to correct it and do this without the customer’s knowledge/ attention…and thus avoid failure demand (the customer contacting you).

On the other: You might receive failure demand (FD) without this requiring re-work of what’s already been done:

FD Archetype 1. ‘where is my claim?’: doesn’t mean that there is necessarily anything wrong with the work that has been done so far…it just might be ‘stuck’. To handle this failure demand requires new yet avoidable work to:

  • handle the customer’s request (e.g. the phone call), look up the claim details, make enquiries, work out what is happening;
  • expedite the claim so as to be seen to be ‘doing something’ for the customer;
  • get back to the customer with well-thought-through and carefully crafted explanations and ‘platitudes’.

FD Archetype 2. ‘why haven’t you done this to my claim?’: doesn’t necessarily mean that previous work has to be re-worked. It requires new yet avoidable work to:

  • handle the customer’s request as per the above; and
  • perform further actions that:
    • should have been done, but weren’t; or worse
    • are now required but wouldn’t have been if it had been done right in the first place.

Either of these examples of failure demand might prompt an element of re-work, but they will always require new work.
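As an aside on method, here is a minimal sketch of what ‘studying your marbles’ might look like: classify each piece of incoming demand as value or failure demand at the point you hear it, and tally the proportion. The contacts and classifications below are invented for illustration – a tally chart on a whiteboard does exactly the same job:

```python
# Tallying value vs failure demand from a log of customer contacts
# (all data invented for illustration)
from collections import Counter

contacts = [
    ("I'd like to make a claim", "value"),
    ("Where is my claim?", "failure"),                  # FD Archetype 1
    ("Why haven't you assessed my claim?", "failure"),  # FD Archetype 2
    ("I'd like to increase my cover", "value"),
    ("You've sent me someone else's letter", "failure"),
]

tally = Counter(kind for _, kind in contacts)
total = sum(tally.values())
print(f"Failure demand: {tally['failure']} of {total} contacts "
      f"({tally['failure'] / total:.0%})")
```

The point is not the code – it is the act of listening to, and classifying, each piece of demand as it arrives.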

IT and Improvement

“Asking a consultant if you should…put in a new computer system is like asking a hungry Great White Shark if the water is warm and you should go for a swim.” (Seddon quoting Craig, D. and Brooks, R.)

We all know about the wonders that can be achieved through technology…we also know the massive pain that we can suffer from trying to jump on/ implement the next ‘big thing’.

Another quote that fits well in this space:

“IT marketing is more hyped than next season’s fashion colours and the MTV awards combined.” (Unsourced)

John Seddon contends that the problem with IT lies in the way we approach it, which goes something like this:

  • We see some potential ‘holy grail’ dangled in front of us that seems to play to our symptoms;
  • We write some specification of what we think we need/ how we might use the ‘shiny new thing’;
  • The IT provider then takes this, re-writes it in their own version (a straitjacket, if you will) and then delivers against this;
  • …which then fails to deliver against our actual reality (which only now do we begin to properly understand…but too late);
  • …so the supplier blames our original specification;
  • …and succeeds in selling more ‘implementation consultancy’ to ostensibly ‘put matters right’ or, at the risk of being cynical, ties us further into the abyss of their technology.

Seddon proposes that our approach should be to “understand and improve – then ask if IT can further improve.”

  • Understand: Ignore IT. Do not even assume the problem, or solution, has anything to do with IT. Instead, work first to understand the ‘what’ and ‘why’ of current performance as a system…which means learning about demand, capability, flow, waste…and the underlying causes of waste;
  • Improve: Improve performance without using IT to do so. If you currently use IT, either leave it in place or work without it. Now, improve doesn’t just mean the process…it very often means the management system surrounding it;
  • Ask ‘can IT further improve this system?’: It is only now that you can address the benefits that potential IT counter-measures can bring because you are asking from a true position of knowledge about the work. This is IT being ‘pulled’ into the work rather than dictating the method (“the way the work works”).

And, throughout all of the above, we should be measuring the capability of the system against its purpose (from the customers’ point of view) and can then consider whether each change in method (including the use of IT) has in fact been an improvement.

Now, an obvious chicken-and-egg question arises here: ‘…but don’t I first need IT to measure capability?’. A couple of thoughts in reply:

  • You don’t need IT to capture the demand trigger point and its satisfaction point (a sketch of such a capture follows this list)…though IT is likely to make it much easier – the same ‘understand, improve and then ask if IT can further improve’ applies to IT reporting. Before touching IT for reporting, you need to understand what you should be measuring. Most IT implementations I have seen deliver a suite of out-of-the-box reports that do not measure capability;
  • Even if your IT ‘solution’ delivers you such measures, you need to understand whether they are being distorted by the process performers due to the effects of the management system on their behaviours…perhaps this needs focus first?
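To illustrate the first bullet above: capturing capability needs nothing more than two timestamps per unit of demand – when it was triggered, and when it was satisfied from the customer’s point of view. A sketch (my illustration, with invented claims – a paper log or spreadsheet serves just as well):

```python
# Capturing demand trigger and satisfaction points without an IT 'solution'
# (claim names and dates are invented for illustration)
from datetime import date

# One row per unit of demand: (description, trigger point, satisfaction point)
demand_log = [
    ("settle claim 1042", date(2014, 10, 1), date(2014, 10, 20)),
    ("settle claim 1043", date(2014, 10, 2), date(2014, 11, 5)),
    ("settle claim 1044", date(2014, 10, 6), date(2014, 10, 9)),
]

for what, triggered, satisfied in demand_log:
    print(f"{what}: {(satisfied - triggered).days} days end to end")
```

These elapsed times are exactly the raw material for the capability chart sketched in ‘The trouble with targets’ above.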

‘Management by results’…how does that work?!

The CEO of the company I work for recently shared a tweet with us, summing up an insight from an engagement she attended. It read as follows:

“Nick Farr-Jones dinner guest at conference key message focus on process not the scoreboard and you will get result.”

I like this tweet – the message, whilst short, is incredibly important for anyone wanting to make improvements. I thought it useful to dig into it a bit.

Dr W. Edwards Deming was very clear on this point! He set out the practice of ‘Management by results’ (e.g. focusing on a scoreboard of outcomes) as one of the diseases of a ‘modern organisation’ and, instead, proposed that we should spend our time and focus on understanding and improving the processes that produce the results, i.e. the results are the outcome of something, and you can look at the outcome till you are blue in the face…but this won’t make it change!

“The outcome of management by results is more trouble, not less….Certainly we need good results, but management by results is not the way to get good results….It is important to work on the causes of the results – i.e. on the system. Costs are not causes: costs come from causes.” (Deming, The New Economics)

Professor John Seddon (think ‘Deming on steroids’ for Service organisations 🙂 ) takes this message on further. He notes that the measures used in most organisations are ‘lagging’ or ‘rear-view’ measures – they tell you what you did.

Seddon has a very clear view on measurement but at this time I want to simply put forward his thinking regarding the difference between operational and financial measures. He says that we should use:

  • Operational measures (such as demand for a service, and a process’s capability to deliver against its purpose) to manage; and
  • Financial measures (revenues, costs) to keep the score.

We know that one affects the other but we can never know exactly how and it is waste to divert time and effort to try to do so.

Bringing this back to Nick Farr-Jones and rugby: a rugby coach uses process measures to manage (e.g. passes completed, line-outs won, tackles made…), and the result – quite literally – as the score!

So Nick Farr-Jones, Deming and Seddon quite clearly agree with each other. If you work on the capability of a system/ value stream/ process to deliver against its purpose (as from the customer’s point of view) then the results will come.

Finally, you may be thinking ‘ah yes, this is where the balanced scorecard comes in’…there’s a post in there! Watch this space.