Toilet Humour

So, I’m in the process of moving office and I’m clearing out paperwork around my desk. I came across something which made me reflect, and have a giggle…and I thought I’d share:

We moved into our current building about four years ago – approximately 100 people, with only one urinal in the men’s toilet.

It all started with an email which read something like this…

“Would the men please stand over the urinal before doing their business.”

The motivation for this email? Well, let’s just say that there was a fair bit of ‘dribbling’ going on, creating ‘puddling’ on the floor…and (unsurprisingly) some of the male toilet goers weren’t particularly enamoured with their colleagues’ failure to aim…and neither was the cleaner!

So that email should have sorted it all out, yes?

No.

The next email was more direct, dropping any attempt at politeness.

Then, a hand-written sign was put above the urinal. The author’s aim was clearly to insult the culprit(s) – the phantom dribblers.

Finally, the cleaner refused to mop the male toilets.

Action was required…and this came in the form of a mop and bucket of chemical solution, bought for us men, and installed next to the urinal.

Another ‘direct and to the point’ email was sent around informing us of the ‘mop and bucket’ purchase, and what to do with it!

Things quietened down for a while. I definitely detected that a little mopping was happening. A change, but nowhere near perfection.

We then got an email about the mop. Apparently, mops left in chemical solution ‘all day, every day’ quickly dissolve – we were going through a mop head every week! We were now instructed to take the mop out of the bucket once we had cleaned up.

I observed (through my natural toilet visits each day) that the mop was being balanced in all sorts of weird positions – often not making it out of, or falling back into, the bucket (splashing chemicals over the floor).

Taking a different approach

It was at this point that I saw an opportunity to experiment.

Emails operate at a point in time, far from the gemba – the urinal in this case! And those emails were clearly proving to be ineffective.

Question: When does someone most need instructions, and/or prompts?

Answer (Hypothesis): At the time and place that the ‘action’ occurs (!), in a format that they can understand and (importantly) accept1 and relate to.

Proposed Countermeasure: Create a clear (and non-judgemental) poster and put it up on the wall, by the urinal.

…and so I did:

[Poster: ‘Musings of a mop’]

…and every time I went to the toilet I observed the condition of the urinal2.

So how did that go?

Miraculous! The toilet floor was virtually always mopped clean (it was still shiny from the last fella passing through)…and the mop was always balanced nicely on the bucket.

Hilariously, I found myself regularly reading my own poster and checking the floor and mop3 – I was altering my own behaviours.

The floor stayed nice and clean for many months and I became bored of reading the poster (my words were annoying me)…and so I wanted to perform another experiment – what would happen if I took the poster down? And so I did.

Did the wheels fall off?

Well, no, not really.

I’d say that the floor doesn’t stay quite as clean as it used to…but, in general, people will mop up, and best of all for Mopsy, he remains nicely balanced on the side of the bucket, away from those harsh chemicals.

Learnings

In summary:

  • clear, practical and non-judgemental visual controls at the gemba really work;
  • emails telling you off, and/or telling you what to do, don’t!
  • new behaviours can become habitual (hence why the poster could be taken down4).

I know that there are loads and loads of far more meaningful examples of the enormous power of visual management…but I thought that I’d introduce a little bit of ‘toilet humour’ into our office move process…and now my fellow male office dwellers5 know who wrote the poster 🙂 .

A serious point to end

My editor for this post (thanks Paul) made the most excellent comment:

“It’s interesting that when something is not good then we behave with ‘blaming’…

Asking ‘why’ over ‘who’ is better. [Whether we do this] must be something to do with the environment!”

Paul is calling out that:

  • we so easily jump to blame, looking for the WHO…and think that by naming and shaming, we will force things to improve;
  • we can achieve so much if we look for the WHY, and then do something meaningful about this.

The emails and rude notes were aimed at people.

The mop and bucket, and visual control, were focused on the activity.

Footnotes

1. Acceptance: All those emails (and rude signs) were deficient because, just as with driving, no-one believes that they are a bad driver. Unless you are presented with direct evidence of your deficient behaviour, it’s always somebody else!

2. Analysis: Don’t worry, I didn’t set up a formal measuring regime and/or start making deliberate trips to the toilet. For the benefit of any hard-core researchers reading this post – my observations can only be described as anecdotal.

3. I’m not saying that I was ‘the phantom dribbler’! I just felt compelled to share the burden and have a little mop up 🙂

4. Taking down the poster: I’m not suggesting that visual controls should be taken down after a period of time. I am suggesting though, that they should be regularly revisited to be refreshed – to keep people’s attention, and improved – to make things even better.

5. …and the females around the office are either disgusted (not having known that some of the males were ‘dribblers’) or pleasantly surprised (that males can change their habits!)

’80 in 20’…erm, can we change that?!

This is a bit of a ‘back to basics’ post, inspired by refreshing my memory from reading a superb book. It’s long…but hopefully interesting 🙂

Some years back I was working with a most excellent colleague, who managed a busy contact centre operation. Let’s call her Bob. She was absolutely committed to doing the best she could, for her staff and her customers.

Bob came to me one day for some help: Things weren’t going well, she had a meeting with senior management coming up and she was going to ask them to approve a radical thing – to change, by which I mean relax, their current call handling target.

I didn’t know too much about contact centres back then…so I started by asking some dumb questions. And it went something like this:

Me: “What’s this ‘80 in 20’ measure about?”

Bob: “It’s our main ‘Key Performance Indicator’ (KPI), called ‘Grade of Service’ (or GOS for short) and it means that we aim to pick up 80% of all incoming calls within 20 seconds of the customer calling.”

Me: “Oh…and where do these figures come from?”

Bob: “It’s an industry recognised KPI. All ‘up to date’ contact centres use it to measure how they are doing and ‘80 in 20’ is Best Practice.”

Me: “…what ‘industry body’ and where did they get these figures?”

Bob: “The [insert name of a] ‘Contact Centre Association’…and I’ve got no idea where the figures come from.”

Me: “So, we have a target of picking up a customer’s call within an arbitrary 20 seconds…and we have an arbitrary target on meeting this target 80% of the time? …so it’s a target on a target?”

Bob: “Yes…I suppose it is…but we are having a real tough time at the moment and we hardly ever achieve it.”

Me: “Okay…but why do you want to ask senior management to ‘relax’ this target-on-a-target? What will this achieve?”

Bob: “Because we publish our GOS results against target for all our contact centre team leaders to see…and frankly there’s not much they can do about it…and this is really demoralising. If I could just get senior management to relax it to, say, 70% in 30 seconds then my staff could see that they at least achieve it sometimes.”

…and that’s how my discussion with Bob started.


I have just finished reading Donald Wheeler’s superb book ‘Understanding Variation – the key to managing chaos’ and my work with Bob1 all those years ago came flooding back to me…and so I thought I’d revisit it, and jot down the key points within. Here goes…

Confusing ‘Voice of the Customer’ and ‘Voice of the Process’

I’ll start by clarifying the difference between the customer and the process. In the words of Donald Wheeler:

“The ‘voice of the customer’ defines what you want from a system.

The ‘voice of the process’ defines what you will get from a system.”

The difference in words is subtle, but in meaning is profound.

In Bob’s case, she has determined that customers want the phone to be picked up within 20 seconds2. However, this wishful thinking (a target) is completely outside the system. Bob could set the customer specification (target) at anything, but this has got nothing to do with what the process can, and will predictably3, achieve.

What we really want to see is what the system (‘handling4 customer calls’) is achieving over time.

A target is binary (on/off) – either ‘a pat on the back’ or ‘not good enough!’

“A natural consequence of this specification [target] approach…is the suddenness with which you can change from a state of bliss to a state of torment. As long as you are ‘doing okay’ there is no reason to worry, so sit back, relax, and let things take care of themselves. However, when you are in trouble, ‘don’t just stand there – do something!’ …This ‘on-again, off again’ approach is completely antithetical to continual improvement.” (Wheeler)

Unfortunately, Bob is constantly the wrong side of the (current) specification and therefore has the unwavering torment of ‘don’t just stand there – do something!’

But do what? And how would Bob know if whatever they try is actually an improvement or not? Using a target is such a blunt (and inappropriate) tool. Future results:

  • might ‘beat target’ (gaining a ‘pat on the back’) and yet simply be noise5; or
  • might still be lower than target (receiving another ‘kick’) and yet contain an important signal.

Bob cannot see the true effects of any experimentation on her system whilst relying on her current industry best practice ‘Grade of Service’ KPI. She does not have a method to separate out potential signals from probable noise.
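
To make that separation concrete, here’s a minimal sketch (in Python, with invented monthly figures rather than Bob’s actual data) of Wheeler’s XmR (‘I-MR’) calculation – the limits come from the data itself, not from anyone’s wishes:

```python
# XmR ('I-MR') natural process limits, per Wheeler - a sketch using
# hypothetical monthly 'Grade of Service' percentages, not real data.
monthly_gos = [72.1, 74.8, 69.5, 73.2, 71.0, 75.6,
               70.3, 68.9, 73.9, 72.5, 74.1, 70.8]

mean = sum(monthly_gos) / len(monthly_gos)

# Average moving range: the absolute month-to-month differences
moving_ranges = [abs(b - a) for a, b in zip(monthly_gos, monthly_gos[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Wheeler's XmR constant 2.66 converts mR-bar into natural process limits
ucl = mean + 2.66 * mr_bar
lcl = mean - 2.66 * mr_bar
print(f"Voice of the process: {lcl:.1f}% to {ucl:.1f}% (mean {mean:.1f}%)")

# Points outside the limits are signals worth investigating; points
# inside are probably just noise - regardless of where any target sits.
signals = [(month, x) for month, x in enumerate(monthly_gos, 1)
           if not lcl <= x <= ucl]
print("Signals:", signals or "none - the variation is routine")
```

Anything Bob then tries can be judged against these calculated limits, rather than against an arbitrary 80% line.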

Thinking that a target can change things for the better

“When people are pressured to meet a target value, there are three ways they can proceed:

  1. They can work to improve the system;
  2. They can distort the system; or
  3. They can distort the data.”

(Wheeler, referencing Brian Joiner)

What can a call agent do to ‘hit’ that target? Well, not much really. They can’t influence the number of calls coming in or what those customers want or need. They CAN, however, try to ‘get off the phone’ so as to get to the next call. Mmm, that’s not going to help the (customer-defined) purpose…and is likely to create failure demand, complaints and re-work…and make things worse.

What can the contact centre management (from team leaders and upwards to Bob) do to ‘hit’ that target? They could try to improve the system* (which, whilst being the right thing to do, is also the hardest) OR they could simply ask for the target to be relaxed. If they aren’t allowed to do either, then they might begin to ‘play games’ with the data…and hide what is actually happening.

* To improve the system, Bob needs contextual data presented such that it uncovers what is happening in the system…which will enable her to listen to the process, see signals, ask relevant questions, understand root cause, experiment and improve. She, and her team, cannot do this at present using her hugely limiting KPI.

In short, the target is doing no good…and probably some (and perhaps a lot of) harm.

It’s perhaps worth reflecting that “Bad measures = bad behaviours = bad service” (Vanguard)

What’s dafter than a target? A target on a target!

Why? Well, because it removes us from the contextual data, stripping out the necessary understanding of variation within, and thus further hiding the ‘voice of the process’.

It’s worth noting that, in Bob’s ‘20 seconds to answer’ target world:

  • A call answered in 3 seconds is worth the same as one answered in 19 seconds; and, worse
  • A call answered in 21 seconds is treated the same as one answered in, say, 480 seconds…and beyond…perhaps even an hour! (a worked example follows below)
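
A tiny worked example (hypothetical numbers throughout) shows just how much this flattening hides – two contact centres with wildly different caller experiences can post an identical ‘80 in 20’ score:

```python
# Two hypothetical sets of answer times (seconds) - invented to
# illustrate the point, not drawn from any real contact centre.
centre_a = [3, 5, 7, 9, 12, 14, 16, 18, 25, 30]
centre_b = [1, 2, 2, 3, 4, 5, 6, 7, 480, 3600]  # one caller waited an hour!

def gos(waits, threshold=20):
    """% of calls answered within `threshold` seconds."""
    return 100 * sum(w <= threshold for w in waits) / len(waits)

print(gos(centre_a), gos(centre_b))  # 80.0 80.0 - identical 'scores'
```

Both ‘achieve’ 80 in 20, yet the voice of each process could hardly be more different.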

Note: I’ve added an addendum at the end of this post with a specific ‘target on a target’ example (hospital wait times). I hope that it is of use to demonstrate that using a ‘target on a target’ is to hide the important data underneath it.

“Setting goals [targets] on meeting goals is an act of desperation.” (Wheeler)

Worse still, a ‘target on a target’ can fool us into thinking that we are looking at something useful. After all, I can still graph it…so it must be good…mustn’t it?

Here’s a control chart of Bob’s ‘Grade of Service’ (GOS) KPI:

[Chart: I-MR control chart of Bob’s ‘Grade of Service’ (GOS) KPI]

You might look at it and think “Wow, that looks professional with all that I-MR control charty stuff! I thought you said that we’d be foolish to use this ’80 in 20’ target on a target?”

You can see that Bob’s contact centre never met the ’80 in 20’ target-on-a-target6 (and, with the current system, isn’t likely to)…and you can perhaps see why she wants to ‘relax’ it to ’70 in 30’…but we can’t see what really happens.

What’s the variation in wait times? (times of day, days of week etc.)

Do some people get answered in 5 seconds? Is it common for some people to wait for 200 seconds? (basically, what’s actually going on?!)

Is the variation predictable? Are there any patterns within?

Are those months really so comparable?…are any games being played?!

Okay, so I’ve shot at what Bob has before her…but what advice can I offer to help?

Does Bob need to change her ’80 in 20’ KPI? Yes, she does…but not by relaxing the target.

‘The right data, measured right’ (‘what’, and ‘how’)

At its very simplest, Bob’s measures need to help her (and her people) understand and improve the system.

To do this, they need to see:

WHAT matters to the customer? …which could be uncovered by the following (a rough coded sketch follows this list):

  • “Don’t make me queue” → volume of calls, time taken to answer, abandonment rate.
  • “I want you to deal with me at my first point of contact” → % of calls resolved at first point of contact (i.e. didn’t need to be passed on).
  • “Don’t put me on hold unnecessarily” → % of calls put on hold (including reason types and frequencies).
  • “I want to deal with the right person (i.e. with the necessary knowledge, expertise, and authority)” → % of calls passed on (including reason types and frequencies).
  • “I want you to action what you have promised, when I need it…and to do so first time.” → failure demand, either chasing up or complaining (including reason types and frequencies).
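
If it helps to see the mechanics, here’s a rough sketch of how such a set of measures might be derived from call records. The field names are invented for illustration – they aren’t taken from Bob’s system:

```python
# Hypothetical call records - one dict per incoming call. All field
# names here are assumptions made up for this sketch.
calls = [
    {"answered": True, "wait_s": 12, "resolved_first_contact": True,
     "held": False, "transferred": False, "failure_demand": False},
    {"answered": False, "wait_s": 95, "resolved_first_contact": False,
     "held": False, "transferred": False, "failure_demand": False},
    # ...one record per call
]

def pct(selector, records):
    """% of records for which `selector` is true."""
    return 100 * sum(selector(r) for r in records) / len(records)

answered = [c for c in calls if c["answered"]]
measures = {
    "abandonment rate %": pct(lambda c: not c["answered"], calls),
    "first-contact resolution %": pct(lambda c: c["resolved_first_contact"], answered),
    "calls put on hold %": pct(lambda c: c["held"], answered),
    "calls passed on %": pct(lambda c: c["transferred"], answered),
    "failure demand %": pct(lambda c: c["failure_demand"], calls),
}
for name, value in measures.items():
    print(f"{name}: {value:.0f}")
```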

Now, Bob (and most contact centres) might reply “We already measure some of that stuff!”

Yes, I expect you do.

What also matters is HOW you measure it. Measures should be (a sketch of such a display follows this list):

  • shown over time, in chronological order (i.e. in control charts, to show variation), with control limits (to separate out signals from noise);

  • updated regularly (i.e. at meaningful intervals) and shown visually (on the floor, at the gemba), providing feedback to those working in the system;

  • presented/ displayed together, as a set of measures, to show the system and its interactions, rather than a ‘Grade of Service’ KPI on a dashboard;

  • monitored and analysed to identify signals, and consider the effect of each experimental change towards the customer purpose;

  • devoid of a target! The right measures, measured right, will do just fine.
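
As a rough illustration of those points (invented weekly figures, and assuming matplotlib is available), here’s what a set of measures, shown together over time with calculated limits and no target lines, might look like:

```python
# A sketch: three of the measures above, each as its own XmR-style
# chart, displayed together as a set. The weekly values are invented.
import matplotlib.pyplot as plt

history = {
    "abandonment rate %": [8, 9, 7, 11, 8, 10, 9, 8],
    "first-contact resolution %": [61, 63, 60, 58, 64, 62, 61, 63],
    "failure demand %": [14, 13, 15, 16, 12, 14, 15, 13],
}

fig, axes = plt.subplots(len(history), 1, sharex=True, figsize=(8, 6))
for ax, (name, values) in zip(axes, history.items()):
    mean = sum(values) / len(values)
    mr_bar = (sum(abs(b - a) for a, b in zip(values, values[1:]))
              / (len(values) - 1))
    ax.plot(range(1, len(values) + 1), values, marker="o")
    ax.axhline(mean, linestyle="--")               # centre line
    ax.axhline(mean + 2.66 * mr_bar, color="red")  # upper natural limit
    ax.axhline(mean - 2.66 * mr_bar, color="red")  # lower natural limit
    ax.set_ylabel(name, fontsize=8)
axes[-1].set_xlabel("week")
plt.tight_layout()
plt.show()  # no target lines anywhere - just the voice of each process
```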

Why are control charts so important? Wheeler writes that:

“Instead of attempting to attach a meaning to each and every specific value of the time series, the process behaviour [i.e. control] chart concentrates on the behaviour of the underlying process.”

Why do we need to see a set of measures together? Simon Guilfoyle uses the excellent analogy of an aeroplane cockpit – you need to see the full set of relevant system measures to understand what is happening (speed, altitude, direction, fuel level…). There isn’t ‘One metric that matters’ and it is madness to attempt to find one.

Looking at Bob’s proposed set of capability measures (the list above), you can probably imagine why you’d want to see them all together, so as to spot any unintended consequences of the changes you are experimenting with.

That is: if one measure appears to be improving, is another apparently worsening? Remember – it’s a system with components!

To summarise:

If I am responsible for a process (a system) then I want to:

  • see the actual voice of the process;
  • get behind (and then drop) any numerical target;
  • split the noise from any signals within;
  • understand if the system is ‘in control’ (i.e. stable, predictable) or not; and
  • spot, and investigate, any special causes7;

and, perhaps more important, I want to:

  • understand what is causing the demand coming into the system (rather than simply treating all demand as work to be done);
  • involve all of the people in their process, through the use of visual management (done in the right way); and then
  • experiment towards improving it…safe in the knowledge that our measures will tell us whether we should adopt, adapt or abandon each proposed change.

Bob and I continued to have some great conversations 🙂


I said that I would add an addendum on the subject of ‘a target on a target’…and here it is:

Addendum: An example to illustrate the point

I’ll borrow two diagrams8 from a really interesting piece of analysis on NHS hospitals (i.e. in the UK) and their Accident and Emergency (A&E) wait times.

The first chart is of Alder Hey Children’s hospital. It shows a nice curve of the time it takes for patients to be discharged:

[Chart: Alder Hey Children’s Hospital – time taken to discharge A&E patients]

The second chart is of Croydon University Hospital. Same type of chart, but their data tells a vastly different story!

[Chart: Croydon University Hospital – time taken to discharge A&E patients]

Q1: Do you think that an activity target has been set on the A&E system and, if so, where do you think it has been set?

I’d bet (heavily) that there is an A&E ‘time to discharge’ target, set from management above, of 4 hours (i.e. 240 minutes). It’s sort of evident from the first graph…but ‘smacks you between the eyes’ in the second.

Two further questions for you to ponder9: Looking at the charts for these two hospitals…


Q2: Which one has a smooth, relatively under control A&E system, and which do you think might be engaged in ‘playing (survival) games’ to meet the target?

I’d say that Alder Hey is doing rather well, whilst Croydon is (likely) engaged in all sorts of tricks to ship patients somewhere (anywhere!) ‘before the 4 hour buzzer’ – with a likely knock-on effect to patient experiences and outcomes;


Q3: Which one looks better on a ‘% of patients that met the 4 hour target’ league table? (i.e. a target on a target)

It is typical for health services to set an A&E ‘target on a target’ of, say, ‘95% discharged from A&E within 4 hours’10. This is just like Bob’s ‘80% in 20 seconds’.

Sadly, Croydon will sit higher up this league table (i.e. appear better) than Alder Hey!

If you don’t understand why, have a closer look at the two charts. Look specifically at the volume of patients being discharged after the 240 min. mark. Alder Hey has some, but Croydon has virtually none.

Footnotes

1. Just in case you hadn’t worked it out, she (or he) wasn’t called Bob!

2. Customer Target: Setting aside that the customer target shouldn’t (and indeed can’t) be used to improve the ‘handling calls’ system, I have two problems with the 20 second ‘customer specification’.

a. An industry figure vs. reality: rather than assuming that a generic industry figure of 20 seconds is what Bob’s customers want, I asked Bob to provide me with her call abandonment data.

I then graphed a histogram of the time (in seconds) that each customer abandoned their call and the corresponding volume of such calls. This provided us with evidence as to what exactly was happening within Bob’s system…which leads me on to:

b. An average customer vs. variety: There’s no such thing as ‘an average customer’ and we should resist thinking in this way. Some people were abandoning after a couple of seconds, others did so after waiting for two minutes. We can see that there is plenty of customer variety within – we should be thinking about how we can absorb that variety rather than meet some non-existent average.
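
For what it’s worth, the histogram mentioned above needs nothing fancier than this sketch (invented abandonment times, and assuming matplotlib is available):

```python
# Histogram of how long callers waited before hanging up - the spread,
# not an average, is what reveals the customer variety. Data invented.
import matplotlib.pyplot as plt

abandon_after_s = [2, 3, 3, 5, 8, 15, 20, 25, 40, 45, 60, 90, 120, 120]
plt.hist(abandon_after_s, bins=range(0, 150, 10))
plt.xlabel("seconds waited before abandoning")
plt.ylabel("number of abandoned calls")
plt.show()
```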

3. Predictably, assuming that it is stable and there is no change made to the process.

4. Handling: I specifically wrote ‘handling’ and not ‘answering’. Customers don’t just want their call answered – they want their need to be met. To properly understand a system we must first set out its purpose from the customer’s perspective, and then use an appropriate set of measures that reveal the capability of the system against this customer purpose. ‘Answering calls’ may be necessary, but it’s not sufficient.

5. Noise vs. Signal: I’m assuming in this post that you understand the difference between noise and signals. If you don’t (or would like a refresh) then an earlier (foundational) post on variation might assist: The Spice of Life

6. A clarification in respect of the example ‘I’ control chart: The Upper Control Limit (UCL) red line (at 80.55%) does not represent the 80% target. It just happens that the calculated UCL for Bob’s data works out to be nearly the same as the arbitrary target – this is an (unfortunate) fluke. A target line does not belong on a control chart!

7. Special Cause tests: The most obvious signal on a control chart can be seen when a point appears outside the upper or lower control limits. There are, however, other types of signals indicating that something special has occurred. These include ‘trends’, ‘shifts’, and ‘hugging’. Here’s a useful diagram (sourced from here):

[Diagram: special cause signal types – trends, shifts, hugging]
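
For the curious, here’s a minimal sketch of how two of these tests (‘shifts’ and ‘trends’) can be checked in code. The run lengths are assumed conventions – different authors use slightly different thresholds:

```python
# Two common 'special cause' tests. Run lengths (8 and 6) are assumed
# conventions; they vary between rule sets.
def shifts(values, mean, run=8):
    """Indices ending a run of `run` consecutive points on one side of the mean."""
    return [i for i in range(run - 1, len(values))
            if all(v > mean for v in values[i - run + 1:i + 1])
            or all(v < mean for v in values[i - run + 1:i + 1])]

def trends(values, run=6):
    """Indices ending `run` consecutive points all rising (or all falling)."""
    found = []
    for i in range(run - 1, len(values)):
        window = values[i - run + 1:i + 1]
        diffs = [b - a for a, b in zip(window, window[1:])]
        if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
            found.append(i)
    return found

data = [71, 72, 70, 73, 74, 75, 76, 77, 78, 79]
print(trends(data))  # the steady climb from the third point is a signal
```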

8. Hospital charts: The full set of charts (covering 144 NHS hospitals for the period 2012-13) is here. I’ve obviously chosen hospitals at both extremes to best illustrate the point.

I can’t remember where I first came across these hospital charts – which annoys me!…so if it was via a post on your blog – I’m sorry for my crap referencing/ recognition of your efforts 🙂 

9. Here’s a 4th and final question to ponder: If, after pondering those two questions, you still think that a ‘target on a target’ makes sense then how do you cope with someone not always meeting it? Do you set them a target…to motivate them?

How about a target for the ‘target on a target’???

  • A 95% target of achieving an ‘80% of calls answered in 20 seconds’ target
  • A 90% target of achieving a ‘95% of patients discharged within 4 hours’ target
  • ….

…and, if you are okay with this…but they don’t always meet it, then how about setting them a target…where does the madness end?!

We are simply ‘playing with numbers’, moving ever further from reality and usefulness.

10. Hospital ‘Emergency department’ League tables:

Here’s a New Zealand ‘Emergency departments’ league table, ranking district health boards against each other (Source).

Notice that it shows:

  • A ‘target on a target’ (95% within 6 hrs)
  • A single quarter’s outcome
  • A binary comparison ‘with last quarter’
  • A (competitive) ranking

All of which are, ahem, ‘problematic’ (that’s me being polite 🙂 ).

You can’t actually see how each district is performing (whether stable, getting better…or worse)

…and you certainly can’t see whether games are being played.

Inspector Clouseau

So, a bloody good mate of mine manages a team that delivers an important public service – let’s call him Charlie. We have a weekly coffee after a challenging MAMIL1 bike ride…and we thoroughly explore our (comedic) working lives.

And so to our most recent conversation – Charlie told me that his department is due an inspection (a public sector reality)…and how much he, ahem, ‘loves’ such things! 🙂

“Oh, and why’s that?” I say.

His response2 went something like this: “Well, they come in with a standardised checklist of stuff, perform interviews and site visits to tick off against a set of targets, and then issue a report with our score, and recommendations as to what we should be doing…it’s not exactly motivational!”

“Mmm” I say…”but presumably their intent – to understand how well things are going – is good?”

Here’s the essence of Charlie’s reply: “Yep, I ‘get’ the intent behind many of the items on their checklist, and I accept that knowing how we are doing is really important…but, rather than act like robotic examiners, I want them to:

  • understand us, and our reality (what we are having to deal with3)…and I want them to focus on what really matters in respect of our service, not hide behind narrow ‘currently trendy’ targets and initiatives set from above;

  • give us the chance to demonstrate:
    • how we are doing;
    • where we know we have room for improvement; and
    • what we are doing to get better; and finally 

  • assist us, by adding value (perhaps with useful references to what they’ve seen working elsewhere) rather than be seen as ‘taking up our time’.”

“Righto,” I said, “that sounds excellent! I’ll write a post on that lot”…so here goes:

A ‘short but sharp’ generic critique of inspections:

Inspection requires someone looking for something (whether positive or negative)…and this comes from a specification as to what they believe you should be doing and/or how you should be doing it.

The ‘compliance’ word fits here.

…and, as such, the inputs, behaviours and outcomes from inspections are rather predictable. You can expect some or all of the following:

  • people, often (usually?) from outside your service, spend time writing (often inflexible) specifications as to what you should (and should not) be doing;
  • people are employed, and trained, as inspectors of those specifications;
  • you and your team spend precious time preparing before each imminent inspection:
    • running preparatory meetings to guess what might happen;
    • window-dressing solely for the benefit of ‘the inspector’ e.g. making your work space look temporarily good, filling in (and perhaps even backdating) ‘paperwork’;
    • even performing ‘dummy runs’ (rehearsals!);
  • you and your team shepherd the inspector around during their visit, with everyone on their best behaviour;
  • questions are asked, careful (guarded) answers are given, and an inspection report is issued;
  • your post-inspection time is then consumed rebutting (what you consider to be) poorly drawn recommendations and/or drafting action plans, stating what is going to be implemented to comply.

…and after it’s all over, a big sigh is let out, and you go back to how you were.

Wouldn’t it be great if, rather than playing the ‘inspection game show’, you truly welcomed someone (anyone) coming in to see what you actually do and, when they arrive, you carried on as normal because you are confident that:

  • you are operating in the way that you currently believe to be the best; and
  • you want them to see and understand this, and yet provide you with feedback that you can ponder, experiment with, and get even better at delivering against your purpose4.

…so how might you get to this wonderland?

A better way:

I’m a huge fan of a (deceptively) simple yet (potentially) revolutionary idea put forward by John Seddon:

“Instead of being measured on compliance, people should be assessed on whether they are able to show that they are working to understand and improve the work they do.

It is to shift from ‘extrinsic’ motivation (carrot and stick) to intrinsic motivation (pride), which is a far more powerful source of motivation.”

…and to the crux of what Seddon is suggesting:

“Inspection of performance should be concerned with asking only one question of managers:

‘What measures are you using to help you understand and improve the work?’”

This, to me, is superb – the inspector doesn’t arrive with a detailed checklist and ‘cookie cutter’ answers to be complied with; and the manager (and his/her team) has to really think about that question!

To answer it, the team must become clear on:

  • the (true) purpose of the service that they provide;
  • what measures5 would tell them how they are doing against this purpose (i.e. their capability);
  • how they are doing against purpose (i.e. as well as knowing what to measure, they must be actively, and appropriately, measuring it for themselves);
  • what they are working on to improve, and how these changes are affecting the performance of their system; and
  • what fresh ideas have arisen to experiment with.

You can see that this isn’t something that is simply ‘prepared in advance’ for a point-in-time inspection. It is an ongoing, and ever maturing, endeavour – a way of working.

It means that any inspector (or interested party) can arrive at any time and explore the above in use (as opposed to it being beautifully presented in ‘this year’s’ audit file ring-binder).

“Are you saying that ‘specifications’ are wrong then?”

Before ending this post, I’d like to clarify that:

  • No, I’m not saying that specifications are (necessarily) wrong; and
  • I’m also not saying that scientific know-how, generated from a great deal of experience over time, should be ridiculed or ignored ‘just because it comes from somewhere else’.

Taking each in turn,

  • Specifications (e.g. the current best known way to perform a task) should be owned by the team that have to perform them…and these should be:
    • of adequate depth and breadth to enable anyone and everyone to professionally perform their roles;
    • suitably flexible to cater for the variety of demands placed upon them (requiring principled guidelines rather than concrete rules); and
    • ‘living’ i.e. continuously improved as new learning occurs6

  • If there is a central function, then their role should be to:

    • understand what is working ‘out there’; and
    • effectively share that information with everyone else (thus being of great value to managers)

without dictating that a specific method should be ‘complied with’.

The point being that we should not think in terms of ‘best practice’…we should be continuously looking for, and experimenting with, better practice, suited to each scenario. This is to remove the (attempted) authority from the centre, and place it as the (necessary) responsibility where the actual work is performed, by the service (manager, and team) on the front line.

In this way, ‘the centre’ can shift itself from being seen as an interfering, bureaucratic and distant police force, to a much valued support service.

To Ponder:

If you are ‘being inspected’ then, yes, I ‘get’ that you currently need to ‘tick those boxes’…but how about thinking a little bit differently:

…how would you show those ‘inspecting you’ that you truly understand, and are improving, your system against its purpose?

You could seriously surprise them!

If you can really answer that one question, then you are likely to be operating a stable, yet continually improving service within a healthy environment, both for your team and those they serve.

Who knows – ‘the inspector’ might want to share what you are doing with everyone else 🙂

And, going back to the top – i.e. Charlie’s reply as to what he really wanted out of an inspection – I reckon he was ‘right on the money’!

A final comment…for all you private sector organisations out there:

Don’t think that this post doesn’t apply to you!

If you have centralised ‘Audit’, ‘Quality Assurance’ and/or ‘Business Performance’ teams (i.e. that are separate from the actual work), then virtually everything written above applies to your organisation.

Footnotes:

1. MAMIL: Middle aged men in lycra

2. To ‘Charlie’ – Please excuse the poetic licence that I have taken in writing the above…and I hope it may be of some use (woof woof 🙂 ).

3. What we are really dealing with: I picked the ‘Inspector Clouseau’ picture to allude to the possibility (probability?) that many an inspector, head stuck in their audit checklist, hasn’t a clue about what is really going on within, and/or what really matters for, the service before them…and for some, this is still true AFTER the inspection has been completed 😦

I’m not trying to ‘shoot at’ inspectors – this is a role that you have (currently) been given. You might also like to ponder the above and thereby look to re-imagine your purpose. Doing so could dramatically improve your work satisfaction…and help improve the services that you support.

4. Purpose: not to be confused with the lottery of attempting to meet a numeric target.

5. A set of Measures that uncover how the system is performing, NOT one supposedly ‘all seeing’ KPI and an associated target.

6. Living: Years of (regularly futile) experience have proven to me that ‘learning aids’ (whether they be documents, diagrams, charts, pictures….) will only ‘live’ (i.e. improve) if they are regularly (i.e. necessarily) used by the workers to do the work. An earlier post (Déjà vu) fits here.

7. I think that a couple of previous posts are foundational and/or complementary to this one:

The Principle of Mission: That clarification of intent, and allowing flexibility in how it is achieved, is far more important than waiting for, and slavishly carrying out ‘instructions’ from above.

Rolling, rolling, rolling: The huge, and game-changing difference between rolling out and rolling in change. One is static, the other is dynamic and purpose-seeking.