Attribution: DSC_3387 by Philippa Willitts / CC BY-NC 2.0

Peddling nonsense

Reviewing your practice and determining what’s been superseded by something better or just found to be wrong is an essential part of improving.

Owen Ferguson
test > learn > adapt
Oct 27, 2016


I’ve been a peddler of nonsense. Not intentionally, but with some enthusiasm at the time.

A number of years ago (more than I care to recall), I delivered a reasonable amount of face-to-face training. One of the first pieces of work I was tasked with was revamping the communication skills module of a learning programme. Like the enthusiastic beginner I was, I did my research: analysing the specific communication needs of the target audience, interviewing them and their managers, and taking direction from other internal sources. I dug into the lore of the subject, reading books, cribbing from previous courses and trawling some early training sites for inspiration and ideas.

When I was ready, I designed the module as best I knew how, trialled it, tweaked it and rolled it out. Feedback was great, the happy sheets carried the highest of ratings, and it was a genuinely enjoyable experience.

Some time later, at a different organisation, I was given a similar job revamping a communications skills course. I’ve always believed in iterative design, so rather than just rehash what I had done before, I carried out some more desktop research — that’s when I came across an article that shook my pre-existing beliefs about the work I had done.

It was written by an American called ‘Buzz’ Johnson (the internet has since scrubbed the page I visited, but here’s some context: http://www.learning-org.com/03.04/0066.html). In it, he tore to shreds the basis of the 55–38–7 rule of communication (the claim that 55% of a message’s meaning is carried by body language, 38% by tone of voice and just 7% by the words themselves) that I had built into the communications module that had received such great feedback.

As I read, I felt embarrassed and guilty. And then angry. I had taken that idea from a multitude of sources. I’d delivered the concept in a way that must have been convincing, because I got every cue from participants on the course that indicated understanding and agreement. I used great examples; they provided even more. We all got completely taken in by the seductive, science-y basis of the idea. And it was worse than useless: it was just wrong.

Wait, maybe ‘Buzz’ was wrong. Maybe he had an axe to grind. But he went back to the source, to Albert Mehrabian’s original research, and so I could go back to the paper too. And it was exactly as ‘Buzz’ had described. [1]

I castigated myself — I knew better. What was the point of all those lectures and science labs at university, where we specifically guarded against the kind of weak thinking I had fallen prey to? I was amazed that nobody — nobody! — on any of the dozens of sessions I’d run had come up with the simple counterexamples that would have exposed the concept as bogus (if only 7% of meaning came from the words, you could follow 93% of a film in a language you don’t speak).

But here’s the thing: Mehrabian’s work, twisted into a simple, easily digested concept, was clearly wrong. And no matter how I try to dress it up, I’d presented a bad idea to people. I was wrong, and once disabused of my misunderstanding, I could improve.

It was a lesson to dig deeper. To not accept claims at face value and to insist on knowing the basis for approaches, ideas and models. More evidence-based. More robust interrogation of concepts.

Coming to the present day, we’re awash with similarly weak concepts and ideas (including, incredibly, the Mehrabian Myth — I see it all over the place). From learning styles, MBTI and NLP, through to reductionist leadership models, Maslow and others.

The reason these old, outdated ways of thinking stick around? Bias. Not a conscious, deliberate bias, but a series of hidden cognitive biases. We struggle to accept that something we’ve believed to be true, and have advocated for personally, is simply wrong.

This is especially true when we believe we’ve seen the benefits of a particular approach with our own eyes. But, as with homeopathy, personal experience can be misleading. Placebo effects are discussed widely, but there’s also regression to the mean and the Pygmalion effect, to name just a couple more ways that our personal experience can trick us into seeing causal effects that don’t really exist. There’s a reason the hierarchy of evidence puts expert opinion and anecdotal experience on the bottom rung.

Image: hierarchy of evidence, via ‘What is employee engagement and does it matter?: An evidence-based approach’

Simply being given evidence that we’re wrong isn’t enough, either. There’s some fascinating research suggesting that being shown evidence that a pre-existing belief isn’t true actually increases the intensity with which we hold it.

We have to be open to being wrong. We have to see being wrong as a positive step towards being better. And that’s hard, because in business, admitting you’ve been wrong is more often than not seen as a weakness.

For me, recognising that I’ve got something wrong is an opportunity to get better. Sometimes that hurts, because I’ve dragged other people along with me. However, the greater mistake would be to plough on with an approach when the best evidence suggests it’s not up to scratch.

[1] It’s probably worth noting that ‘Buzz’ Johnson was an NLP practitioner, so I wouldn’t attach a huge amount of credibility to what he wrote in general, but the key thing here is that he did the research. He traced the 55–38–7 rule back to its source and found that the basis for it was lacking. If nothing else, Buzz was right on the money about the Mehrabian Myth.

Sukh Pabial’s post “Learning Styles, MBTI, NLP and asbestos” motivated me to write this particular post. Many thanks to him for that.
