Evidence-based Practice

As the idea of evidence-based practice takes hold, we as counsellors are increasingly likely to find ourselves in situations where we have to “measure” our clients’ progress: employee assistance programmes often require a test at the start and end of the work, and sometimes one in the middle as well; and I have even heard a very senior practitioner in the children and young people field suggest, with a perfectly straight face, that some kind of assessment questionnaire at every session was not only appropriate but therapeutically beneficial!
The reality, though, is that measurements are here to stay. For Adlerians, this is not quite so much of a challenge: Adlerian psychology has the measurement of life task scores and the use of scaling questions (“On a scale of 0–10, how do you feel about…?”) built into it. For practitioners in other disciplines, this kind of task can feel uncomfortable and at odds with their ways of working.
But there can be a catch. When we ask a client for their life task scores, for example, we are explicitly asking them for a subjective judgement of where they feel they stand in each area – there is little danger of our using those scores to make comparisons between clients, and it would be futile to do so, since what one person rates as a 5 could well be a 7 to someone else.
Those dangers are not always so evident to those who crave the evidence that tests like CORE, or the plethora of screening tests, appear to give. To them, the task is one of arriving at an objective rating – independent of the client’s perceptions – that indicates how “well” or otherwise our clients are, something that might reasonably be regarded as impossible to achieve.
Worse, there is always the temptation, having gathered such a body of data, to start aggregating it or using it to compare practitioners. Data originally collected to compare, over time, a client’s initial subjective assessment of their state of mind with a subsequent, equally subjective assessment by the same client ends up being misapplied: aggregated across many clients and therapists, it is used to make comparisons that can be both flawed and risky.
Indeed, in one setting in which I worked, a new CORE graph was issued to us. Superimposed on the bandings indicating the “severity” of the client’s score were two black lines, which, on further enquiry, turned out to be cut-offs: the upper line marked the point beyond which a client was considered too severely affected for us to work with, while the lower line marked a degree of distress deemed too low for us to concern ourselves with.
What this demonstrated was a fundamental failure to understand the perils of using subjective measurements to make such decisions: by scoring “too low” (as often happens with clients who are not yet secure enough in their counselling to risk giving us the unvarnished truth about how they are feeling), a client could, under that system, unwittingly deprive themselves of counselling they wanted and needed, with potentially profound consequences for their well-being.
This brings us to another point: the act of asking a client to complete a questionnaire or answer scaling questions may well influence the very feelings we are attempting to quantify. When this is done as an embedded part of the therapeutic process – as Adlerians do – it can be beneficial, helping to bring into the client’s awareness their own feelings about themselves or their relationships, which the therapist will then typically work with. If, however, the measurement process is imposed on the therapeutic one rather than forming a seamless part of it, the risk is that it affects clients without offering the same potential for therapeutic benefit.
Measurement, of one kind or another, is not going to go away, although it is interesting to note that the National Institute for Clinical Excellence is itself moving away from its previous insistence on the evidence-based CBT model as the only suitable approach.
We, as practitioners, are going to need to learn how to use measuring tools in our work in a way that, ideally, enables us to work more effectively to the client’s benefit, or at least does not interfere too much with the therapeutic process. It may also be, however, that we have to take the bold step of drawing a firm line across the threshold of our counselling rooms when funders or service providers demand that we measure our clients.
Ultimately, if something threatens to undermine the most important aspect of our work – the therapeutic alliance – we have to ask ourselves where our real responsibilities lie.
On that note, I’ll share a final thought from American Express’s chief marketing officer, who is reported to have said: “We tend to overvalue the things we can measure and undervalue the things we cannot.”

Stephen Allsopp, APPLIED ZETETICS