Learning from Experience: Honesty and malingering by Graham Rogers, Consultant psychologist
- Feb 22, 2017
Having been a qualified psychologist for some 25 years, I have been involved in spotting those who are ‘somewhat less than honest’ for most of that time, initially because I began my career working with adolescents with behavioural problems. As anyone with experience of such a population could tell you, they are not the most reliable when it comes to disclosing information about themselves or the events in which they have been involved, even when their inappropriate behaviour has been directly witnessed by teachers, social workers, police officers or others. Further, if you work with younger populations, their parents are also known to occasionally ‘adjust’ information; that is, a degree of colour and invention may be added to the account, so that what is shared with you, as the relevant professional, may not be entirely ‘the whole truth.’ Even in testing, the notion that children exert optimum effort has been challenged for years, as noted by Faust (1995), Hart (1995), and McCann (1998). Hence, if you work with adolescents you learn the many ways in which information can be misrepresented, and the value of checking what has been said in order to build up a more reliable account of the issues at hand.
This initial experience comes from working within local government, where a sceptical eye is expected, though not all professionals work in such a manner. By contrast, those who train and work initially within the NHS are taught to believe what the client tells them; after all, why would anyone lie about their own sickness or that of a close family member? Indeed, lies are seen as counter-productive and even symptomatic of an underlying disorder and a refusal to accept the illness itself.
Again, in contrast to this training and experience, those who have trained and work in forensic (criminal) psychology know that their clients have a proven record of ‘rule breaking.’ Such psychologists, in the broadest terms, expect their clients to lie, and any deviation from what is expected may be seen as an attempt at manipulation.
In my view, the populations with whom we initially work and train determine how we address the difficulties presented by clients. This combination of training, experience, and population, especially in the earlier parts of our careers, determines the tools psychologists use to spot those who may be ‘less reliable.’
Those who work with younger populations may prefer the ‘pattern performance method’ (PPM; Slick et al., 1999; Meyers and Volbrecht, 2003), where (1) test results need to develop a clinically meaningful profile of behaviour; (2) observations of behaviour need to be consistent with this profile; and (3) descriptions from others need to coalesce into a reliable whole. Hence, using the PPM, multiple areas need to overlap to make a meaningful clinical picture, one that offers a degree of reliability.
However, at present the most popular method used in court for determining reliability is the use of tests designed for that purpose, so-called Performance Validity Tests (PVTs), which have been available in many forms for years (Rogers et al., 1992; Tombaugh, 1996; Paulhus, 1999; Vickery et al., 2001; Widows and Smith, 2005). These are tests specifically designed to indicate the reliability of the person taking them and, in doing so, to act as markers for the reliability of the other tests undertaken by the client. Of course, one might argue that using one test to indicate the reliability of another, often unrelated, test is a substantial assumption; though that is another conversation.
However, as an alternative to the client lying, faking or lacking effort, I argue that we need to consider the behaviour of the psychologist.
Psychologists make mistakes, can be limited by their training, expertise and experience, and, I would argue, can also be biased. To suggest that a psychologist is bound by a code of ethics and conduct, or that they are ‘regulated’ in some way, and that this prevents such behaviour, is naïve and ignores history. More specifically, it ignores Dr Harold Shipman, a GP who appears to have killed more than 200 elderly patients; it ignores Beverley Allitt, a paediatric nurse who killed four babies, attempted to kill three others, and caused grievous bodily harm to another six; and it ignores Dr Myles Bradbury, a doctor at Addenbrooke’s Hospital who admitted abusing 18 sick children in his care. These were all regulated professionals, but regulation did not help their numerous victims. Then there are researchers: a 2009 study found that at least 2% admitted faking their own results for publication, and 14% knew of others who had done so. As for psychologists, it is widely accepted that the later research of Professor Cyril Burt is open to substantial doubt, with many academics claiming that his work was based on fabrication. I would argue that we need to keep a closer eye on professionals and what it is they actually do.
All psychologists are open to making errors, after all, we are human and therefore, according to the late Albert Ellis (1996), fallible. These errors may be ‘technical,’ where the professional makes a simple error with the numbers, or fails to understand aspects of the test, its administration, scoring, or interpretation.
Recent research has shown how the results of the Wechsler Adult Intelligence Scale, fourth edition (WAIS-IV), arguably the most widely used adult intelligence test in the UK and America, can be wrong due to the behaviour of the psychologist (Styck and Walsh, 2016; McDermott et al., 2014).
David Wechsler (1975; 2008, p. 3), the founder of the Wechsler intelligence scales, noted that one needs to place the results of the tests alongside the client’s history and behaviour. However, he is not alone in noting the range and depth of information required to interpret an intelligence test: (1) one always needs to take an appropriate personal and family history; (2) one needs to understand how that history affects the test results; (3) where differences between test scores are large, one needs, where possible, to understand what such differences mean; and (4) one has to consider how individual client traits and diagnoses may affect the results (Chelune, 2003; Lineweaver and Chelune, 2003; Roid and Barram, 2004, p. 69; Flanagan and Kaufman, 2004, p. 122; Weiss et al., 2006, p. 103; Lichtenberger and Kaufman, 2009; Brooks et al., 2009, pp. 443–4; Sherman and Brooks, 2012, p. 29).
Accurately interpreting an IQ test is not as simple as many would have you believe.
In a recent case in which I was involved, a psychologist administered a WISC-IV IQ test to a young person. In reviewing the results, I noticed that the psychologist did not take a history, did not consider the impact of existing diagnoses (autism and ADHD), and did not take into account the large differences between the subtests that contribute to the final IQ score. The psychologist simply said the client had an IQ of 72, and while the score itself may have been correct, the interpretation ignored valuable information, raising significant doubt about the findings.
To the courts and others this may come as a shock: the idea that results can be wrong not because of the client, but because of the actions of the psychologist.
However, in my experience one of the most common ways to mislead the court is simply to re-administer a test previously given to the client. That is, you give the same test to the client a short time after it was first used.
It has been recognised for decades (Cronbach, 1990; Kaufman and Lichtenberger, 1999) that repeating a test produces higher scores on the second occasion, due to practice and procedural learning (Chelune, 2003; Hawkins and Tulsky, 2003, p. 226; Lievens et al., 2007; Wechsler, 2008, p. 48; Weiss et al., 2010, p. 176). Indeed, the latest research indicates that both race and gender may further influence retesting bias (Randall et al., 2016), and it has been known for years that retesting is fraught with risk (Brooks et al., 2009, pp. 202–204). However, some psychologists ignore these risks and go on to claim that the re-tested defendant, who scored higher on the second test, was more able than the first test showed, which may not be true and may mislead the court.
A number of years ago I was involved in a criminal case in Kent, where I performed the first IQ test on the defendant and the prosecution’s psychologist performed the second opinion, using the same IQ test. At court, I was provided with the second report and we were asked to consult. However, no sooner had we sat down than the judge called us back, having decided that this procedure was not required. I think the term that comes next is ‘oops!’
What I had spotted was that the defendant’s results, using the same test, were significantly poorer on the second occasion than the first, and they were very low on the first. Indeed, the drop in scores was most pronounced in the areas known to show the greatest increases on retesting. In my view, this change of score was highly improbable. Neither the other psychologist nor the judge saw this anomaly, and the case was dismissed on the grounds of ‘fitness’; I was not allowed to speak.
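The improbability of such a score change can be put on a quantitative footing. One approach in the retesting literature cited above is a practice-adjusted Reliable Change Index (e.g. Chelune, 2003), which asks whether an observed change between two testing occasions is larger than measurement error would explain once the gain expected from practice alone is subtracted. A minimal sketch, with all figures hypothetical (an IQ-style scale with SD 15, test-retest reliability 0.90, and an assumed 5-point practice gain):

```python
import math

def reliable_change_index(score1, score2, sd, reliability, expected_practice_gain=0.0):
    """Practice-adjusted Reliable Change Index.

    Divides the observed change, minus the gain expected from practice
    alone, by the standard error of the difference between two scores.
    Values beyond roughly +/-1.96 suggest a change larger than
    measurement error would explain.
    """
    sem = sd * math.sqrt(1 - reliability)   # standard error of measurement
    se_diff = math.sqrt(2) * sem            # standard error of the difference
    return (score2 - score1 - expected_practice_gain) / se_diff

# All figures hypothetical: IQ-style scale (SD = 15), reliability 0.90,
# and an assumed 5-point practice gain on retest.
gain = reliable_change_index(70, 78, sd=15, reliability=0.90, expected_practice_gain=5)
drop = reliable_change_index(70, 60, sd=15, reliability=0.90, expected_practice_gain=5)
# 'gain' is small and positive: a rise consistent with practice effects.
# 'drop' is strongly negative: a fall, despite an expected gain, of the
# improbable kind described above.
```

On these assumptions, a modest rise from 70 to 78 falls within measurement error, whereas a fall from 70 to 60, against an expected gain, produces an index well beyond the conventional threshold. Such a calculation is not a substitute for clinical judgement, but it does show why a marked drop on retesting should at least prompt a question.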
These difficulties also show themselves when using Performance Validity Tests (PVTs), tests specifically designed to provide an insight into possible faking behaviour, for example a lack of effort or a deliberate manipulation of the results. Yet these tests, too, are open to errors by the psychologist, some ‘accidental,’ others less so.
I had an experience in a high-profile case where the Crown’s expert first upset the defendant during interview to such an extent that he cried, and then used just one test, the Test of Memory Malingering (TOMM). The TOMM assesses the faking of memory problems and is typically used as part of a series of tests which considers not only whether the person being tested has a neuropsychological impairment, but the probability that such an impairment is genuine. “The TOMM is not intended to be the sole instrument of clinical assessment or a substitute for sound clinical judgement that utilises various sources of information, such as clinical interviews and observation” (Tombaugh, 1996, p. 2).
In this case, one needs to ask what the likely outcome of upsetting a defendant prior to testing is: are they likely to cooperate, and if not, why? Here the psychologist chose not to use any other test, simply stating that the defendant had failed the TOMM and that all other tests would therefore be unreliable.
What makes this case so interesting is that the defendant was illiterate and had attended special education; the evidence of low intelligence was overwhelming. Indeed, when I gave evidence after the defendant, the judge stopped me as I began to explain the nature of his learning difficulties, saying, “I think we can all see what learning disability means.”
I would argue that what we are seeing here relates to the ethical behaviour of psychologists.
At a recent court case, a psychologist who had used two tests of malingering stated that the offender faked their results. On the final test of the day, the second test of malingering, the young man did not try; his behaviour was open and deliberate, and he chose to ‘give up.’ However, the psychologist had earlier used a test called the REY 21, which was scored as ‘borderline.’ Yet, in looking through the research, I discovered that the young man had in fact passed the test once his low IQ was taken into account; I had found a low IQ, as had the other psychologists. Again, as with the TOMM above, the cut-off between a ‘true’ result and a ‘faked’ one had to be adjusted to match the level of intelligence of the person taking the test.
However, in this case I asked the defendant how the Crown’s psychologist had behaved, offering the admittedly biased view that we (psychologists) are usually polite and respectful, and asking if he had found this psychologist to be the same. Being polite and respectful aids the development of rapport and increases cooperation from the client. Sadly, the defendant told me that, as the day wore on, the psychologist became increasingly rude, short, and argumentative.
I had suspected this because the psychologist’s report stated that the defendant had not cooperated with one of the tests and had deliberately ‘got it wrong,’ as noted above. This apparent change in behaviour led me to ask why. Why did the defendant’s behaviour change?
One needs to ask, if you are rude and disrespectful, does it aid compliance and cooperation or might it have a negative impact on the assessment process? Is the client ‘faking’ or are they withdrawing their cooperation due to being insulted and upset? Does a psychologist know that if they are rude and disrespectful it will alter the behaviour of the defendant?
The ethical implications of using PVTs with those accused of a crime, where society is seeking to remove their liberty, often for many years, are considerable. I would argue that under such circumstances counsel, judges, and others may wish to take more time considering the interactions between professional and defendant, and to consider whether the professional has acted ethically.
Interestingly, I was told some time later that the judge had criticised the behaviour of the second psychologist.
Many judges are trained in what psychologists do, developing a considerable range and depth of knowledge which enables them to consider our behaviour more fully. What they, and counsel, are not doing is challenging our basic approach to the assessment of the defendant.
In my experience, unethical behaviour is an increasing problem, especially in high-profile cases. A few years ago I assessed a defendant in such a case. I saw him over four half-day sessions; considered his medical records, his pre-existing diagnosis, the police interview, and the court bundle; and, due to the complexity of the case, consulted widely with professional colleagues, including my external consultant based in America. The assessment took several months, at which point I arrived at a conclusion and an opinion about the defendant.
At this point, a second opinion was requested and the psychologist criticised my report. In many respects this is what one would expect as a second opinion enables a critical re-evaluation of all that has gone before; professionally it is a strong position to be in and in my view one of considerable responsibility. The second psychologist can consider different interpretations of the evidence, different approaches to collecting it, the pre-existing medical evidence and medical history and they can re-assess the defendant to compare and contrast the new findings with the original. This would be good professional practice, especially in a complex case such as this.
However, in this case the second opinion and its criticism were offered without interviewing or re-assessing the defendant and without reviewing the medical evidence. In circumstances such as these, one has to ask upon what the psychologist was basing their opinions. As the defendant had admitted his involvement and was certainly going to prison, one also has to ask what the motivation behind the psychologist’s behaviour was.
In my view, it is not enough for the court to simply look at the defendant when considering the issue of ‘unreliable behaviour;’ it is also necessary to consider the behaviour of the psychologist.
As expert witnesses, psychologists are expected to conduct independent assessments, and although their opinions may ultimately favour one side more than the other, their behaviour towards the defendant should always be professional and polite. Defendants are invariably experiencing high levels of stress when they meet psychologists; they often do not understand why we are seeing them, and their fear generates a defensiveness which we, as experienced professionals, need to reduce so that they cooperate and represent themselves to the best of their ability. This helps the court, and the judge in particular. The psychologist has absolutely no role in sentencing or in determining the guilt or innocence of the defendant; what they can offer is an insight into the person who, for whatever reason, has found themselves within the criminal justice system. This insight may assist the jury, but in my experience it is primarily there to assist the judge in deciding what to do during the trial process, for example the inclusion of special measures, or, in the event of a guilty verdict, to provide additional information to aid the consideration of sentencing.
As experts, we are not there to support the views of either side; rather, we are there to support the court as a whole, remembering at all times that until the defendant pleads or is found guilty, there is a presumption of innocence. Innocent until proven otherwise has always been the mainstay of British law, but it appears that some experts, whether through pressure from, or loyalty to, an employer, take a partisan approach. One might consider whether such an approach does justice any favours. Research in America consistently finds around 4% of the prison population to be innocent at any one time. If the same were true here in the UK, that would equate to approximately 3,500 people. Perhaps this should be the motivating force not only for psychologists and other expert witnesses, but also for the judiciary, to more actively challenge what experts do.
Graham Rogers, Consultant psychologist
Graham has experience within health, education and social services and has been actively involved in the protection of vulnerable adolescents and adults. He has been involved in court and other legal work for his entire career, first giving live evidence in 1991, and has been an Acting Head of Service.