Europe's Journal of Psychology 2/2010, pp. 150-171
www.ejop.org

Applying decision making theory to clinical judgements in violence risk assessment

Jennifer Murray
Glasgow Caledonian University

Dr. Mary E. Thomson
Glasgow Caledonian University

Abstract

A considerable proportion of research in the field of violence risk assessment has focused on the accuracy of clinical judgements of offender dangerousness. This has largely been determined through research comparing the accuracy of clinical predictions of offender dangerousness or future violence with that of mathematical predictions. What has been less researched is the influence of decision making heuristics and biases on clinical judgements in violence risk assessment. The current paper discusses decision making heuristics and biases and applies the theory to clinical judgements in a violence risk assessment context. Based on the current review, it is suggested that in order to improve the effectiveness of clinical judgements in violence risk assessment, a greater level of empirical research specifically examining the effects of heuristics and biases in this context must be conducted, with the possibility of incorporating debiasing training into clinical practice.

Keywords: clinical judgement, heuristics, biases, decision making, violence risk assessment.

An individual's ability to process information is often lower than the high volumes of information available to them (Baron & Byrne, 1997). In order to contend with this, cognitive strategies are employed which act to decrease the cognitive effort required, while maintaining a relatively effective means of interpreting the high levels of social information available. These strategies, referred to within social psychology as decision making heuristics or 'cognitive rules of thumb' (Tversky & Kahneman, 1974), are inherent in human decision making. While the use of heuristics within day to day decision making can certainly be considered beneficial (e.g., in reducing the volume of complex cognitive processing, and thus the cognitive effort, required in decision making), their use has repeatedly been shown to lead to errors in judgement (Pennington, Gillen, & Hill, 1999).

The current article shall provide an overview of the use of heuristics and their associated biases within decision making. More specifically, it shall discuss the application of decision making theory to the practice of violence risk assessment within a clinical setting. This application is of particular importance given that, currently, only 14% of the 29 clinical psychology training programmes running in Britain indicate that any specific training or instruction on decision making biases is incorporated into their programmes* (see Appendix A).

The illusory correlation

Within the violence risk assessment literature, a number of decision making biases have been highlighted as being important to clinical judgement accuracy. Elbogen (2002) suggested that clinicians may make illusory correlations when assessing the risk of violence. An illusory correlation exists when a decision is influenced by the perception of a correlation between two entities that may not actually be correlated (Chapman & Chapman, 1967). This has clear implications within risk assessment practice, as no correlation between risk cues has been consistently found to exist (Hart, 1998).
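To make the statistical point concrete, consider a minimal numerical sketch; the figures below are hypothetical and chosen purely for illustration. Suppose that, across 180 assessed cases, a salient cue and subsequent violence were distributed as follows:

                Violent   Non-violent
Cue present       80          40
Cue absent        40          20

Here the cue carries no predictive information whatsoever, since

\[
P(\text{violent} \mid \text{cue present}) = \frac{80}{120} \approx 0.67 = \frac{40}{60} \approx P(\text{violent} \mid \text{cue absent}),
\]

yet the cue-present-and-violent cell (80 cases) is by far the largest and most memorable. An assessor who attends chiefly to that cell may perceive a strong association where none exists, which is precisely the pattern described by Chapman and Chapman (1967), and the violence risk assessment literature suggests that clinicians are susceptible to exactly this error.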
For example, Quinsey and Maguire (1986) found that in a sample of experienced forensic clinicians, judgements of dangerousness were based largely on the seriousness of the individual's offence and the frequency of assaultive behaviour displayed by the individual while in the institution. The researchers compared these perceived risk factors to those actually occurring in cases of recidivism (measured over an 11 year period) and found the actual risk factors that were successful in predicting dangerousness to be: seriousness of offence, history of crime, criminal commitment (rather than civil commitment), young age, and the number of previous correctional confinements. Thus, the clinicians' treatment of seriousness of offence and frequency of assaultive behaviour as significant cues associated with dangerousness was only half supported, with the latter cue having no relationship to offender behaviour following release. Based on their findings, Quinsey and Maguire (1986) asserted that a 'dangerous' individual released from a psychiatric institution tended to be young and male, with a history of property crimes and serious crimes against people.

In discussing research such as that conducted by Quinsey and Maguire (1986), however, it must be highlighted that a clinical consideration of a patient being 'dangerous' may not be a prediction of violent behaviour, but may represent an opinion that the individual is capable of committing serious and significant harm to others (Litwack, 2001). Thus, when discussing risk cues in relation to predictions of dangerousness, one must consider the context in which the terms are used; a murderer may have few or no previous offences, but this does not make him or her any less dangerous than an individual with many previous offences who fits Quinsey and Maguire's (1986) typical profile of a 'dangerous' individual likely to reoffend.

Context effects

In addition to considering the context of the terminology used, discussed above, when conducting a violence risk assessment one must consider the context in which the assessment is taking place and the reasons for the assessment. Borum (2000) illustrated this point by discussing the difference between an assessment made in an emergency room under volatile conditions and one made as part of an institutionalised risk management procedure. In a volatile setting, such as an emergency room, the clinician will have very little time to decide upon a course of action, and as such will not base his or her decisions upon a comprehensive review of all of the documentation associated with the case. Decisions in this type of situation are necessarily made quickly, based on the situational factors. On the other hand, in cases where time is afforded (such as the decision to discharge an individual from a secure facility), the clinician will base his or her decisions on a larger amount of information, utilising the suitable documentation and interview information available. Borum (2000) highlighted that one of the most significant judgemental errors in violence risk assessment is the failure to properly consider the influence of situational factors. These situational factors are of the utmost importance when considering the level of risk that an individual poses, and assessing an individual in a context completely removed from their normal situation may prove hazardous to the assessment.
For example, if the individual is known to be prone to behaving violently when under high levels of peer pressure (e.g., from gang members or other peer groups 'triggering' this type of behaviour), then the likelihood of the individual behaving violently in such company is higher than if they were not exposed to it. However, the likelihood of the individual experiencing triggers to violent behaviour is often greater outwith institutionalised life, and the clinician must therefore take this into account when composing a violence risk assessment in these circumstances. Indeed, a specific set of structured professional guidelines is now available for assessing the risk of violence based on situational factors (i.e., Promoting Risk Intervention by Situational Management; Johnstone & Cooke, 2008), further emphasising the importance of the situation in the level of risk posed by an individual.

The representativeness heuristic

Similar implications to those discussed above for the practice of violence risk assessment may exist with the presence of representativeness in clinical decision making, particularly in relation to the illusory correlation. Representativeness describes the process by which the probability of an event occurring is evaluated based on the extent to which this event resembles another that is perceived by the individual to be related to the target (Tversky & Kahneman, 1974). In a clinical setting, representativeness may be more clearly described in terms of an individual judging one case based on its perceived similarity to other cases. This aspect of human decision making is often applied to stereotyping or categorising people into specific groups. For example, in research conducted by Tversky and Kahneman (1982), participants were given the following scenario:

"Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations. Please check off the most likely alternative:
- Linda is a bank teller
- Linda is a bank teller and is active in the feminist movement"

The majority of Tversky and Kahneman's (1982) participants chose the latter option, despite the logical probability of the latter (the co-occurrence of two events) being mathematically less likely than that of the former (a single event occurrence); the probability of a conjunction can never exceed the probability of either of its constituents, i.e., P(A and B) ≤ P(A). This phenomenon was labelled the conjunction fallacy (Tversky & Kahneman, 1983), with the above example illustrating the way in which individuals categorise people into specific groups and make assumptions about others' beliefs and attitudes based on their own preconceived notions. However, as judgements influenced by representativeness ignore the laws of probability, as demonstrated in the above example, these judgements are not always accurate.

Selective perception

Similarly, the way in which individuals perceive events and their environment may be affected by these preconceived notions. Individuals are known not only to seek out information that conforms to their own expectations (i.e., selective exposure; Sargent, 2007), but also to attend to and perceive information (environmental or otherwise) in a way that is consistent with their expectations and interests (Plous, 1993; Salemink, van den Hout, & Kindt, 2007). This selective perception was famously illustrated by Bruner and Postman (1949).
These researchers presented participants with five playing cards for between 10 milliseconds and one second. One of these cards did not conform to the standardised playing card suits (red heart, red diamond, black spade, and black club), but was instead a 'black heart'. Participants in Bruner and Postman's (1949) experiment took over four times as long to recognise this trick card as they did the cards belonging to the normal suits, with the majority of the participants displaying a dominance reaction in which the card was perceived to be either a 'normal' red heart or a black spade, in line with their prior expectations. These findings illustrate that an individual's prior expectations can strongly influence their perception of an event or object, and therefore the judgements made based on this perception. Indeed, Plous (1993, p.17) went as far as to state that "When people have enough experience with a particular situation, they often see what they expect to see."

Regarding clinical judgements in violence risk assessment, selective perception may be apparent in a number of ways: for example, through person perception when meeting an offender, or through attending to specific information consistent with the clinician's preconceived notions and disregarding non-congruent information when assessing the client's case. In the first example given, clinicians may look for or attend to visual or behavioural cues consistent with their pre-existing schema of an offender. If this were the case, a stereotyped view of the offender would result, possibly influencing the judgements being made about the individual. Much like the use of heuristics and biases in human decision making, stereotyping has been discussed as a necessary cognitive process (Lippmann, 1922), used to reduce and edit the vast amount of sensory input into something manageable and meaningful to the individual (Stewart, Powell, & Chetwynd, 1979). In addition, Stewart et al. (1979) described stereotyping as a process not only used to simplify environmental and social stimuli, but one that also aids the construction of meaning from those stimuli based on attributional expectations.

With regard to person perception, Tagiuri (1969) discussed the process as the way in which an individual knows and thinks about others, and how they consider others' characteristics, qualities and inner states in order to infer information about their intentions, attitudes and emotions. Once considered a veridical process central to social interaction, it is now more commonly accepted that not all characteristics can be as easily or accurately inferred as was once thought; instead, a relatively small number of traits are considered to be easily inferred through the process of person perception (Stewart et al., 1979). It is therefore argued that, as there is a relatively large proportion of information about another individual that cannot be inferred directly through person perception, the remaining ambiguous information required to infer another individual's characteristics, attitudes and intentions is filled in through the process of stereotyping. However, it is well documented that inferences made about individuals and their intentions are not always correct.
For example, while it has been found that reference to 'satanic' interests has a strong influence on mock jurors' finding a defendant guilty (Pfeifer, 1999), Charles and Egan (2009) demonstrated that the 'gothic' interests assumed to be predictive of offending behaviour are actually mediated by personality, with intrasexual competition and low agreeableness acting as better predictors of offending behaviour. These researchers proposed this finding as evidence challenging the stereotypical view that individuals with unusual or 'gothic' interests are more likely to be dangerous or to act in an anti-social manner. Thus, should any inference be drawn from a relatively small number of observable characteristics about an individual who is being assessed for violence risk, some concerns over the accuracy of these inferences may be raised. However, as the use of cognitive shortcuts is an evolved process, and as clinicians are known to rely on intuition to produce these types of judgements on a regular basis, one may argue that this concern is largely redundant. When the latter example is taken into consideration alongside the notion of stereotyping, however, valid issues relating to the impact of inferring characteristics from few observable characteristics may be raised. That is, should clinicians attend to or seek out information that is consistent with their preconceived notions and disregard information not fitting with their hypothesis at the initial stages of assessment (i.e., confirmation bias), they will be less likely to seek information opposing their initial schema of the case and will therefore be more likely to produce a biased, or at least 'skewed', assessment.

Confirmation bias

Confirmation bias has been observed in numerous domains outwith clinical judgement (Ask, Rebelius, & Granhag, 2008). One such observation, which is of relevance when considering clinical judgement, is the influence of an interrogator on judgements made in interrogative settings. Kassin, Goldstein, and Savitsky (2003) induced either a high or a low degree of suspicion of guilt in a mock interrogation study. The authors found that those participants (acting in the role of interrogator) who had been predisposed to an expectation of 'high guilt' interrogated their suspect in a manner which implied guilt presumption to a greater extent than their counterparts in the 'low guilt' condition (e.g., participants in the 'high guilt' group were seen to press suspects harder to gain a confession of guilt than those in the 'low guilt' group). This example is of particular relevance to clinical judgement, as it highlights the influence of pre-existing assumptions or beliefs on the way in which an authoritative figure may act towards a person under investigation (whether under interrogation or under assessment for violence risk).

Further to this example, Ask et al. (2008) investigated investigators' perceptions of evidence reliability. The evidence was manipulated to either confirm or disconfirm that the suspect in a mock murder case was the perpetrator. Of particular interest to the present article's focus, clinical judgement in violence risk assessment, were the findings relating to the witness evidence: witness identification evidence was considered to be less reliable when it challenged (disconfirmed) the suspicions against a suspect than when it confirmed them.
This is an important parallel to draw with clinical judgement in a violence risk assessment context, as information given by offenders which is of a positive nature (e.g., the assertion that an individual has never used illicit substances) ought to be treated with more suspicion than negative information (e.g., admissions of committing unrecorded offences). It is clear that the perceived reliability of such information is not equal across confirmatory and non-confirmatory information (as shown by Ask et al., 2008). It is, however, also important to take this into account when assessing the risk of violence posed; as preconceived notions are known to influence not only the information sought and attended to in the first instance, but also the perceived reliability of this information, the clinician will be more likely to conduct a biased risk assessment. One reason, demonstrated by Ditto, Scepansky, Munro, Apanovitch, and Lockhart (1998), which may help to account for reliance on preconceived notions and the subsequent seeking of confirmatory information, is that non-preferred information (i.e., dissonant information) requires more effortful cognitive processing than preferred information (i.e., information consonant with a hypothesis). As such, given that human judgement, particularly in time-constrained environments, relies on cognitive shortcuts, seeking and attending to confirmatory information is yet another evolved way to reduce cognitive effort in judgement and decision making.

Borum, Otto, and Golding (1993) proposed corrective measures to decrease the influence of this bias on clinical judgment. The authors suggested that clinicians should seek information that is in opposition to their initial hypotheses during the data gathering stages of case assessment. This process would act to modify the initial impressions of a case if and where necessary. Borum et al. (1993) further suggested that this disconfirmatory information should be considered alongside the other data gathered, thus preventing the sole consideration of information that supports the clinician's original impressions of and hypotheses about a case.

The availability heuristic

Similar implications may arise from the use of the availability heuristic (Tversky & Kahneman, 1973, 1974) when conducting a violence risk assessment. The availability heuristic occurs when an individual unconsciously judges the probability of an event occurring, for example, by the ease with which it can be retrieved from memory. Thus, more recently available information is retrieved from memory faster and with greater ease, and is therefore perceived as more important, likely or frequent than less easily retrieved information. For example, in cases of high profile murders involving teenagers, great media interest is taken (Charles & Egan, 2009). In the case of Luke Mitchell, who was found guilty of murdering his girlfriend, it was widely reported in the press that he had 'satanic' interests (HMA v Luke Mitchell, 2004). As pointed out by Charles and Egan (2009), while the level of evidence to support the causal effect in these kinds of cases is varied, the assumption is not often strongly questioned.
With the wide reporting of causal links between satanic interests and high profile murder cases, in combination with the unchallenged nature of these claims, it is fair to say that the causal link, whether true or not, may become engrained into acceptance, and thus a stereotyped view will emerge. As this stereotype is then more readily available as part of an individual's general schema (i.e., a cognitive shortcut), it is easier to access from memory and, according to the availability heuristic, will be considered more likely than less easily conceived alternatives, as illustrated in the aforementioned research by Pfeifer (1999). This decision making heuristic is thought to arise from an inability on the part of the individual making judgements to imagine sources of uncertainty or to construct relevant hypothetical situations. In relation to violence risk assessment, a number of risk assessment tools have been developed that may help to reduce the effects of availability. For example, the HCR-20 (Webster, Douglas, Eaves, & Hart, 1997) and the PRISM (Johnstone & Cooke, 2008) encourage the user to develop 'same-', 'best-' and 'worst-' case outcome scenarios, and thereby to develop suitable risk management plans for each of these hypotheses. Thus, with the correct use of these relatively new semi-structured violence risk assessment tools, the biasing influence of the availability heuristic on clinical judgments of violence risk assessment can at worst be reduced and at best be negated.

Learnability and the hindsight bias

Of key concern in violence risk assessment, from a decision making and judgement point of view, is the fact that little to no feedback is provided to clinicians after the event. That is, once the violence risk assessment and follow-ups, where possible, have been completed, the individuals involved in conducting the violence risk assessment and developing the risk management plan will not be made aware of the success, or indeed failure, of the assessment and interventions suggested. This has a number of implications in terms of judgement accuracy. First, the concept of learnability (Bolger & Wright, 1994; Thomson, Onkal, Avcioglu, & Goodwin, 2004) must be considered. According to this concept, in situations where there is little or no performance feedback, judgement should be expected to be poor, and expert judgement should not be expected to be significantly better than that of a lay-person. In the case of clinical assessments of violence risk, improving learnability is a difficult task: clinicians often have a large workload and are under immense time pressure, and are therefore unable to supplement this poor feedback by, for example, scrutinising other clinicians' reports (which would be beneficial in terms of vicarious retrospective learning). Second, the impact of hindsight bias on the judgments made in relation to the violence risk assessment must be considered. Hindsight bias (Fischhoff & Beyth, 1975) describes the phenomenon whereby people are rarely surprised by the outcome of an event, even in cases where the outcome was extremely unlikely and where the individual was unable to predict it beforehand. Thus, individuals are seen to exaggerate, post-event, what could have been anticipated with foresight pre-event. In addition, individuals are unable to explain why the event was 'predictable' afterwards, despite having been unable to predict the outcome beforehand. Borum et al. (1993) discussed the implications of hindsight bias for forensic assessments.
Borum et al. (1993) pointed out that hindsight bias is most likely to occur when clinicians are asked to assess the professional practice of a colleague in cases where an outcome (usually a negative outcome) has occurred post-assessment, typically in the instance of malpractice allegations. Based on Arkes, Faust, Guilmette, and Hart's (1988) research, which found a reduced influence of hindsight bias in clinicians who had been made to create a list of alternative diagnoses as compared to those who had not, Borum et al. (1993) suggested that when assessing such cases the clinician should create a list of possible outcomes that could have occurred in response to the colleague's actions or interventions, in addition to noting alternative courses of action and their hypothetical outcomes. While this would be effective in reducing hindsight bias, this course of action would not necessarily be beneficial in preventing the mistake under investigation from recurring in a wider population than simply the clinicians directly involved.

Hindsight bias is thought to occur due to the way in which memory is structured. That is, memory is a constructive process and, as such, memories are not exact copies of the past but, rather, are 'constructed', with logical inferences and associated memories being used to fill in missing details (Plous, 1993). It has been noted that hindsight bias appears to reconstruct memory (Plous, 1993). That is, given new information (i.e., the outcome of the event), individuals incorporate this information into their past memories, thus changing their recollection of the event in a way that makes more sense to them in light of the new information. It is this reconstruction of memories in hindsight that prevents individuals from learning from their past mistakes. Arkes et al.'s (1988) and Borum et al.'s (1993) suggestions of note taking and creating hypothetical situations and outcomes may, therefore, act to decrease hindsight bias by decreasing the clinician's over-reliance on memory. As Redelmeier (2005) pointed out, however, the possibility of learning from previous errors in a clinical setting is made difficult by the fact that the errors made are often too distal in time and place for the clinician, first, to be aware of and, second, to learn from.

A further strategy that may act to reduce hindsight bias was suggested by Dernevick, Falkheim, Holmqvist, and Sandell (2001). These authors suggested that hindsight bias is brought about through the negative feedback loop that arises in re-offending contexts. That is, as the individuals who are correctly assessed as non-violent are often not heard of again (i.e., as they do not re-offend, they do not re-enter the system for risk assessment), and the individuals who are incorrectly assessed as non-violent but who do re-offend are the ones who are re-assessed for violence, clinicians are not able to adjust their heuristics or learn from their errors. In order to reduce the influence of hindsight bias (and improve risk assessment practice in general), therefore, Dernevick et al. (2001) suggested that group assessments involving experienced assessors ought to be conducted, whereby the assessors would monitor one another's assessments, thereby increasing validity and decreasing bias in assessment. In addition, Dernevick et al. (2001) suggested that continuous training of staff, involving detailed feedback from all risk assessments carried out (i.e., outcome feedback to determine whether an assessment was accurate or not), would further act to improve risk assessment.
In particular, this latter approach would act to reduce hindsight bias. However, these ideals are not entirely practical in the real-world context of clinical practice, in which time is unfortunately a limiting factor.

Confidence

Tversky and Kahneman (1982) further outlined that the extent to which an individual is confident in their predictions or judgements depends largely on the level of representativeness present in their decision making (i.e., the closeness of their judgement to that upon which they are basing their choice). Thus, confidence can be considered to be high when an individual's interpretation is a close match to their target, regardless of probability. This effect is maintained even when the individual has been made aware of the limiting factors in their predictions (Kahneman & Tversky, 1973). This 'illusion of validity' is highly applicable to violence risk assessment: as clinicians involved in violence risk assessment are often required to predict the likelihood of an offender committing future crimes, the clinician's confidence in his or her own decisions must be reasonably high, for both moral and ethical reasons. For example, if a clinician's confidence in his or her judgement of a specific case is relatively low, it would not be ethical for them to make recommendations regarding offender treatment within a courtroom setting. McNiel, Sandberg, and Binder (1998) investigated the effects of confidence on decision accuracy in clinicians working in a psychiatric unit. These authors found that clinicians displayed greater decision making accuracy when they were highly confident and lower accuracy when less confident. They suggested that the use of actuarial tools as a supplement may act to increase clinician confidence in circumstances where confidence in clinical judgement is low, thus improving decision making. Further to this, Rabinowitz and Garelik-Wyler (1999) found that while confidence was not significantly related to accuracy in violence prediction, higher levels of confidence existed when predicting non-violent behaviour than when predicting violent behaviour. These authors suggested that with higher confidence in predicting non-violent behaviour, clinicians may engage in greater risk-taking behaviour, in that no preventative measures would be applied.

Under-using base rate data

In addition, with the use of representativeness in decision making, it has been well established that base rate data go largely underused (Monahan, 1981). To clarify what is meant by 'base rates' in the present context, Borum (2000, p.1275) provided a succinct definition of the term: "the known prevalence of a specific type of violent behaviour within a given population over a given period of time." In a classic example, Kahneman and Tversky (1973) presented participants with a number of fictitious personality descriptions depicting different characters. Participants were told that the descriptions were drawn from a sample of 100 professionals, either 70 engineers and 30 lawyers or 70 lawyers and 30 engineers, and were asked the likelihood of the character in the description being either an engineer or a lawyer.
Based on the numerical information provided, one would logically expect a higher proportion of 'engineer' responses in the former condition and a higher proportion of 'lawyer' responses in the latter. However, the authors found that participants across the two conditions produced essentially the same likelihood judgements for any given description, regardless of the stated composition of the sample. From these findings the researchers concluded that the judgements made were largely dependent on the descriptive factors within the personality profiles provided rather than on the actual statistical probabilities associated with the categories; by Bayes' theorem, the stated 70:30 prior odds should have shifted the judged probabilities substantially. In a violence risk assessment context, Monahan (1981) indicated that by not consulting base rate information, the accuracy of violence risk assessment is greatly reduced, and asserted that base rates for violence should be a key consideration when assessing the level of risk of violence that an individual poses. It can be seen that acknowledging base rates is of the utmost importance in increasing the accuracy of judgements made, and that, with an over-reliance on representativeness, important information such as the statistical probabilities provided by numerical data becomes redundant in the judgements made. However, the events that merit risk assessment in a clinical context are often highly individual, providing low base rates for comparison, if these rates are indeed available. Borum (2000) therefore suggested that, whether base rate information is available or not, it is of great importance to consider the risk assessment as an individualised case, and that the clinician must recognise that, while base rate information can be of use when anchoring a prediction of the likelihood of future violence, there may be some relevant, recurring risk factors in a particular case that are not generally found in the population as a whole.

The anchoring and adjustment heuristic

In addition to the representativeness heuristic and the availability heuristic, discussed earlier, the final 'classic' heuristic (Cioffi, 1997), anchoring and adjustment, described by Tversky and Kahneman (1974), shall now be discussed. The anchoring and adjustment heuristic describes the tendency of people to make estimates of an outcome based on an initial value (the anchor) and to adjust from this anchor in order to reach a final judgement. One circumstance in which an anchor on which a clinician could weight their decision may arise in a violence risk assessment context is during the interpretation and review stages of case assessment. When reviewing and assessing a case, clinicians will typically have access to the previous risk assessment information and evaluations conducted. With the presentation of such information, the assessing clinician may form an anchor based on, for example, previous assertions of low, medium or high risk of the individual acting violently, or on the outcomes of actuarial assessments of risk that may have been conducted. In addition, an anchor need not be provided to the clinician explicitly; an anchor can also be the product of an incomplete computation (Tversky & Kahneman, 1974).
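Tversky and Kahneman's (1974) own demonstration of this makes the mechanism concrete. Two groups of participants were given five seconds to estimate one of the following products:

\[
8 \times 7 \times 6 \times 5 \times 4 \times 3 \times 2 \times 1 \qquad \text{versus} \qquad 1 \times 2 \times 3 \times 4 \times 5 \times 6 \times 7 \times 8
\]

Because only the first few multiplications can be performed in the time available, the partial product serves as an anchor: the median estimate was 2,250 for the descending sequence but only 512 for the ascending sequence, against a true value of 40,320. In the same way, an early partial reading of a case file may anchor a clinician's overall judgement before the full body of information has been considered.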
With regard to clinical judgment, this has been discussed in terms of forming an anchor based on information gathered in the earlier stages of the evaluation, thereby affording less weight to 'newer' information and leading to insufficient adjustments to the initial impression formed of the case (Borum et al., 1993), and also in terms of the personal knowledge and experience held by the clinician acting as a baseline or anchor on which judgments are based (Cioffi, 1997). While, of course, the violence risk assessment must include and incorporate information gathered from previous risk assessments and be based on clinical experience and knowledge, it has been shown that individuals typically weight the initial value, the anchor, too highly and are therefore prone to making insufficient adjustments away from it (Slovic & Lichtenstein, 1971; Tversky & Kahneman, 1974). Thus, the outcome of the judgment will be dependent on this starting point, and it has been shown that the magnitude of the estimates or judgments associated with this starting point is biased towards the initial value.

One way in which this issue could be resolved is for the clinician to use solely an actuarial tool to assess the risk of violence. These tools were developed in order to remove the clinician's own 'subjective' judgements of the case from the assessment (Reeves & Rosner, 2009), and instead act to predict the risk of violence based on the statistical weighting of the risk cues measured by the particular tool being used. While the removal of the clinician's own judgments may certainly reduce, or indeed remove, the risk of the anchoring and adjustment heuristic affecting the outcome of the assessment, actuarial tools have been highly criticised within the literature for being too rigid; for being based on historical data and therefore failing to take into account dynamic variables pertinent to the case, either presently or in the future; and for providing no indication of how best to manage the predicted level of risk (Douglas, Ogloff, & Hart, 2003). Thus, while neither method is perfect, it is likely that the method chosen would be based solely on the clinician's preference.

Attribution effects

Though not strictly speaking a decision making bias, the influence of attribution on clinical judgements in violence risk assessment must be acknowledged. Dernevick et al. (2001), again, emphasised the importance of considering context. The researchers highlighted the fact that the majority of research relating to violence risk assessment has been conducted in a controlled, experimental manner, and that considerably fewer studies have focused on data collected in a manner representing actual clinical practice (i.e., collected in the field). This is an important distinction to draw: in the research context the researcher does not have any direct influence on the process of risk assessment, and as such remains objective, whereas in practice the clinician has a real, direct influence on the assessment procedure. In addition, the assessment made by the practitioner may also be affected by the real world circumstances surrounding a case. For example, Dernevick et al. (2001) suggested that the clinician may attribute causal relationships when reviewing information about an offender, such as attributing the cause of delinquent behaviour seen in later life to childhood events.
As discussed by Murray and Thomson (2009), attribution effects can be detrimental to the treatment and assessment of offenders: should an internal cause (e.g., a personality trait) be attributed as the cause of an offender's behaviour, the treatment and overall risk rating will be markedly different than for an offender whose behaviour has been attributed to an external cause (e.g., a need for money). For a more detailed discussion of attribution effects on clinical judgements of violence risk assessment, see Murray and Thomson (2009).

Points of departure

While there are some studies specifically investigating the effects of heuristics and biases on clinical judgements of violence risk assessment (e.g., Dernevick et al., 2001; Murray, Thomson, Cooke, & Charles, in press; Quinsey & Cyr, 1986), much work is still required in this area (Elbogen, 2002). Accordingly, a number of pointed future directions for research in this area shall now be suggested, based on the literature reviewed in the current manuscript:

• First, as indicated by Elbogen (2002), a greater amount of descriptive research is required to investigate what actually occurs in clinical practice. This type of research would provide the best possible platform on which to build experimental research, thus allowing the experimental research to be applicable to practice, not only theory.

• Clearly, research targeted at investigating the influence of specific heuristics and biases on clinical judgements of violence risk assessment is required. Empirical research in this area would allow the field to move on from the mainly theoretical links between these concepts being presented as possible factors in violence risk assessment, as has been done in the current review and those of the past (e.g., Borum, 2000; Borum, Otto, & Golding, 1993). With empirical findings from well designed studies, based on both the theoretical links and on actual practice, not only would more information become available in the academic literature, but this knowledge could be used to improve decision making and judgement in clinical practice, where appropriate.

• Before embarking upon research into the effects of heuristics and biases on judgements of violence risk assessment, the most appropriate heuristics and biases for investigation must first be identified. The present article has outlined a number of appropriate heuristics and biases for possible investigation. However, based on the current review, the authors suggest that targeted investigation into the following biases is of particular importance: attribution effects; selective perception; confirmation bias; and hindsight bias. These particular biases have been highlighted as particularly important in terms of initial investigation as they are somewhat less theoretically based, and as such research into them can be most readily grounded in, and therefore applied to, clinical practice. This is not, however, to say that research into the remaining heuristics and biases discussed should not be conducted; rather, more theoretically based research would be required in these areas before application to practice could be achieved.

• With regard to attribution effects on clinical judgements of violence risk assessment, there are already published empirical articles in this area.
For example, Quinsey and Cyr (1986) and Murray et al. (in press) found that clinicians and lay people alike rated offender dangerousness (among other traits) differently across internally and externally manipulated crime-based vignettes. Specifically, offenders depicted in internally manipulated vignettes were considered to be significantly more dangerous than those depicted in the external manipulations. However, while these researchers illustrated that differences exist, their findings do not indicate whether this is a positive or a negative effect. That is, do these attribution effects negatively affect judgements of violence risk assessment, or are they beneficial to the process? This is an important next step in this line of research.

Conclusion

From the research discussed it can be seen that the use of heuristics and their associated biases in clinical decision making can indeed be problematic in terms of achieving the greatest accuracy in predicting violent behaviour. However, it has been suggested that by being made aware of their use of biases, clinicians can improve the quality, and thus the accuracy, of their risk assessments (Borum et al., 1993; Elbogen, 2002). It is therefore concluded that, in order to improve the effectiveness of violence risk assessments and to improve clinical judgements in this context, it may be beneficial to incorporate some form of debiasing training into clinical practice. However, prior to this, a greater level of up-to-date empirical research examining the effects of the different heuristics and biases discussed in the current paper on clinical judgements of violence risk assessment must be conducted, in order both to assess whether the administration of debiasing training would indeed be beneficial in this context and to identify which heuristics and biases may exert influence over the judgements and decisions made in this context.

References

Arkes, H. R., Faust, D., Guilmette, T. J., & Hart, K. (1988). Eliminating the hindsight bias. Journal of Applied Psychology, 73, 305-307.

Ask, K., Rebelius, A., & Granhag, P. A. (2008). The 'elasticity' of criminal evidence: A moderator of investigator bias. Applied Cognitive Psychology, 22, 1245-1259.

Baron, R. A., & Byrne, D. (1997). Social psychology (8th ed.). London: Allyn and Bacon.

Bolger, F., & Wright, G. (1994). Assessing the quality of expert judgement. Decision Support Systems, 11, 1-24.

Borum, R. (2000). Assessing violence risk among youth. Journal of Clinical Psychology, 56(10), 1263-1288.

Borum, R., Otto, R., & Golding, S. (1993). Improving clinical judgment and decision making in forensic evaluation. Journal of Psychiatry and Law, 21, 35-76.

Bruner, J. S., & Postman, L. J. (1949). On the perception of incongruity: A paradigm. Journal of Personality, 18, 44-47.

Chapman, L., & Chapman, J. (1967). Genesis of popular but erroneous psychodiagnostic observations. Journal of Abnormal Psychology, 72, 193-204.

Charles, K. E., & Egan, V. (2009). Sensational interests are not a simple predictor of adolescent offending: Evidence from a large normal British sample. Personality and Individual Differences, 47, 235-240.

Chen, M., Froehle, T., & Morran, K. (1997). Deconstructing dispositional bias in clinical inference: Two interventions. Journal of Counseling and Development, 76, 74-81.

Cioffi, J. (1997). Heuristics, servants to intuition, in clinical decision making. Journal of Advanced Nursing, 26, 203-208.
Cooke, D. J., Michie, C., & Ryan, J. (2001). Evaluating risk for violence: A preliminary study of the HCR-20, PCL-R and VRAG in a Scottish prison sample. Scottish Prison Service Occasional Paper Series 5/2001.

Curtis, K. A. (1994). Attributional analysis of interprofessional role conflict. Social Science & Medicine, 39(2), 255-263.

Dernevick, M., Falkheim, M., Holmqvist, R., & Sandell, R. (2001). Implementing risk assessment procedures in a forensic psychiatric setting: Clinical judgement revisited. In D. Farrington, C. Hollin, & M. McMurran (Eds.), Sex and violence: The psychology of crime and risk assessment (pp.83-101). London: Harwood Academic Press.

Ditto, P. H., Scepansky, J. A., Munro, G. D., Apanovitch, A. M., & Lockhart, L. K. (1998). Motivated sensitivity to preference-inconsistent information. Journal of Personality and Social Psychology, 75, 53-69.

Douglas, K. S., Cox, D. N., & Webster, C. D. (1999). Violence risk assessment: Science and practice. Legal and Criminological Psychology, 4(2), 149-184.

Douglas, K. S., Ogloff, J. R. P., & Hart, S. D. (2003). Evaluation of a model of violence risk assessment among forensic psychiatric patients. Psychiatric Services, 54, 1372-1379.

Edwards, W., & Tversky, A. (1967). Decision making: Selected readings. Middlesex, England: Penguin Books Ltd.

Elbogen, E. B. (2002). The process of violence risk assessment: A review of descriptive research. Aggression and Violent Behaviour, 7, 591-604.

Fischhoff, B., & Beyth, R. (1975). I knew it would happen: Remembered probabilities of once-future things. Organizational Behavior and Human Performance, 13, 1-16.

Forsterling, F. (1988). Attribution theory in clinical psychology. New York: Wiley.

Hamilton, P. R., & Jordan, J. S. (2000). Most successful and least successful performances: Perceptions of causal attributions in high school track athletes. Journal of Sport Behavior, 25, 245-254.

Hammond, K. R. (1955). Probabilistic functioning and the clinical method. Psychological Review, 62, 255-262.

Hart, S. D. (1998). Psychopathy and risk for violence. In D. J. Cooke, A. E. Forth, & R. D. Hare (Eds.), Psychopathy: Theory, research, and implications for society (pp.355-373). Netherlands: Kluwer Academic Publishing.

Haynes, S. N., & Williams, A. E. (2003). Case formulation and design of treatment programs: Matching treatment mechanisms to causal variables for behavior problems. European Journal of Psychological Assessment, 19, 164-174.

Heider, F. (1958). The psychology of interpersonal relations. New York: Wiley.

Johnstone, L., & Cooke, D. J. (2008). PRISM: Promoting risk intervention by situational management. Structured professional guidelines for assessing situational risk factors for violence in institutions. Burnaby, British Columbia, Canada: Simon Fraser University, Mental Health, Law, and Policy Institute.

Jones, E. E., & Davis, K. E. (1965). From acts to dispositions: The attribution process in person perception. In L. Berkowitz (Ed.), Advances in experimental psychology (Vol. 2, pp.219-266). New York: Academic Press.

Jones, E. E., & Nisbett, R. E. (1972). The actor and the observer: Divergent perceptions of the causes of behavior. In E. E. Jones, D. E. Kanouse, H. H. Kelley, R. E. Nisbett, S. Valins, & B. Weiner (Eds.), Attribution: Perceiving the causes of behavior (pp.79-94). Morristown, NJ: General Learning Press.

Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80, 237-251.

Kassin, S. M., Goldstein, C. C., & Savitsky, K. (2003). Behavioural confirmation in the interrogation room: On the dangers of presuming guilt. Law and Human Behavior, 27, 187-203.
Kelley, H. H. (1967). Attribution theory in social psychology. In D. Levine (Ed.), Nebraska symposium on motivation (pp.192-238). Lincoln: University of Nebraska Press.

Lippmann, W. (1922). Public opinion. New York: Harcourt, Brace & Co.

Litwack, T. R. (2001). Actuarial versus clinical assessments of dangerousness. Psychology, Public Policy and Law, 7(2), 409-443.

McAuley, E., Duncan, T. E., & Russell, D. W. (1992). Measuring causal attributions: The revised causal dimension scale (CDSII). Personality and Social Psychology Bulletin, 18(5), 566-573.

McNiel, D. E., Sandberg, D. A., & Binder, R. L. (1998). The relationship between confidence and accuracy in clinical assessment of psychiatric patients' potential for violence. Law and Human Behavior, 22, 655-667.

Monahan, J. (1981). The clinical prediction of violent behavior. Rockville, MD: National Institute of Mental Health.

Murray, J., & Thomson, M. E. (2009). An application of attribution theory to clinical judgement. Europe's Journal of Psychology [online], 2009(3), 96-104. Available: http://www.ejop.org/archives/2009/08/an_application.html

Murray, J., Thomson, M. E., Cooke, D. J., & Charles, K. E. (in press). Influencing expert judgement: Attributions of crime causality. Legal and Criminological Psychology.

Pennington, D. C., Gillen, K., & Hill, P. (1999). Social psychology. London: Edward Arnold.

Pfeifer, J. E. (1999). Perceptual biases and mock juror decision making: Minority religions in court. Social Justice Research, 12, 409-419.

Plous, S. (1993). The psychology of judgment and decision making. New York: McGraw-Hill.

Plous, S., & Zimbardo, P. G. (1986). Attributional biases among clinicians: A comparison of psychoanalysts and behavior therapists. Journal of Consulting and Clinical Psychology, 54, 568-570.

Quinsey, V. L. (1995). The prediction and explanation of criminal violence. International Journal of Law and Psychiatry, 18, 117-127.

Quinsey, V. L., & Cyr, M. (1986). Perceived dangerousness and treatability of offenders: The effects of internal versus external attributions of crime causality. Journal of Interpersonal Violence, 1, 458-471.

Quinsey, V. L., & Maguire, A. (1986). Maximum security psychiatric patients: Actuarial and clinical predictions of dangerousness. Journal of Interpersonal Violence, 1, 143-171.

Rabinowitz, J., & Garelik-Wyler, R. (1999). Accuracy and confidence in clinical assessment of psychiatric inpatients' risk of violence. International Journal of Law and Psychiatry, 22(1), 99-106.

Redelmeier, D. A. (2005). The cognitive psychology of missed diagnoses. Annals of Internal Medicine, 142, 115-120.

Reeves, R., & Rosner, R. (2009). Forensic considerations. In F. M. Saleh, A. J. Grudzinskas, J. M. Bradford, & D. J. Brodsky (Eds.), Sex offenders: Identification, risk assessment, treatment, and legal issues. New York: Oxford University Press.

Rogers, R., & Shuman, D. W. (2000). Conducting insanity evaluations (2nd ed.). New York: Guilford.

Rosenbaum, R. M. (1972). A dimensional analysis of the perceived causes of success and failure. Unpublished dissertation, University of California, Los Angeles.

Russell, D. (1982). The causal dimension scale: A measure of how individuals perceive causes. Journal of Personality and Social Psychology, 42, 1137-1145.

Salemink, E., van den Hout, M., & Kindt, M. (2007). Selective attention and threat: Quick orienting versus slow disengagement and two versions of the dot probe test. Behaviour Research and Therapy, 45(3), 607-615.
Sargent, S. L. (2007). Image effects on selective exposure to computer-mediated news stories. Computers in Human Behavior, 23(1), 705-726.

Slovic, P., & Lichtenstein, S. (1971). Comparison of Bayesian and regression approaches to the study of information processing in judgment. Organizational Behavior and Human Performance, 6, 649-744.

Stewart, R. A., Powell, G. E., & Chetwynd, S. J. (1979). Person perception and stereotyping. Westmead: Saxon House.

Tagiuri, R. (1969). Person perception. In G. Lindzey & E. Aronson (Eds.), The handbook of social psychology (2nd ed., Vol. 3, pp.395-449). Reading, MA: Addison-Wesley.

Thomson, M. E., Onkal, D., Avcioglu, A., & Goodwin, P. (2004). Aviation risk perception: A comparison between experts and novices. Risk Analysis, 24(6), 1585-1595.

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207-232.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.

Tversky, A., & Kahneman, D. (1982). Judgments of and by representativeness. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp.84-98). Cambridge, England: Cambridge University Press.

Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90, 293-315.

Webster, C. D., Douglas, K. S., Eaves, D., & Hart, S. D. (1997). HCR-20: Assessing the risk of violence (Version 2). Vancouver, Canada: Simon Fraser University and Forensic Psychiatric Services Commission of British Columbia.

Weiner, B., Frieze, I. H., Kukla, A., Reed, L., Rest, S., & Rosenbaum, R. M. (1971). Perceiving the causes of success and failure. New York: General Learning Press.

Weiner, B. (1985). An attributional theory of achievement motivation and emotion. Psychological Review, 92, 548-573.

Weiner, B. (1985). Foreword. In F. Forsterling (Ed.), Attribution theory in clinical psychology (pp.ix-xi). New York: Wiley.

Weiner, B. (1986). An attributional theory of motivation and emotion. New York: Springer-Verlag.

Appendix A

*Data gathered via a simple telephone survey asking: "I understand that you deliver training in clinical psychology. I am currently writing a paper discussing the impact of decision making biases on clinical decision making in violence risk assessment and would like to ask a quick question regarding your course training. Do you currently include de-biasing training or any other form of instruction to reduce the likelihood of cognitive biases in your course?"

About the authors:

Jennifer Murray*

Jennifer Murray is a PhD student in the Department of Psychology, Glasgow Caledonian University. Her thesis investigates the effects of attributional manipulations on clinical judgements of violence risk assessment, at various stages of the violence risk assessment process. The current paper was completed in part-fulfilment of Jennifer Murray's PhD thesis.

*Requests for reprints should be addressed to Jennifer Murray, Dept. of Psychology, Glasgow Caledonian University, 70 Cowcaddens Road, Glasgow, G4 0PP, Scotland, UK (email: Jennifer.murray@gcal.ac.uk).

Dr. Mary E. Thomson
Dr. Mary E. Thomson is a Reader in Decision Science in the Department of Psychology, Glasgow Caledonian University, who has published in various books and journals, including Risk Analysis, Decision Support Systems, the International Journal of Forecasting, the European Journal of Operational Research and the Journal of Behavioral Decision Making.