Diagnosis
Traditionally, diagnosis has been seen as the domain of the medical practitioner, with nurse involvement being informal and often unacknowledged (1,2). However, the Crown II review (3) recognized that part of the role of the independent prescriber is to establish a diagnosis and/or management plan. This suggests that nurse/pharmacist involvement in diagnosis is now formally acknowledged, at least within those areas where nurses/pharmacists can prescribe independently. Within the context of supplementary prescribing, it is expected that the independent prescriber will be responsible for the initial diagnosis and management plan. However, nurses/pharmacists working as supplementary prescribers would be expected to raise their concerns with the independent prescriber if they suspected an incorrect diagnosis.
In reaching a working diagnosis, the practitioner will go through several stages of data collection and analysis. Tate (4) discusses in some depth the process of clinical decision making and of establishing a working diagnosis. He suggests that, particularly within primary care, it may not always be possible, or even desirable, to make a firm diagnosis before formulating a management plan. In reality, there will often be an element of uncertainty about a diagnosis. It has been suggested that nurses can find tolerating uncertainty difficult (5). Traditionally, nurse training has not prepared nurses for this, and it is something that prescribing nurses will need to learn to manage as they accept responsibility for their own prescribing decisions.
References:
1. Baird A. Diagnosis and prescribing. Primary Health Care 2001;11(5):24-26.
2. Walby S, Greenwell J. Medicine and nursing: professions in a changing health service. London: Sage, 1994.
3. Department of Health. Review of prescribing, supply and administration of medicines: final report. London: Department of Health, March 1999. Crown Copyright.
4. Tate P. The Doctor's Communication Handbook. 4th ed. Oxon: Radcliffe Medical Press, 2003.
5. Luker KA, Hogg C, Austin L, Ferguson B, Smith K. Decision making: the context of nurse prescribing. J Adv Nurs 1998 Mar;27(3):657-65.
Read the two papers below on diagnosis and its pitfalls:
- Klein JG. Five pitfalls in decisions about diagnosis and prescribing. BMJ 2005;330:781-783 (2 April); doi:10.1136/bmj.330.7494.781.
- Elstein AS, Schwartz A. Evidence base of clinical diagnosis: clinical problem solving and diagnostic decision making: selective review of the cognitive literature. BMJ 2002;324:729-732 (23 March).
Evidence base of clinical diagnosis: clinical problem solving and diagnostic decision making: selective review of the cognitive literature
Arthur S Elstein, Alan Schwartz
This article reviews our current understanding of the cognitive processes involved in diagnostic reasoning in clinical medicine. It describes and analyses the psychological processes employed in identifying and solving diagnostic problems and reviews errors and pitfalls in diagnostic reasoning in the light of two particularly influential approaches: problem solving and decision making. Problem solving research was initially aimed at describing reasoning by expert physicians, in order to improve the instruction of medical students and house officers. Psychological decision research has been influenced from the start by statistical models of reasoning under uncertainty, and has concentrated on identifying departures from these standards.
Summary points
- Problem solving and decision making are two paradigms for psychological research on clinical reasoning, each with its own assumptions and methods
- The choice of strategy for diagnostic problem solving depends on the perceived difficulty of the case and on knowledge of content as well as strategy
- Final conclusions should depend both on prior belief and on the strength of the evidence
- Conclusions reached by Bayes's theorem and by clinical intuition may conflict
- Because of cognitive limitations, systematic biases and errors result from employing simpler rather than more complex cognitive strategies
- Evidence based medicine applies decision theory to clinical diagnosis
Problem solving
Diagnosis as selecting a hypothesis
The earliest psychological formulation viewed diagnostic reasoning as a process of testing hypotheses. Solutions to difficult diagnostic problems were found by generating a limited number of hypotheses early in the diagnostic process and using them to guide subsequent collection of data (1). Each hypothesis can be used to predict what additional findings ought to be present if it were true, and the diagnostic process is a guided search for these findings. Experienced physicians form hypotheses and their diagnostic plan rapidly, and the quality of their hypotheses is higher than that of novices. Novices struggle to develop a plan and some have difficulty moving beyond collection of data to considering possibilities.
It is possible to collect data thoroughly and yet ignore, misunderstand, or misinterpret some findings; conversely, a clinician may be too economical in collecting data and yet interpret accurately what is available. Accuracy and thoroughness are analytically separable.
Pattern recognition or categorisation
Expertise in problem solving varies greatly between individual clinicians and is highly dependent on the clinician's mastery of the particular domain. This finding challenges the hypothetico-deductive model of clinical reasoning, since both successful and unsuccessful diagnosticians use hypothesis testing. It appears that diagnostic accuracy does not depend as much on strategy as on mastery of content. Further, the clinical reasoning of experts in familiar situations frequently does not involve explicit testing of hypotheses. Their speed, efficiency, and accuracy suggest that they may not even use the same reasoning processes as novices. It is likely that experienced physicians use a hypothetico-deductive strategy only with difficult cases and that clinical reasoning is more a matter of pattern recognition or direct automatic retrieval. What are the patterns? What is retrieved? These questions signal a shift from the study of judgment to the study of the organisation and retrieval of memories.
Problem solving strategies
Viewing the process of diagnosis as assigning a case to a category brings some other issues into clearer view. How is a new case categorised? Two competing answers to this question have been put forward, and research evidence supports both. Category assignment can be based on matching the case to a specific instance ("instance based" or "exemplar based" recognition) or to a more abstract prototype. In the former, a new case is categorised by its resemblance to memories of instances previously seen. This model is supported by the fact that clinical diagnosis is strongly affected by context (for example, the location of a skin rash on the body) even when the context ought to be irrelevant.
The prototype model holds that clinical experience facilitates the construction of mental models, abstractions, or prototypes. Several characteristics of experts support this view; for instance, they can better identify the additional findings needed to complete a clinical picture and relate the findings to an overall concept of the case. These features suggest that better diagnosticians have constructed more diversified and abstract sets of semantic relations, a network of links between clinical features and diagnostic categories.
The controversy about the methods used in diagnostic reasoning can be resolved by recognising that clinicians approach problems flexibly; the method they select depends upon the perceived characteristics of the problem. Easy cases can be solved by pattern recognition: difficult cases need systematic generation and testing of hypotheses. Whether a diagnostic problem is easy or difficult is a function of the knowledge and experience of the clinician.
The strategies reviewed are neither proof against error nor always consistent with statistical rules of inference. Errors that can occur in difficult cases in internal medicine include failure to generate the correct hypothesis; misperception or misreading the evidence, especially visual cues; and misinterpretations of the evidence. Many diagnostic problems are so complex that the correct solution is not contained in the initial set of hypotheses. Restructuring and reformulating should occur as data are obtained and the clinical picture evolves. However, a clinician may quickly become psychologically committed to a particular hypothesis, making it more difficult to restructure the problem.
Diagnosis as opinion revision
From the point of view of decision theory, reaching a diagnosis means updating opinion with imperfect information (the clinical evidence). The standard rule for this task is Bayes's theorem. The pretest probability is either the known prevalence of the disease or the clinician's subjective impression of the probability of disease before new information is acquired. The post-test probability, the probability of disease given new information, is a function of two variables, pretest probability and the strength of the evidence, measured by a "likelihood ratio."
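To make the arithmetic concrete, here is a minimal sketch (in Python) of the odds form of Bayes's theorem just described: convert the pretest probability to odds, multiply by the likelihood ratio, and convert back. The pretest probability of 20% and the likelihood ratio of 8 are hypothetical numbers chosen for illustration, not figures from the article.

```python
# Minimal sketch of the odds form of Bayes's theorem.
# The pretest probability and likelihood ratio below are hypothetical.

def post_test_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Convert probability to odds, apply the likelihood ratio, convert back."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    post_test_odds = pretest_odds * likelihood_ratio
    return post_test_odds / (1.0 + post_test_odds)

# Hypothetical case: pretest probability 20%, positive finding with LR = 8.
print(round(post_test_probability(0.20, 8.0), 2))  # 0.67: the finding raises 20% to about 67%
```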
Bayes's theorem tells us how we should reason, but it does not claim to describe how opinions are revised. In our experience, clinicians trained in methods of evidence based medicine are more likely than untrained clinicians to use a Bayesian approach to interpreting findings. Nevertheless, probably only a minority of clinicians use it in daily practice and informal methods of opinion revision still predominate. Bayes's theorem directs attention to two major classes of errors in clinical reasoning: in the assessment of either pretest probability or the strength of the evidence. The psychological study of diagnostic reasoning from this viewpoint has focused on errors in both components, and on the simplifying rules or heuristics that replace more complex procedures. Consequently, this approach has become widely known as "heuristics and biases."
Errors in estimation of probability
Availability: People are apt to overestimate the frequency of vivid or easily recalled events and to underestimate the frequency of events that are either very ordinary or difficult to recall. Diseases or injuries that receive considerable media attention are often thought of as occurring more commonly than they actually do. This psychological principle is exemplified clinically in the overemphasis of rare conditions, because unusual cases are more memorable than routine problems.
Representativeness: Representativeness refers to estimating the probability of disease by judging how similar a case is to a diagnostic category or prototype. It can lead to overestimation of probability either by causing confusion of post-test probability with test sensitivity or by leading to neglect of base rates and implicitly considering all hypotheses equally likely. This is an error, because if a case resembles disease A and disease B equally, and A is much more common than B, then the case is more likely to be an instance of A. Representativeness is associated with the "conjunction fallacy": incorrectly concluding that the probability of a joint event (such as the combination of findings to form a typical clinical picture) is greater than the probability of any one of these events alone.
Heuristics and biases
Probability transformations
Decision theory assumes that probabilities are used in judgment just as they are stated, without being transformed from the ordinary probability scale. Prospect theory was formulated as a descriptive account of choices involving gambling on two outcomes, and cumulative prospect theory extends the theory to cases with multiple outcomes. Both prospect theory and cumulative prospect theory propose that, in decision making, small probabilities are overweighted and large probabilities underweighted, contrary to the assumption of standard decision theory. This "compression" of the probability scale explains why the difference between 99% and 100% is psychologically much greater than the difference between, say, 60% and 61%.
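As an illustration of this compression, the sketch below uses the one-parameter probability weighting function commonly associated with cumulative prospect theory; the functional form and the gamma value are a standard choice in that literature, assumed here for illustration rather than specified in the article.

```python
# Illustrative probability weighting function of the kind used in cumulative
# prospect theory. The one-parameter form and gamma = 0.61 are a common
# choice in that literature, assumed here for illustration.

def weight(p: float, gamma: float = 0.61) -> float:
    """Overweights small probabilities and underweights large ones."""
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

# "Compression" of the scale: the step from 99% to 100% looms much larger
# than the step from 60% to 61%.
print(round(weight(1.00) - weight(0.99), 3))  # roughly 0.09
print(round(weight(0.61) - weight(0.60), 3))  # roughly 0.006
```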
Support theory
Support theory proposes that the subjective probability of an event is inappropriately influenced by how detailed the description is. More explicit descriptions yield higher probability estimates than compact, condensed descriptions, even when the two refer to exactly the same events. Clinically, support theory predicts that a longer, more detailed case description will be assigned a higher subjective probability of the index disease than a brief abstract of the same case, even if they contain the same information about that disease. Thus, subjective assessments of events, while often necessary in clinical practice, can be affected by factors unrelated to true prevalence.
Errors in revision of probability
In clinical case discussions, data are presented sequentially, and diagnostic probabilities are not revised as much as is implied by Bayes's theorem; this phenomenon is called conservatism. One explanation is that diagnostic opinions are revised up or down from an initial anchor, which is either given in the problem or subjectively formed. Final opinions are sensitive to the starting point (the "anchor"), and the shift ("adjustment") from it is typically insufficient. Both biases will lead to collecting more information than is necessary to reach a desired level of diagnostic certainty.
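A hypothetical sketch of conservatism: a reviser who moves only part of the way from its anchor toward each Bayesian posterior ends up well short of the fully Bayesian opinion after seeing the same evidence. The halfway adjustment factor and the likelihood ratios are arbitrary illustrations, not a model taken from the literature reviewed here.

```python
# Hypothetical sketch of conservatism: an anchored reviser moves only halfway
# from its current opinion toward each Bayesian posterior. The 0.5 adjustment
# factor and the likelihood ratios are arbitrary illustrations.

def bayes_update(prob: float, likelihood_ratio: float) -> float:
    """One Bayesian revision in odds form."""
    odds = prob / (1.0 - prob) * likelihood_ratio
    return odds / (1.0 + odds)

pretest = 0.10
findings = [4.0, 4.0, 4.0]  # three hypothetical positive findings, each with LR = 4

bayesian = conservative = pretest
for lr in findings:
    bayesian = bayes_update(bayesian, lr)
    # the conservative reviser adjusts only part of the way toward its own Bayesian target
    conservative += 0.5 * (bayes_update(conservative, lr) - conservative)

print(round(bayesian, 2))      # 0.88: the fully Bayesian opinion
print(round(conservative, 2))  # 0.52: under-revision despite identical evidence
```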
It is difficult for everyday judgment to keep separate accounts of the probability of a disease and the benefits that accrue from detecting it. Probability revision errors that are systematically linked to the perceived cost of mistakes show the difficulties experienced in separating assessments of probability from values, as required by standard decision theory. There is a tendency to overestimate the probability of more serious but treatable diseases, because a clinician would hate to miss one.
Bayes's theorem implies that clinicians given identical information should reach the same diagnostic opinion, regardless of the order in which information is presented. However, final opinions are also affected by the order of presentation of information. Information presented later in a case is given more weight than information presented earlier.
Other errors identified in data interpretation include simplifying a diagnostic problem by interpreting findings as consistent with a single hypothesis, forgetting facts inconsistent with a favoured hypothesis, overemphasising positive findings, and discounting negative findings. From a Bayesian standpoint, these are all errors in assessing the diagnostic value of clinical evidence; that is, errors in implicit likelihood ratios.
Educational implications
Two recent innovations in medical education, problem based learning and evidence based medicine, are consistent with the educational implications of this research. Problem based learning can be understood as an effort to introduce the formulation and testing of clinical hypotheses into the preclinical curriculum. The theory of cognition and instruction underlying this reform is that since experienced physicians use this strategy with difficult problems, and since practically any clinical situation selected for instructional purposes will be difficult for students, it makes sense to provide opportunities for students to practise problem solving with cases graded in difficulty. The finding of case specificity showed the limits of teaching a general problem solving strategy. Expertise in problem solving can be separated from content analytically, but not in practice. This realisation shifted the emphasis towards helping students acquire a functional organisation of content with clinically usable schemas. This goal became the new rationale for problem based learning.
Evidence based medicine is the most recent, and by most standards the most successful, effort to date to apply statistical decision theory in clinical medicine. It teaches Bayes's theorem, and residents and medical students quickly learn how to interpret diagnostic studies and how to use a computer based nomogram to compute post-test probabilities and to understand the output.
Conclusion
We have selectively reviewed 30 years of psychological research on clinical diagnostic reasoning. The problem solving approach has focused on diagnosis as hypothesis testing, pattern matching, or categorisation. The errors in reasoning identified from this perspective include failure to generate the correct hypothesis; misperceiving or misreading the evidence, especially visual cues; and misinterpreting the evidence. The decision making approach views diagnosis as opinion revision with imperfect information. Heuristics and biases in estimation and revision of probability have been the subject of intense scrutiny within this research tradition. Both research paradigms understand judgment errors as a natural consequence of limitations in our cognitive capacities and of the human tendency to adopt short cuts in reasoning.
Both approaches have focused more on the mistakes made by both experts and novices than on what they get right, possibly leading to overestimation of the frequency of the mistakes catalogued in this article. The reason for this focus seems clear enough: from the standpoint of basic research, errors tell us a great deal about fundamental cognitive processes, just as optical illusions teach us about the functioning of the visual system. From the educational standpoint, clinical instruction and training should focus more on what needs improvement than on what learners do correctly; to improve performance requires identifying errors. But, in conclusion, we emphasise, firstly, that the prevalence of these errors has not been established; secondly, we believe that expert clinical reasoning is very likely to be right in the majority of cases; and, thirdly, despite the expansion of statistically grounded decision supports, expert judgment will still be needed to apply general principles to specific cases.
References
- Elstein AS, Shulman LS, Sprafka SA. Medical problem solving: an analysis of clinical reasoning. Cambridge, MA: Harvard University Press, 1978.
- Bordage G, Zacks R. The structure of medical knowledge in the memories of medical students and general practitioners: categories and prototypes. Med Educ 1984; 18: 406-416
- Schmidt HG, Norman GR, Boshuizen HPA. A cognitive perspective on medical expertise: theory and implications. Acad Med 1990; 65: 611-621
- Kahneman D, Slovic P, Tversky A, eds. Judgment under uncertainty: heuristics and biases. New York: Cambridge University Press, 1982.
- Sox Jr HC, Blatt MA, Higgins MC, Marton KI. Medical decision making. Stoneham, MA: Butterworths, 1988.
- Mellers BA, Schwartz A, Cooke ADJ. Judgment and decision making. Ann Rev Psychol 1998; 49: 447-477
- Chapman GB, Sonnenberg F, eds. Decision making in health care: theory, psychology, and applications. New York: Cambridge University Press, 2000.
- Hunink M, Glasziou P, Siegel J, Weeks J, Pliskin J, Elstein AS, et al. Decision making in health and medicine: integrating evidence and values. New York: Cambridge University Press, 2001.
- Patel VL, Groen G. Knowledge-based solution strategies in medical reasoning. Cogn Sci 1986; 10: 91-116
- Groen GJ, Patel VL. Medical problem-solving: some questionable assumptions. Med Educ 1985; 19: 95-100
- Brooks LR, Norman GR, Allen SW. Role of specific similarity in a medical diagnostic task. J Exp Psychol Gen 1991; 120: 278-287
- Norman GR, Coblentz CL, Brooks LR, Babcock CJ. Expertise in visual diagnosis: a review of the literature. Acad Med 1992; 66(suppl): S78-S83.
- Rosch E, Mervis CB. Family resemblances: studies in the internal structure of categories. Cogn Psychol 1975; 7: 573-605
- Lemieux M, Bordage G. Propositional versus structural semantic analyses of medical diagnostic thinking. Cogn Science 1992; 16: 185-204.
- Kassirer JP, Kopelman RI. Learning clinical reasoning. Baltimore: Williams and Wilkins, 1991.
- Bordage G. Why did I miss the diagnosis? Some cognitive explanations and educational implications. Acad Med 1999; 74(suppl): S138-S142
- Sackett DL, Haynes RB, Guyatt GH, Tugwell P. Clinical epidemiology: a basic science for clinical medicine. 2nd ed. Boston: Little, Brown, 1991.
- Sackett DL, Richardson WS, Rosenberg W, Haynes RB. Evidence-based medicine: how to practice and teach EBM. New York: Churchill Livingstone, 1997.
- Elstein AS. Heuristics and biases: selected errors in clinical reasoning. Acad Med 1999; 74: 791-794
- Tversky A, Kahneman D. The framing of decisions and the psychology of choice. Science 1982; 211: 453-458.
- Tversky A, Kahneman D. Advances in prospect theory: cumulative representation of uncertainty. J Risk Uncertain 1992; 5: 297-323
- Fischhoff B, Bostrom A, Quadrell M J. Risk perception and communication. Annu Rev Pub Health, 1993; 4: 183-203
- Redelmeier DA, Koehler DJ, Liberman V, Tversky A. Probability judgment in medicine: discounting unspecified probabilities. Med Decis Making 1995; 15: 227-230
- Wallsten TS. Physician and medical student bias in evaluating information. Med Decis Making 1981; 1: 145-164.
- Bergus GR, Chapman GB, Gjerde C, Elstein AS. Clinical reasoning about new symptoms in the face of pre-existing disease: sources of error and order effects. Fam Med 1995; 27: 314-320
- Barrows HS. Problem-based, self-directed learning. JAMA 1983; 250: 3077-3080
- Gruppen LD. Implications of cognitive research for ambulatory care education. Acad Med 1997; 72: 117-120
Five pitfalls in decisions about diagnosis and prescribing
Jill G Klein
BMJ 2005;330:781-783 (2 April), doi:10.1136/bmj.330.7494.781 (Education and debate)
Everyone makes mistakes. But our reliance on cognitive processes prone to bias makes treatment errors more likely than we think
Introduction
Psychologists have studied the cognitive processes involved in decision making extensively and have identified many factors that lead people astray. Because doctors' decisions have profound effects on their patients' health, these decisions should be of the best possible quality. All doctors should therefore be aware of possible pitfalls in medical decision making and take steps to avoid these unnecessary errors. In this article, I present five examples of cognitive biases that can affect medical decision making and offer suggestions for avoiding them.
Psychology of decision making
Doctors often have to make rapid decisions, either because of a medical emergency or because they need to see many patients in a limited time. Psychologists have shown that rapid decision making is aided by heuristics (strategies that provide shortcuts to quick decisions), but they have also noted that these heuristics frequently mislead us. Good decision making is further impeded by the fact that we often fall prey to various cognitive biases.
To make correct decisions in clinical practice, doctors must first gather information on which to base their judgments. According to decision making experts Russo and Schoemaker, the best way to do this is to ask the most appropriate questions, to interpret answers properly, and to decide when to quit searching further. Straightforward though this sounds, misleading heuristics and cognitive biases create pitfalls throughout this process.
Doctors may believe that, as highly trained professionals, they are immune to these pitfalls. Unfortunately, they are just as prone to errors in decision making as anyone else. Even worse, it is common for people who are particularly prone to cognitive biases to believe that they are good decision makers. As Shakespeare put it, "The fool doth think he is wise, but the wise man knows himself to be a fool."(1) Studies based on both simulated cases and questionnaires show that doctors are susceptible to decision making biases, including insensitivity to known probabilities, overconfidence (2), a failure to consider other options,(3) the attraction effect(4), and the availability heuristic(5). The good news is that training in these dangers can reduce the probability of flawed medical decision making(6).
Pitfall 1: the representativeness heuristic
The representativeness heuristic is the assumption that something that seems similar to other things in a certain category is itself a member of that category. Kahneman and Tversky showed this heuristic in a classic experiment in which they presented participants with descriptions of people who came from a fictitious group of 30 engineers and 70 lawyers (or vice versa). The participants then rated the probability that the person described was an engineer. Their judgments were much more affected by the extent to which the description corresponded to the stereotype of an engineer (for example, "Jack is conservative and careful") than by base rate information (only 30% were engineers), showing that representativeness had a greater effect on the judgments than did knowledge of the probabilities.
The representativeness heuristic has also been shown in nursing. Nurses were given two fictitious scenarios of patients with symptoms suggestive of either a heart attack or a stroke and asked to provide a diagnosis. The heart attack scenario sometimes included the additional information that the patient had recently been dismissed from his job, and the stroke scenario sometimes included the information that the patient's breath smelt of alcohol. The additional information had a highly significant effect on the diagnosis and made it less likely (consistent with the representativeness heuristic) that the nurses would attribute the symptoms to a serious physical cause. The effect of the additional information was similar for both qualified and student nurses, suggesting that training had little effect on the extent to which heuristics influenced diagnostic decisions.
How can we avoid being led astray by the representativeness heuristic? The key is to be aware not only of the likelihood of a particular event (such as a stroke) based on situational information (such as alcohol on the breath), but also how likely the event is in the absence of that information. In other words, it is important to be aware of base rates of the occurrence of a particular condition and to avoid giving too much weight to one piece of information. By the same token, if a disease is extremely rare, it may still be unlikely to be the correct diagnosis even if a patient has the signs and symptoms of that disease.
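A worked illustration of why base rates matter, using invented numbers: even when the clinical picture is fairly characteristic of a rare disease, the post-test probability of that disease can remain low. The prevalence, sensitivity, and false positive rate below are assumptions for the sketch, not figures from the article.

```python
# Invented numbers illustrating base rate neglect: a rare disease stays
# unlikely even when the clinical picture is fairly characteristic of it.
prevalence = 0.001                  # 1 in 1000 in this population (assumed)
p_picture_given_disease = 0.95      # how often the typical picture occurs with the disease (assumed)
p_picture_given_no_disease = 0.05   # how often the same picture occurs without it (assumed)

p_picture = (p_picture_given_disease * prevalence
             + p_picture_given_no_disease * (1.0 - prevalence))
p_disease_given_picture = p_picture_given_disease * prevalence / p_picture

print(round(p_disease_given_picture, 3))  # 0.019: under 2% despite a "typical" presentation
```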
Pitfall 2: the availability heuristic
When we use the availability heuristic, we place particular weight on examples of things that come to mind easily, perhaps because they are easily remembered or recently encountered. In general, this guides us in the right direction, as things that come to mind easily are likely to be common, but it may also mislead. The availability heuristic is apparent after a major train crash, when some people choose to travel by car instead of by rail, in the incorrect belief that it is safer (7).
In the medical setting, one study asked doctors to judge the probability that medical inpatients had bacteraemia. The probability was judged to be significantly higher when doctors had recent experience of caring for patients with bacteraemia. Another example is the documented tendency of doctors to overestimate the risk of addiction when prescribing opioid analgesics for pain relief and to undertreat severe pain as a result (8,11). The risk of addiction is actually low when patients receive opioids (particularly controlled release formulations) for pain, but opiate addiction tends to receive high publicity and so, through the availability heuristic, its likelihood may be overestimated.
To avoid falling prey to the availability heuristic, doctors should try to be aware of all the diverse factors that influence a decision or diagnosis. They should ask if their decision is influenced by any salient pieces of information and, if so, whether these pieces of information are truly representative or simply reflect recent or otherwise particularly memorable experiences. Knowing whether information is truly relevant, rather than simply easily available, is the key.
Rules for good decision making
Pitfall 3: overconfidence
To use our knowledge effectively, we must be aware of its limitations. Unfortunately, most of us are poor at assessing the gaps in our knowledge, tending to overestimate both how much we know and how reliably we know it (see bmj.com for an example). Research has shown that almost all of us are more confident about our judgments than we should be. Since medical diagnoses typically involve some uncertainty, we know that almost all doctors make more mistakes in diagnosis than they think they do. Overconfidence also comes into play when doctors rate their clinical skills. Larue et al found that both primary care doctors and medical oncologists rated their ability to manage pain highly, even though they actually had serious shortcomings in their attitudes toward and knowledge of pain control.
The dangers of overconfidence are obvious. Doctors who overestimate how well they manage a condition may continue to prescribe suboptimal treatment, unaware that their management could be improved. Also, overconfidence in diagnostic abilities may result in too hasty a diagnosis when further tests are needed. It is critical, therefore, to be aware of the limits of your knowledge and to ensure that your knowledge is kept up to date. Awareness of your shortcomings makes it more likely that you will gather further information. It can also be helpful to make a habit of seeking the opinions of colleagues.
Pitfall 4: confirmatory bias
Confirmatory bias is the tendency to look for, notice, and remember information that fits with our pre-existing expectations. Similarly, information that contradicts those expectations may be ignored or dismissed as unimportant. Confirmatory bias has been shown to affect peer-reviewers' assessments of manuscripts. Mahoney sent fictitious manuscripts with identical methods but different results to reviewers. Reviewers gave significantly better ratings to the methods section when the results supported their pre-existing beliefs.
Once again, doctors are not immune to confirmatory bias. In taking medical histories, doctors often ask questions that solicit information confirming early judgments. Even worse, they may stop asking questions because they reach an early conclusion, thus failing to unearth key data. More generally, the interpretation of information obtained towards the end of a medical work-up might be biased by earlier judgments.
The confirmatory bias can also lead to treatment errors. It is natural to expect that the drug you are about to administer is the correct drug. Apparently obvious information that you have the wrong drug (for example, a label marked ephedrine instead of the expected epinephrine) may be ignored or misinterpreted to confirm your expectation that the drug is correct.
Summary points
- Psychologists have extensively studied the cognitive processes involved in making decisions
- Heuristics and biases that lead to poor decisions are widespread, even among doctors
- Awareness of the cognitive processes used to make decisions can reduce the likelihood of poor decisions
Although the danger of confirmatory bias is greatest when making decisions about diagnosis, ongoing treatment decisions are also affected. It is thus critical to remain constantly vigilant for any information that may contradict your existing diagnosis, and to give any such information careful consideration, rather than dismissing it as irrelevant. It is also a good idea to try to think of specific reasons why your current theory might be wrong and to ask questions that could potentially disprove your hypothesis. Always be aware of alternative hypotheses and ask yourself whether they may be better than your current ideas.
Pitfall 5: illusory correlation
Illusory correlation is the tendency to perceive two events as causally related, when in fact the connection between them is coincidental or even non-existent. (It has some overlap with confirmatory bias when causes that fit with pre-existing ideas are noticed.) Homoeopathy provides an excellent example of illusory correlation. Homoeopaths will often notice when patients improve after being treated with a homoeopathic remedy and claim this as evidence that homoeopathic treatment works. However, no convincing evidence exists that homoeopathic treatments are effective (12,13). Illusory correlation is probably at work: homoeopaths are likely to remember occasions when their patients improve after treatment.
Falling prey to illusory correlation can reinforce incorrect beliefs, which in turn can lead to the persistence of suboptimal practices. Ask yourself whether any instances do not fit with your assumed correlations. A straightforward way to do this is simply to keep written records of events that you believe to be correlated, making sure that all relevant instances are recorded.
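A minimal sketch of that record-keeping advice: tally all four cells of a two-by-two table rather than only the memorable "treated and improved" cases. The counts below are invented; with identical improvement rates in both groups there is no real association, however many improving treated patients one happens to remember.

```python
# Sketch of the record-keeping advice: tally all four cells of a 2 x 2 table,
# not just the memorable "treated and improved" cases. Counts are invented.
records = {
    ("treated", "improved"): 30,
    ("treated", "not improved"): 20,
    ("not treated", "improved"): 60,
    ("not treated", "not improved"): 40,
}

def improvement_rate(group: str) -> float:
    improved = records[(group, "improved")]
    total = improved + records[(group, "not improved")]
    return improved / total

print(improvement_rate("treated"))      # 0.6
print(improvement_rate("not treated"))  # 0.6: the same, so no real association
```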
Conclusions
Doctors often have to make decisions quickly. However, the greatest obstacle to making correct decisions is seldom insufficient time; more often it is the distortions and biases in the way information is gathered and assimilated. Being aware that decisions can be biased is an important first step in overcoming those biases. In real life, of course, biases may not necessarily fit neatly into any one of the categories I described above but may result from a complex interaction of different factors. This increases the potential for poor decisions still further. The good news is that it is possible to train yourself to be vigilant for these errors and to improve decision making as a result.
References
- Kahneman D, Slovic P, Tversky A, ed. Judgement under uncertainty: heuristics and biases. Cambridge: Cambridge University Press; 1982.
- Russo JE, Schoemaker PJH. Winning decisions: how to make the right decision the first time. London: Piatkus, 2002.
- Bornstein BH, Emler AC. Rationality in medical decision making: a review of the literature on doctors' decision-making biases. J Eval Clin Pract 2001;7: 97-107.
- McDonald CJ. Medical heuristics: the silent adjudicators of clinical practice. Ann Intern Med 1996;124: 56-62.
- Dawson NV. Physician judgment in clinical settings: methodological influences and cognitive performance. Clin Chem 1993;39: 1468-78 (discussion 1478-80).
- Hershberger PJ, Part HM, Markert RJ, Cohen SM, Finger WW. Development of a test of cognitive bias in medical decision making. Acad Med 1994;69: 839-42
- Borak J, Veilleux S. Errors of intuitive logic among physicians. Soc Sci Med 1982;16: 1939-47.
- Kahneman D, Tversky A. On the psychology of prediction. Psychol Rev 1973;80: 237-51.
- Brannon LA, Carson KL. The representativeness heuristic: influence on nurses' decision making. Appl Nurs Res 2003;16: 201-4.
- Poses RM, Anthony M. Availability, wishful thinking, and physicians' diagnostic judgments for patients with suspected bacteremia. Med Decis Making 1991;11: 159-68.
- Weinstein SM, Laux LF, Thornby JI, Lorimor RJ, Hill CS Jr, Thorpe DM, et al. Physicians' attitudes toward pain and the use of opioid analgesics: results of a survey from the Texas Cancer Pain Initiative. South Med J 2000;93: 479-87.
- Morgan JP. American opiophobia: customary underutilization of opioid analgesics. Adv Alcohol Subst Abuse 1985;5: 163-73.
- Potter M, Schafer S, Gonzalez-Mendez E, Gjeltema K, Lopez A, Wu J, et al. Opioids for chronic nonmalignant pain. Attitudes and practices of primary care physicians in the UCSF/Stanford Collaborative Research Network. University of California, San Francisco. J Fam Pract 2001;50: 145-51.
- McCarberg BH, Barkin RL. Long-acting opioids for chronic pain: pharmacotherapeutic opportunities to enhance compliance, quality of life, and analgesia. Am J Ther 2001;8: 181-6.
- Brookoff D. Abuse potential of various opioid medications. J Gen Intern Med 1993;8: 688-90.
- Larue F, Colleau SM, Fontaine A, Brasseur L. Oncologists and primary care physicians' attitudes toward pain control and morphine prescribing in France. Cancer 1995;76: 2375-82.
- Koriat A, Lichtenstein S, Fischhoff B. Reasons for confidence. J Exp Psychol Hum Learn Mem 1980;6: 107-18.
- Mahoney MJ. Publication prejudices: an experimental study of confirmatory bias in the peer review system. Cognit Ther Res 1977;1: 161-75.
- Wallsten TS. Physician and medical student bias in evaluating diagnostic information. Med Decis Making 1981;1: 145-64.
- Nott MR. Misidentification, in-filling and confirmation bias. Anaesthesia 2001;56: 917.
Read the Thompson references below, which illustrate some simple techniques for minimizing the negative influence of heuristics:
Thompson C. Clinical experience as evidence in evidence-based practice. Journal of Advanced Nursing 2003;43:230-237, and Thompson C, Dowding D. Clinical Decision Making and Judgement in Nursing. Churchill Livingstone, 2002. ISBN 0 443 07076 8.