REFLECTION – TEACHING AND LEARNING IN CLINIC: THE STORIES CLINICIANS TELL

Stefan H. Krieger[footnoteRef:1] [1: *Professor of Law and Director Emeritus of Clinical Programs, Maurice A. Deane School of Law at Hofstra University. I wish to thank Theo Liebmann, Serge Martinez, and Rick Wilson for their guidance and assistance. I especially express my appreciation to Maxim Tomoszek and Palacký University for their invitation to speak at their conference, Complex Law Teaching: Knowledge, Skills and Values.]

Hofstra University, USA

Introduction

Unbelievably, it has been more than 15 years since I last visited Palacký during the inaugural year of its Clinic. So, in preparation for today’s talk, I thought I should go back to my files to refresh my memory of events a decade-and-a-half ago. As I reviewed those documents, it struck me how the events of that time were much more complicated than I initially remembered. In fact, I saw how my retelling of the events reflected in those papers could be constructed -- with appropriate spin -- into different stories, some of which could be dramatically divergent from others. When we teach our students legal storytelling, we show them how lawyers can filter out certain details in their cases and focus on others to craft a compelling narrative for one party or the other. Using this approach in describing my Palacký Clinic file, I could tinker with the evidence in it and craft multiple narratives. So to begin my presentation at this conference, I thought that instead of addressing complex law teaching in the abstract, I would start more concretely -- tackling the subject by sharing with you two very divergent tales that can be woven about the establishment of the Palacký Clinic. Perhaps consideration of these stories will help to give us some insights into the problems faced in addressing the difficult issues raised by complex law teaching.
So let us begin this morning’s story hour with my first narrative: I entitle it, Clinical Education Comes to Central Europe: Hofstra’s and Palacký’s Partnership in Training Law Students in the Practice of Public Interest Law. Our narrative begins after the 1989 Velvet Revolution when Palacký’s Rector Josef Jarab sought to establish a law school at the University to train a new generation of lawyers for practice in a democratic legal system dedicated to the rule of law. Soon thereafter, Rector Jarab fortuitously met Hofstra Professor Richard Neumann at a conference in New York, and a fruitful relationship was created between Hofstra School of Law and Palacký. Five years later, the two schools began to explore the development of a clinical program at Palacký. Seeking funding for the new project, the schools jointly applied for a Ford Foundation grant to establish a live-client Housing Rights Clinic, patterned after my own Clinic at Hofstra. In the United States in the 1960s, Ford had been instrumental in spearheading efforts to integrate clinical education into mainstream law school curricula. Now, thirty years later, it was hoped that Ford would assist the development of clinical education in law schools in the newly democratized Eastern Bloc. With the overall goal of helping democracy succeed in the Czech Republic, the grant application listed four goals: i) to demonstrate to citizens on the ground in the Czech Republic that concrete actions using the legal mechanisms of a democratic state could lead to the rule of law; ii) to train students in the importance and skills of practicing law in the public interest; iii) to create a model law school clinic, which other law schools in the region could emulate to expand the training of public interest lawyers; and iv) to heighten the legal consciousness of government officials, present and future. Ford approved this application, and in early 1996, this ambitious project began.
Pursuant to the grant, Palacký hired an experienced Olomouc commercial law attorney as a clinical teacher to establish the Clinic. Early in 1996, she visited my Clinic to learn methods of clinical teaching and observe my students in action handling cases -- in the law office and courts. Returning to Olomouc, she established the Clinic in fall, 1996. Palacký created a law office equipped with its own computer and Internet access. Ten students enrolled in the Clinic and enthusiastically began to work on a variety of cases -- an eviction case by a landlord who wanted to have his daughter live in the flat; a marital dispute over the right to a flat; a town's attempts to evict a tenant. For these cases, students interviewed clients, drafted memos, and wrote letters to clients and adversaries. Based on her experiences at Hofstra, the clinician developed creative seminar classes focused on ethical issues and client relationships. In these seminars, students also had the opportunity to engage in simulated arguments of court cases. In fall, 1996 and then spring, 1997, I visited the Clinic. I was deeply impressed with the students’ commitment to their clients and their command of the cases. The clinician was a natural teacher and had a warm relationship with her students. I had some good meetings with the Dean and other law school administrators, imbibed slivovitz (your wonderful plum brandy) with them, and shared my thoughts about the program. While Ford decided not to renew funding for the program, the work of that year laid the foundation for the vibrant clinical program now existing at Palacký under the wonderful leadership of Maxim Tomoszek.

Now let me tell you my second yarn: The Bumpy Road: Unrealistic Expectations Confront a Newly Democratic Legal System. This story begins much the same as our first tale -- the development of the relationship between Hofstra and Palacký; the application to Ford and its approval; and Palacký's hiring of a clinician.
At this point, however, the two narratives diverge quite dramatically. The grant was approved, and in early 1996, the Palacký clinician visited my Clinic at Hofstra. She enjoyed the seminar component of the class and supervisory sessions with the students. But she was very frustrated with the court appearances. My students' cases were in several courts -- lower state trial courts and federal court. But every time the students prepared for a hearing, we appeared in court, waited for hours, and then saw the case adjourned to another date. As her stay in the United States concluded, she stopped going to court hearings, questioning the benefit of all that student preparation when the only payoff was delay. In fall, 1996, the clinician started the Palacký clinic with ten students, and eventually each two-student team had one client to represent. When I visited Olomouc in December, 1996, I was very impressed with the students' commitment to their clients and the clinician's talents as a teacher. I did, however, express two concerns: (1) the law school had made no attempt to obtain approval for student practice in the courts; and (2) no referral network had been created for developing a stream of low-income clients for the Clinic. As to the first issue, the law school responded that the Czech Bar and Courts were strongly resistant to any student practice. In regard to the second issue, the clinician said she was attempting to get more cases referred. All in all, though, as I returned home, I felt fairly positive about the prospects for the Clinic. But when I returned in May, 1997, I got a different feeling. Very few new cases had been taken. In all my meetings with students, they were still enthusiastic about the opportunity, in the words of one student, "to feel like a lawyer," but they were very frustrated with the fact that the Clinic had so few cases and that they could not argue in court.
Many of the students and the clinician emphatically told me they wanted to be able to accept cases from any possible client, not just poor people. While my meetings with the administration five months earlier had been upbeat, this time around, the mood was cordial but cool. While I continued to applaud the clinician’s teaching methods and seminar classes, I expressed dismay at the lack of movement on the issues of student practice and the development of referral resources for needy clients. The administration was adamant that both the Czech Bar and judiciary were dead set against student practice and that the limitation on the types of clients which could be accepted by the Clinic stood in the way of the development of a vibrant clinical program. I kept referring to my experiences in America. And the response could be encapsulated by the comment of one of the participants at the meeting, “In the Czech Republic, things must be gradual.” I reported to Ford, and the foundation decided to terminate the grant.

**************

I am not relating these two stories to you because I believe one is more truthful than the other. Each of them is based on facts reflected in my notes from 15 years ago. And certainly, as a guest of your wonderful law school, my intent in telling these two tales is not to cast aspersions on Palacký or its administration or faculty, nor, for that matter, on Hofstra, Ford, or myself. All of the participants acted in good faith, with the best of intentions, and with great passion. There is no doubt that in that one year, the students, faculty, and administration of Palacký took great strides in starting the process of the development of experiential education in the Czech Republic and that those efforts sowed the seeds for the strong clinical program that now exists fifteen years later. But, at least in my opinion, primarily retelling the first story and disregarding the second has profound consequences.
Quite honestly, I love to tell people the first tale. It makes me feel quite successful about my work with your law school. But, if we want to be honest about what actually happened, we cannot ignore the second tale. In fact, that story has some significant takeaways from which we can learn a great deal: about the tensions, for example, between the cultural perspectives of American legal educators and their Czech counterparts; about both the benefits and limitations of outside influence in the development of law schools in newly democratized countries; and about the different roles of skills training and social justice in experiential education. As heartwarming as it may be to sit over a couple of drinks and share the first story with you, nuance is lost. It is only through the messy details of the second tale that we are able to gain some insights into how we can improve complex legal education.

Use of Persuasion and Inquiring Modes

Unfortunately, too much of the current literature on experiential legal education has the attributes of my first tale -- moving essays that are not necessarily grounded in a critical examination of the messy details. A good example of this storytelling approach to issues in experiential legal education is the treatment of an empirical study I conducted a few years ago on the effectiveness of clinical legal education.[footnoteRef:2] In that study, I compared the legal reasoning strategies used in solving a legal problem by different groups of students at the University of Chicago Law School: second-year students who had not enrolled in a clinic; third-year students who had not taken a clinic; and third-year students who had been enrolled in a clinical program. When it came to the issue of the effectiveness of clinical training, my findings were mixed.
I found that – at least in terms of these particular groups -- those subjects who had enrolled in a clinic outpaced their nonclinical counterparts in identifying client interests and the next steps to take in the case. But I also discovered that participation in a clinic may not lead to better proficiency in fact analysis or identification of relevant rules. [2: Stefan H. Krieger, The Effect of Clinical Education on Law Student Reasoning: An Empirical Study, 35 Wm. Mitchell L. Rev. 359 (2008).] Surprisingly, despite these mixed findings, several of the subsequent articles which cite this study disregard or downplay the findings that are negative about clinical education and suggest that my study shows overall benefits of clinical experience. One article, for example, argues, “The benefits of moving from the traditional passivity of the Socratic dialogue to adding experience to doctrinal courses via simulation exercises are myriad. In fact, this hypothesis has been tested empirically.”[footnoteRef:3] The authors then cite my article with a parenthetical stating that the study “conclude[s] that students who participated in experiential education activities in law school were better able to identify some relevant facts in a legal fact pattern, identify legal rules relevant to a client's problem, identify client interests, and consider next steps in a client representation.” Apparently, because it would undercut their overall argument, the authors say nothing whatsoever about the more negative findings. In another article touting the benefits of experiential legal education, the authors cite the study to recite the same list of the purported superior performances by clinical subjects but then hide the mixed results in a footnote.[footnoteRef:4] [3: Lisa T. McElroy & Christine N. Coughlin, Failure is Not an Option: An Essay on What Legal Educators Can Learn from NASA’s Signature Pedagogies to Improve Student Outcome, 75 J. Air L. & Com. 503, 509 n.25 (2010). 
] [4: Christine N. Coughlin et al., See One, Do One, Teach One: Dissecting the Use of Medical Education’s Signature Pedagogy in the Law School Curriculum, 26 Ga. St. U.L. Rev. 361, 397-98 (2010).] Another example in this storytelling genre about experiential legal education is a significant -- and much heralded -- recent report on American legal education by a major clinical educators association, Best Practices for Legal Education.[footnoteRef:5] In it, the authors review one-sided polemics in support of experiential education and then conclude the tale with the bald assertion: [5: Roy Stuckey et al., Best Practices For Legal Education: A Vision and a Roadmap (2007).]

We encourage law schools to follow the lead of other professional schools and transform their programs of instruction so that the entire educational experience is focused on providing opportunities to practice solving problems under supervision in an academic environment. This is the most effective and efficient way to develop professional competence.

The authors of this report apparently saw no need to consider research contrary to their preconceived conclusion or even to tone down the language to recognize that a valid counter-narrative might exist. The type of storytelling reflected in these articles and Best Practices, by its very nature, falls into the category of what Robert Condlin calls persuasion mode. Condlin posits that two types of reasoning are at the core of an attorney’s work: persuasion mode and learning mode. In persuasion mode, the lawyer tries to manipulate a situation to achieve a particular goal. A lawyer in persuasion mode tends to act more or less based on strategic motives. She minimizes any self-analysis, tentativeness, doubt, or perplexity over the unknowable and gray areas of her cases. In learning mode -- or what I will call inquiring mode -- the lawyer’s reasoning is open-ended. She follows her curiosity and interest in exploring things regardless of consequences.
A lawyer in inquiring mode is not trying to accomplish anything except to learn more about a subject.[footnoteRef:6] [6: See generally Robert Condlin, The Moral Failure of Clinical Education in Lawyers’ Roles and Lawyers’ Ethics 318 (D. Luban ed., 1983); Robert Condlin, Socrates’ New Clothes: Substituting Persuasion for Learning in Clinical Practice Instruction, 40 Md. L. Rev. 223 (1981).] Obviously, effective attorneys need to function well in both modes of reasoning. In interviewing, counseling, and mediation, for example, use of the inquiring mode may be crucial to understanding the complete picture of what has occurred in a dispute or what a party seeks to obtain in a particular transaction. But in other arenas -- such as trial work or adversarial negotiation -- persuasion mode is usually the most effective means of attaining a client’s goals. Applying Condlin’s model to our own work on complex legal education, it is clear that storytelling in the persuasion mode can be very beneficial to those of us who are committed to experiential legal education. Our field is a relatively new movement. From its inception, some traditional legal academics have been quite hostile to the notion that practice-based learning should have any role in law schools. In America -- and elsewhere in the world -- many clinical and skills teachers have been deprived of compensation comparable to that of other law professors, have been denied full participation in the governance of their institutions, and remain in second-class status. Given this context, it is quite natural that teachers in the field of experiential legal education have been prone to use persuasion mode. We have a strategic goal: to persuade our colleagues and institutions of the value of our pedagogy and to become full-fledged members of the legal academic community.
And to achieve that goal, we sometimes filter out damaging facts and gloss over doubts and perplexities in regard to the unknowable and gray areas of the case we are making. But I believe that for a movement that is now half a century old, it is high time that we start to refocus our energies from persuasion mode to inquiring mode. Obviously, we should not sell ourselves short, especially in situations in which others in the legal academy want to shut the door on our status and pedagogy. But, at least in my opinion, we need to step back from our storytelling. In inquiring mode, we need to critically examine our teaching methods, the relative roles of experiential and other forms of legal instruction, and the impact of our teaching on our graduates years into practice. We should explore these issues regardless of the consequences -- even if, for example, we find that other types of pedagogy are beneficial throughout legal training or that some of our methods and approaches are simply counterproductive. I believe it is time for us to abandon our shibboleths and familiar narratives. We need to engage in the type of rigorous inquiry that will help us to improve not only our contributions to context-based learning but also legal education as a whole. Four hundred years ago, the influential reformer Jan Amos Komensky traveled these parts, challenging prevailing educational theories and advocating the use of new teaching methods. In Komensky’s spirit, I hope that we can also challenge prevailing theories in our field in order to develop more effective methods for educating our students. So, for the remainder of this presentation, I would like to offer a critique of some of the trendy stories in experiential legal education these days and raise some questions about their validity. Before I begin, however, I want to make clear: I am not calling for a total rejection of these narratives.
Just as there is some truth to my first story today about the Palacký Clinic, there is some validity in the stories I will be discussing. They have some important points to make. My point, however, is that the evidence supporting these narratives is not as clear-cut as some of those storytellers would like to believe. I submit that we should be willing and eager to explore all the evidence -- both pro and con -- in regard to the tales now being told in our field.

Tale 1: The Practical Apprenticeship Model: the Vehicle for 21st Century Legal Education

The first story I would like to address can be entitled, The Practical Apprenticeship Model: the Vehicle for 21st Century Legal Education.[footnoteRef:7] [7: For a full discussion of the issues raised in this section, see Stefan H. Krieger & Serge Martinez, Performance Isn’t Everything: The Importance of Conceptual Competence in Outcome Assessment of Experiential Learning, 19 Clinical L. Rev. 251 (2012).] This story has been at the forefront of recent efforts for changes in legal education by many in the American experiential legal education community. The source of this narrative was the 2007 Carnegie Report on Legal Education -- Educating Lawyers: Preparation for the Profession of Law.[footnoteRef:8] It has generated numerous favorable articles; an array of conferences on how best to implement its recommendations; and the creation of law school committees throughout the United States on Carnegie reform. Unfortunately, however, very few scholars have sat back and given the Report the critical analysis it requires. [8: WILLIAM M. SULLIVAN et al., EDUCATING LAWYERS: PREPARATION FOR THE PROFESSION OF LAW (2007).] One of the primary recommendations of the Report is increased emphasis on what it calls the “practical apprenticeship.” Attempting to adapt the traditional legal apprenticeship model to present-day legal education, the Report calls for studying the performance of experts to distill and simplify their techniques.
These are the expert’s toolkit. Then, based on those techniques, we should teach students “scaffolds” for practice: “the rules, protocols, and organizing metaphors for approaching situations or problems.” A scaffold could be, for example, a particular interviewing procedure, a protocol for problem solving, a technique for negotiating a deal, or a method for drafting a contract. In the Carnegie model, increased competence comes as a student gradually accumulates a “toolkit of well-founded procedures” in different areas of legal practice. Within this performance framework, “the prime learning task of the novice in law is to achieve a basic acquaintance with the common techniques of the lawyer’s craft.” According to Carnegie, then, the primary focus of experiential education should be on performance: repeated experiences in which students use expert techniques. In this approach, student reasoning takes a backseat to learning these techniques. In fact, Carnegie argues that reasoning and attention to context by novice learners is unhelpful; instead, it posits that students should be taught to “recognize certain well-defined elements of the situation and apply precise and formal rules to these elements, regardless of what else is happening.” Carnegie’s story may at first glance seem very enticing to those of us who are committed to context-based learning. In fact, it has been widely acclaimed by many in the American clinical community. The crucial problem underlying Carnegie’s focus on performance, however, is that it does not rest on a sound theoretical or empirical foundation.

Carnegie’s Reliance on the Dreyfus Theory

Carnegie’s theory of expert training is based entirely on the work of two brothers, Hubert and Stuart Dreyfus, educated respectively as a philosopher and an engineer. The Dreyfus brothers posit that expertise is simply a matter of pattern recognition.
They argue, for example, that we are able to ride bikes because of prior experiences operating them, not because we are engaging in some kind of cognitive process. As they observe, “No detached choice or deliberation occurs. It just happens, apparently because the proficient performer has experienced similar situations in the past and memories of them trigger plans similar to those that worked in the past and anticipation of events similar to those that occurred.”[footnoteRef:9] They argue, “Normally, experts do not solve problems and do not make decisions; they do what normally works.” [9: Hubert L. Dreyfus & Stuart E. Dreyfus, Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer 28 (1986).] With this theoretical outlook, the Dreyfuses assert that acquisition of this kind of expert intuition requires the novice to learn protocols and strategies for identifying the facts and features of a particular situation and performing in response to these facts. They assert that novices progress through different stages of accumulated experience. These stages of development, the Dreyfuses claim, reflect an evolution from the abstract toward the concrete, “from … following abstract rules, to involved skilled behavior based on accumulation of concrete experiences and the unconscious recognition of new situations similar to whole remembered ones.” In the Dreyfuses’ own words, as students become experts, they act “arationally.” In other words, expert performance is essentially mindless. Accordingly, under the Dreyfus approach, expertise is not reflected as much in cognitive competencies as in mindless performances responding to perceived situations. Consistent with Dreyfus, Carnegie envisions that students should first learn rules, strategies, methods, and protocols to enable them to recognize patterns and perform in particular situations. 
Following Dreyfus, the Report contends that after numerous experiences, students progress through stages and acquire expertise. As they develop expertise, they stop relying on abstract rules and instead respond unconsciously to new situations by perceiving similarities to whole, remembered past experiences. From this perspective, a student’s action, rather than her reasoning process, has paramount importance.

1. Cognitive science critique of Dreyfus

Cognitive science research challenges the Dreyfus expertise theory and suggests a much different approach to training for expertise. Most cognitive scientists do recognize the role that pattern recognition plays in expert performance. Nonetheless, they reject the notion that intuitive pattern recognition alone is determinative of expert performance. In fact, the Dreyfus theory conflicts with a number of empirical findings on expert decision making. First, contrary to the Dreyfus theory, studies show that in many domains requiring complex problem solving, expertise does not produce a decrease in abstract thought and a concurrent increase in concrete thinking. Indeed, in these domains, experts have been found to analyze problems at a deeper, more abstract level than nonexperts.[footnoteRef:10] [10: See Krieger, supra note 6, at 265.] Second, the existence of progressive stages in expert development is not supported by the evidence. The Dreyfus theory suggests that the more experience individuals have in a particular area, the more intuition they acquire, and the more expertise they gain. Studies have shown, however, that those individuals with extensive experience in a field do not necessarily perform better than people with less training. In fact, the number of years of experience in a field is a poor predictor of attained performance.[footnoteRef:11] [11: Id.] Many of us know lawyers who have practiced for decades who simply have not developed expertise in a field.
Finally, neuroscience evidence does not support the notion of similarity recognition in complex problem solving. This research demonstrates that complex decision making entails a rich connection of different neural subsystems (explicit and implicit) and an interplay between them.[footnoteRef:12] [12: Id. at 266.] In contrast to the Dreyfus pattern recognition theory, cognitive scientists contend that, in fact, experts do use particular cognitive processes in their decision making. These processes are not always conscious and deliberate. Rather, they reflect the interaction between implicit and explicit knowledge. Complex decision making entails both unconscious abstract representations that experts have acquired through experience and explicit representations -- their knowledge of the domain -- which are conscious and can be verbalized. Especially in domains like law and medicine, in which complex knowledge systems and symbolic representations play an integral role, more is involved in making decisions than mere pattern recognition of previous similar situations.[footnoteRef:13] [13: Id.] For example, although a physician may not be aware of all the cognitive processes involved, when she evaluates a patient, she is conscious of the patient’s characterization of his symptoms, her own diagnosis of the problem, and her requests for tests. By overlooking the complex and rich interaction between implicit and explicit knowledge, the Dreyfus model fails to explain skills that are not just routines but instead involve complex tasks, such as finding solutions to problems. Unlike driving a car or riding a bike, handling a legal problem in practice requires more than intuition based on pattern recognition. Lawyers must juggle, for example, the substantive legal doctrine, the procedural context, the particular facts of the situation, the client’s needs, and the cultural and social context. 
The Dreyfus theory simply does not address the kinds of complex decision making required in most lawyering. Lawyers make decisions at a much more complex conceptual level than just recognizing patterns, and real expertise is associated with this higher level. Several researchers in the field of medical education have concluded that the Dreyfus model is just too simple to account for the complex pattern of phenomena linked to expert medical intuition.[footnoteRef:14] So too should we reach the same conclusion in regard to the practice of law. These insights from cognitive science suggest that expert lawyers need more than a toolkit of simple rules, protocols, and strategies to facilitate pattern recognition. They need to acquire cognitive processes that help them organize and juggle the abundance of information pertinent to a case. [14: Id. at 267.] For client interviewing, for example, students need to learn more than general scaffolds for developing rapport, gathering information, and probing memory. They need to develop the abilities to identify basic doctrinal issues raised by a client’s problem; to consider the interplay between different procedural, substantive, and ethical issues raised in the interview; and to understand the difference between routine issues in a particular area and more difficult ones that require consultation with more experienced practitioners. Yet the Carnegie story largely ignores these and other essential cognitive processes. As a result, experiential education based on that narrative may not provide students with the rich experiences necessary to develop as true experts in practice.

Tale 2: The Enchanted Standardized Client

The second story, which has gained quite a following in experiential education circles, is what I call The Enchanted Standardized Client.
According to this narrative, experiential educators should use standardized client simulations to evaluate lawyer performance.[footnoteRef:15] [15: For a full discussion of the issues raised in this section, see Krieger, supra note 6.] The American Best Practices for Legal Education report, for example, glowingly tells the story of an experiment using standardized clients at Glasgow Graduate School of Law in 2006 to assess student communication skills in interviewing. In this experiment, instructors used eight explicit criteria to evaluate student proficiency in interviewing:

i. Were the greeting and introduction appropriate?
ii. Did the lawyer listen to the client?
iii. Did the lawyer use a helpful approach to questioning?
iv. Did the lawyer accurately summarize the client’s situation?
v. Did the client understand what the lawyer was saying?
vi. Did the client feel comfortable with the lawyer?
vii. Would the client feel comfortable having the lawyer deal with her situation?
viii. Would the client come back to this lawyer if she had a new legal problem?

For each of these eight elements, proficiency was assessed on a highly specific scale and given a score between 1 and 5. Although such an approach is touted as a straightforward method for assessing student learning, the standardized client story, like the Carnegie tale, has very little empirical basis. Advocates for this approach point to the use of standardized patients in medical education but ignore the fact that very little study of that method has been conducted in the health sciences field.
One major study in the medical field, however, suggests that in assessing clinical ability, reasoning ability may be at least as important as, if not more important than, performance.[footnoteRef:16] In this study, researchers examined the relationship between patient complaints to medical regulatory authorities about the nature of their physician’s care and the physician’s previous performance on the Canadian medical licensing exam. The research sample included all physicians -- over 3,000 doctors -- who took the licensing exam between 1993 and 1996 and were licensed to practice in Ontario and/or Quebec. [16: Robyn Tamblyn, Physician Scores on a National Clinical Skills Examination as Predictors of Complaints to Medical Regulatory Authorities, 298 J. AM. MED. ASS’N 993 (2007).] Researchers then compiled data on all complaints filed with provincial regulatory authorities between 1993 and 2005 which were investigated and found to be valid. For each physician, they determined complaint rates -- derived by dividing the number of valid complaints by years of practice time -- for two different types of complaints: those concerning communication issues and those concerning quality of care. Finally, the researchers compared the two complaint rates with each physician’s performance on the various components of the licensing exam. One part of the licensing exam assessed medical knowledge using approximately 450 multiple-choice questions about different areas of medicine. A second component assessed clinical decision-making skills using write-in or menu-selection response formats on 36 to 40 clinical problems concerning critical aspects of diagnosis or management. Grades on these problems were not based on a single correct answer but on the relative quality of the responses regarding critical decisions in situations in which errors could affect the patient outcome. 
The final part was a performance-based standardized patient examination which asked candidates to interact with simulated patients for five to ten minutes. Trained physician-observers assessed candidates in a number of areas, including data collection (e.g., medical history and physical examination) and communication skills (e.g., whether the test-taker used condescending, offensive, or judgmental behaviors or ignored patient responses). After examining the data, researchers found that the best predictor of quality-of-care complaints was the licensing exam’s clinical decision-making component, which focused on the cognitive ability of candidates to solve problems. The better the test-taker’s score on that part of the exam, the lower the complaint rate for that physician. Although high scores on the communications component of the performance exam were not as good a predictor of low quality-of-care complaint rates, researchers also found a statistically significant inverse correlation between that measure and such rates. In regard to communication complaints, researchers found that scores on the communication part of the performance exam and on the clinical decision-making exam served nearly equally well as predictors of communication complaint rates. Finally, researchers surprisingly found a statistically significant inverse relationship between overall complaint rates and scores on the multiple-choice test. Now, I certainly am not describing this study to you so that you can return to your law schools and report that the introductory speaker at a conference on complex legal education called for increased use of multiple-choice tests. In fact, the study seems to suggest that we should be focusing on teaching students how to problem solve and make decisions in practice rather than merely regurgitate information. 
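For readers who want the complaint-rate measure in the Canadian study made concrete, it is a simple ratio. The following sketch uses invented physician numbers purely for illustration; they are not the study's data:

```python
# Hypothetical illustration of the complaint-rate measure described above:
# valid complaints divided by years of practice time.
# All numbers below are invented for illustration, not the study's data.

def complaint_rate(valid_complaints: int, years_in_practice: float) -> float:
    """Complaints per year of practice time."""
    return valid_complaints / years_in_practice

# The same complaint count over different career lengths yields
# different rates, which is why the study normalized by practice time.
rate_a = complaint_rate(3, 10.0)  # 0.3 complaints per practice-year
rate_b = complaint_rate(3, 12.0)  # 0.25 complaints per practice-year
```

The study then correlated these per-physician rates with scores on each licensing-exam component.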
The primary reason I have discussed this research is to raise questions about the effectiveness of the use of standardized clients to assess student development. The appeal to teachers of a checklist approach to assessment is not insignificant. This kind of method is relatively straightforward and unambiguous in its application, with clear goals and criteria for evaluation. Students are also likely to embrace performance-based assessment. They will be graded favorably if they simply select and apply the proper tool from their toolkit of lawyering techniques. But in its straightforwardness, the standardized-client approach may distract us from focusing on more significant competencies for long-term practice, such as clinical reasoning. This medical study, then, raises significant questions about faddish narratives such as the use of standardized client assessment, which are enticing in their simplicity but which have little empirical support. As Geoff Norman, a researcher on medical education, observes, “I fear that in a few years the outcomes movement too will emerge as one more educational fad, whose major impact was on committee hours reported by academics. This would be unfortunate. The goal of achieving some kind of uniformity is laudable, but the means to the end appear[] too simplistic to be successful.”[footnoteRef:17] [17: Geoff Norman, Editorial – Outcomes, Objectives, and the Seductive Appeal of Simple Solutions, 11 Advances in Health Sci. Educ. 217, 219 (2006).] Tale 3: The Gospel of Teaching to Learning Styles The final narrative that pervades some quarters of experiential education is The Gospel of Teaching to Learning Styles. For many years now, skills instructors in the States -- especially clinicians and legal writing teachers -- have enthusiastically preached this gospel. According to the tale, there are at least five learning styles: (1) verbal; (2) visual; (3) oral; (4) aural; and (5) tactile. 
Different learners process information most efficiently through different methods: verbal learners -- through writing and reading written texts; visual learners -- through pictures, diagrams, and other visual formats; oral -- through verbal discourse; aural -- through listening; and tactile -- through doing. The preachers of this gospel acknowledge right up front that students generally rely to a greater or lesser degree on most, if not all five, methods. But they go one step further: according to their story, instruction -- to some extent -- should be tailored to a student’s learning style. Individual learning styles should be assessed, and instruction should be focused on that style. The Best Practices Report recommends, for example, that law schools create faculty-supervised learning centers to provide academic support for students. These centers, the Report contends, would assist all learners as individuals to make demonstrable progress at their own pace, taking their learning styles into account, without stigma. And the literature is replete with exhortations such as this one from a clinical teacher at a prominent clinical program: “[R]eaching a learner through his or her preferred learning mode can have a substantial positive effect on learning efficiency and outcomes for that student. When designing an effective learning-friendly classroom community, professors can draw upon these understandings of preferred learning modes.”[footnoteRef:18] [18: Kate E. Bloch, Cognition and Star Trek: Learning and Legal Education, 42 J. Marshall L. Rev. 959, 968 (2009).] Unfortunately, however, the preachers of the Learning Style Gospel have not taken a hard look at the recent empirical evidence about tailoring instruction to learning styles. In fact, that evidence calls into question the accuracy of this story. 
A recent article in the journal Medical Education reviewed the extensive literature on learning styles and concluded, “[A] thoughtful review of the data provides no support for style-based instruction.”[footnoteRef:19] Research has shown that people, when asked, will volunteer preferences about their preferred mode of taking in new information and studying. Such preferences, however, do not demonstrate that assessing a student’s learning style would be helpful in determining the most effective mode of instruction for that student. [19: Doug Rohrer & Harold Pashler, Learning Styles: Where’s the Evidence?, 46 Med. Educ. 630 (2012).] The authors of that Medical Education article observe that the only research design that would support style-based teaching would require evaluating the outcomes of instruction delivered through different modes. Specifically, an appropriately designed study would require that subjects be divided into two different groups (for example, visual and verbal) based on a learning styles test; the subjects would then be randomly assigned to instruction in the different modes, so that one-half of each group would receive the “right” mode of instruction and half would receive the “wrong” mode; and all the subjects would then be given the same test for assessment. These researchers found that only relatively few studies used this methodology, and most of those that did showed no correlation between a subject’s performance on the assessment test and instruction in that subject’s preferred learning mode. The authors conclude, “[T]here presently is no empirical justification for tailoring instruction to students’ supposedly different learning styles. 
Educators should instead focus on the most effective and coherent ways to present particular bodies of content.” So here with the Gospel of Learning Styles we have another popular story in the experiential education library that -- under close inspection -- is not quite as significant as its narrators try to make it. Again, I want to be clear that I am not contending that there is no such thing as different learning styles or that, when possible, a teacher should not try to use different modes of instruction. In fact, the researchers who have conducted these critical reviews of the learning style literature uniformly suggest that different modes can be helpful if the content can effectively be taught with that approach. My only point is that before we get on the tailored-learning-style bandwagon, we need to seriously consider the validity of the research underlying that narrative. Before law schools expend substantial funds on learning centers focused on individual learning styles, or instructors use their limited course time for testing of individual learning styles, we need some critical inquiry into the subject. Where do we go from here? These stories -- The Practical Apprenticeship Model: The Vehicle for 21st Century Legal Education, The Enchanted Standardized Client, and The Gospel of Teaching to Learning Styles -- are just three of the popular narratives that dominate many discussions in the field of experiential education. Unfortunately, there are other similar stories. In classic persuasion mode, these tales are used to validate our pedagogy and approaches to legal education. But, for the most part, they are not subjected to critical inquiry or discussion. These narratives may have some validity, but most scholars in the field have shied away from approaching them through the lens of inquiring mode. So how do we do that? 
I would like to spend the remainder of my presentation describing some proposals on how we can use inquiring mode to approach Complex Legal Education -- Knowledge, Skills, and Values. First, I suggest that we need to subject our pedagogy and teaching models to evidence-based scrutiny. Unlike researchers in other fields such as medical education, scholars in legal education have substituted arguments and theories based on the authors’ own experience in the classroom for empirical examination of their work. Perhaps it is in our DNA. As lawyers, most of us would rather argue positions than openly explore issues. We do not have the scientific bent of physicians. Most scholarship on complex legal education has been no different. In large part, it is based solely on anecdotal experiences in the classroom or clinic or on informal surveys of students in skills courses. Nice stories; but too little critical inquiry. The purportedly momentous Carnegie Report on American legal education, for instance, based most of its assertions about the role of clinical education in law schools on an informal survey of clinical programs at several different law schools. In selecting these schools and analyzing the data, the authors of the report used no methodological controls. Accordingly, they have provided us with no basis for assessing the validity of their findings. Skills teachers, myself included, are quite gratified by the positive feedback we receive from our students, who consistently repeat the mantra, “Your course is one of the only classes in law school where I did something practical!” But such acknowledgements, or student satisfaction surveys such as the American Law School Survey of Student Engagement (“LSSSE”), are not substitutes for a hard look at teaching methods. They simply do not tell us whether our pedagogy -- in the long run -- will transfer into effective representation of clients in practice. 
What I believe we need to do is approach our research in an inquiring mode with an open mind -- not to “prove” a particular theory or validate a pet method for training students. Rather, we should attempt to learn as much as possible about the subject to develop inferences and explanations about it. A whole array of subjects in the area of complex legal education is ripe for empirical study: a) the relative effectiveness of different instruction models -- live-client clinics, simulation courses, and externships -- in training students in particular skills; b) the impact of different modes of instruction -- large-group lectures, seminars, role plays -- on the development of specific skills; c) the effectiveness of computer-based learning in complex legal instruction; d) the efficacy of early lawyering skills courses in laying a foundation for later skills training; e) the effectiveness of different methods of outcome assessment for evaluating student performance; and f) the long-term impact of clinic courses on attorneys in practice. And there are many other issues I am sure you can identify. I suggest that throughout this conference, we consider issues which we can empirically test. Now I can read the thought bubbles over your heads responding to this proposal: “It’s impossible to test with any accuracy most, if not all, of the issues you’ve identified. Complex legal education is simply too complex to subject to empirical scrutiny.” While this concern is quite legitimate, I do not think that it undermines -- in any significant way -- the benefits of such inquiry. No, we are not going to be able to run the kinds of large-sample quantitative studies used in some medical research. But we can use the rigorous methods developed for small-sample qualitative research, which can start to give us some insights into the questions we wish to study. 
We can design such valid research by: i) crafting concrete and narrow hypotheses; ii) developing -- even with small sample sizes -- selection criteria for subjects which attempt to eliminate bias; iii) using a methodology, such as video recording or document-capture software, which assures the collection of the full array of data; and iv) developing valid and reliable rules -- explicit coding protocols -- for measurement of the data collected. Moreover, the validity of this type of research can be enhanced by replication and sharing of data. The methods of qualitative empirical study require transparency in the research process. Researchers disclose all the steps of their studies: hypothesis generation; subject selection criteria; methodology; and measurement protocols. And after they publish their studies, they post their data on websites accessible to other researchers. In this way, other scholars have the ability to review and critique the research methods and to conduct their own analysis of the data. They also have the ability to tweak the methodology and attempt to replicate the study. Research, then, becomes a social enterprise, in which multiple researchers are not just telling their discrete stories about a subject but building on the work of others. Second, I propose that we collaborate much more extensively with colleagues in other disciplines who are exploring similar complex educational issues. Researchers, especially in the area of health sciences, have been studying issues of professional education for decades. We are far behind them, and we can learn a lot from them. I could not have done my own empirical and theoretical research on student legal reasoning without the significant assistance of faculty members at Columbia Medical School’s Department of Biomedical Informatics. 
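To give a concrete sense of what a "valid and reliable" coding protocol involves, here is a minimal sketch of one standard reliability check: Cohen's kappa, which measures how often two coders applying the same protocol agree, corrected for chance agreement. The codes and transcript labels below are invented for illustration and are not drawn from any actual study:

```python
# A minimal sketch of checking a coding protocol's reliability with
# Cohen's kappa: agreement between two coders, corrected for chance.
# The codes and labels below are invented for illustration.
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Chance-corrected agreement between two coders' label sequences."""
    assert len(coder1) == len(coder2)
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    c1, c2 = Counter(coder1), Counter(coder2)
    # Expected chance agreement from each coder's label frequencies.
    expected = sum((c1[lab] / n) * (c2[lab] / n) for lab in set(c1) | set(c2))
    return (observed - expected) / (1 - expected)

# Two coders labeling ten transcript segments with hypothetical codes.
a = ["rule", "fact", "rule", "policy", "fact", "rule", "fact", "rule", "policy", "fact"]
b = ["rule", "fact", "rule", "fact",   "fact", "rule", "fact", "rule", "policy", "rule"]
kappa = cohens_kappa(a, b)
```

A kappa well below 1.0 signals that the protocol's coding rules are ambiguous and need refinement before the measurements can be trusted, which is exactly the kind of discipline the design steps above call for.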
A few years ago, after I published a piece on the role of domain knowledge in teaching legal problem solving, a law professor at another school was highly critical of my conclusions. I suggested that we develop a study together to test our divergent hypotheses. He declined my offer, candidly telling me that empirical research is simply too time-consuming. While such research may be very labor-intensive, that factor, at least in my opinion, is not an adequate excuse for rejecting empirical scrutiny of our work. By working with researchers in other fields, some of the labor-intensive aspects of empirical research can be alleviated. They can help us frame hypotheses and develop research methodologies. Many of the issues we are now tackling have been the subject of research in other fields. These studies can be used to frame our own research. For example, medical educators have conducted numerous studies of the use of simulated versus actual patients in clinical instruction. And the issue of learning transfer -- using a concept learned in one context to solve a problem in a different context -- is now a very hot topic for research in a number of areas of educational psychology. With necessary revisions, we can attempt to replicate this research. Researchers in other disciplines can also train us in the methodology of qualitative empirical research. The Columbia faculty, for example, have helped train my own research assistants in the use of the think-aloud protocol for interviewing law school subjects in my studies on legal reasoning. And in a study which I am currently conducting with a colleague on the different reasoning strategies of students using print versus electronic media, a colleague in Hofstra’s Sociology Department is assisting us in using advanced statistical software to analyze the data.[footnoteRef:20] [20: Subsequent to this conference, this study was published. See Stefan H. 
Krieger & Katrina Fischer Kuh, Accessing the Law: An Empirical Study Exploring the Influence of Legal Research Medium, 16 Vand. J. Ent. & Tech. L. 757 (2014).] Additionally, colleagues in other fields can give us a deeper understanding of the theories and approaches in other disciplines than we can gather from reading one or two articles. I am the first to admit that empirical research is not the be-all and end-all of critical inquiry in our field. Serious study of the relevant theoretical literature in other fields can also be quite productive. Far too often, however, legal scholars will discover a particular social, psychological, or educational theory and latch onto it -- without a full grasp of the place of that theory in the discipline or of critiques of its validity -- and present it as accepted dogma. Apparently, that is what happened with the Carnegie Report, in which the authors focused solely on the Dreyfus theory of expertise without considering the contradictory evidence and theories in the cognitive science literature. Researchers in other disciplines can provide us with a more nuanced grasp of the present state of thinking in those fields and introduce us to the important literature on a subject. In a similar vein, colleagues in other fields can help us tamp down our tendency to slide into persuasion mode. Several times in reviewing drafts of my articles describing my empirical studies, my colleagues at Columbia have cautioned that my data do not fully support some of my extravagant conclusions. They have helped me to understand that any one study is only one piece of the puzzle, and that other research -- by me and others -- will over time provide a fuller picture. Finally, I propose that we reach out to other colleagues in our law schools -- especially those who are critical or skeptical of experiential education -- to collaborate on studies of effective pedagogy. 
Even after all the decades in which clinical and skills teaching have been part of the curriculum, there is still a divide between “them” and “us.” In my own experience, I have seen that phenomenon even in schools that have a strong tradition of skills education. And it is not just the nonclinical teachers who harbor a superiority complex -- the conviction that their teaching is deeper and more significant. Often clinicians will confide among themselves that students are getting their only real training in clinics. At least in my opinion, if legal education is going to improve significantly, our studies of teaching methods and pedagogy should not be limited to assessing experiential education. We need to explore the entire enterprise of legal education. As the Canadian study of performance in practice suggests, there are benefits in complex professional education from both traditional doctrinal study and clinical fieldwork. And the answer may not be as simple as the Carnegie Report suggests: a first year devoted to the study of basic legal reasoning and final years devoted primarily to skills training. We need to persuade our colleagues to work with us to develop studies to test the relative effectiveness of different teaching methods and approaches. Both groups need to recognize that arguments in persuasion mode about the relative benefits of different teaching methods are not going to resolve the matter. And both need to approach this research with open minds, willing to accept results that are counter to their present positions. Conclusion After all this criticism of the scholarship on complex legal education, some of you may be wondering whether I am going to conclude this presentation by questioning the merits of my more than three decades as a clinical teacher. But that is far from the case. 
While I might not have substantial empirical support for my beliefs, I do feel that clinical education has had significant benefits for my students and thousands of others over the past half century. I acknowledge that much of this feeling is based primarily on stories, but when I talk to graduates twenty or thirty years out of law school and hear tales of the impact of clinics on them even today, I know we had some real effect. But, at the same time, I am worried that we will complacently continue -- much like our nonclinical colleagues -- to rely on stories to support our work rather than on critical inquiry. I certainly understand the risks of questioning the effectiveness of our own teaching methods. While experiential legal educators have come a long way, at the vast majority of schools our status is still less secure than that of doctrinal faculty. If we follow the path I suggest, we may be giving our critics ammunition to use against us. But if our goal is to improve our teaching, better train our students, and provide quality representation for clients, I, for one, believe the risk is worth it. We need to show our critics that we are confident enough in our pedagogy that we will not shy away from rigorously assessing it. When I look at the alternative stories of the establishment of the Palacký Clinic, I have to say I like the second one much better. It is not a heroic tale. It highlights the limitations of the faculties at both Hofstra and Palacký. But in so doing, it helps us -- so much better than the first narrative -- to explore what we can learn about the establishment of clinical education in newly developed democracies and the nature of experiential education. In fact, given the present success of the Palacký Clinic, that story may be a good lesson for us about the need for slow, grassroots development of clinics rather than close direction by experts from abroad. I hope that throughout this conference, we do share our stories of successes in our field. 
But I also hope that we do not shy away from identifying our failures, problems, and doubts. Most importantly, I hope we can consider ways of collaborating in the future to reflect critically on the important issues now facing complex legal education.