How Much Basic Science Does a Clinician Need to Know?
How much basic science should medical students learn, and how much of the investigative science cognitive process can and should be transplanted to the clinical setting? Does it make sense to cram a first year medical student’s head full of nuances about the null hypothesis when that space might better be used for nuances about antibiotics? Is an understanding of the chemical reactions behind histological stains germane to a diagnosis of squamous metaplasia? Is it helpful for the physician to understand how frameshift mutations to the CFTR gene lead to the formation of viscid mucus in the lungs, or is it more important to understand how viscid mucus in the lungs leads to recurrent lung infection?
Since the 1910 publication of Abraham Flexner’s “Medical Education in the United States and Canada” and the revolution in medical education it inspired, these questions have been part of North American medical education for a century. Flexner’s pivotal report, sponsored by the Carnegie Foundation for the Advancement of Teaching, was intended to provide recommendations on how to make the American medical education system more coherent and effective.
Flexner, who was an educator rather than a physician, recommended that two years of instruction in the basic sciences (principally chemistry and the biological sciences) precede clinical training, in order to give physicians insight into the principles behind the treatments they were expected to apply. This curriculum, consisting of two years of pre-clinical science followed by two years of clinical rotations, quickly became the norm.
The ‘Flexner Revolution’ (and the ensuing ‘post-Flexner era’) was a happy turning point in the history of medicine, upgrading it from a high-functioning trade, often taught through an apprenticeship, to a profession learned through a standardized course of formal study, and dedicated to best practices based on evidence rather than anecdote and tradition. Prior to the Flexner Revolution, ‘expert opinion’ was considered to be one of the most important considerations in deciding which medical treatments to use in given situations. The reason for this was simple. If medicine was a trade, passed from master to apprentice, those who had been practicing the ‘art’ the longest would have the best intuition on effective treatments.
The Flexner Revolution transformed medicine from an art to a science, where even the novice has access to valuable insights into the most effective treatments, based on published studies. Indeed, a more recent revolution, known as the ‘Evidence Based Medicine (EBM) Revolution,’ even formalized a ranking system for different forms of evidence, giving the highest rankings to carefully controlled, double-blinded clinical studies with standard statistical
thresholds of significance. Expert opinion was correctly given the lowest rank in the EBM hierarchy.
There is no question that including basic science in the medical school curriculum was the right thing to do. What is less obvious, however, is just how much scientific training is necessary to be a good clinician. As usual the devil is in the details, and once the floodgates for inclusion of science in medical education had been opened there was apparently no stopping it. Given the exponential increase in both scientific and medical knowledge over the last century it was no surprise that medical students quickly became overwhelmed with facts to learn and principles to understand.
The problem was sometimes exacerbated by the polite deference of the clinicians who taught third and fourth year, who allowed the research scientists (often borrowed from the Faculty of Science) who taught first and second year to include whatever principles of basic science they thought might be relevant to an understanding of medicine. To be blunt, this is a mistake. There are only a few people in the world who are qualified to judge what aspects of science are relevant to the practice of medicine, and they are all clinicians turned researchers. These people should be sought out and recruited by medical schools, and given ultimate control over the entire curriculum.
Speaking as somebody who has spent much of his career in biomedical research, I can say that determining which parts of basic science are germane to the clinical practice of medicine is a task that should not be left to scientists. We are too easily lost in nuance and detail, and for good reason, since scientific advancement depends on nuance and detail. But medical students have only a limited amount of time in the classroom before being sent out to practice their craft on live human beings. There is no margin for error, and not much room for a post-graduate learning curve. While it may be useful to give science students ambiguous problems to solve as part of Problem Based Learning (PBL) or Task Based Learning (TBL) assignments, and let them walk away from the tutorial with an (accurate) feeling of uncertainty and a healthy yearning to dig deeper and learn more, this is not always appropriate for medical students. They will soon encounter ambiguous situations of their own, which will require a unique form of reasoning called ‘Clinical Reasoning’ (the subject of another commentary). But for now, the objective of first and second year medical school should be to cram as many relevant facts into a young brain as possible.
To be sure, activity based learning, problem based learning, and self directed learning will all come in the clinical years. While PBL and TBL can be used to augment factual learning and rote memorization, in my opinion they should not be used as the primary vehicles for teaching in first and second year medical school, particularly when the teaching staff have only a superficial understanding of how they work. (This is a frequent hobby horse of mine, and will be discussed often in this blog.)
What is the Most Efficient Teaching Method to Use for a ‘Factual-Overload’ Curriculum?
In 1993, the General Medical Council (GMC) of the United Kingdom published a policy recommendation paper for UK medical schools (subsequently updated in 2003 and 2009). Among its recommendations was that some effort be made to reduce factual overload in the medical school curriculum. A second recommendation suggested a switch from traditional, didactic educational methods to more self-directed learning methods, based on evidence (some real and some apocryphal) that self-directed learning is more efficient, and leads to a deeper understanding of the material than rote memorization. (Although comparing just about any learning method to rote memorization is pretty much a straw man!) Thus, two needs were highlighted.
The first was a need to reduce the TMI (‘too much information’) effect, and pare the medical school syllabus down to just the important concepts. I think most people would agree that all medical school is intended to do is provide students with a skeleton (no pun intended) of medical information, which they can flesh out during their years of residency. The second was a need to explore better teaching and learning methods, so that the remaining material can be learned more efficiently.
Soon medical schools were abuzz with catch phrases normally heard in university education departments, like ‘learner-centered learning,’ ‘metacognition’ and ‘constructivist vs. instructivist models of learning.’ Problem Based Learning, being the flavour of the month, quickly became the norm in most medical schools. Between 1993 and 2004, eight of Canada’s 13 English-language medical schools switched to PBL as the primary training vehicle for the first two years of medical school (Ford 2005). By this time, most other university departments, along with virtually all elementary and high schools, had also adopted PBL as a teaching tool.
Having fallen victim to Cuisenaire rod arithmetic and phonetic spelling in elementary school, however, I was personally aware of how vulnerable the education profession is to fad influence, and wondered how much scientific evidence supported the idea that PBL was a superior way of training physicians. I wondered the same thing about Cuisenaire rods, and after years of remedial math, did an informal study of how this failed experiment in elementary school education had come about. As it turned out, there was plenty of evidence that the Cuisenaire method worked. The only problem was that the teachers who were expected to apply it had not been adequately trained in its use, something that often happens when a proven technique or principle morphs into a popular fad.
I am concerned that PBL is taking a turn down the Cuisenaire road, and is now being applied by people who are unfamiliar with its nuances. It certainly has attractive features, not the least of which is that it makes the teacher’s job easier! Standing in front of a blackboard inhaling chalk dust for three hours certainly takes more effort than forming your students into little groups around tables, and then settling into the role of ‘facilitator’ while they struggle with open ended questions. But is there evidence that this actually results in better learning? The clinical evidence is unclear, at least to me. The only detailed meta-study on the subject that I’m aware of proudly trumpets that PBL results in increases in at least three clinical skills: communication skills, coping with uncertainty, and appreciation for legal and ethical aspects of health care (Koh et al., 2008). However, these are skills that you would expect to increase if you simply formed people into groups and had them discuss current affairs on a regular basis. There is no evidence (as far as I’m aware) that other clinical skills increase. And, given that PBL as a teaching tool takes considerably longer than traditional teaching methods, and consumes more personnel, is it worth the effort? This is a question I’d like to discuss.
I’ll address common misuses of, and misconceptions about PBL in my next post.
Ford, J. “Influence of a problem-based learning curriculum on the selection of pathology as a career: evidence from the Canadian match of 1993–2004.” Human Pathology (2005) 36: 600–604.
Koh et al. “The effects of problem-based learning during medical school on physician competency: a systematic review.” CMAJ (2008) 178: 34–41.