One concern that has a ring of truth to it is that young doctors have become great “looker-uppers,” and have lost the sense of what it’s like to actually read and study medicine. While doctors enter the profession with a commitment to lifelong learning, some of us fear that the young folk only go far enough to commit to lifelong googling.
While I replied initially in a comment on the forum (and am not the only one to disagree with this stance), I have taken the opportunity to respond more formally because I have found that too often we have this conversation muttering under our breath instead of out in the open. Boomers/GenXers mutter about us “not paying attention” or “not studying enough.” Meanwhile, we Millennials mutter about how our exams are “just about memorizing a bunch of bull$#!* that will be out of date in 5 years.”
I am a Millennial who will be starting as an attending this year. I still passed my exams and still had to deal with learning the “old fashioned way.” In the end, memorizing factoids and endless lists was the way to beat the exam, but these skills do not always translate well into my daily practice as a physician. I am certain studying to add to my existing knowledge about cytochrome P450 or the oxidative phosphorylation chain will help someday, but rarely does that problem present itself in my emergency medicine practice.
I think that a big problem underlying the current examination systems in most specialties and jurisdictions is that they ask questions that often have not changed with the times. Most importantly, they value the lower levels of learning (e.g. Bloom’s Taxonomy ‘Remember’ and perhaps ‘Apply’) rather than critical reasoning and problem solving.
My experience with “googling” is that it has prevented medical errors and possibly saved lives. I can recall several occasions when I have been presented with a difficult patient problem and turned to the wisdom of the internet to resolve the issue. I do not, however, just go to the first random website I find, but use legitimate resources (usually peer-reviewed articles, guidelines, etc.) to support and guide my decisions. In the past my ego (or dare I say, my Id) might have had me presenting arguments to my colleagues that are unsupported by the literature. Instead, my colleagues and I use our superegos and, with the assistance of the internet, arrive at the best answer for the patient.
The days of holding all of medical knowledge in our heads have gone the way of the dinosaur. Lawyers have long known that it is not memorization of the law that makes one a great lawyer, but what one does with it. Most other professions do not kid themselves into thinking that their experts can remember it all. Instead, they allow for open-book exams (e.g. the Bar exam, the Canadian Accounting exams) – because it is the reasoning and articulation of thoughts that demonstrate true knowledge. The vast sum of medical knowledge is now much larger in volume than any criminal code or even the tax code, so why are we so stuck on the idea that our doctors should just ‘know it all’ (and off the top of their heads)?
When we ‘pimp’ our medical students we ask them about core knowledge, something that can be easily accessed by Google now. However, it is not whether they have memorized a fact or can look it up quickly that will make them a great physician. In fact, by making memorized factoids and statistics the mainstay of medical ‘expertise’, we have historically relegated other important aspects of our job (e.g. interpersonal skills, leadership, etc.) to the wayside. Certification exams like the ABEM/ABIM cannot measure whether a finishing physician can convince a patient to adhere to a regimen of medications or lifestyle changes. They can only measure that he or she knows that the patient could benefit from that intervention.
So, I pose three questions:
1) What use is that knowledge if effective action does not follow?
2) Why do our exams not focus on the difficult clinical reasoning required to treat challenging patient presentations?
3) Finally, why do our exams not ask us to negotiate with standardized patients or nurses in difficult situations?
Likely because these skills are difficult to examine. And so, as educators, we have chosen the path of least resistance and given our learners a scantron sheet and #2 pencil.
Luckily, there is change afoot as leaders in medical education have laid the groundwork, and are now bringing us forward towards a new era of competency-based medical education. In response to Dr. Schumann‘s blog post, Dr. Jason Frank stated on his Twitter feed that:
— Dr Jason Frank (@drjfrank) June 30, 2013
I would like to stand up for my generation in saying that we are not dumber doctors, as was posited in the Glass House piece. We are just different. We have been taught with frameworks like CanMEDS and the ACGME Competencies, and we see the practice of medicine as more than just bubbles to be filled in or lists to be recited.
In his final point, Dr. John Schumann notes that:
in today’s era of restricted work hours, something has to give. Too often, when residents must complete the same amount of work in a limited amount of time, what’s sacrificed is the didactic portion of the education: the stuff we do by running through case after case, discussing subtleties and action plans. When time is limited, the work’s simply gotta get done.
By this logic, shouldn’t restricted work hours have caused exam scores to go up? Much of the self-directed memory work we call ‘studying’ is not best done at work. Have you ever tried to read a chapter of Rosen’s Emergency Medicine while seeing 30-some patients in a busy Emergency Department? Is it even possible to memorize approaches to malaria and other parasitic diseases from Harrison’s while on call for Internal Medicine and admitting your 4th patient with a COPD exacerbation at 4 a.m.? If “reduced work hours” has any bearing on exam pass rates, it should have translated into more off-the-clock studying time and higher scores.
To think that residents learn only at work is highly teacher-centered.
We Millennials are a group with little agency in the matter of how we write exams. However, I posit that our whole examination system needs radical change. While studying for my final examination in residency, I was advised to answer questions as an ‘expert doctor from 2008’ instead of an expert and up-to-date doctor of 2013, because my examiners and the exam-setters will not have read the 2013 update on STEMI care, nor the 2012 IDSA guidelines, etc. Many colleagues and friends have recounted similar quandaries when sitting for their exams.
Not only do these examinations test lower levels of knowledge because those are the easiest to test, but they may have far more insidious implications for professionalism and patient safety.
The current system from medical school onward highly values guessing and intellectual bravado (or as I like to call it, the ‘My-Brain-is-Bigger-than-Yours’ syndrome). In a field that should value double-checking and the use of guidelines/checklists to prevent errors, the current system rewards guesswork through multiple choice exams and the art of ‘Pimping’ (especially when the ‘gold standard’ is the ‘Pimper’ and not a referenced source). In a culture where surgical checklists and multidisciplinary teams are the norm – why must the doctor be a pillar of all knowledge who stands alone, unsupported by the vast amount of medical knowledge that is literally at our fingertips?
I dare say that most of us want to train physicians who are humble enough to look things up on their smartphones. If we want our clinical clerks and residents to know their limits – why must they choose “D” or randomly guess on items they don’t know? With modern computers, should we not be able to allow learners to say “I don’t know, but I know where to look it up”? Would that not be a better habit to form?
One 2011 study showed that most of our patients are fine with us looking up information (only 9% reported decreased confidence, and only 7% perceived lower quality of care), and that we greatly over-estimate their negative perceptions (physicians predicted a 51% decrease in confidence and a 33% decrease in perceived quality of care). The same study suggested that younger patients are also savvy about random internet search engine use and responded negatively to such uses of technology (p < 0.001).
We must be mindful that our hidden curriculum and assessment does not suggest that if you’re a “real doctor” you should just bluff and guess if you’re not sure – lest you risk looking like a fool. Instead, we need to cultivate a culture that encourages each person to become more self-aware so they know their limits and how to overcome them.
When I ask questions (i.e. ‘pimp’) my students and junior residents, I have always rewarded them for saying “I don’t know.” I am happy if they are able to recognize that they have their limits of knowledge. I reward them further when they ask for 2 minutes to look it up because then I can watch them reason through their thinking, vet the course with which they navigate the vast knowledge base of our discipline, and possibly guide them towards the right answer. In fact, I have used this as a teaching technique, with 4 learners collaborating via smartphone to make a quick learning guide after being posed a question they could not answer.
Some day when I am not there and they are beginning practice, I will feel successful if they retain only that skill of being able to look up and vet the answers in front of them. This may be through an adjudicated Google search, or Bing, or whatever search engine du jour. Search engines, you see, are portals to finding further information, not ends unto themselves. I hope to train medical practitioners savvy enough to click on any link and judge the content that lies beyond.
With the rate of medical research and scientific discovery, I have no doubt that the “medical fact” I teach them now will be vastly out of date and possibly viewed as “medical myth” by then. The only thing that will allow our learners to continue to change and grow with our discipline will be those skills of point-of-care research and evaluation.
I dream of a day when we might acknowledge that it is okay to know your limits as a physician, and to seek help when needed. And that those who know less aren’t shunned for not knowing which drugs cause AKI, because you can literally look that up in about 10 seconds with the right database. Perhaps, as Erik Venos suggested, there could merely be time penalties on exams for using the internet (or pocket books)? Or perhaps you might even be high-tech enough to capture which sources the test-taker used, to see whether they were legitimate, peer-reviewed sources? Perhaps you could even reward those who used better sources and got to better answers while being time-efficient.
The way forward is not necessarily to repeat history. Let’s challenge ourselves to lead our field into the future instead of regressing into, or merely repeating, the past. It was not always better “back in the day.”
*Pending FRCPC status, as I have not yet paid any RC dues 😀
Peer reviewed by Brent Thoma