Know The Outline, Know The Tricks
You may know the 16-page PEM Blueprint (the Pediatric Emergency Medicine Content Outline) by heart by now. A key part of it is the weighting: 20% of the score is core science, 40% is diagnosis, and 40% is patient management. At this point in your career, you probably know the tips for handling multiple-choice questions, but it never hurts to review them.
There are certain rules to writing a multiple-choice question (trust us on this; Challenger’s database has over 27,000 of them), and those question-writing methods have some common weaknesses:
- Grammatical cues: Look for answer choices that don't follow grammatically from the lead-in. These are likely to be distractors, and you can often eliminate them just by paying attention to grammar.
- Logical cues: Look for answer choices that don't logically fit with the other choices. For example, if one answer choice is a symptom and the others are tests or treatments, you can probably eliminate the symptom.
- Absolute terms: Look for answer choices that use words like "always" or "never." These are less likely to be true statements, so they're often distractors.
- Long correct answer: Look for the answer choice that is longer, more specific, or more complete than the other options. Question writers tend to pay more attention to the correct answer than to the distractors, so the correct answer may contain more information.
- Word repeats: Look for words or phrases that are included in the stem and in the correct answer. For example, if the stem mentions "slipping," the correct answer may be "slipped capital femoral epiphysis."
There’s nothing magical about these tips; they help you home in on answers you already know more than they help you guess. But if you’re stuck between a couple of answers, they can make a difference.
Question Banks and Study Guides
Repetition is the key here: approach similar diagnostic or science concepts from different angles so that you aren’t just engaging in rote learning but are exploring each concept and building relational connections between concepts in your mind.
Question banks are invaluable for this. We recommend doing Q&A with a notepad in hand or a note open in another window. For every question you get wrong, note the concept the question engages: what is the author testing for? This gives you a list of study concepts that identifies your blind spots, the stuff you didn’t realize you were weak in.
Question banks today also offer a lot of features besides just spaced repetition. They offer predictive scoring, identify weak and strong topics, build custom exams out of missed questions, do randomized sample pulls to simulate an actual exam, and loads more.
This is where we shamelessly plug Med-Challenger’s Pediatric Emergency Medicine Courses, the only question bank with those features that is based entirely on a textbook, with questions written just for the book and live course by practicing PEM physicians who also happen to be its authors. 🙂
A Weird AI Flex
Something new and different, and maybe not quite ready for you to rely on: Large Language Models (LLMs) are neural networks trained mostly on text from the internet, so their diagnostic knowledge is limited and sometimes in error. An AI told us with great confidence last week that the Eagles song “The Last Resort” was composed by the ’80s thrash-metal band Slayer.
ChatGPT and other LLMs aren’t quite there yet. As natural language models, they are awesome and going to change a lot of things. As highly trained neural nets for other uses, not yet. But we’ve had a little fun giving LLMs complex medical topics and having them generate questions to explore the topic's content. If you’ve been on ChatGPT and are going braindead on your studies, take a topic or reference and get it to generate simple questions, like so:
A paragraph from Ghali MG, et al. Pediatric cerebral cavernous malformations: Genetics, pathogenesis, and management. Surg Neurol Int. 2016:
Approximately 85% of CMs are supratentorial in children (92% lobar, 8% deep). The remainder are located infratentorially (57% brainstem, 43% cerebellar), with rare occurrence in the spinal cord. CCMs may remain asymptomatic for the lifetime of the patient in both children and adults. In the pediatric population, clinical presentation of CCM includes hemorrhage (62%), seizures (35%), and incidental radiographic finding (26%). A bleeding episode is followed by thrombosis of the lesion and subsequent recanalization, predisposing to recurrent hemorrhage. Presentation by hemorrhage is more common among brainstem CMs (80%), which also have higher annual re-hemorrhaging rates (17% versus 11% for CMs overall). Patients may or may not have neurological sequelae.
Tell the AI to write a couple (or twenty, if you want) multiple-choice questions with remediation (“Write 2 questions with remediation about that paragraph”).
1. What is the prevalence of cerebral cavernous malformations (CCMs) in the general population?
A) 1-2%
B) 0.1-0.5%
C) 5-10%
D) 50-60%
Answer: B) 0.1-0.5%
Remediation: Cerebral cavernous malformations (CCMs) are abnormal clusters of blood vessels that occur in the brain or spinal cord, and their prevalence in the general population is estimated to be around 0.1-0.5%.
2. What are the three most common clinical presentations of CCM in pediatric patients?
Answer: A) Hemorrhage, seizures, and incidental radiographic finding
Remediation: In the pediatric population, clinical presentation of CCM includes hemorrhage (62%), seizures (35%), and incidental radiographic finding (26%).
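If you’d rather script this than paste into a chat window, the prompt step is easy to automate. Here’s a minimal Python sketch; the function name and prompt wording are our own choices for illustration, and you would pass the returned string to whatever LLM chat API you have access to:

```python
# Build a question-generation prompt from a reference paragraph.
# Hypothetical helper for illustration only; send the returned string
# to your LLM of choice via its chat API.

def build_mcq_prompt(passage: str, n_questions: int = 2) -> str:
    """Wrap a textbook paragraph in an instruction asking an LLM for
    multiple-choice questions with remediation."""
    return (
        f"Write {n_questions} multiple-choice questions with remediation "
        f"about the following paragraph:\n\n{passage}"
    )

passage = (
    "Approximately 85% of CMs are supratentorial in children "
    "(92% lobar, 8% deep). ..."  # paste the full reference paragraph here
)
print(build_mcq_prompt(passage, n_questions=2))
```

Swapping in different reference paragraphs and question counts is then a one-line change, which makes it easy to grind through a whole chapter of a textbook.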
Pretty neat. But be very careful with it. An LLM is a highly sophisticated predictive chatbot; its goal is to complete a conversation, and it doesn’t actually know anything about CCMs. Using an LLM to break a topic down into Q&A doesn’t guarantee accuracy. Look at the first question: the prevalence figure isn’t referenced in the paragraph at all. ChatGPT pulled it from other articles it’s ‘learned’ from.
It is a new approach that can be dangerous if you trust it too much. We’re proposing it only as a method to get your own brain approaching topics and diagnoses from different directions and developing your own core relational memory.
You can take Med-Challenger’s PEM Q&A, remediation, and references and do the same. Pretty neat.