Qualitative Descriptive Analysis of Clinical Case Presentations: A Pilot Study on the Importance of Moving from Intuition to Educational Action in Supervision
1. Introduction
Clinical reasoning is a thought and decision-making process that calls upon acquired knowledge to perform the action deemed most appropriate in a specific context of health problem-solving (Boshuizen & Schmidt, 1992). It is one of the most essential skills a physician should acquire.
Clinical reasoning involves logical thinking processes that lead to the determination of diagnosis and disease management (Gruppen, 2017). Moreover, insufficient attention to the teaching of clinical reasoning skills increases the incidence of diagnostic errors, contributing to medical errors (Patel et al., 2012).
Clinical reasoning becomes more concretely developed in students once they begin their clinical rotations and practice in healthcare settings, thereby establishing direct contact with patients and clinical educators. Throughout their clinical training, students encounter a wide variety of clinical presentations, which enriches their clinical scripts (Charlin et al., 2007) and strengthens their ability to analyze and diagnose a range of clinical cases. This learning and development process would not be successful without effective supervision from clinical educators (Audétat et al., 2017).
Thus, supervision of clinical reasoning (CR) holds a crucial role in medical training. This concept dates back to William Osler, a pioneer of bedside teaching, who introduced the direct supervision of students as early as 1889, underscoring the importance of direct, patient-centered teaching (Dornan, 2005). To ensure the quality of this training, even under the time constraints inherent to clinical settings, clinical supervisors must possess both strong clinical expertise and educational skills, particularly a mastery of clinical reasoning processes (Ramani & Leinster, 2008).
The oral presentation of clinical cases is the most frequently used tool in clinical settings to assess and stimulate the learner’s clinical reasoning (Melvin & Cavalcanti, 2016).
It involves presenting the case in the following format:
- Patient introduction (identity, age, occupation, marital status)
- Reason for consultation or hospitalization
- History of the present illness
- Medical history
- Physical examination
- Additional tests
- Diagnosis
- Management plan
The objective of our study was to explore and identify the gaps, challenges, and risks inherent in the conventional method for supporting the development of learners’ clinical reasoning.
2. Methodology
2.1. Type of Study
We conducted an exploratory, descriptive study analyzing oral case presentations by postgraduate medical students. The study was carried out at the Benbadis University Hospital Center in Constantine.
2.2. Participant Recruitment
The participants were postgraduate medical residents specializing in internal medicine, infectious diseases, hematology, and cardiology. This selection aimed to increase the diversity of participants in our study.
We conducted in-person visits to the respective departments included in the study to meet with the residents, providing them with information about the study, its objectives, benefits, methodology, and data confidentiality. Participants were invited to voluntarily participate in the study. It was clarified that their participation or non-participation would have no impact on their evaluations. We obtained their consent for audio recording of the oral case presentations. These recordings were transcribed, and a thematic analysis of these transcripts was conducted.
2.3. Data Collection
2.3.1. Case Presentations and the Bordage DTI Instrument
Participants were first invited to complete the Bordage DTI questionnaire, designed to assess two dimensions of diagnostic thinking: flexibility of thought and the degree of structure of knowledge in memory (Bordage et al., 1990). Subsequently, they were asked to give oral presentations of patient cases within their respective specialties, as they would in their routine practice. The participants received no specific training; they presented intuitively, following the usual method as learned in their respective disciplines.
2.3.2. Ethical Considerations
Approval for this research was granted by the Committee for Integrity and Ethics in Health Professions Education Research. The project was conducted with the informed consent of participants, and collected data were anonymized to preserve confidentiality.
2.4. Data Analysis
The collected data, including case presentation recordings and DTI scores, were analyzed quantitatively and qualitatively in a descriptive manner.
The participants’ clinical reasoning was analyzed globally and in terms of its two aspects, flexibility and memory structuring, using the scores from the Diagnostic Thinking Inventory (DTI) (Bordage et al., 1990).
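For illustration, the sketch below shows how DTI subscale and total scores can be derived from per-item responses. It is a minimal sketch, not the scoring script used in this study: the DTI is commonly described as comprising 41 six-point items divided between the two subscales, and the item-to-subscale mapping shown here is a placeholder that must be replaced by the published assignment (Bordage et al., 1990).

```python
# Minimal sketch (not the study's actual scoring script): computing DTI subscale
# and total scores from per-item responses. The item-to-subscale mapping below is
# an illustrative placeholder; the real mapping must follow the published instrument.

from statistics import mean, stdev

# Hypothetical mapping: item number -> subscale ("flexibility" or "structure").
ITEM_SUBSCALE = {i: ("flexibility" if i % 2 else "structure") for i in range(1, 42)}

def dti_scores(responses: dict[int, int]) -> dict[str, int]:
    """responses maps item number -> rating on the 6-point scale."""
    flexibility = sum(r for i, r in responses.items() if ITEM_SUBSCALE[i] == "flexibility")
    structure = sum(r for i, r in responses.items() if ITEM_SUBSCALE[i] == "structure")
    return {"flexibility": flexibility, "structure": structure, "total": flexibility + structure}

# Example: summarizing a group of participants (placeholder response sets).
participants = [
    {i: 4 for i in range(1, 42)},
    {i: 5 for i in range(1, 42)},
]
totals = [dti_scores(p)["total"] for p in participants]
print(f"Global DTI: {mean(totals):.0f} ± {stdev(totals):.0f}")
```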
We also analyzed the duration (length) of the oral presentations and identified components of the clinical reasoning process. To accomplish this, we based our approach on existing literature and, following discussion and consultation with the authors of existing research (Audétat et al., 2017), developed a series of questions (indicators) aimed at identifying the various stages of clinical reasoning. These were then incorporated into a grid (Table 1).
Table 1. Qualitative analysis grid for oral presentations (Audétat et al., 2017).
| | Themes | Indicators |
| --- | --- | --- |
| Steps of Clinical Reasoning | Initial representation of the problem, characterization of the problem (Complaints/symptoms, history/timeline) | Does the learner identify the key elements of the clinical situation? Can they form an initial portrait of the situation to be evaluated? Does the learner accurately identify the symptoms and medical history of the patient during the characterization of the problem? |
| | Data Collection | Does the learner perform a targeted physical examination guided by the initial representation? |
| | Generation of Hypotheses | Does the learner generate hypotheses appropriate to the patient’s context? Do they generate multiple hypotheses? Do they prioritize hypotheses by eliminating the worst-case scenario? |
| | Generation of Hypotheses Adapted to Collected Data | Do they ask key questions related to these hypotheses? Are their history taking and physical examination performed in a way that verifies the hypotheses? Can I guess which hypotheses the trainee is verifying when I observe data collection? Do they note the clues provided by the patient and generate new hypotheses that take this information into account? |
| | Hypothesis Verification | Do they remain open to other possible hypotheses and do they verify them? Do they prioritize the hypotheses correctly? Does the learner integrate complementary examinations to verify their hypotheses? |
| | Acquisition of Additional Data | Does the learner ask relevant questions and perform additional examinations to obtain more data? Does the learner request laboratory and imaging tests, justifying them with hypotheses? |
| | Interpretation/Analysis of Data | Does the learner manage to identify the most important elements among those they have collected? Can they connect them to form a comprehensive representation of the problem? Do they recognize critical elements (“red flags”) that indicate hypotheses that must be excluded? Do they tend to minimize or, conversely, overemphasize an element? Do they know how to adequately interpret the specific findings from the clinical examination? Does the learner interpret the results of tests and examinations, relating them to the formulated hypotheses? |
| | Global Representation of the Problem | Does the learner manage to form a comprehensive representation of the clinical situation, integrating all key elements coherently? |
| | Implementation of Fallback Strategies | Does the learner use strategies to clarify the situation? |
| | Differential Diagnosis/Presumptive Diagnosis | Are the retained differential diagnoses consistent with the described clinical situation? Does the trainee manage to adequately select the diagnostic hypotheses to retain, justifying their choice with positive or negative elements obtained during the consultation? Is the final presumptive diagnosis made? |
| | Intervention Plan/Evaluation | Does the learner develop a complete intervention plan detailing complementary examinations, treatments, and follow-up for the patient? Is the proposed plan targeted, coherent, and aligned with the specific aspects of the clinical situation? Does the learner prioritize the problems in their management plan? Does the learner evaluate the results of their actions (therapeutic interventions, investigations)? |
| Additional | Confidence during Oral Presentation | Does the learner feel confident during the oral presentation of clinical cases? During the oral presentation of the case, does the learner demonstrate a logical and coherent continuity in their clinical reasoning, connecting the different stages of the case? |
| | Engagement with the Case | During the presentation, does the learner clearly express their interest in the details of the case? When presenting, does the learner describe the situation in a way that makes it easy for the supervisor to envision the patient? Does the learner actively engage in case management by proposing solutions? In case of difficulties, does the learner express a continuous interest in solving them rather than considering them insurmountable? Does the learner present complex cases by focusing on possible solutions rather than obstacles? When the learner recognizes their gaps, do they show interest in learning and improving? |
| | Learning Stance/Reflexivity | Does the learner question what they have learned from the case? Does the learner ask questions about the case to their supervisor to deepen their understanding of the situation? |
| | Collaboration (referral letters, case presentations on files, multidisciplinary consultation meetings) | Does the learner share their clinical reasoning and hypotheses with other healthcare professionals? Does the learner perceive collaboration as a constraint? Does the learner encourage participation and contributions from other healthcare professionals in problem-solving? |
| | Emergency Situation | Does the learner quickly identify the emergency situation in the presented case? Do they generate the key hypotheses to exclude? Does the learner mention time and urgency as constraints to a relevant case presentation, explicitly articulating their clinical reasoning? |
Each question in the grid was assessed using a 5-point Likert psychometric scale, where response options were: 1. Not at all; 2. Rarely; 3. Moderately; 4. Well; 5. Very well.
We calculated the overall average of the Likert scores for each step of clinical reasoning. These scores allowed us to evaluate the quality of the presentations made with the usual method and the participants’ ability to articulate each step of their clinical reasoning. In parallel, we used ATLAS.ti software (version 24.0.0.29576) to analyze the transcripts of the oral case presentations.
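As an illustration of this scoring step, the following minimal sketch (assumed data layout, not the study’s actual analysis script) averages the 1-5 Likert ratings for each clinical reasoning step across presentations, which is the computation behind the values reported in Table 2.

```python
# Minimal sketch (assumed data layout): averaging the 5-point Likert ratings for
# each clinical reasoning step across the presentations.

from statistics import mean

# Hypothetical ratings: presentation -> {step: rating on the 1-5 Likert scale}.
ratings = {
    "P1": {"Data Collection": 3, "Generation of Hypotheses": 1, "Hypothesis Verification": 2},
    "P2": {"Data Collection": 2, "Generation of Hypotheses": 2, "Hypothesis Verification": 1},
    # ... one entry per presentation
}

steps = sorted({step for scores in ratings.values() for step in scores})
for step in steps:
    overall = mean(scores[step] for scores in ratings.values() if step in scores)
    print(f"{step}: {overall:.2f}")
```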
3. Results
3.1. Participants
In October 2023, eight postgraduate medical residents voluntarily responded to our invitation to participate in this pilot study. The participants included five women and three men from various years of residency training: second year (2 participants), third year (2 participants), fourth year (1 participant), and fifth year (3 participants), with an average age of 29 ± 2 years.
Due to the unavailability of hematology residents for oral case presentations, we included participants only from three specialties: internal medicine, infectious diseases, and cardiology.
3.2. Presentation Duration
The presentation durations varied, with an average of 5.92 minutes (± 2.78 minutes; minimum 3 minutes, maximum 10 minutes 41 seconds).
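For clarity on how such summary statistics can be obtained, the following minimal sketch converts durations recorded as "minutes:seconds" into decimal minutes and computes the mean, standard deviation, minimum, and maximum; the values shown are placeholders, not the raw study data.

```python
# Minimal sketch (placeholder values, not the raw study data): converting
# presentation durations from "mm:ss" to decimal minutes and summarizing them.

from statistics import mean, stdev

durations_mmss = ["3:00", "4:12", "5:30", "10:41"]  # illustrative only

def to_minutes(mmss: str) -> float:
    minutes, seconds = mmss.split(":")
    return int(minutes) + int(seconds) / 60

minutes = [to_minutes(d) for d in durations_mmss]
print(f"Mean: {mean(minutes):.2f} min, SD: {stdev(minutes):.2f} min, "
      f"min: {min(minutes):.2f}, max: {max(minutes):.2f}")
```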
3.3. Analysis of Presentations Using the Assessment Grid
The assessment grid (Table 1) and the average Likert scale scores (Table 2) indicated that, across all eight presentations using the usual method, the initial problem representation was well constructed, with thorough data collection, clear symptom characterization, and problem identification based on complaints/symptoms stated clearly at the beginning of the presentation. The history of the present illness was introduced clearly and narrated chronologically, providing a clear picture of the patient. Relevant personal and family medical history was documented in most presentations. The learners were able to identify the key elements of the clinical situation and to establish an initial understanding of the case.
However, the Likert scale scores revealed gaps in the expression of clinical reasoning (Table 3).
Table 2. Overall average of Likert scores for the steps of the clinical reasoning process from the eight presentations according to the usual method.
| Steps of Clinical Reasoning | Overall mean Likert score by item |
| --- | --- |
| Initial representation of the problem, characterization of the problem (Complaints/symptoms, history/timeline) | 3.62 |
| Data Collection | 2.62 |
| Generation of Hypotheses | 1.54 |
| Hypothesis Verification | 1.56 |
| Acquisition of Additional Data | 1.99 |
| Interpretation/Analysis of Data | 2.16 |
| Global Representation of the Problem | 2.00 |
| Presumptive Diagnosis | 1.56 |
| Intervention Plan/Evaluation | 1.87 |
Table 3. Gaps in the expression of clinical reasoning identified according to the analysis grid and the Likert scores of the presentations using the usual method.
| Steps of Clinical Reasoning | Comments |
| --- | --- |
| Generation of Hypotheses Adapted to Collected Data | It was often difficult to determine which hypotheses the trainee was testing when presenting the patient’s history and physical examination. Hypotheses were rarely stated and supported with relevant positive or negative signs. When hypotheses were formulated, they were not prioritized, including the worst-case scenario. |
| Hypothesis Verification | The results of additional tests were not incorporated to confirm or rule out the hypotheses. |
| Acquisition of Additional Data | The physical examination was not targeted in most presentations and systematically covered all systems. Laboratory tests were requested but rarely justified by hypotheses. Additional tests were sometimes requested before hypotheses were generated. |
| Interpretation of Laboratory Test Results in Relation to Formulated Hypotheses | Was rarely formulated. |
| Global Representation of the Clinical Situation and Problem | Was not clearly formulated, and the summary did not coherently synthesize all the relevant and key elements of the patient’s history, physical examination, and diagnostic tests. Often, the summary simply repeated the reason for hospitalization along with the history. |
| Final Presumptive Diagnosis | Mentioned in only one presentation. |
| Diagnostic/Therapeutic Plan | In most presentations, only the primary complaint was addressed, with omission of other associated issues in the patient (e.g., diabetes management, therapeutic education, dyslipidemia, hypertension, anemia…). The therapeutic plan was not supported by a final presumptive diagnosis. |
| Intervention Plan | There was no detailed management plan with a prioritized list of problems in the presentations. |
| Evaluation of the Intervention Plan | Was mentioned in a few presentations. |
| Use of Fallback Strategies | The use of fallback strategies was implemented in a few observations to clarify the problem and the diagnosis. |
| Other findings | |
| Confidence | During the oral presentations of the cases, the learners displayed a certain degree of confidence, but it was sometimes noted that the presentation was hesitant, with repetitions and omissions that could indicate a lack of confidence. |
| Collaboration-Communication | In some presentations, it was noted that the learner shared their clinical reasoning and hypotheses about the case with other healthcare professionals, indicating interprofessional communication and collaboration. |
| Responsiveness to Emergencies | In one of the eight observations, it was noted that the learner quickly identified the emergency situation and the critical elements in their patient. However, the treatment plan did not clearly mention the emergency measures taken. |
3.4. Analysis of Oral Case Presentation Transcripts: ATLAS.ti
Figure 1 illustrates the findings. Data collection related to the main complaint, medical history, physical examination, and history of the present illness appeared in all presentations, with a total of 32 coded quotations identified. In contrast, hypothesis generation was less present, identified in only three presentations. The theme of final presumptive diagnosis was mentioned only once among the eight transcripts reviewed.
Figure 1. Sankey diagram illustrating the steps of clinical reasoning identified in the presentations according to the usual method.
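As an illustration of the kind of tally behind Figure 1, the following minimal sketch counts, for each clinical reasoning theme, the number of coded quotations and the number of presentations in which the theme appears, starting from a coded-quotation export; the file name and column names ("document", "code") are assumptions about the export format, not guaranteed ATLAS.ti field names.

```python
# Minimal sketch (hypothetical export format): tallying how many coded quotations
# fall under each clinical reasoning theme, and in how many presentations each
# theme appears.

import csv
from collections import defaultdict

quotation_counts = defaultdict(int)          # theme -> number of quotations
presentations_with_theme = defaultdict(set)  # theme -> set of presentations

with open("quotations_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        theme = row["code"]
        quotation_counts[theme] += 1
        presentations_with_theme[theme].add(row["document"])

for theme, count in sorted(quotation_counts.items(), key=lambda kv: -kv[1]):
    print(f"{theme}: {count} quotations in {len(presentations_with_theme[theme])} presentations")
```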
3.5. Additional Findings
In the pilot study results, we identified two additional themes (Table 3) that emerged during the analysis of oral case presentations: collaboration (in 3 presentations) and responsiveness to emergencies (in 3 presentations).
3.6. Analysis of the Diagnostic Thinking Inventory (DTI) Score by Bordage
We conducted a quantitative analysis of participants’ clinical reasoning, finding an average global DTI score of 185 ± 12, with an average flexibility of thought score of 92 ± 9, and an average knowledge structuring in memory score of 93 ± 6 (Table 4). These scores were higher among female participants compared to male participants (Table 4).
Table 4. Bordage DTI Score and its two subscales of participants by gender.
| | Gender | Mean ± standard deviation | Minimum | Maximum |
| --- | --- | --- | --- | --- |
| DTI Score | | 185 ± 12 | 173 | 211 |
| | Female | 190 ± 14 | 176 | 211 |
| | Male | 177 ± 4 | 173 | 180 |
| Flexibility of thinking | | 92 ± 9 | 81 | 109 |
| | Female | 95 ± 9 | 85 | 109 |
| | Male | 86 ± 7 | 86 | 94 |
| Structure of memory | | 93 ± 6 | 83 | 102 |
| | Female | 94 ± 5 | 91 | 102 |
| | Male | 91 ± 7 | 83 | 97 |
Figure 2 shows the Bordage DTI scores and the two subscales, flexibility of thought and memory structuring, by year of study. Notably, global DTI scores were higher among second- and third-year participants than among fourth- and fifth-year participants.
Figure 2. Bordage DTI Score and its subscales among participants by residency education level.
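For illustration, a minimal pandas sketch of the kind of grouping behind Table 4 and Figure 2 is shown below; the column names and the rows are placeholders, not the study data.

```python
# Minimal sketch (assumed column names, illustrative values): summarizing DTI
# scores and subscales by gender and by residency year.

import pandas as pd

df = pd.DataFrame({
    "gender": ["F", "F", "M", "M"],          # placeholder rows, not study data
    "year":   [2, 3, 4, 5],
    "dti_total": [190, 188, 177, 178],
    "flexibility": [95, 94, 86, 87],
    "structure": [95, 94, 91, 91],
})

cols = ["dti_total", "flexibility", "structure"]
print(df.groupby("gender")[cols].agg(["mean", "std", "min", "max"]))
print(df.groupby("year")[cols].mean())
```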
4. Discussion
In this study, we examined the quality of eight patient case oral presentations following the usual method used in routine practice. Oral presentation is an essential aspect of clinical medicine. Presentation skills rely on the ability to collect, process, and organize patient data, with clinical reasoning being fundamental to the development of these skills.
The results of this pilot study revealed a diversity of presentation methods. This could be linked to the unique characteristics of each discipline and individual cases or to the lack of a clearly defined, common applicable strategy.
The duration of oral presentations varied. The length of clinical presentations remains a topic of debate in the literature. Higgs described clinical reasoning (Higgs et al., 2008) as an upward and outward spiral, emphasizing it as an ongoing, cyclical, and dynamic process rather than a static one. Each loop of the spiral involves data input, data interpretation (or reinterpretation), and problem formulation (or reformulation), leading to a progressively broader and deeper understanding of the clinical problem (Higgs et al., 2008). This perspective suggests that the duration of an oral presentation may be less critical than the quality and depth of the clinical reasoning process demonstrated within it. We acknowledge that exploring a potential correlation between the duration of presentations and the quality of clinical reasoning expressed would provide valuable insights in future studies. However, in this research, we focused on the quality of the presentations by identifying the steps of the clinical reasoning process.
The analysis of verbatim transcripts showed that learners were able to develop an initial representation of the problem, a crucial step in the clinical reasoning process. Gruppen et al. emphasized the importance of collecting clinical data, reporting that medical students who had the correct diagnosis in mind were four to nine times more likely to arrive at the correct diagnosis after the interview, which underscores the importance of gathering relevant and targeted information for early hypothesis generation (Gruppen et al., 1993).
Furthermore, our results noted that the final presumptive diagnosis was only mentioned once among the presentations reviewed, raising questions about the clarity and precision of the diagnostic process and patient management. The hypothesis-generation step was not consistently addressed in the presentations.
There are several factors to explain this observation, notably cultural ones that we address in our main study. We believe that presentations using the traditional method are focused on case resolution, and participants tend to quickly skim over hypothesis generation. Additionally, we think that the traditional method frequently places students in a more passive role (Wolpaw et al., 2003).
In summary, the verbatim analysis indicated that the usual method of oral case presentations did not foster the verbalization and clarification of clinical reasoning at all stages. The literature echoes these findings. Melvin noted that oral case presentations are the most frequently used tool in routine practice for evaluating and stimulating learners’ clinical reasoning (Melvin & Cavalcanti, 2016). However, during these presentations, medical students and inexperienced junior physicians tend to focus primarily on facts and rarely spontaneously express their thoughts and reasoning, which makes evaluation challenging (Irby, 1995; Foley et al., 1979). Engel suggested that the discussions accompanying presentations can be valuable for student learning and should be conducted in the best possible way (Edwards et al., 1987).
Despite their widespread use, oral case presentations have been scarcely studied in the medical education literature (Melvin & Cavalcanti, 2016). Brose (Brose, 1992) distinguished two types of case presentations in medicine: the traditional method and the sequenced method. The traditional method, although used during rounds and small group discussions, proved to be less optimal for teaching problem-solving skills. In contrast, the sequential method, involving the presentation of information in small segments, was more effective in formal teaching for both small and large groups. Known as the “chunked” format by Kassirer and Kopelman (Kassirer & Kopelman, 1990; Kassirer & Kopelman, 1991), this approach has been widely implemented in various forms at Ohio University College of Osteopathic Medicine since 1982, as well as at other teaching institutions.
The traditional method has limitations related to a lack of enthusiasm and interaction (Brose, 1992). The sequential method, on the other hand, minimizes these limitations by promoting more interactive discussion and offering more effective learning opportunities (Brose, 1992).
Quantitatively, Bordage’s Diagnostic Thinking Inventory (DTI) has been used in multiple studies to assess thought flexibility and memory structure across various levels of clinical experience, with mixed results (Bordage et al., 1990; Keshmiri et al., 2021; Rahayu & McAleer, 2008). Among our participants, the structuring score was higher than the flexibility score, which was also observed in Hermasari’s study (Hermasari et al., 2023).
We noted that second- and third-year residents had higher DTI scores than fourth- and fifth-year residents. We hypothesize that the phenomenon of “loss of innocence” in student training, described by Boshuizen during the transition to clinical settings (Boshuizen, 1996), may be a contributing factor. Fourth- and fifth-year residents experience a period of destabilization, during which they reorganize and restructure their knowledge to construct clinical scripts (Schmidt et al., 1990). This process aligns with the logic of skill development, where progress occurs at an individual pace and varies from one student to another.
On the other hand, Bordage’s study reports significant differences in scores among nine groups representing various stages of medical education and clinical practice, including first-year and third-year medical students, interns, senior house officers, registrars, senior registrars, consultants, general practice trainees, and general practitioners (Bordage et al., 1990). Similarly, the Indonesian study by Rahayu found significant differences in DTI scores across years of study, with scores generally increasing throughout medical training and practical experience (Rahayu & McAleer, 2008), although scores in the second and third years were lower than those in the first year. This highlights that clinical reasoning skills develop progressively through education and hands-on experience.
As noted by Ching and colleagues, failures in clinical reasoning have been identified as an important factor in diagnostic and treatment errors (Lee et al., 2022).
Despite the small number of participants, this pilot study allowed us to identify gaps requiring the close attention of clinical supervisors to adopt teaching methods that foster and facilitate the expression of clinical reasoning by learners. Our findings suggest that learners tended not to generate hypotheses, and that additional tests were ordered without being justified by hypotheses, which could expose them to the risk of diagnostic errors. Nendaz highlighted that diagnostic errors account for more than 8% of adverse events in medicine and up to 30% of malpractice claims. The mechanisms of error can be related to the work environment, but cognitive issues are involved in about 75% of cases, either alone or in association with system failures in 19% of cases and mixed causes in 46% (Graber et al., 2005).
Graber et al. (2005) reported that cognitive factors, meaning physicians’ thinking processes, contribute to diagnostic errors in nearly 75% of cases, the most common cognitive factor being faulty synthesis. This underscores the importance of understanding physicians’ thought processes, their decision-making, and their clinical reasoning. The majority of these cognitive errors are not due to a lack of knowledge (3%) but rather to defects in data collection (14%), data integration (50%), and data verification (33%) (Graber et al., 2005; Bordage, 1999), flaws that can lead to premature diagnostic closure (Nendaz & Perrier, 2012). This has been observed in various fields, such as internal medicine (Graber et al., 2005), anesthesia, and neurology. The number of cognitive errors during the reasoning process can even predict the occurrence of patient-harmful events (Zwaan et al., 2012).
Pelaccia also emphasized that reasoning errors are the main cause of misdiagnoses and that it is therefore crucial for supervisors to identify the challenges faced by students and address them (Pelaccia et al., 2020).
Our results suggest that the usual presentation method exposes residents to academic challenges. As noted in many publications, ten to fifteen percent of medical trainees face academic challenges, the majority of which are cognitive, specifically related to clinical reasoning. Medical schools need to be more involved in developing and establishing tools to encourage direct observation of clinical reasoning development in medical learners, as well as to enhance the teaching skills of educators (Audétat et al., 2013).
On the other hand, our study results indicate that clinical supervisors are likely not satisfied with the usual presentation method, as the resident fails to articulate their clinical reasoning in a structured manner and to present a final presumptive diagnosis. In this regard, it has been noted that most supervisors can quickly observe that their trainee’s cognitive approach is unsatisfactory. However, this perception remains intuitive and does not allow the supervisor to clearly identify the deficits in their trainee’s thought process (Weller et al., cited by Audétat et al., 2019). To overcome this challenge, supervisors need to translate their intuitions into pedagogical actions (Audétat et al., 2019).
We can deduce that the usual presentation method does not encourage the verbalization of clinical reasoning. This observation raises important questions about the effectiveness of the traditional presentation method in developing the clinical reasoning of postgraduate medical students. To address this challenge, we are therefore interested in exploring a new supervision method in the context of case presentations: the SNAPPS method, a framework for verbalizing clinical reasoning. SNAPPS (Table 5), an acronym for “Summarize,” “Narrow down,” “Analyze,” “Probe,” “Plan,” and “Select” (Audétat & Laurin, 2018), is a learner-centered educational strategy that uses a series of cognitive steps to assist the learner in clinical decision-making and improve clinical reasoning skills. This strategy guides the student through summarizing the history and findings; generating, analyzing, and prioritizing hypotheses and differential diagnoses; asking the supervisor questions about uncertainties; planning the management of the patient’s medical problems by prioritizing them; and selecting a case-related problem for self-directed learning. The student uses these six steps to organize their thinking and to verbalize and clarify their thought process to the clinical supervisor (Berg-Poppe et al., 2022).
Table 5. The 6 steps of the SNAPPS method (Audétat & Laurin, 2018).
| Step | Meaning | Content |
| --- | --- | --- |
| 1. Summarize | Summarize the case | The student summarizes the clinical situation. |
| 2. Narrow down | Name the hypotheses and differential diagnoses | The student discusses the main diagnostic hypotheses considered. |
| 3. Analyze | Analyze the differential by comparing and contrasting the possibilities, providing justification | The student presents the key elements that support or contradict each retained hypothesis and interprets the obtained data to relate them to these hypotheses. |
| 4. Probe | Probe the preceptor by asking about uncertainties or difficulties | The student asks the supervisor questions to enhance their knowledge, address encountered difficulties, discuss concrete strategies to consider, and benefit from the supervisor’s experience. |
| 5. Plan | Plan the management of the clinical situation | The student presents a plan for complementary examinations, treatment, and patient follow-up. |
| 6. Select | Select a case-related issue for self-directed learning | The student identifies personal learning objectives based on the situation, seeking the supervisor’s assistance if needed. |
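As an illustration only (the structure below is not part of the SNAPPS method itself), the six steps can be represented as a simple checklist that a supervisor or an analysis grid could use while listening to a presentation; the step prompts paraphrase Table 5.

```python
# Minimal sketch: the six SNAPPS steps as a simple checklist that could be
# ticked off while listening to a presentation. Illustrative only; the step
# wording paraphrases Table 5.

SNAPPS_STEPS = [
    ("Summarize", "Briefly summarize the history and findings."),
    ("Narrow down", "Name two or three relevant diagnostic hypotheses."),
    ("Analyze", "Compare and contrast the hypotheses using the collected data."),
    ("Probe", "Ask the supervisor about uncertainties and difficulties."),
    ("Plan", "Propose management: investigations, treatment, follow-up."),
    ("Select", "Choose a case-related issue for self-directed learning."),
]

def checklist(observed: set[str]) -> None:
    """Print which SNAPPS steps were expressed during a presentation."""
    for step, prompt in SNAPPS_STEPS:
        mark = "x" if step in observed else " "
        print(f"[{mark}] {step}: {prompt}")

checklist({"Summarize", "Narrow down", "Plan"})
```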
In their role as supervisors, effective clinician educators ask relevant questions, create intellectually safe learning environments, and encourage student development. The SNAPPS framework emphasizes active student participation, which is essential for the model’s success. This framework is inspired by the Socratic method, utilizing deliberate questions guided by the clinician educator to foster understanding development (Oh, 2005). Wolpaw and colleagues (Wolpaw et al., 2003) have asserted the value of this method in the outpatient setting to encourage dialogue between students and clinician educators, thereby enhancing advanced reflection and promoting active learning.
Medical students take the initiative in communication. Instead of simply reporting facts and information, they are encouraged to communicate their thoughts, questions, and uncertainties to foster their development through collaborative communication with their clinician educator. To balance the active ownership of learning by the student during this educational exchange of ideas, the clinician supervisor takes on the role of facilitator (Berg-Poppe et al., 2022).
Moreover, the SNAPPS method requires students to take initiative, resulting in a responsibility for self-directed learning during the clinical phase of the program (Berg-Poppe et al., 2022). Consequently, this method promotes student autonomy and increases their confidence.
The development and improvement of learners’ clinical reasoning represent a significant challenge. To address this challenge, it is essential to organize teaching and training in clinical reasoning (Schmidt & Mamede, 2015), as well as to train supervisors in clinical supervision methods. Many physicians believe that because they are good clinicians or researchers, they are also good teachers. They often rely on their personal opinions or beliefs regarding medical education issues, instead of seeking evidence as they would for medical activities (Harden et al., 2000). It is necessary to provide clinician educators with skills in medical education (Steinert et al., 2006). According to a systematic review (Steinert et al., 2006), such programs are associated with greater satisfaction and self-confidence among educators, improved teaching skills and behaviors, and a positive impact on learners.
The findings of this exploratory study enable us to proceed with our main mixed methods research, which combines qualitative and quantitative approaches and adopts a comparative, pragmatic framework. The objectives are to evaluate the impact of the SNAPPS method on learners’ clinical reasoning and on the quality of oral case presentations.
5. Strengths-Limitations
This pilot study sought to test initial intuitions about the conventional presentation method. One of its primary strengths is the diversity of its participants, drawn from several medical disciplines and academic levels, which enriched the qualitative analysis and provided a solid foundation for future research. The study also confirmed those initial intuitions, highlighting the limitations of the traditional method in fostering the expression of learners’ clinical reasoning during oral case presentations.
Its limitations, particularly the small sample size and the limited generalizability of the findings, call for cautious interpretation, especially regarding any correlation between the duration of presentations and the quality of the clinical reasoning expressed. Our main comparative study, with larger and more diverse samples, will be crucial to validate and expand upon these findings, ensuring a deeper understanding. Additionally, clinical reasoning across different fields of study was not addressed, as it fell outside the scope of this study’s objectives.
However, it remains an interesting area that could be explored in future research.
This is an original study within our specific context, and we believe that future research in other hospital environments would be valuable for implementing the innovative SNAPPS method in our setting, comparing a control group using the traditional method with a SNAPPS method group.
6. Conclusion
At the conclusion of our pilot study, the analysis of oral case presentations using the usual method revealed gaps that could hinder the development and training of residents. We aim to address this issue by employing an innovative, learner-centered clinical supervision strategy, SNAPPS. This structured strategy is designed to assist learners in the clinical decision-making process and to improve their verbal, analytical, and reflective skills. Our objective is to study its impact on the expression and development of clinical reasoning among residents. Through this strategy, we aim to transform our intuitions into pedagogical actions.
Acknowledgements
We thank the participants for their time and contribution to this pilot study.
Funding
This pilot study, which is part of a doctoral research project, received no specific funding.