Tag Archives: scenario design

What’s a Scenario? The Word That Means Many Things to Many People in Healthcare Simulation

[Image: dictionary definition of “scenario,” showing its etymology and meaning related to scripts and instructions.]

If you’ve worked in healthcare simulation for any length of time, you’ve probably used the word “scenario” countless times. “Let’s build a new scenario.” “We’re running the sepsis scenario this afternoon.” “That scenario went great!”

But have you ever stopped to think about how that same word means something different to each person involved? The word “scenario” is a perfect example of how language in simulation can unite us, or possibly confuse us, depending on our perspective. Additionally, those creating said “scenarios” need to be keenly aware of these implications.

In truth, “scenario” represents something unique to different members of the simulation ecosystem: learners, educators, technicians, and administrators. Understanding these different lenses can help strengthen teamwork, communication, and the overall impact of our simulation programs.

The Learner’s Scenario: The Clinical Experience

For learners, the scenario is the experience itself. It’s the unfolding clinical-like moment that challenges their knowledge, judgment, and communication skills in an effort to improve.

In the learner’s mind, the scenario “is” the simulation. It’s what they see, hear, and feel—the patient’s distress, the team dynamics, the need to make decisions under pressure. The learner rarely thinks about the planning that went into it; they simply step into a space to which they have (hopefully) been well oriented, that feels real enough, and that is relevant to their goals.


For them, the scenario represents an opportunity: a chance to act, reflect, and learn in a safe environment. When done well, it becomes a memorable and emotionally resonant learning event that bridges the gap between classroom knowledge and clinical performance along with providing a stimulus for self-improvement.

The Educator’s Scenario: The Blueprint for Learning

For the educator or faculty member, the scenario is not just an experience—it’s a design.

To the educator, the scenario is the blueprint for what the learner will encounter. It contains the story arc, learning objectives, key events, and expected actions. It guides how pre-learning will be incorporated or reinforced to prepare the learner, how the simulation unfolds, and how the debriefing reinforces the lessons afterward as well as how assessment strategies and tools are incorporated into the learning encounter.

A well-constructed scenario is both an art and a science. It is an instrument that balances operations with realism and educational intent. It requires alignment between objectives, assessment, and debriefing. The educator’s scenario document might include everything from patient history and vital sign trends to faculty prompts, checklists, and suggested debriefing strategies and topics.

In this view, the scenario becomes a curricular instrument, a tool that translates educational goals into lived experience.

The Simulation Operations Team’s Scenario: The Technical Playbook

For the simulation operations specialist or technician, the scenario is a technical plan, a script for how to bring the educator’s vision to life.

This version of the scenario includes the logistics that make the experience possible, for example (a sketch of a run sheet follows this list):
– Scheduling and room reservations
– Equipment and supply lists
– Simulator programming and physiological responses
– Audio-visual configurations
– Staffing assignments and role descriptions
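
To make this concrete, here is a minimal sketch of what such a run sheet might look like as structured data that an operations team could version and reuse. Every field name and value below is a hypothetical illustration, not a standard format.

```python
# Hypothetical run sheet for a sepsis scenario; adapt fields to your center.
sepsis_run_sheet = {
    "scenario": "Sepsis recognition and management",
    "room": "Sim Lab 2",
    "schedule": {"setup": "07:30", "run": "08:00", "reset": "08:45"},
    "equipment": ["adult manikin", "IV pump", "NS 1L bags x4", "patient monitor"],
    "simulator_states": [
        # Programmed physiology: baseline, then response to fluids.
        {"t_min": 0, "hr": 118, "bp": "84/50", "spo2": 91},
        {"t_min": 5, "trigger": "30 mL/kg fluid bolus", "hr": 104, "bp": "98/60"},
    ],
    "av": {"cameras": ["bed overhead", "foot of bed"], "audio": "ceiling mic"},
    "staffing": {"operator": "sim tech", "embedded_nurse": "confederate script v2"},
}
```

Keeping the playbook in one structured document like this makes it easier to spot the single missing cable or supply before the learners arrive.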

For the operations team, precision is everything. A single oversight, such as an unplugged cable, a missing monitor, or a mistimed vital sign change, can derail the encounter and disrupt the learning flow, along with the concentration of learners and faculty alike.

Their scenario isn’t about learning objectives; it’s about execution. It ensures that the right tools, people, environments, and technology align perfectly at the right moment to make the educational magic happen. In many ways, their scenario is the stage directions that make the play run seamlessly. Or to borrow a piece from a previous blog post of mine, it is the music that plays to allow the learners to dance and be evaluated.

The Administrator’s Scenario: The Unit of Measurement

To program administrators and simulation center leaders, the word “scenario” carries yet another meaning.

From this vantage point, the scenario represents a unit of activity. Think of it as a quantifiable event tied to scheduling, staffing, and financial data. It’s a building block for understanding center utilization, cost recovery, and return on investment.

An administrator may see a scenario not only as an educational event but also as a data record in a management system: duration, participants, faculty hours, resource use, and consumables. From these data points come critical insights such as how much it costs to deliver a course, how often equipment is used, and where efficiencies or resource gaps exist.
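
As a simple illustration of this lens, here is a minimal sketch of a scenario run as a data record with a back-of-the-envelope cost-per-learner calculation. The field names and rates are assumptions for illustration only; substitute your own institution’s figures.

```python
from dataclasses import dataclass

@dataclass
class ScenarioRun:
    course: str
    duration_hr: float       # scheduled room time for the run
    learners: int
    faculty_hours: float     # total faculty time, including debriefing
    consumables_cost: float  # supplies used up per run

FACULTY_RATE = 75.0  # assumed hourly faculty cost
ROOM_RATE = 40.0     # assumed hourly room/equipment overhead

def cost_per_learner(run: ScenarioRun) -> float:
    total = (run.faculty_hours * FACULTY_RATE
             + run.duration_hr * ROOM_RATE
             + run.consumables_cost)
    return total / run.learners

run = ScenarioRun("Sepsis course", duration_hr=1.0, learners=6,
                  faculty_hours=2.0, consumables_cost=85.0)
print(f"${cost_per_learner(run):.2f} per learner")  # -> $45.83 per learner
```

Aggregated over a semester, records like this answer the utilization and cost-recovery questions administrators care about.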

This administrative view ensures that simulation programs remain sustainable, scalable, and aligned with institutional goals.

One Word, Many Worlds

The fascinating thing about the word “scenario” is that all of these definitions are correct, in daily use across the simulation world, and essential. Each reflects a different dimension of the same phenomenon.

For the learner, it’s an experience.
For the educator, it’s a design.
For the technician, it’s an operation.
For the administrator, it’s a metric.

Together, these perspectives form the ecosystem that allows simulation to thrive. The most successful programs are those where these views overlap and inform one another—where educators appreciate the operational complexity, technicians understand the learning goals, and administrators recognize the educational and patient-safety impact that justify the resources.

When those perspectives align, the word “scenario” transforms from a simple script or event into a powerful tool for advancing healthcare education and safety.

Director’s Reflection

In my years of working with simulation programs around the world, I’ve learned that the strength of a simulation scenario isn’t found just in the documents or the technology, but also in the shared understanding among the people who create, deliver, and learn from it.

A scenario is a bridge connecting intent to experience, vision to execution, and learning to improvement. Whether you’re writing one, running one, or analyzing its data, remember that every scenario represents a small but meaningful step toward better healthcare.

Until Next Time,

Happy Simulating!


Filed under Curriculum, debriefing, design, operations, scenario design, simulation

Debugging Simulation: How Alpha and Beta Testing Strengthen Scenario Success

In the world of healthcare simulation, our goal is to create meaningful learning experiences that improve the safety and quality of patient care. Achieving that goal requires careful planning, thoughtful design, and rigorous evaluation of our simulation scenarios. One concept borrowed from the world of software and technology development—but often overlooked in healthcare education—is the process of alpha and beta testing.

By understanding and applying these concepts to simulation scenario design, educators can significantly enhance the efficiency, effectiveness, and overall impact of their programs. Let’s take a closer look at what alpha and beta testing mean, why they matter in healthcare simulation, and how they can help elevate both the learner’s and the facilitator’s experience.


What Do We Mean by Alpha and Beta Testing?

The terms alpha testing and beta testing originate from the software development industry. Before an application is released to users, developers put it through multiple rounds of trials to identify problems, fine-tune functionality, and ensure that it behaves as intended. Healthcare simulation, while a very different domain, benefits from the same structured approach.

  • Alpha testing is the internal trial run. In the simulation context, this means running a new scenario with the development team or a small group of faculty before exposing it to actual learners. The purpose is to check for errors, gaps, or inconsistencies in the scenario design. Are the case details clear? Do the vital signs respond correctly to learner interventions? Does the simulator technology function as intended?
  • Beta testing is the external pilot run. This step introduces the scenario to a limited group of learners—often peers, or learners similar to those for whom the scenario is intended. The purpose is to observe how real participants interact with the scenario. Do they engage in the way you intended? Do the prompts drive the critical thinking skills you were hoping to elicit? Are they interpreting the simulated aspects of the scenario in the manner in which they are intended? Are the debriefing points aligning with your learning objectives?

When done well, these stages help identify potential pitfalls, correct technical issues, and refine educational flow before the simulation reaches a larger audience.


Why Alpha Testing Matters

Alpha testing is your chance to work out the “kinks” of a simulation in a controlled environment. Think of it as a rehearsal where mistakes are not only acceptable but expected.

Consider a scenario where learners are expected to diagnose sepsis in an unstable patient. During alpha testing, your faculty team might discover that the simulator’s vital signs do not update quickly enough when fluid resuscitation is administered. Or perhaps the timing of lab results makes it impossible for learners to reach the intended diagnosis within the allotted session. Identifying these issues before learners arrive saves both time and frustration. However, always remember that those who participated in the design have often developed a shared mental model and may overlook elements that the actual intended learners will misinterpret.

Some examples of key questions to ask during alpha testing include:

  • Do the scenario instructions match the programmed mannequin responses?
  • Are embedded participants (e.g., a nurse or family member role) clear on their scripts?
  • Does the timing of critical events support the learning objectives?
  • Are there any “gotchas” that could derail learner engagement?
  • Did the pre-briefing take longer than expected?

By the end of alpha testing, the simulation team should have a scenario that is technically functional, logically sound, aligned with its stated goals, and able to run in approximately the amount of time for which it was designed.


Why Beta Testing is Crucial

Once the internal checks are complete, it is time to see how the scenario performs in the real world. Beta testing is the first opportunity to expose the simulation to actual learners, albeit on a smaller and more controlled scale.

Imagine your team has developed a scenario for emergency airway management. The alpha test confirmed that the mannequin responds appropriately to intubation attempts and that medications are available in the correct doses. During beta testing with a group of residents, however, you observe that they consistently miss an early cue about airway edema. This could mean your prompts are too subtle—or that your learners need more scaffolding. Either way, the feedback allows you to adjust before rolling it out widely.

Beta testing provides answers to questions such as:

  • Are learners engaging with the scenario in the way we anticipated?
  • Do the actions of participants align with the intended outcomes and competencies?
  • Does the scenario create opportunities for meaningful debriefing?
  • What unexpected challenges or learner behaviors emerge?

In essence, beta testing allows the scenario to “fail safely” in front of a pilot group so that the eventual cohort benefits from a polished and purposeful experience.


Lessons from Software Development

In software engineering, skipping alpha and beta testing is a recipe for disaster—think buggy apps, frustrated users, and poor reviews. The same risks apply to simulation. Without proper testing, scenarios can fall flat, confuse learners, or even undermine the credibility of your program.

Borrowing these terms reminds us that scenario design is not a one-and-done activity. It is an iterative process where feedback loops play a central role in quality improvement. Just as developers patch software bugs, simulation educators refine scenario elements until they function smoothly.


Practical Tips for Implementing Alpha and Beta Testing

  1. Schedule testing time. Don’t assume you can “test on the fly” before learners walk in. Build alpha and beta testing into your development timeline.
  2. Use checklists. Structured tools can help your team evaluate everything from simulator programming to alignment with learning objectives (a simple sketch of such a tool follows this list).
  3. Capture feedback systematically. During beta testing, request that observers take notes on learner behaviors, timing, and unintended outcomes. Post-scenario surveys can also capture learner perceptions.
  4. Iterate, don’t improvise. Resist the urge to “fix” problems on the fly during a live teaching session. Incorporate changes based on alpha/beta feedback before the full rollout.
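
As a sketch of how tips 2 and 3 might be combined, alpha-test findings can be captured in a structure as simple as the following. The checks and notes are hypothetical examples, not a validated instrument.

```python
# Hypothetical alpha-test checklist with pass/fail status and follow-up notes.
alpha_checklist = [
    {"check": "Vitals respond to fluid bolus within 60 s", "passed": False,
     "note": "Simulator trend lagged ~3 min; reprogram trend speed."},
    {"check": "Embedded nurse script matches case timeline", "passed": True,
     "note": ""},
    {"check": "Labs available by minute 8", "passed": False,
     "note": "Results released too late to support the diagnosis."},
    {"check": "Pre-briefing fits the 10-minute slot", "passed": True,
     "note": ""},
]

# Anything that failed must be resolved before the beta run with learners.
for item in alpha_checklist:
    if not item["passed"]:
        print(f"FIX BEFORE BETA: {item['check']} -> {item['note']}")
```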

How This Benefits Learners

Ultimately, alpha and beta testing serve a dual role: making faculty feel more comfortable and enhancing the learner experience. A well-tested scenario ensures that:

  • Learners are immersed in a coherent case that is relevant to their learning needs.
  • Technical glitches do not distract from critical thinking.
  • Debriefing discussions flow naturally from the scenario, rather than being forced or disconnected.

In other words, when educators invest in testing, learners reap the rewards through higher-quality education and, by extension, safer patient care.


Conclusion: Test Early, Test Often

Healthcare simulation has matured into a vital component of modern education. But as with any educational tool, its effectiveness depends on the rigor of its design. By embracing alpha and beta testing, simulation teams can identify weaknesses, refine strengths, and deliver scenarios that consistently meet their objectives.

The lesson from software holds true: the more you test before release, the fewer problems you encounter afterward. In healthcare simulation, that means fewer distractions, more meaningful learning, and ultimately better outcomes for patients.

So the next time you’re preparing to debut a new scenario, pause and ask: Have we really tested this? If the answer is no, it may be worth an extra round of alpha or beta testing. Your learners, participating faculty, and technical staff will thank you.


Filed under Curriculum, design, scenario design, simulation, Uncategorized

Improving Interrater Reliability in Healthcare Simulation-Based Assessments: The RST Approach

Achieving high interrater reliability (IRR) is a cornerstone of any effective medium- or high-stakes assessment in healthcare simulation. Without consistent and dependable scoring across multiple raters, the validity of an assessment can be called into question. Interrater reliability ensures that evaluations are fair, objective, and truly reflective of the participant’s performance rather than of the subjective biases or variability among raters.

For simulation-based assessments, however, maintaining IRR can be particularly challenging due to the complex, dynamic, and multifaceted nature of healthcare scenarios. This is where the RST approach—focusing on changes to the Rater, the Simulation, and the Tool—can offer a systematic and impactful framework for improvement. In this post I’ll walk you through this approach, providing insights and practical strategies for applying RST to your simulation programs.


The R in RST: Changing the Rater

One of the most straightforward avenues to improve IRR is addressing variability related to the rater. This is critical because raters bring their own perspectives, experiences, and biases to the evaluation process, all of which can affect their scoring.

Strategies for Enhancing the Rater’s Consistency:

  1. Rater Calibration Sessions
    Conducting rater calibration sessions is one of the most effective ways to ensure raters have a shared understanding of the evaluation criteria. These sessions involve reviewing sample performances as a group and discussing scoring rationales to align perceptions. This shared experience helps raters interpret assessment tools in the same way, leading to more consistent scoring.
  2. Rater Selection and Expertise
    Consider who is performing the assessment. Are they subject matter experts? Are they trained educators? Selecting raters with relevant expertise and familiarity with the assessment content can reduce variability. Conversely, inexperienced or overly diverse rater pools may introduce inconsistencies.
  3. Addressing Rater Bias
    Even with calibration, unconscious biases can creep into assessments. Training raters to recognize and mitigate biases—such as favoring individuals who perform similarly to the rater’s own practice style—can improve consistency.
  4. Changing Raters
    If specific raters consistently show discrepancies in their scoring compared to others, it may be necessary to replace them or limit their participation in high-stakes assessments. Using multiple raters per simulation and averaging scores can also dilute individual biases (a quick computational check of rater agreement follows this list).
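
One way to verify that calibration and rater changes are paying off is to quantify agreement directly. Cohen’s kappa is a standard statistic for two raters that corrects raw agreement for chance; below is a minimal sketch using hypothetical checklist scores.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same set of items."""
    n = len(rater_a)
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / n**2
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

# Hypothetical scores from two raters on the same 10 checklist items:
a = ["done"] * 7 + ["not done"] * 3
b = ["done"] * 6 + ["not done"] * 4
print(round(cohens_kappa(a, b), 2))  # 0.78: substantial, but room to calibrate
```

Low kappa on a particular item is often a sign that the item, not the raters, needs work, which is exactly where the S and T of RST come in.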

The S in RST: Changing the Simulation

The second dimension of the RST approach involves modifying the simulation itself to make it more assessable. By carefully designing simulations to make critical behaviors, thought processes, and decisions more observable, you enhance the ability of raters to evaluate participants consistently.

Strategies for Simulation Adjustments:

  1. Prompting Observable Actions
    Simulations can be structured to encourage participants to verbalize their thought processes or articulate their decisions. For instance, during a scenario involving a critical diagnosis, asking participants to “think aloud” as they interpret clinical findings can provide raters with clear evidence of decision-making skills, making scoring more straightforward.
  2. Embedding Structured Checkpoints
    Building structured checkpoints into the simulation—such as specific moments when participants are asked to summarize their findings or outline their next steps—creates clear opportunities for assessment. This reduces ambiguity for raters.
  3. Standardizing Simulation Flow
    Variability in how simulations unfold can lead to scoring challenges. Using standardized patient scripts, consistent cues, and fixed timing for critical events ensures that all participants encounter the same conditions, making assessments more comparable. If high-technology simulators are being used, consider using a preprogrammed scenario to ensure the physiologic changes are consistent across all episodes of the same scenario.
  4. Revisiting Scenario Complexity
    While realism is a hallmark of effective simulation, excessive complexity can overwhelm raters and obscure key performance indicators. Simplifying scenarios to focus on specific competencies can improve the clarity and reliability of evaluations.

The T in RST: Changing the Tool

The assessment tool is often an overlooked factor in achieving IRR, yet it plays a pivotal role in how raters interpret and apply scoring criteria. A well-designed tool minimizes ambiguity and makes scoring intuitive, even for less experienced raters.

Strategies for Tool Optimization:

  1. Behavioral Anchors for Rating Scales
    Adding specific behavioral examples or descriptors to rating scale items helps raters apply the scales consistently. For instance, instead of a vague “Good” rating, an anchored descriptor like “Effectively communicates diagnosis and treatment plan to patient” provides clarity (a sketch of an anchored item follows this list).
  2. Item Grouping and Ordering
    Organizing items logically—for example, grouping communication skills, clinical decision-making, and procedural skills separately—makes it easier for raters to focus on one domain at a time. A cluttered or disorganized tool can lead to confusion and inconsistent scoring.
  3. Simplifying Language
    Ensure that the language in the tool is straightforward and free of jargon. If raters struggle to interpret an item, their scoring may vary widely.
  4. Usability Enhancements
    Small changes, like improving the font size, using bullet points, or incorporating intuitive layouts, can significantly reduce rater fatigue and errors during scoring. A user-friendly tool ensures raters stay focused on the participant’s performance rather than grappling with the mechanics of the tool.
  5. Pretesting the Tool
    Conduct pilot assessments using the tool to identify problematic items or inconsistencies. This feedback loop allows you to refine the tool before deploying it in high-stakes simulations.
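
To show what a behaviorally anchored item might look like in structured form, here is a minimal sketch. The domain, wording, and anchor levels are hypothetical; real anchors should emerge from your own tool-development and pretesting process.

```python
# Hypothetical behaviorally anchored rating item.
anchored_item = {
    "domain": "Communication",
    "item": "Communicates diagnosis and treatment plan to the patient",
    "anchors": {
        1: "Does not state a diagnosis or plan to the patient",
        2: "States diagnosis or plan but uses jargon or omits key steps",
        3: "Clearly communicates diagnosis and plan in plain language",
    },
}

# Raters match the behavior they observed to an anchor, rather than
# deciding in the abstract what "Good" means.
for level, behavior in anchored_item["anchors"].items():
    print(level, "-", behavior)
```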

Putting It All Together: The RST Approach in Action

To illustrate how the RST approach works holistically, imagine a healthcare simulation designed to assess a participant’s ability to manage a cardiac arrest scenario:

  • Rater: You organize a calibration session where all raters review a sample video of a cardiac arrest scenario and agree on scoring criteria. You also ensure raters have experience in emergency medicine and provide bias-awareness training.
  • Simulation: The scenario is adjusted to include a structured moment where the participant is required to verbalize their reasoning for choosing a particular medication. Additionally, standardized cues are used to ensure all participants face identical conditions.
  • Tool: The assessment tool is revised to include behavioral anchors, such as “Identifies and administers epinephrine within 3 minutes” for procedural accuracy. The tool’s layout is simplified, grouping items under headings like “Clinical Judgment” and “Communication.”

With these changes, the IRR for this simulation-based assessment improves, as raters now have a shared understanding, participants’ actions are more easily observable, and the tool provides clearer guidance.


Conclusion: Adopting the RST Approach for Better Assessments

Improving interrater reliability in healthcare simulation assessments is, I will admit, no small task, but the RST approach offers a structured framework to tackle the challenge. By focusing on the Rater, the Simulation, and the Tool, you can systematically address the factors that contribute to variability and ensure more consistent, fair, and accurate evaluations. For more on this, see my previous blog post on interrater reliability.

Whether you are designing a new assessment or refining an existing one, considering how changes in these three areas might influence IRR is a worthwhile investment. With reliable assessments, we not only enhance the quality of simulation-based education but also uphold the integrity of our evaluations—ultimately contributing to better-prepared healthcare professionals.

Are you ready to elevate your simulation assessments? The RST approach is here to guide your journey.

Please like and comment if you would like to see more topics like this in my blog!

Until next time, Happy Simulating!


Filed under assessment, design, scenario design, simulation

Cognitive Load as a Currency: Spend it WISELY in Simulation Scenario Design

In the world of healthcare education, we know that simulation-based training is a powerful tool, allowing students to experience real-life scenarios in a controlled environment. Simulation not only bridges the gap between theory and practice but also builds confidence and competence in a safe space. However, as with all educational tools, there’s a delicate balance to maintain regarding design decisions, particularly when it comes to the concept of cognitive load.

Cognitive Load: A Precious Resource

Cognitive load refers to the amount of mental effort being used in the working memory. It is, in essence, the currency of the mind—a finite resource that, when spent wisely, can lead to effective learning and retention. But, just like any currency, it can be squandered if not managed properly.

In our healthcare simulations, participants are asked to perform tasks that mimic real-life situations. They must think critically, make decisions quickly, and often work under pressure—all while processing the simulated environment around them. Every element in a simulation scenario demands a portion of the participant’s cognitive load. When this load becomes too heavy, it can overwhelm the learner, leading to confusion, errors, and, ultimately, a less effective educational experience.

The Hidden Costs of Over-Designing Simulations

In an effort to make simulations as realistic as possible, educators sometimes introduce elements that, while seemingly beneficial, can actually detract from the learning experience. These can include irrelevant information, extraneous equipment, or overly complex scenarios that do not directly contribute to the learning objectives. While the intention is often to enhance the realism of the scenario, the reality is that these additional elements force participants to expend cognitive energy on processing what is simulated and why it is being simulated.

For example, consider a scenario designed to teach students how to manage a patient in cardiac arrest. The core learning objectives might include recognizing signs of cardiac distress, performing CPR, and administering appropriate medications. However, the students might find themselves distracted if the scenario also includes irrelevant background noise, additional non-essential equipment, or extraneous patient history that doesn’t contribute to the learning objectives. They may spend valuable cognitive resources trying to process this irrelevant information rather than focusing on the critical tasks at hand.

The Art of Simplification: Less is More

To maximize the effectiveness of simulation, it’s essential to streamline scenarios, focusing on the elements that directly support the learning objectives. This doesn’t mean stripping away all realism, but rather, carefully curating the scenario to include only those aspects that enhance understanding and practice of the targeted skills. The goal is not to make it real but to make it real enough. Our goal is not to recreate reality but to provide an environmental milieu that supports the tasks at hand and allows the scenario to achieve intended objectives.

When designing a simulation, ask yourself:

– What are the primary learning objectives?

– What elements of the scenario directly support these objectives?

– Are there any elements that, while realistic, do not contribute to the learning goals and could potentially distract or overwhelm the students?

By answering these questions, you can begin to design scenarios that are both effective and efficient, ensuring that students’ cognitive resources are spent on mastering the intended skills rather than getting bogged down by unnecessary details.

A Practical Approach to Cognitive Load Management

1. Clear Objectives: Begin with a clear understanding of what you want your students to learn. Every element of the simulation should tie back to these objectives.

2. Essential Information Only: Include only the information and equipment necessary to achieve the learning goals. Avoid adding extras that don’t directly contribute to the scenario’s success.

3. Sequential Learning: If multiple skills need to be practiced, consider breaking them down into separate scenarios. This allows students to focus on one set of objectives at a time, reducing cognitive overload.

4. Debrief Thoughtfully: Use the debriefing session to reinforce learning objectives and clarify any confusion. This helps students consolidate what they’ve learned and understand the relevance of each element in the simulation.

5. Feedback and Iteration: Regularly gather feedback from participants and use it to refine your scenarios. What seems beneficial in theory might not always work in practice, and being open to adjustments is key to effective simulation design. Further, if students stumble at the same point in the scenario, look for potential design flaws or elements that might be adding confusion.

Conclusion: Design Scenarios That Allow Participants to Spend Wisely

Cognitive load is a valuable resource that must be managed carefully in healthcare simulation design. By focusing on what is essential and stripping away the non-essential, educators can create scenarios that are not only realistic but also aligned with the primary learning objectives. This approach ensures that students can devote their cognitive energy to mastering the skills that matter most, leading to more effective learning and better outcomes in real-life situations.

In the end, the key to successful simulation design is not in how much you can add, but in how much you can refine and simplify. By spending cognitive load wisely, you enable your students to thrive in a simulated environment, fully prepared to face the challenges of the real world.

Until Next Time, Happy Simulating!


Filed under Curriculum, scenario design

Too Much Stuff! Strike a Balance For Effective Learning Through Scenario Design

Simulation scenarios are powerful tools for learning and development, offering immersive experiences in which learners demonstrate the application of knowledge. However, there is a common temptation to include too many elements in these scenarios in an attempt to make them as realistic as possible. I like to say that when designing scenarios, people try to stuff 8 pounds of potatoes into a bag designed to hold 5!

While the intention behind this is often to enhance learning, it can lead to the opposite effect—overloading the learner’s brain, causing confusion, and ultimately, potentially diminishing the effectiveness of the training.

Over-Realism

When designing simulation scenarios, the allure of creating an overly realistic environment is strong. Developers and educators often believe that the more realistic the scenario, the more beneficial it will be for the learner. This belief stems from the notion that real-life complexity should be mirrored in training to prepare learners for every possible eventuality they might face in their roles.

However, this approach can backfire. Overloading scenarios with excessive detail and too many learning points can overwhelm learners, leading to cognitive overload. This saturation of information makes it challenging for learners to focus on the key objectives and absorb the intended lessons.

Cognitive Overload

Cognitive overload occurs when the amount of information presented exceeds the learner’s capacity to process it effectively. In a scenario packed with numerous variables, tasks, and details, learners may struggle to prioritize and integrate the key lessons. This confusion can hinder their ability to apply the knowledge in real-life situations, which is the ultimate goal of any training program.

Focusing Content

To design effective simulation scenarios, it’s crucial to focus on a few well-defined learning objectives. Start by identifying the core skills and knowledge you want the learners to acquire. Once these objectives are clear, design the scenario to specifically target these areas, avoiding the temptation to add extraneous details that do not directly contribute to the learning goals.

By narrowing the scope of the content, you can create a more streamlined and manageable learning experience. This focused approach allows learners to engage deeply with the material, enhancing their understanding and retention of the key concepts.

Striking the Right Balance

The key to successful simulation design lies in striking the right balance between realism and focus. Scenarios should be realistic enough to engage learners and provide context, but not so complex that they become overwhelming. Here are some tips for achieving this balance:

1. Define Clear Objectives: Start with a clear set of learning objectives. Ensure that every element of the scenario aligns with these goals.

2. Simplify the Environment: Avoid unnecessary complexity. Include only the elements that are essential for achieving the learning objectives.

3. Iterative Design: Test and refine your scenarios. Gather feedback from learners to identify areas of confusion and adjust the content accordingly.

4. Chunk Information: Break down the content into manageable chunks. This approach helps learners to process and retain information more effectively.

5. Provide Support: Offer guidance, support and appropriate clues and feedback throughout the scenario to help learners navigate complex tasks and reinforce key lessons.

Conclusion

While the temptation to create overly realistic simulation scenarios is understandable, it’s important to resist this urge in favor of a more focused and efficient design approach. By concentrating on narrow, well-defined learning objectives and avoiding cognitive overload, you can create scenarios that are both effective and engaging. This design mentality not only enhances the learning experience but also increases the efficiency and effectiveness of your training programs.

In summary, maintaining a balance between realism and focus ensures that simulation scenarios are powerful tools for learning, equipping learners with the skills and knowledge they need without overwhelming them with unnecessary complexity. This approach leads to better learning outcomes and a more streamlined development process.


Filed under Uncategorized

Not Every Simulation Scenario Needs to Have a Diagnostic Mystery!

It is quite common to mistakenly believe that there needs to be a diagnostic mystery associated with a simulation scenario. This could not be further from the truth.

Sometimes it arises from our clinical hat being confused with our educator hat (meaning we let our view of the actual clinical environment become the driving factor in the design of the scenario.) We must carefully consider the learning objectives and what we want to accomplish. One of the powerful things about simulation is that we get to pick where we start and where we stop, as well as the information given or withheld during the scenario.

Let us take an example of an Inferior Wall Myocardial Infarction (IWMI). Let us imagine that we desire to assess a resident physician’s ability to manage the case. Notice I said to manage the case, not diagnose, then manage the case. This distinction has important implications for how we would choose to begin the scenario. If the objectives were to diagnose and manage, we might start the case with a person complaining of undifferentiated chest pain and have the participant work towards the diagnosis and then demonstrate the treatment. Alternatively, if we were looking to have them only demonstrate proficiency in the management of the case, we may hand them an EKG showing an IWMI (or maybe not even hand them the EKG) and start the case by saying, “your patient is having an IWMI” and direct them to start the care.

What is the difference? Does it matter?

In the former approach, the participant has to work through the diagnostic conundrum of undifferentiated chest pain to come up with the diagnosis of IWMI. Further, it is possible that the participant does not arrive at the proper diagnosis, in which case you would not be able to observe and assess them in the management of the case. Thus, your learning objectives have become dependent on one another. By the way, there’s nothing wrong with this as long as it is intended. We tend to set up cases like this because that is the way the sequencing would happen in the actual clinical environment (our clinical hat interfering). However, this takes up valuable minutes of simulation, which are expensive and should be planned judiciously. So, my underlying point is: if you are deliberately creating the scenario to see both the diagnostic reasoning and the treatment, then the former approach is appropriate.

The latter approach, however, should be able to accomplish the learning objective associated with demonstrating the management of the patient. Thus, if that is truly the intended learning objective, the case should be fast-forwarded to eliminate the diagnostic reasoning portion of the scenario. Not only will this save valuable simulation time, it will also conceivably lead to more time to carefully evaluate the treatment steps associated with managing the patient. Additionally, it will eliminate the potential for prolonged simulation periods that do not contribute to accomplishing the learning objectives and/or that get stuck because of a failure to achieve the initial objective (in this case, for example, the diagnosis).

So, the next time you make decisions in the scenario’s design, take a breath and ask yourself, “Am I designing it this way because this is the way we always do it? Am I designing it this way because this is the way it appears in the real clinical environment?”

The important point is that one is asking themselves, “How can I stratify my design decisions so that the scenario is best crafted to accomplish the intended learning objectives?” If you do, you will be on the road to designing scenarios that are efficient and effective!


Filed under scenario design, simulation

Sherlock Holmes and the Students of Simulation

I want to make a comparison between Sherlock Holmes and the students of our simulations! It has important implications for our scenario design process. When you think about it, there’s hypervigilance amongst our students, looking for clues during the simulation. They are doing so to figure out what we want them to do. Their analysis of such clues resembles the process the venerable detective Sherlock Holmes uses when investigating a crime.


This has important implications for our scenario design work because many times we get confused by the idea that our job is to create reality, when in fact it is not our job at all. As simulation experts, our job is to create an environment with sufficient realism to allow a student to progress through various aspects of the provision of health care. We need to be able to make a judgment and say, “hey, they need some work in this area,” and “hey, they’re doing well in this area.”

To accomplish this, we create facsimiles of what they will experience in the actual clinical environment transported into the simulated environment to help them adjust their mindset so they can progress down the pathway of taking care of those (simulated) patient encounters.

We must be mindful that in the simulated environment, people engage their best Sherlock Holmes, and as the famous song goes, [they are] “looking for clues at the scene of the crime.”
Let’s explore this more practically.

Suppose I am working in the emergency department, and I walk into the room and see a knife sitting on the tray table next to a patient. I immediately think, “wow, somebody didn’t clean this room up after the last patient, and there’s a knife on the tray,” and I would probably apologize to the patient and their family.

Fast forward…..

Put me into a simulation as a participant, and I walk into the room. I see the knife on the tray next to the patient’s bed, and I immediately think, “Ah, I’m probably going to do a cric or some other invasive procedure on this patient.”

How does that translate to our scenario design work? We must be mindful that the students of our simulations are always hypervigilant and always looking for these clues. Sometimes we include things in the simulation merely as window dressing, or to try to (re)create some reality. However, stop to think: the student can misinterpret them as things that must be incorporated into the simulation for success.

Suddenly, the student sees this thing sitting on the table, so they think it is essential for them to use it in the simulation, and now they are using it, and the simulation is going off the tracks! As the instructor, you’re saying that what happened is not what was supposed to happen!

At times we must be able to go back and objectively look at the scenario design process and recognize that maybe, just maybe, something we did in the design of the scenario, including the setup of the environment, misled the participant(s). If we see multiple students making the same mistakes, we must go back and analyze our scenario design. I like to call the extra things we put into a simulation scenario design “noise.” It’s noise, and the potential for that noise to blow up and drive the simulation off the tracks goes up exponentially with every component we include in the space. Be mindful of this and be aware of the hypervigilance associated with students undergoing simulation.

We can negate some of these things with a good orientation, and by incorporating good practice into our simulation scenario design so that we are only including items in the room that are germane to accomplishing the learning objectives.

Tip: If you see the same mistakes happening again and again, please introspect, go back, look at the design of your simulation scenario, and recognize there could be a flaw! Who finds such flaws in the story? Sherlock Holmes, that’s who!


Filed under Curriculum, design, scenario design, simulation

5 Tips to Improve Interrater Reliability During Healthcare Simulation Assessments

One of the most important concepts in simulation-based assessment is achieving reliability, and specifically interrater reliability. While I have previously discussed in this blog that every simulation is assessment, in this article I am speaking of the type of simulation assessment that requires one or more raters to record data associated with the performance, typically using an assessment tool.

Interrater reliability, simply put, means that if we have multiple raters watching a simulation and using a scoring rubric or tool, they will produce similar scores. Achieving interrater reliability is important for several reasons, including that we usually use more than one rater to evaluate simulations over time. Other times we are engaged in research or other high-stakes uses of assessment tools and want to be certain that we are reaching correct conclusions.

Improving assessment capabilities for simulation requires a significant amount of effort. The amount of time and effort that goes into the assessment process should be directly proportional to the stakes of the assessment.

In this article I offer five tips to consider for improving interrater reliability when conducting simulation-based assessments.

1 – Train Your Raters

The most basic and overlooked aspect of achieving interrater reliability comes from training the raters. The raters need to be trained to the process, the assessment tools, and each item of the assessment on which they are rendering an opinion. It is tempting to think of subject matter experts as knowledgeable enough to fill out simple assessments; however, you will find with detailed testing that the scoring of an item is often truly in the eye of the beholder. Even simple items like “asked medical history” may be difficult to score reliably if they are not defined prior to the assessment activity. Other things that may affect the assessment and that require rater calibration/training include limitations of the simulation, how something is being simulated, and overall familiarity with the technology that may be used to collect the data.

2 – Modify Your Assessment Tool

Modifications to the assessment tool can enhance interrater reliability. Sometimes the change is as extreme as having to remove an assessment item because you figure out that you are unable to achieve reliability despite iterative attempts at improvement. Other, less drastic changes can come in the form of clarifying the text directives associated with the item. Sometimes removing qualitative wording such as “appropriately” or “correctly” can help to improve reliability. Adding descriptors of expected behavior or behaviorally anchored statements to items can also help. However, these modifications and qualifying statements should also be addressed in the training of the raters as described above.

3 – Make Things Assessable (Scenario Design)

An often-overlooked factor that can help to improve interrater reliability is making modifications to the simulation scenario itself to allow things to be more “assessable”. We make a sizable number of decisions when creating simulation-based scenarios for education purposes. There are other decisions and functions that can be designed into the scenario to allow assessments to be more accurate and reliable. For example, if we want to know whether someone correctly interpreted wheezing in the lung sounds of the simulator, we can introduce design elements in the scenario that help us gather this information accurately and thus increase interrater reliability. For example, we could embed a person in the scenario to play the role of another healthcare provider who simply asks the participant what they heard. Alternatively, we could have the participant fill out a questionnaire at the end of the scenario, or even complete an assessment form regarding the simulation encounter. Lastly, we could embed the assessment tool into the debriefing process and simply ask the participant during the debriefing what they heard when they auscultated the lungs. There is no single correct way to do this; I am trying to articulate different solutions to the same problem that could work depending on the context of your scenario design.

4 – Assessment Tool Technology

Gathering assessment data electronically can help significantly. Compared to a paper-and-pencil collection scheme, technology-enhanced or “smart” scoring systems can assist. For example, if there are many items on a paper scoring tool, the page can become unwieldy to monitor. Electronic systems can continuously update and filter out data that does not need to be displayed at a given point in time during the unfolding of the simulation assessment. Simply having previously evaluated items disappear off the screen can reduce the clutter associated with scoring tools.
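
As a sketch of how such filtering might work, the logic can be as simple as displaying only unscored items whose assessment window includes the current scenario time. The item names and time windows below are hypothetical.

```python
# Hypothetical "smart" score sheet: each item is visible only during its
# assessment window and disappears once scored.
items = [
    {"id": 1, "text": "Checks responsiveness",   "window": (0, 2), "scored": True},
    {"id": 2, "text": "Starts compressions",     "window": (0, 3), "scored": False},
    {"id": 3, "text": "Administers epinephrine", "window": (3, 8), "scored": False},
]

def visible_items(items, t_min):
    """Return unscored items whose window includes scenario time t_min."""
    return [i for i in items
            if not i["scored"] and i["window"][0] <= t_min <= i["window"][1]]

for item in visible_items(items, t_min=2.5):
    print(item["text"])  # only "Starts compressions" displays at t = 2.5
```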

5 – Consider Video Scoring

For high-stakes assessment and research purposes it is often wise to consider video scoring. High stakes here means pass/fail criteria associated with advancement in a program, heavy weighting of a grade, licensure, or practice decisions. The ability to add multiple camera angles, as well as the functionality to rewind and play back things that occurred during the simulation, is valuable in improving the scoring accuracy of the collected data, which will subsequently improve the interrater reliability. Video scoring associated with assessments requires considerable time and effort and thus should be reserved for the times when it is necessary.

I hope that you found these tips useful. Assessment during simulations can be an important part of improving the quality and safety of patient care!

If you found this post useful please consider subscribing to this blog!

Thanks and until next time! Happy Simulating.


Filed under assessment, Curriculum, design, scenario design

Embedding Forcing Functions into Scenario Design to Enhance Assessment Capabilities

Many people design scoring instruments for simulation encounters as part of an assessment plan. They are used for various reasons, ranging from tools to help provide feedback, to research purposes, to high-stakes pass/fail criteria. Enhancing the ability of assessment tools to function as intended is often closely linked to scenario design.

Oftentimes checklists are employed. When designing checklists it is critical that you ask the question “Can I accurately measure this?”. It is easy to design checklists that seem intuitively simple and filled with common sense (from a clinical perspective) but are not actually able to accurately measure what you think you are evaluating.

It is quite common to see checklists that have items such as “Observes Chest Rise”; “Identified Wheezing”; “Observed Heart Rate”. During faculty training sessions focusing on assessment tool development, we routinely run scenarios that contain deliberate errors of omission. Some of these omitted items are nonetheless routinely scored, or “checked”, as completed. Why is this? Part of the answer is that we interject our own clinical bias into what we think the simulation participant is doing or thinking. This raises the possibility that we are not measuring what we intend to measure, or assess.

Consider two checklist items for an asthma scenario: one is “Auscultates Lung Sounds”; another is “Correctly Interprets Wheezing”. The former we can reasonably infer by watching the scenario and seeing the participant listen to lung fields on the simulator. The latter, however, is more complicated. We don’t know whether the participant recognized wheezing by watching them listen to the lungs. Many people would check yes for “Correctly Interpreted Wheezing” if the next thing the participant did was order a bronchodilator. This would be an incorrect assumption, but it could be rationalized in the mind of the evaluator because of a normal clinical sequence and context.

However, that assumption may be completely wrong: the participant may never have interpreted the sounds as wheezing, but ordered a treatment because of a history of asthma. Or what would happen if the bronchodilator was ordered before auscultation of the lungs? What you have, by itself, is an item on your checklist that seems simple enough, but is practically unmeasurable through simple observation.

This is where linking scenario design and assessment tools can come in handy. If the item you are trying to assess is a critical element of the learning and assessment plan, perhaps something in the simulation, in the transition out of it, or during the debriefing can cause the information to be made available so that the item can be assessed more correctly and accurately.

A solution for real-time assessment during the flow of the scenario is possible within the design of the scenario. Perhaps insert a confederate as a nurse caring for the patient who is scripted to ask “What did you hear?” after the participant auscultates the lung fields. This will force the data to become available during the scenario for the assessor to act upon. Hence the term, forcing function.

Another possibility would be to have the participant complete a patient note on the encounter and evaluate their recording of the lung sounds. Another would be simply to have the participant write down their interpretation of the lung sounds. Or perhaps embed the question into the context of the debriefing. Any of these methods would provide a more accurate evaluation of the assessment item “Correctly Interpreted Wheezing”.

While not trying to create an exhaustive list of methods, I am trying to provide two things in this post. One is to have you critically evaluate your ability to accurately assess something that occurs within a scenario with higher validity. The second is to recognize that the creation of successful, reliable, and valid assessment instruments is linked directly to scenario design. This can occur during the creation of the scenario, or as a modification to an existing scenario to enhance assessment capabilities.

This auscultation item serves as just a simple example. Recognizing the challenges of accurately assessing a participant’s performance is important for the development of robust, valid, and reliable tools. The next time you see or design a checklist or scoring tool, ask yourself: Can I really, truly evaluate that item accurately? If not, can I modify the scenario or debriefing to force the information to be made available?

 


Filed under scenario design

The Contract Essential to the Parties of Simulation

If you think about it, an agreement needs to exist between those who facilitate simulation and those who participate. “Facilitate,” for the purposes of this discussion, refers to those who create and execute simulation-based learning encounters. Sometimes the agreement is more formal, other times more implied. This phenomenon has been described in many ways over the years, having been branded with such descriptors as fiction contract, psychological contract, or learning contract.

Why does this need to be the case? A contract or agreement is generally called for when two or more parties are engaging in some sort of collaborative relationship to accomplish something. Often these types of contracts spell out the responsibilities of the parties involved. If you think about simulation at a high level, the facilitator side is agreeing to provide learning activities using simulation to help the participant(s) become better healthcare providers. The participants are engaged, at the highest level, because they want to become better healthcare providers. While not trying to hold a comprehensive discussion, let’s explore this concept and the responsibilities of each party a bit further.

Facilitators are designing simulation activities with a variety of tools and techniques that are not perfect imitators of actual healthcare. They are crafting events in which the participants, to a greater or lesser extent, immerse themselves, or at a minimum simply participate. Some of these activities are designed to contain diagnostic mystery; some demand that specific knowledge, skills, and attitudes be known or developed to successfully complete the program. Facilitators are also putting participants in situations in which they must perform in front of others, and that can create feelings of vulnerability. So, all told, the role of the facilitator comes with enormous responsibility.

Facilitators are also asking the participants to imagine that part of what they are engaging in is a reasonable facsimile of what one may encounter when providing actual healthcare. Therefore another tenet of the agreement is that the facilitator will provide an adequate orientation to the simulation environment, pointing out what is more and less real, including the role that the participant may be playing and how their role interacts with the environment outside of the simulation, if at all (i.e., define any communications that may occur during the simulation between the participants and the facilitator).

Facilitators trained in simulation know that mistakes occur, sometimes due to a lack of knowledge, incorrect judgement, or unrelated issues such as a poorly designed simulation. Facilitators thereby commit to not judging the participant on anything other than their performance during the simulation. While diagnostic conundrums are inevitable in many types of simulations, the facilitator should not try to unnecessarily trick or mislead the participant in any way that is not directly contributing to helping the participant(s) improve. The facilitator must attempt to use the time of the participants wisely and responsibly.

The role of the participant carries responsibilities as part of the agreement as well. Participants agree to a commitment to become better healthcare providers through continuous learning and improvement. This is inherent in a professional, but there are likely good reasons to remind participants of this important premise.

Participants must agree to the use of their time to participate in the simulation. The participants are also agreeing to an understanding that they know the environment of the simulation is not real, and that there will be varying levels of realism employed to help them perform in the simulation. But to be clear, they agree to this tenet predicated on the trust that the facilitators are having them experience simulations that are relevant to what they do, with an underlying commitment to help them get better. In simulations involving multiple participants, they must also agree to similarly not judge others on what occurs in the simulation, as well as to keep the personal details of what they experience in the simulation confidential.

So in closing, successful simulation and other immersive learning environments require an agreement of sorts between those who create and execute the simulation-based learning environments and those who participate in them. Each party brings a set of responsibilities to the table to help ensure a rich learning environment with appropriate professional decorum and a commitment to improvement. The agreements range from implicit to explicit, but when they exist and are adhered to, they will continue to allow the recognition of the value that can arise from simulation to help improve the care ultimately delivered to our patients. After all, isn’t that our highest goal?


Filed under Uncategorized