Simulation Program Leaders – Pay Attention to the Right Customer!

In the dynamic world of healthcare education, simulation centers stand as innovative beacons of learning, offering practical, immersive experiences that prepare learners for the complexities of real-world medical scenarios. However, the effectiveness of these centers hinges not just on state-of-the-art equipment or meticulously designed scenarios but also on a deep understanding of who the true customers of these centers are. Contrary to initial impressions, the most pivotal customers are not the learners themselves but the faculty teaching the programs. Recognizing and supporting this critical customer base is the cornerstone of creating impactful, simulation-based education programs.


Before the haters start hating, please, at least, hear me out…

At first glance, identifying the primary customers of healthcare simulation centers might seem straightforward—the learners or students who engage directly with the simulations. However, this perspective overlooks a crucial element of the educational ecosystem: the faculty. These dedicated educators are the linchpins of simulation-based learning, bridging theoretical knowledge with practical application. Their role transcends mere instruction; they craft the educational experiences that shape future healthcare professionals.

When simulation centers prioritize faculty needs and integrate their expertise into the development and execution of simulation programs, they unlock unprecedented levels of educational efficacy. The more the simulation program focuses on the needs and potential of the faculty, the better the resulting programs can be. Creating tools that enhance the delivered simulation encounters and accompanying materials, as well as reducing the administrative overhead incurred by the faculty, will enhance the total potential outcomes of the center. Don’t we want our faculty to practice at the top of their license or capabilities? Offloading administrative tasks that can be automated or delegated will certainly contribute to that goal.

The most effective staffing model for simulation centers is inherently collaborative, leveraging a dual-expertise approach. This model marries the simulation center staff’s proficiency in simulation, education, curriculum development, and operations with the subject matter expertise of clinical professionals. By doing so, it creates fertile ground for the development of highly effective, simulation-based education programs. This strategic arrangement can exist whether or not the program directly employs its teaching faculty.

The simulation center’s staff is the learning environment’s operational backbone. They bring specialized knowledge in simulation technology, educational theory, curriculum design, and day-to-day operations. Their expertise ensures that the center’s infrastructure, from technology to program scheduling, runs smoothly and effectively. This operational excellence, together with their collaboration with clinical subject matter experts, sets the stage for high-quality simulation encounters.

Subject matter experts, such as faculty with clinical experience and expertise, are the heart of the center’s educational offerings from a clinical-facing content perspective. They infuse simulation scenarios with real-world complexity, authenticity, and relevance. Their clinical insights ensure that simulations are technically accurate and deeply resonant with the practical realities of healthcare. This clinical expertise is critical in designing scenarios that challenge learners meaningfully, preparing them for the nuances of actual patient care. Their knowledge and experience often provide insight into what people struggle with on the front lines of patient care.

When simulation center staff and subject matter experts collaborate closely, the result is a synergistic blend of operational efficiency and clinical authenticity. This partnership enables the development of simulation-based education programs that are logistically sound and educationally rigorous. By aligning the technical, operational, and administrative capabilities of the simulation staff with the clinical acumen of faculty, simulation centers create a win-win combination that can provide high-quality programs most efficiently.

The premise is straightforward: when faculty are well-supported by the simulation program, they are better equipped to deliver exceptional educational experiences. This support manifests in various ways, from providing faculty with the latest simulation technology to involving them in curriculum development processes and creating tools and methods that remove accompanying administrative tasks. When faculty feel empowered and valued, their teaching becomes more effective, benefiting the learners.

Learners engage with more meaningful learning encounters, receive higher-quality feedback, and ultimately enjoy a richer, more productive learning experience. Thus, learners benefit as well from a primary focus on the faculty.

Understanding that the actual customers of healthcare simulation centers are the faculty who teach the programs is not just an academic distinction; it is a strategic insight that, once adopted by the simulation program, can significantly enhance the quality and impact of simulation-based education. Embracing a collaborative staffing model that harnesses the strengths of simulation center staff and clinical subject matter experts can create powerful educational experiences that prepare learners to succeed and excel in the fast-paced, ever-evolving world of healthcare.

The goal is clear: to support faculty so that they and their learners thrive, fostering a future where healthcare professionals are as compassionate as they are competent.

And yes, I love learners, too!


What is Simulation? The question that caught me off guard!

I was having an exit interview meeting with one of my graduating simulation fellows, and he asked me an interesting question on his last day. He said, “Dr. Paul, what is simulation?” I found this perplexing after a year-long, intense study of simulation with us at our Institute! It was quite insightful, though. One of his observations was that there are many ways to do simulation right. He had many experiences throughout the year, visiting other simulation centers, attending international meetings, and teaching with us at different facilities. He came to appreciate the many different vantage points, missions, visions, and purposes for implementing healthcare simulation.

I took a deep breath, thought about it, and said, “Simulation is a methodology by which we re-create a portion of the healthcare delivery experience with a goal of education and/or assessment of people, groups of people, teams, and/or environments of care.” Then, I drew a rough sketch of my vantage point of simulation, dividing it into two major subgroups: methods/modes on one side and primary purpose on the other. I recreated it in the accompanying figure.

Methods/Modes

I think of the methods or modes of simulation based on the primary simulator technology employed to generate the goals of an intended program. Of course, mixed modality simulations often incorporate a spectrum of technologies.

I don’t mean this list to be exhaustive by any stretch of the imagination, and some may argue it is an oversimplification. The general categories that come to my mind are as follows:

  1. High-technology manikins generally present the form factor of an entire human being, complemented with electronics, pneumatics, and computer equipment that helps the manikin represent various aspects of anatomy and/or physiology. (As you have undoubtedly heard me opine in the past, the word FIDELITY does not belong in any descriptor of a simulator. It muddies the water and confuses the overall strategies associated with simulation, although it is a popular industry buzzword that has somehow worked its way into academic definitions inappropriately.)
  2. Low-technology manikins generally have the form factor of an entire human being but with significantly less electronics or infrastructure to represent physiologic or anatomic changes during the simulation encounter.
  3. Standardized people/patients, meaning live people playing various roles ranging from patients, family members, and other healthcare team members to help bring a simulation encounter to life.
  4. Task trainers represent a re-creation of a portion of the human being, oftentimes created to accomplish goals of completing skills or procedures. Depending on the purpose, they may or may not have a significant amount of augmenting technology.
  5. Screen-based simulations are computerized case or situation representations of some aspects of patient care that change in response to the stimulus provided by participants.
  6. Role-play includes designs that utilize peers and/or select faculty to engage in a simulated conversation or situation to accomplish learning outcomes.
  7. Virtual reality/augmented reality simulations are high-technology re-creations or supplements that present reality through the lens of a first person engaging in some sort of healthcare situation and have the capacity to change in response to the stimulus provided by the participant or participants.

Primary Purpose/Goals

Again, looking at a given simulation’s primary purpose and goals, one quickly finds overlaps; the categories do not exist in complete isolation. However, for this discussion, it helps to think of the different categories of intent.

Education

When I think of simulation programs primarily focusing on education, it comes down to helping participants gain or refine knowledge, skills, competence, or other measures that allow them to become better healthcare providers. In general, a teaching exercise. This can apply to simulation scenarios directed at one person, groups of people (all learning the same thing), or teams whose learning goals involve competencies associated with the interaction between groups of people, similar to what occurs in the care of actual patients in the healthcare environment.

Assessment

The simulation encounter is primarily designed as an assessment. This means there is a more formal measurement associated with the performance of the simulation, often employing scoring tools, with the primary focus of measuring the competency of an individual, groups of individuals, or, as above, teams of individuals functioning as teams. Further, assessment can measure aspects of the environment of care and/or the systems involved in supporting patients and the healthcare workforce. (For example, an in-situ code blue response simulation may measure the response of the local care team, the response of a responding team, the engagement of the hospital operator, the location and arrival of necessary equipment, etc.)
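To make the multi-component nature of such an assessment concrete, here is a minimal, hypothetical sketch of a weighted scoring checklist for the in-situ code blue example. The items, weights, and observations are invented for illustration; real assessment tools would be validated instruments, not this toy:

```python
# Hypothetical scoring checklist for an in-situ code blue simulation.
# Each item assesses a team or system element (local team, responding
# team, operator, equipment); weights are invented examples.
CHECKLIST = {
    "local team initiates CPR within 1 min": 2,
    "responding team arrives within 5 min": 2,
    "hospital operator pages correctly": 1,
    "defibrillator located and functional": 1,
}

def score(observed):
    """Sum the weights of the checklist items observed during the run."""
    return sum(w for item, w in CHECKLIST.items() if observed.get(item))

# One simulated run: everything succeeded except the responding team's arrival.
run = {
    "local team initiates CPR within 1 min": True,
    "responding team arrives within 5 min": False,
    "hospital operator pages correctly": True,
    "defibrillator located and functional": True,
}
print(f"{score(run)} / {sum(CHECKLIST.values())}")  # prints "4 / 6"
```

Even a simple structure like this illustrates how a single scenario can capture both team performance and system-of-care elements in one measurement pass.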

Research

There are many approaches to the use of modern healthcare simulation in research. At a crude level, I subdivide it into looking at the outcomes of the simulation itself, meaning: did the simulation encounter help to improve the participant’s performance? At the next level, you can evaluate whether the simulation improves patient care.

The next category is using simulation as a surrogate of the patient care environment but not measuring the effect of the simulation. For example, we might set up an ICU patient care environment for human factors experiments to figure out the ideal location of a piece of equipment, the tone of an alarm, the interaction of caregivers with various equipment, etc. Such an example of simulation often helps to determine optimal environments and systems of care in the primary planning stages or the remodeling of healthcare delivery processes and procedures.

So, the next time I orient an incoming simulation fellow, I will start with this discussion. I am thankful that my fellow who just graduated provided such a simple but deeply probing question to help wrap his arms around the various simulations he has been experiencing over the last year while he studied with us.

Having put some more thought into this, I think it’s a useful exercise for those of us in leadership positions within the simulation world; it is probably good to stop and think about this a couple of times a year to refresh, reset, and ensure that we are remaining mission-driven to our purpose.

Until next time, Happy Simulating!


HUMBLE: Six Traits That Will Make You a Better Simulation Educator and Lead Effective Debriefings

Excelling as an educator in the healthcare simulation field goes beyond just imparting knowledge; it requires a unique set of qualities that can truly make a difference in students’ learning experiences. The acronym HUMBLE focuses on six key traits that can help educators design, facilitate, and lead more effective debriefings. These traits are Humility, Understanding, Mindfulness, Balance, Learning, and Engaging. In this blog post, I will delve into these traits and explore how they can enhance your abilities as an educator, ultimately leading to more impactful and engaging debriefing sessions.

H – Humility

This is one of my favorites and the most important in my humble opinion! Approaching teaching responsibilities in simulation from a perspective of humility goes a long way. Instructors, with humility, acknowledge that they don’t know everything and remain open to continuous learning. This attitude is also imparted to the participants, encouraging them to adopt the same approach throughout their careers.

An instructor who demonstrates humility creates a more approachable and non-threatening atmosphere, allowing students to feel comfortable admitting to and learning from their errors. This also contributes to a milieu that helps maintain a safe learning environment and a perspective of a level playing field that helps to allow participants of the simulation to share their thoughts. This, in turn, gives us as faculty a privileged glimpse into their thought processes. Interestingly, it is also well-known in business literature that leaders who demonstrate humility are often perceived as more credible and trustworthy.

U – Understanding

Each participant in your simulation is a person with their own life, challenges, successes, experiences, and stronger and weaker skills. Recognizing this is key to understanding that participants bring varying amounts of knowledge, and varying abilities to apply that knowledge, to the simulated session. In other words, many factors contribute to why someone knows something or can apply knowledge in a given situation. We should maintain an understanding that everyone has gaps in knowledge and remain nonjudgmental as to why those gaps exist.

M – Mindfulness

It is incredibly important that we are mindful of our presence during the simulation as well as the debriefing. Educators need to be attentive, focused, immersed, and committed to the learning objectives to expertly facilitate and then lead a high-quality debriefing that contributes to the learning outcomes. We need to work to identify tips and challenges that help maintain our mindfulness, focus, and attention during these activities.

While I am not suggesting a prescriptive approach, it is important to introspect and determine how you enhance your mindfulness associated with the simulation-based education process. For some, it means being well rested; for others, it means appropriately titrated doses of caffeine, and yet for others, exhaustive preparation the day before. Reflect on your performance by thinking about when your concentration may have waxed and waned and what you can do to improve. I find it particularly challenging to remain cognitively sharp throughout the entire series when running the same scenario repeatedly with different groups of learners!

B – Balance

Creating the mindset of balance in any one simulation session helps participants discover what they need to improve upon and what they did well in each simulation encounter. There is an old saying, “The negative screams, while the positive only whispers,” that I think you would agree applies when we are facilitating a simulation and about to go into the debriefing. If you think about it from the learner’s perspective, exploring a laundry list of their failures without recognizing the contributions that went well can be demoralizing and interfere with the faculty/participant relationship. While I’m not suggesting that we gloss over egregious errors, it is important to find a balance between those activities that went well and those that need improvement.

L – Limited, Lifelong Learning

This may be my second favorite! When conducting the debriefing, faculty should avoid trying to comment or debrief on every single thing in every scenario. It is important to remember that the journey of healthcare, whether in a simulated environment, attending lectures, attending workshops, or generating experiences by taking care of real patients, is a lifelong learning process. Each encounter along the way provides the potential for learning, albeit limited by the amount of cognitive transfer that can occur at a given time. During simulation, there is a natural tendency to want to cram everything into every scenario. I think this emanates from the fact that we are so excited about the simulation modality and get a small opportunity with each participant! Admittedly, I need to keep myself in check during such encounters. It’s important to think of the human brain as a sponge. Once it is saturated, the sponge cannot effectively take on more water.

E – Engagement

Engaging the learners in the conversation, as well as designing the scenarios to engage learners actively, is part and parcel of the basis of the idea that simulation, through active learning, is a high-quality opportunity. Think about this during the design process of your scenarios as well as the debriefings, insofar as how you assign roles, what your observers are required to do, and how you rotate people in and out of the scenario.

During the debriefing, remember to engage your learners so that they respond to the prompts you provide. As the learners engage in the conversation, you can listen to their thought processes and evaluate the depth of their knowledge around a particular topic. Additionally, you can identify gaps, either in knowledge or in the application of knowledge, that can help them improve for the future. So often, when training others in debriefing, I observe faculty members dropping into a mode of “mini-lecture” during what is supposed to be a debriefing. This deviates from active cognitive engagement and sometimes descends into a (well-intentioned) one-way conversation. It is important to remember that if your participants are not engaged, you are potentially squandering some of the learning opportunities. At a minimum, you are giving up the ability to hear what they are thinking.

In summary, as you continue to develop your skills as a healthcare simulation educator, I invite you to use HUMBLE as an acronym for reflecting on positive traits, actions, and guiding principles that provide learners with an optimized environment for improvement. I truly think that healthcare simulation educators have powerful opportunities for assisting with the transfer of knowledge and experience and for creating opportunities for reflection, and by being HUMBLE we can ensure a more effective and empathetic learning environment for all participants.

Until Next Time,

Happy Simulating!


The Importance of the Psychological Contract in Healthcare Simulation: Six Fundamental Elements

Simulation is a powerful tool in healthcare education to enhance learning and improve patient outcomes. Through simulation-based learning encounters, participants can engage in hands-on experiences that mimic real-life situations, allowing them to develop critical skills and knowledge.

The success of healthcare simulation educational encounters relies on the participants and the facilitators who guide and support the learning process. Understanding the psychological contract that needs to exist between participants, facilitators, and content designers is crucial in creating a positive and effective learning environment. In this blog post, we will explore the importance of this psychological contract and discuss strategies to enhance it, ultimately leading to enhanced learning and improved outcomes in healthcare simulation.

While most discussions of the psychological contract are in the context of facilitating a simulation in real time, some elements are critically important to consider during the design process associated with simulation-based education encounters. How we structure our briefings, pre-briefings, and course schedules can dramatically influence our relationship with the participants to enhance the learning potential in the simulated environment.  

I like to think of six essential elements when designing and facilitating simulations.

Professionalism: We agree to treat each other as professionals throughout simulation-based education encounters. The learner agrees to attempt to interact in the scenario as if they were taking care of an actual patient, and the simulation facilitator agrees that the scenario will be directed to respond with a reasonable facsimile of how an actual patient would respond to the care being delivered.

Confidentiality: The simulation program agrees to keep the performance assessment of participants confidential to the extent possible. The simulation participant should be apprised of the fate of any audio, video, or still photographic media generated from the simulation. If, by programmatic design, there is the intent to share any performance results, the participant should be aware of this before engagement in the program.

Time: The simulation facilitator commits to creating an environment of learning that respects the participant’s time. The simulation program commits to the intent that the simulation encounter and all associated time spent will help provide the participant with relevant, professional education and growth potential.

Realism/Deception: Both the participant and the facilitator acknowledge that the environment is not real and will contain varying degrees of realism. The simulation environment’s primary intent is to provide a reasonable facsimile of a healthcare encounter to serve as the background for the participant to demonstrate their clinical practice proficiency to the best of their knowledge, in exchange for feedback that highlights areas of success and identifies areas of potential improvement. Our simulation-based scenario designs are modeled after actual patient encounters or close representations of cases that may occur within your practice domain. While the case may represent areas of diagnostic mystery or other unknowns, the scenarios are not designed to deliberately deceive or mislead the learner. The facilitator acknowledges that there may be aspects of the simulation that the learner misinterprets due to scenario design limitations, and will address them, as appropriate, as they occur.

Judgment: While there will be an assessment of the learner’s performance to carry out effective feedback, it will be based upon known best practices, guidelines, algorithms, protocols, and professional judgment. No judgment will be associated with why a gap in knowledge or performance was identified. The facilitators agree to maintain a safe learning environment that invites questions, explorations, and clarifications as needed to enhance learning potential.

Humbleness: Healthcare is a complicated profession regardless of the practice domain. It requires the engagement of lifelong learners to learn and retain a significant amount of knowledge and skill. Additionally, there is a constant refinement of knowledge, best practices, and procedures. The facilitator acknowledges that they are imperfect and engage in the same lifelong learning journey as the participant.

While the descriptions associated with each element of the psychological contract in this post are more aligned with the engagement with senior learners or practicing professionals, it is easy to translate each category when working with students and other types of junior learners.

Educators and learners can establish a foundation of trust, collaboration, and active participation by understanding and embracing the tenets of the psychological contract in healthcare simulation. Careful consideration of these elements is beneficial during program design and when actively facilitating simulation-based learning encounters. This, in turn, enhances the learning outcomes, improves clinical practice, and prepares healthcare professionals to deliver high-quality care as they engage in real-world patient encounters and associated situations.

The next time you are designing or conducting simulation-based education endeavors, give careful consideration to the psychological contract!

Until next time, Happy Simulating!


Simulation Professionals: Don’t let the Vocal Minority Get You Down!

The social psychologist Barbara Fredrickson coined the phrase, “The negative screams while the positive only whispers.” I don’t know about you, but this is extraordinarily true when reviewing course evaluations after simulation-based education programs!

Post-course evaluations are essential in measuring the program’s effectiveness and participant perceptions, and they are a tool to help with quality improvement initiatives. However, the feedback from vocal minorities can sometimes overshadow the opinions of the silent majority. After pouring blood, sweat, and tears into creating what you believe to be a successful simulation-based program, receiving negative evaluations can be a blow to your motivation. At times, the feedback can be pithy and personal, and it can sting.

Receiving negative feedback can be challenging for many reasons. First and foremost, it can feel like a personal attack on the hard work and effort you’ve put into a project or program. It’s natural to feel defensive or upset when someone criticizes something you’ve put so much time and energy into creating. Additionally, negative feedback can be difficult to process and use constructively. It’s easy to get caught up in the moment’s emotions and feel overwhelmed by the criticism. This can make it difficult to see the feedback as an opportunity for growth and improvement rather than a setback or failure.

This can be problematic as the feedback may not accurately represent the actual experiences of most participants, but it can certainly feel that way. It is also important to recognize the opportunities that come with critical feedback that could help you improve your program. It can help educators and course designers to identify areas for improvement and develop strategies for addressing these areas. Particularly when it is delivered constructively, and with a focus on improvement, negative feedback can be a powerful tool for enhancing the quality of simulation-based education programs and developing resilience in educators and learners alike. Critical feedback can help to identify areas for improvement, develop new strategies, and implement changes that can benefit future participants.

It is also important to remember that most participants with positive experiences may not feel the need to provide feedback, whereas those who have negative experiences may be more inclined to do so. So, I challenge you to go back and look at the designs of your course evaluation tools. It’s important to remember that the silent majority can be an important ally in the success of your program. By actively seeking out their feedback and insights, you can ensure that your program is meeting the needs of all participants, not just the most vocal. I’m not suggesting that we ignore critical feedback; we just need to find a way to balance it within a healthy model that contributes to resilience.

Developing a growth mindset is essential for building resilience in those running simulation programs. It involves embracing challenges and staying motivated even when things get tough. Instead of seeing failures and setbacks as signs of inadequacy, individuals with a healthy mindset view them as opportunities for growth and learning. One powerful tool I use is remaining patient-centric in the decisions made regarding our simulations. Thinking about the downstream benefits that raise the quality of care patients receive because of our efforts helps to keep my eye on the ball.

Lastly, remember that we can’t be all things to all people. While we remain excited and recognize the power of simulation-based education, not everyone will share our enthusiasm. As we move forward, remember that we can learn from the naysayers and from the people unhappy that they are required to participate in some of our programs. Try not to let the negative scream in your ear until you mistakenly believe it represents the majority opinion. Stay focused on the idea that patients will benefit from our efforts, and that many participants likely perceive value in them.


When Simulation Is NOT the Answer: Own It!

Obviously, we are happy that simulation has become a popular method of education in healthcare. Simulation can provide a hands-on approach to learning that allows participants to experience real-life situations in a safe and controlled environment.

However, while simulation has many benefits, it’s not necessarily the best option for every type of education. Simulation is relatively complex, expensive, and resource-intensive compared to other educational methodologies. That being said, we all know that at times it is an irreplaceable methodology that allows education, competency assessment, and system assessment information to be utilized in the improvement of healthcare. The key is to have a stratification process/policy in place to evaluate opportunities and decide when simulation is the optimal deployment tool.

As leaders and managers of simulation programs, we are charged with generating the return on investment for our programs. We are entrusted by the people who provide our funding to be good stewards of the investment and ongoing operational support of the simulation efforts. It is up to us to hold the keys to the vault that we call simulation so that it gets engaged, deployed, and/or utilized in a fashion that generates the expected outcomes with the highest efficiency and effectiveness.

In short, don’t simulate because you can, simulate because you need to!

As your simulation center becomes a more recognized resource within your institution, there will often be an increase in requests for services. As this occurs, it is critically important that program leaders ensure that the simulations are bringing value.

For example, if someone wants you to do simulation training for an entire unit to roll out a simple new policy or procedure change, do not just say yes. Instead, create a framework that advises the requester on whether simulation is the best modality.

When contemplating the value of simulation as a modality, I think it is best to go back to the creation of learning objectives for the anticipated scenarios. I always like to say that if you do a knowledge, skills, and attitudes (KSA) analysis of your learning objectives and they all come up as K's, you should reevaluate whether simulation is the best method.
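To make the KSA screen concrete, here is a minimal sketch of how such a triage step might look in code. This is purely illustrative; the function name, tags, and request data are hypothetical, not a standard tool.

```python
# Illustrative sketch: screen a request's learning objectives by their
# KSA classification before committing simulation resources.
def recommend_modality(objectives):
    """objectives: list of (text, tag) pairs, tag in {'K', 'S', 'A'}."""
    tags = {tag for _, tag in objectives}
    if tags <= {"K"}:  # every objective is knowledge-only
        return "Consider web-based modules, lectures, or assigned reading"
    return "Simulation may be warranted (skills/attitudes objectives present)"

# A hypothetical request to "simulate" a simple policy change
request = [
    ("State the new heparin dosing policy", "K"),
    ("List the contraindications to heparin", "K"),
]
print(recommend_modality(request))
```

The point of even a trivial gate like this is that the recommendation follows from the objectives, not from the requester's enthusiasm for simulation.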

Web-based education, including courses, videos, lectures, or assigned reading, may accomplish the same objectives as your planned simulation. If this is the case, as a leader in simulation it is important that you recognize this and recommend modalities other than simulation. It will likely save your organization time and money. More importantly, it may build your credibility and reputation as a problem solver for the institution, and as someone who is fiscally responsible. Over time it can be valuable for a simulation program to enjoy a reputation as "the solution deployment" expert, not simply the "simulation" expert.

It is important to remember that the true value we provide is the end result: higher-quality healthcare and a safer environment for patients. In this day and age, it has become increasingly important that our engagement is thoughtful and prudent, with cost considerations in mind. While we are all passionate about simulation, leaders of the future will earn success through a lens of efficiency and effectiveness in the programs we deploy.

In conclusion, healthcare simulation is an important tool for education and patient safety, but it is not always the best tool. Simulation program managers and leaders should consider the specific learning outcomes they hope to achieve and carefully consider which educational modality is most appropriate for their learners. By doing so, they can ensure that they are providing the best possible, most cost-efficient training for their staff and ultimately improving patient outcomes.

Remember: Don’t simulate because you can, simulate because you need to!

Let me know what you think in the comments! If you enjoyed this post, please let me know by liking it, or subscribing to my Blog!

Until next time,

Happy Simulating!


Filed under Curriculum, return on investment, simulation

Not Every Simulation Scenario Needs to Have a Diagnostic Mystery!

It is quite common to mistakenly believe that there needs to be a diagnostic mystery associated with a simulation scenario. This could not be further from the truth.

Sometimes this arises from our clinical hat being confused with our educator hat (meaning we let our view of the actual clinical environment become the driving factor in the design of the scenario). We must carefully consider the learning objectives and what we want to accomplish. One of the powerful things about simulation is that we get to pick where we start and where we stop, as well as the information given or withheld during the scenario.

Let us take an example of an Inferior Wall Myocardial Infarction (IWMI). Let us imagine that we desire to assess a resident physician's ability to manage the case. Notice I said to manage the case, not diagnose, then manage the case. This has important implications for how we would choose to begin the scenario. If the objectives were to diagnose and manage, we might start the case with a person complaining of undifferentiated chest pain and have the participant work toward the diagnosis and then demonstrate the treatment. Otherwise, if we were looking to have them only demonstrate proficiency in the management of the case, we might hand them an EKG showing an IWMI (or perhaps not even hand them the EKG), start the case by saying, "your patient is having an IWMI," and direct them to start the care.

What is the difference? Does it matter?

In the former example of starting the case, the participant has to work through the diagnostic conundrum of undifferentiated chest pain to come up with the diagnosis of IWMI. Further, it is possible that the participant does not arrive at the proper diagnosis, in which case you would not be able to observe and assess them in the management of the case. Thus, your learning objectives have become dependent on one another. By the way, there is nothing wrong with this as long as it is intended. We tend to set up cases like this because that is the way the sequencing would happen in the actual clinical environment (our clinical hat interfering). However, this takes up valuable minutes of simulation, which are expensive and should be planned judiciously. So, my underlying point is that if you are deliberately creating the scenario to observe both the diagnostic reasoning and the treatment, then the former approach is appropriate.

The latter approach, however, should be able to accomplish the learning objective associated with demonstrating the management of the patient. Thus, if that is truly the intended learning objective, the case should be fast-forwarded to eliminate the diagnostic reasoning portion of the scenario. Not only will this save valuable simulation time, it will also conceivably leave more time to carefully evaluate the treatment steps associated with managing the patient. Additionally, it eliminates the potential for prolonged simulation periods that do not contribute to accomplishing the learning objectives and/or get stuck because of a failure to achieve the initial objective (in this case, for example, the diagnosis).

So, the next time you make decisions in the scenario’s design, take a breath and ask yourself, “Am I designing it this way because this is the way we always do it? Am I designing it this way because this is the way it appears in the real clinical environment?”

The important point is that one is asking themselves, “How can I stratify my design decisions so that the scenario is best crafted to accomplish the intended learning objectives?” If you do, you will be on the road to designing scenarios that are efficient and effective!


Filed under scenario design, simulation

Sherlock Holmes and the Students of Simulation

I want to make a comparison between Sherlock Holmes and the students of our simulations! It has important implications for our scenario design process. When you think about it, there is a hypervigilance among our students, who are looking for clues during the simulation. They are doing so to figure out what we want them to do. Analyzing such clues is much like the process the venerable detective Sherlock Holmes follows when investigating a crime.

Video version of this post

This has important implications for our scenario design work because many times we get confused with the idea that our job is to create reality when, in fact, it is not our job at all. As simulation experts, our job is to create an environment with sufficient realism to allow a student to progress through various aspects of the provision of healthcare. We need to be able to make a judgment and say, "hey, they need some work in this area," and "hey, they're doing well in this area."

To accomplish this, we create facsimiles of what they will experience in the actual clinical environment transported into the simulated environment to help them adjust their mindset so they can progress down the pathway of taking care of those (simulated) patient encounters.

We must be mindful that during the simulated environment, people engage their best Sherlock Holmes, and as the famous song goes, [they are] “looking for clues at the scene of the crime.”
Let’s explore this more practically.

Suppose I am working in the emergency department, and I walk into the room and see a knife sitting on the tray table next to a patient. I immediately think, "wow, somebody didn't clean this room up after the last patient, and there's a knife on the tray." I would probably apologize to the patient and their family.

Fast forward…..

Put me into a simulation as a participant, and I walk into the room. I see the knife on the tray next to the patient's bed, and I immediately think, "Ah, I'm probably going to do a cric or some other invasive procedure on this patient."

How does that translate to our scenario design work? We must be mindful that the students of our simulations are always hypervigilant and always looking for these clues. Sometimes we include things in the simulation merely as window dressing or to try to (re)create some reality. However, stop to think: the student can misinterpret them as something that must be incorporated into the simulation for success.

Suddenly, the student sees this thing sitting on the table, so they think it is essential for them to use it in the simulation, and now they are using it, and the simulation is going off the tracks! As the instructor, you’re saying that what happened is not what was supposed to happen!

At times we must be able to objectively go back, look at the scenario design process, and recognize that maybe, just maybe, something we did in the design of the scenario (which includes the setup of the environment) misled the participant(s). If we see multiple students making the same mistakes, we must go back and analyze our scenario design. I like to call the extra things we put into the simulation scenario design noise. It's noise, and the potential for that noise to blow up and drive the simulation off the tracks goes up exponentially with every component we include in the space. Be mindful of this, and be aware of the hypervigilance of students undergoing simulation.

We can negate some of these things with a good orientation and by incorporating good practice into our simulation scenario design, so that we are only including items in the room that are germane to accomplishing the learning objectives.

Tip: If you see the same mistakes happening again and again, please introspect, go back, look at the design of your simulation scenario, and recognize there could be a flaw! Who finds such flaws in the story?  Sherlock Holmes, that’s who!


Filed under Curriculum, design, scenario design, simulation

5 Tips to Improve Interrater Reliability During Healthcare Simulation Assessments

One of the most important concepts in simulation-based assessment is achieving reliability, and specifically interrater reliability. While I have discussed previously in this blog that every simulation is assessment, in this article I am speaking of the type of simulation assessment that requires one or more raters to record data associated with the performance, or more specifically, to complete an assessment tool.

Interrater reliability, simply put, means that if we have multiple raters watching a simulation and using a scoring rubric or tool, they will produce similar scores. Achieving interrater reliability is important for several reasons, including that we usually use more than one rater to evaluate simulations over time. Other times we are engaged in research or other high-stakes uses of assessment tools and want to be certain that we are reaching correct conclusions.
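For readers who want to quantify "similar scores," one common statistic for two raters is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. The sketch below (plain Python, with hypothetical checklist data) shows the calculation; it is an illustration, not a substitute for a proper statistics package.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the agreement expected by chance.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal category rates.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical checklist scores (1 = performed, 0 = not performed)
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
print(round(cohens_kappa(a, b), 2))  # prints 0.52
```

A kappa near 1.0 indicates strong agreement; values in the 0.5 range, as in this made-up data, suggest the raters (or the tool) need the kind of attention the tips below describe.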

Improving assessment capabilities for simulation requires a significant amount of effort. The amount of time and effort that goes into the assessment process should be directly proportional to the stakes of the assessment.

In this article I offer five tips to consider for improving interrater reliability when conducting simulation-based assessment.

1 – Train Your Raters

The most basic and often overlooked aspect of achieving interrater reliability is the training of the raters. The raters need to be trained on the process, the assessment tools, and each item of the assessment on which they are rendering an opinion. It is tempting to think of subject matter experts as knowledgeable enough to fill out simple assessments; however, you will find with detailed testing that the scoring of an item is often truly in the eye of the beholder. Even simple items like "asked medical history" may be difficult to score reliably if not defined prior to the assessment activity. Other things that affect the assessment may also require rater calibration/training, such as limitations of the simulation, how something is being simulated, and/or overall familiarity with the technology that may be used to collect the data.

2 – Modify Your Assessment Tool

Modifications to the assessment tool can enhance interrater reliability. Sometimes the change can be as extreme as removing an assessment item because you determine that you are unable to achieve reliability despite iterative attempts at improvement. Other, less drastic changes can come in the form of clarifying the text directives associated with the item. Sometimes removing qualitative wording such as "appropriately" or "correctly" can help to improve reliability. Adding descriptors of expected behavior, or behaviorally anchored statements, to items can also help. However, these modifications and qualifying statements should likewise be addressed in the training of the raters as described above.

3 – Make Things Assessable (Scenario Design)

An often-overlooked factor that can help to improve interrater reliability is making modifications to the simulation scenario to render things more "assessable". We make a sizable number of decisions when creating simulation-based scenarios for educational purposes. There are other decisions and functions that can be designed into the scenario to allow assessments to be more accurate and reliable. For example, if we want to know whether someone correctly interpreted wheezing in the lung sounds of the simulator, we can introduce design elements in the scenario that help us gather this information accurately and thus increase interrater reliability. We could embed a person in the scenario to play the role of another healthcare provider who simply asks the participant what they heard. Alternatively, we could have the participant fill out a questionnaire at the end of the scenario, or even complete an assessment form regarding the simulation encounter. Lastly, we could embed the assessment tool into the debriefing process and simply ask the participants during the debriefing what they heard when they auscultated the lungs. There is no single correct way to do this; I am articulating different solutions to the same problem that may apply depending on the context of your scenario design.

4 – Assessment Tool Technology

Gathering assessment data electronically can help significantly. Compared to a paper-and-pencil collection scheme, technology-enhanced or "smart" scoring systems can assist. For example, if there are many items on a paper scoring tool, the page can become unwieldy to monitor. Electronic systems can continuously update and filter out data that does not need to be displayed at a given point during the unfolding of the simulation assessment. Simply having previously evaluated items disappear from the screen can reduce the clutter associated with scoring tools.
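The filtering idea above is simple enough to sketch in a few lines. The checklist items here are invented for illustration; the point is only that an electronic tool can show the rater just what still needs scoring.

```python
# Illustrative sketch (hypothetical item names): a "smart" scoring list
# that displays only the items still awaiting a rating, so the rater's
# screen stays uncluttered as the simulation unfolds.
checklist = [
    "Introduced self to patient",
    "Asked medical history",
    "Auscultated lungs",
    "Ordered 12-lead EKG",
]

def pending_items(checklist, scored):
    """Return the checklist items that have not yet been scored."""
    return [item for item in checklist if item not in scored]

scored = {"Introduced self to patient", "Asked medical history"}
print(pending_items(checklist, scored))
```

As items are scored they drop off the displayed list, which is exactly the decluttering effect described above.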

5 – Consider Video Scoring

For high-stakes assessment and research purposes it is often wise to consider video scoring. High stakes here means pass/fail criteria associated with advancement in a program, heavy weighting of a grade, or licensure or practice decisions. The ability to add multiple camera angles, as well as the functionality to rewind and play back events that occurred during the simulation, is valuable in improving the scoring accuracy of the collected data, which will subsequently improve interrater reliability. Video scoring requires considerable time and effort and thus should be reserved for the times when it is necessary.

I hope that you found these tips useful. Assessment during simulations can be an important part of improving the quality and safety of patient care!

If you found this post useful please consider subscribing to this blog!

Thanks and until next time! Happy Simulating.


Filed under assessment, Curriculum, design, scenario design

Adjuncts to Enhance Debriefing

I wanted to discuss some ideas of using adjuncts as part of your debriefing.

When we think about debriefing, we often think about a conversation between the faculty member(s) and the participants of a simulation, with a focus on everyone developing an understanding of what they did right as well as what they need to improve upon. We rarely think about the possibility of including other "things" to enhance the learning that comes from the debriefing.

I tend to incorporate adjuncts into many of the debriefings associated with courses that I design. What I mean is things that are added into the debriefing process/environment to enhance the discussion. Sometimes this is done with deliberate purpose, and other times just to mix it up a little so that the debriefing is not just a dialogue between the participants and the faculty. It may be something technical, or it may be something as simple as a paper handout.

Simple Task Trainer as an Adjunct

Some ideas for adjuncts include a PowerPoint slide deck, or a few targeted slides, that helps review a complex topic, one that requires a deeper understanding, or a subject that benefits from repeated exposure. Another type of adjunct is the simulator log file, which can help set the stage for the debriefing and create a pathway of discussion that chronologically follows what happened during the simulation. Another could be a partial task trainer or a model that helps to describe or demonstrate something. For example, if the students forgot to do a jaw-thrust or open the airway, we can incorporate a task trainer or a teaching aid into the discussion during the debriefing.

Example of an Algorithm Poster on the Wall

Other things that I use are charts, graphs, and algorithms that represent best practices. When I debrief during my difficult airway management course for physicians, I have the algorithm hanging on the wall as a poster. We use the algorithm posters as a pathway to compare the performance of the participants with what the ideal case would be. You can use the adjunct learning aid as a reference to standards. This helps take you out of the direct argument of right vs. wrong: the adjunct becomes a third-party messenger, a reference to best practices, when I have the participants compare their performance against what appears on the algorithm. This allows them to discover their own variations from the expected standard. It tends to create powerful learning moments without the faculty having to be "the bearer of bad news!"

I think that if you start to strategically think about how to incorporate adjuncts into your debriefing, you will find that students are more satisfied with the debriefing. It also increases the stickiness of the learning and creates a more enjoyable experience for the faculty member as well as the participants. Try it! It does not have to be fancy!

Thanks, and as always,

Happy Simulating!


Filed under Curriculum, debriefing