Tag Archives: healthcare simulation

What is Simulation? The question that caught me off guard!

I was having an exit interview meeting with one of my graduating simulation fellows, and he asked me an interesting question on his last day. He said, “Dr. Paul, what is simulation?” I found this perplexing after a year of intense study of simulation with us at our Institute! It was quite insightful, though. One of his observations was that there are many ways to do simulation right. He had many experiences throughout the year, visiting other simulation centers, attending international meetings, and teaching with us at different facilities. He had come to appreciate the many different vantage points, missions, visions, and purposes for implementing healthcare simulation.

I took a deep breath, thought about it, and said, “Simulation is a methodology by which we re-create a portion of the healthcare delivery experience with a goal of education and/or assessment of people, groups of people, teams, and/or environments of care.” Then, I drew a rough sketch of my vantage point of simulation, which divides it into two major subgroups: methods/modes on one side and primary purpose on the other. I have recreated it in the accompanying figure.

Methods/Modes

I think of the methods or modes of simulation based on the primary simulator technology employed to generate the goals of an intended program. Of course, mixed modality simulations often incorporate a spectrum of technologies.

I don’t mean this list to be exhaustive by any stretch of the imagination, and some may argue it is an oversimplification. The general categories that come to my mind are as follows:

  1. High-technology manikins generally present the form factor of an entire human being, complemented with electronics, pneumatics, and computer equipment that helps the manikin represent various aspects of anatomy and/or physiology. (As you have undoubtedly heard me opine in the past, the word FIDELITY does not belong in any descriptor of a simulator. It muddies the waters and confuses the overall strategies associated with simulation, although it is a popular industry buzzword that has somehow worked its way into academic definitions inappropriately.)
  2. Low-technology manikins generally have the form factor of an entire human being but with significantly less in the way of electronics or infrastructure to allow physiologic or anatomic changes to occur during the simulation encounter.
  3. Standardized people/patients, meaning live people playing various roles, ranging from patients and family members to other healthcare team members, to help bring a simulation encounter to life.
  4. Task trainers represent a re-creation of a portion of the human being, oftentimes created to accomplish the goals of completing skills or procedures. Depending on the purpose, they may or may not have a significant amount of augmenting technology.
  5. Screen-based simulations are computerized case or situation representations of some aspects of patient care that change in response to the stimulus provided by participants.
  6. Role-play includes designs that utilize peers and/or select faculty to engage in a simulated conversation or situation to accomplish learning outcomes.
  7. Virtual reality/augmented reality are high-technology re-creations or supplements that re-create reality through the first-person perspective of someone engaging in some sort of healthcare situation and have the capacity to change in response to the stimulus provided by the participant or participants.

Primary Purpose/Goals

Again, looking at a given simulation’s primary purpose and goals, one will quickly find overlaps and see that the categories do not exist in complete isolation. However, for this discussion, it helps to think of the different categories of intent.

Education

When I think of simulation programs primarily focusing on education, it comes down to helping participants gain or refine knowledge, skills, competence, or other measures that allow them to become better healthcare providers. In general, a teaching exercise. This can apply to simulation scenarios directed at one person, groups of people (all learning the same thing), or teams that have learning goals around the competencies associated with the interaction between the groups of people, similar to what occurs in the care of actual patients in the healthcare environment.

Assessment

The simulation encounter is primarily designed as an assessment. This means there is a more formal measurement associated with the performance of the simulation, often employing scoring tools, with the primary focus of measuring the competency of an individual, groups of individuals, or, as above, individuals functioning as teams. Further, assessment can measure aspects of the environment of care and/or the systems involved in supporting patients and the healthcare workforce. (For example, an in-situ code blue response simulation may measure the response of the local care team, the response of a responding team, the engagement of the hospital operator, the location and arrival of necessary equipment, etc.)

Research

There are many approaches to the use of modern healthcare simulation in research. At a crude level, I subdivide it into looking at the outcomes of the simulation, meaning: did the simulation encounter help to improve the participant’s performance? At the next level, you can evaluate whether the simulation improves patient care.

The next category is using simulation as a surrogate of the patient care environment but not measuring the effect of the simulation. For example, we might set up an ICU patient care environment for human factors experiments to figure out the ideal location of a piece of equipment, the tone of an alarm, the interaction of caregivers with various equipment, etc. Such an example of simulation often helps to determine optimal environments and systems of care in the primary planning stages or the remodeling of healthcare delivery processes and procedures.

So, the next time I orient an incoming simulation fellow, I will start with this discussion. I am thankful that my fellow who just graduated asked such a simple but deeply probing question to help him wrap his arms around the various simulations he has been experiencing over the last year while he studied with us.

Having put some more thought into this, I think it’s a useful exercise for those of us in leadership positions within the simulation world; it is probably good to stop and think about this a couple of times a year to refresh, reset, and ensure that we are remaining mission-driven to our purpose.

Until next time, Happy Simulating!

Filed under simulation

HUMBLE: Six Traits That Will Make You a Better Simulation Educator and Lead Effective Debriefings

Excelling as an educator in the healthcare simulation field goes beyond just imparting knowledge; it requires a unique set of qualities that can truly make a difference in students’ learning experiences. The acronym HUMBLE focuses on six key traits that can help educators better design, facilitate, and lead more effective debriefings. These traits are Humility, Understanding, Mindfulness, Balance, Learning, and Engaging. In this blog post, I will delve into these traits and explore how they can enhance your abilities as an educator, ultimately leading to more impactful and engaging debriefing sessions.

H – Humility

This is one of my favorites and the most important, in my humble opinion! Approaching teaching responsibilities in simulation from a perspective of humility goes a long way. Instructors with humility acknowledge that they don’t know everything and remain open to continuous learning. This attitude is also imparted to the participants, encouraging them to adopt the same approach throughout their careers.

An instructor who demonstrates humility creates a more approachable and non-threatening atmosphere, allowing students to feel comfortable admitting to and learning from their errors. This also contributes to a milieu that helps maintain a safe learning environment and a sense of a level playing field, which allows participants in the simulation to share their thoughts. This, in turn, gives us as faculty a privileged glimpse into their thought processes. Interestingly, it is also well-known in the business literature that leaders who demonstrate humility are often perceived as more credible and trustworthy.

U – Understanding

Understanding that each participant in your simulation is a person with their own life, challenges, successes, experiences, and stronger and weaker skills is key to appreciating that participants arrive with varying amounts of knowledge and varying abilities to apply that knowledge in the simulated session. In other words, many factors contribute to why someone knows something or can apply knowledge in a given situation. We should maintain an understanding that everyone has gaps in knowledge and attempt to remain nonjudgmental as to why those gaps exist.

M – Mindfulness

It is incredibly important that we are mindful of our presence during the simulation as well as the debriefing. Educators need to be attentive, focused, immersed, and committed to the learning objectives to expertly facilitate and then lead a high-quality debriefing that contributes to the learning outcomes. We need to work to identify tips and challenges that help maintain our mindfulness, focus, and attention during these activities.

While I am not suggesting a prescriptive approach, it is important to introspect and determine how you enhance your mindfulness associated with the simulation-based education process. For some, it means being well rested; for others, appropriately titrated doses of caffeine; and for still others, exhaustive preparation the day before. Reflect on your performance by thinking about when your concentration may have waxed and waned and what you can do to improve. I find it particularly challenging to remain cognitively sharp throughout an entire series when running the same scenario repeatedly with different groups of learners!

B – Balance

Creating a mindset of balance in any one simulation session helps participants discover both what they need to improve upon and what they did well in each simulation encounter. There is an old saying, “The negative screams, while the positive only whispers,” that I think you would agree applies when we are facilitating a simulation and about to go into the debriefing. If you think about it from the learner’s perspective, exploring a laundry list of their failures without recognizing the contributions that went well can be demoralizing and interfere with the faculty/participant relationship. While I’m not suggesting that we gloss over egregious errors, it is important to find a balance between those activities that went well and those that need improvement.

L – Limited, Lifelong Learning

This may be my second favorite! When conducting the debriefing, faculty should avoid trying to comment or debrief on every single thing in every scenario. It is important to remember that the journey of healthcare, whether in a simulated environment, attending lectures, attending workshops, or generating experiences by taking care of real patients, is a lifelong learning process. Each encounter along the way provides the potential for learning, albeit limited by the amount of cognitive transfer that can occur at a given time. During simulation, there is a natural tendency to want to cram everything into every scenario. I think this emanates from the fact that we are so excited about the simulation modality and get a small opportunity with each participant! Admittedly, I need to keep myself in check during such encounters. It’s important to think of the human brain as a sponge. Once it is saturated, the sponge cannot effectively take on more water.

E – Engagement

Engaging the learners in the conversation, as well as designing scenarios that engage learners actively, is part and parcel of the idea that simulation, through active learning, offers a high-quality educational opportunity. Think about this during the design process of your scenarios as well as the debriefings, insofar as how you assign roles, what your observers are required to do, and how you rotate people in and out of the scenario.

During the debriefing, remember that engaging your learners means they respond to the prompts you provide, and those responses are where the value lies. As the learners engage in the conversation, you can listen to their thought processes and evaluate the depth of their knowledge around a particular topic. Additionally, you can identify gaps that exist, either in knowledge or in the application of knowledge, that can help them improve for the future. So often, when training others in debriefing, I observe faculty members dropping into a mode of “mini-lecture” during what is supposed to be a debriefing. This deviates from active cognitive engagement and sometimes descends into a (well-intentioned) one-way conversation. It is important to remember that if your participants are not engaged, you are potentially squandering some of the learning opportunities. At a minimum, you are giving up the ability to hear what they are thinking.

In summary, as you continue to develop your skills as a healthcare simulation educator, I invite you to use HUMBLE as an acronym that helps you reflect upon positive traits, actions, and good guiding principles that provide learners with an optimized environment for improvement. I truly think that healthcare simulation educators have powerful opportunities for assisting with the transfer of knowledge and experience and for creating opportunities for reflection, and by being HUMBLE we can ensure a more effective and empathetic learning environment for all participants.

Until Next Time,

Happy Simulating!

Filed under debriefing, simulation

The Importance of the Psychological Contract in Healthcare Simulation: Six Fundamental Elements

Simulation is a powerful tool in healthcare education to enhance learning and improve patient outcomes. Through simulation-based learning encounters, participants can engage in hands-on experiences that mimic real-life situations, allowing them to develop critical skills and knowledge.

The success of healthcare simulation educational encounters relies on the participants and the facilitators who guide and support the learning process. Understanding the psychological contract that needs to exist between participants, facilitators, and content designers is crucial in creating a positive and effective learning environment. In this blog post, we will explore the importance of this psychological contract and discuss strategies to enhance it, ultimately leading to enhanced learning and improved outcomes in healthcare simulation.

While most discussions of the psychological contract are in the context of facilitating a simulation in real time, some elements are critically important to consider during the design process associated with simulation-based education encounters. How we structure our briefings, pre-briefings, and course schedules can dramatically influence our relationship with the participants to enhance the learning potential in the simulated environment.  

I like to think of six essential elements when designing and facilitating simulations.

Professionalism: We agree to treat each other as professionals throughout simulation-based education encounters. The learner agrees to attempt to interact in the scenario as if they were taking care of an actual patient, and the simulation facilitator agrees that the scenario will respond with a reasonable facsimile of how an actual patient would respond to the care being delivered.

Confidentiality: The simulation program agrees to keep the performance assessment of participants confidential to the extent possible. The simulation participant should be apprised of the fate of any audio, video, or still photographic media generated from the simulation. If, by programmatic design, there is the intent to share any performance results, the participant should be aware of this before engagement in the program.

Time: The simulation facilitator commits to creating an environment of learning that respects the participant’s time. The simulation program commits to the intent that the simulation encounter and all associated time spent will help provide the participant with relevant, professional education and growth potential.

Realism/Deception: Both the participant and the facilitator acknowledge that the environment is not real and will contain varying degrees of realism. The simulation environment’s primary intent is to provide a reasonable facsimile of a healthcare encounter that serves as the backdrop for the participant to demonstrate their clinical practice proficiency to the best of their knowledge, in exchange for feedback that highlights areas of success and identifies areas of potential improvement. Our simulation-based scenario designs are modeled after actual patient encounters or close representations of cases that may occur within your practice domain. While a case may include areas of diagnostic mystery or other unknowns, the scenarios are not designed to deliberately deceive or mislead the learner. The facilitator acknowledges that some aspects of the simulation may be misinterpreted by the learner because of scenario design limitations and will address them as appropriate when they occur.

Judgment: While there will be an assessment of the learner’s performance in order to deliver effective feedback, it will be based upon known best practices, guidelines, algorithms, protocols, and professional judgment. No judgment will be attached to why a gap in knowledge or performance was identified. The facilitators agree to maintain a safe learning environment that invites questions, exploration, and clarification as needed to enhance learning potential.

Humbleness: Healthcare is a complicated profession regardless of the practice domain. It requires the engagement of lifelong learners to acquire and retain a significant amount of knowledge and skill. Additionally, there is a constant refinement of knowledge, best practices, and procedures. The facilitator acknowledges that they are imperfect and are engaged in the same lifelong learning journey as the participant.

While the descriptions associated with each element of the psychological contract in this post are more aligned with the engagement with senior learners or practicing professionals, it is easy to translate each category when working with students and other types of junior learners.

Educators and learners can establish a foundation of trust, collaboration, and active participation by understanding and embracing the tenets of the psychological contract in healthcare simulation. Careful consideration of these elements is beneficial during program design and when actively facilitating simulation-based learning encounters. This, in turn, enhances learning outcomes, improves clinical practice, and prepares healthcare professionals to deliver high-quality care as they engage in real-world patient encounters and associated situations.

The next time you are designing or conducting simulation-based education endeavors, give careful consideration to the psychological contract!

Until next time, Happy Simulating!

Filed under Curriculum, design, simulation

When Simulation Is NOT the Answer: Own It!

Obviously, we are happy that simulation has become a popular method of education in healthcare. Simulation can provide a hands-on approach to learning that allows participants to experience real-life situations in a safe and controlled environment.

However, while simulation has many benefits, it is not necessarily the best option for every type of education. When we engage simulation as a modality, it is relatively complex, expensive, and resource intensive compared to other educational methodologies. That being said, we all know that at times it is an irreplaceable methodology that allows education, competency assessment, and system assessment information to be utilized in the improvement of healthcare. The key is to have a stratification process/policy in place to evaluate opportunities and decide when simulation is the optimal deployment tool.

As leaders and managers of simulation programs, we are charged with creating the return on investment for our programs. We are entrusted by the people who provide our funding to be good stewards of the investment and ongoing operational support of the simulation efforts. It is up to us to hold the keys to the vault that we call simulation so that it gets engaged, deployed, and/or utilized in a fashion that generates the expected outcomes with the greatest efficiency and effectiveness.

In short, don’t simulate because you can, simulate because you need to!

As your simulation center becomes a more recognized resource within your institution, there will often be an increase in requests for services. As this occurs, it is critically important that program leaders ensure that the simulations are bringing value.

For example, if someone wants you to do simulation training for an entire unit to roll out a simple new policy or procedure change, do not just say yes. Instead, create a framework that advises the requester on whether simulation is the best modality.

When contemplating the value of simulation as a modality, I think it is best to go back to the creation of learning objectives for the anticipated scenarios. I always like to say that if you do a knowledge, skills, and attitudes (KSA) analysis of your learning objectives and they all come up with K’s, you should reevaluate whether simulation is the best method.
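
As a purely illustrative sketch (the function name and tagging scheme here are my own invention, not an established tool), the KSA screen could be captured in a few lines: tag each learning objective with the domains it targets and flag requests whose objectives are knowledge-only.

```python
# Hypothetical triage helper: tag each learning objective with the domains
# it targets -- "K" (knowledge), "S" (skills), "A" (attitudes) -- and flag
# requests whose objectives all come up K's.
def recommend_modality(objective_tags):
    """objective_tags: a list of sets such as {"K"}, {"K", "S"}, or {"A"}."""
    if all(tags == {"K"} for tags in objective_tags):
        return "reconsider"  # a lecture, video, or reading may suffice
    return "simulation"      # skills and/or attitudes are targeted

# A request to "train the whole unit" on a simple policy change:
policy_rollout = [{"K"}, {"K"}, {"K"}]
print(recommend_modality(policy_rollout))  # -> reconsider

# A code-team scenario with skill and attitude objectives:
code_team = [{"K", "S"}, {"S"}, {"A"}]
print(recommend_modality(code_team))  # -> simulation
```

The point is not the code itself but the discipline: the decision is made from the objectives, not from enthusiasm for the modality.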

Web-based education, including courses, videos, lectures, or assigned reading, may accomplish the same objectives as your planned simulation. If this is the case, as a leader in simulation, it is important that you recognize this and recommend modalities other than simulation. It will likely save your organization time and money. More importantly, it may enhance your credibility and your reputation as a problem solver for the institution, as well as someone who is fiscally responsible. Over time, it can be valuable for a simulation program to enjoy a reputation as the “solution deployment” expert, not simply the “simulation” expert.

It is important to remember that the true value we provide is the end result: higher-quality healthcare along with a safer environment for patients. In this day and age, it has become increasingly important that our engagement is thoughtful and prudent, with cost considerations in mind. While we are all passionate about simulation, leaders of the future will garner success through a lens of efficiency and effectiveness in the programs that we deploy.

In conclusion, healthcare simulation is an important tool for education and patient safety, but it is not always the best tool. Simulation program managers and leaders should consider the specific learning outcomes they hope to achieve and carefully consider which educational modality is most appropriate for their learners. By doing so, they can ensure that they are providing the best possible, most cost-efficient training for their staff and ultimately improving patient outcomes.

Remember: Don’t simulate because you can, simulate because you need to!

Let me know what you think in the comments! If you enjoyed this post, please let me know by liking it, or subscribing to my Blog!

Until next time,

Happy Simulating!

Filed under Curriculum, return on investment, simulation

Not Every Simulation Scenario Needs to Have a Diagnostic Mystery!

It is quite common to mistakenly believe that there needs to be a diagnostic mystery associated with a simulation scenario. This could not be further from the truth.

Sometimes it arises from our clinical hat being confused with our educator hat (meaning we let our view of the actual clinical environment become the driving factor in the design of the scenario.) We must carefully consider the learning objectives and what we want to accomplish. One of the powerful things about simulation is that we get to pick where we start and where we stop, as well as the information given or withheld during the scenario.

Let us take the example of an inferior wall myocardial infarction (IWMI). Imagine that we desire to assess a resident physician’s ability to manage the case. Notice I said to manage the case, not diagnose, then manage the case. This distinction has important implications for how we would choose to begin the scenario. If the objectives were to diagnose and manage, we might start the case with a person complaining of undifferentiated chest pain and have the participant work towards the diagnosis and then demonstrate the treatment. Alternatively, if we were looking to have them only demonstrate proficiency in the management of the case, we might hand them an EKG showing an IWMI (or perhaps not even hand them the EKG), start the case by saying, “Your patient is having an IWMI,” and direct them to start the care.

What is the difference? Does it matter?

In the former way of starting the case, the participant has to work through the diagnostic conundrum of undifferentiated chest pain to come up with the diagnosis of IWMI. Further, it is possible that the participant does not arrive at the proper diagnosis, in which case you would not be able to observe and assess them in the management of the case. Thus, your learning objectives have become dependent on one another. By the way, there is nothing wrong with this, as long as it is intended. We tend to set up cases like this because that is the way the sequencing would happen in the actual clinical environment (our clinical hat interfering). However, this takes up valuable minutes of simulation, which are expensive and should be planned judiciously. So, my underlying point is: if you are deliberately creating the scenario to see both the diagnostic reasoning and the treatment, then the former approach is appropriate.

The latter approach, however, should be able to accomplish the learning objective associated with demonstrating the management of the patient. Thus, if that is truly the intended learning objective, the case should be fast-forwarded to eliminate the diagnostic reasoning portion of the scenario. Not only will this save valuable simulation time, it will also conceivably leave more time to carefully evaluate the treatment steps associated with managing the patient. Additionally, it will eliminate the potential for prolonged simulation periods that do not contribute to accomplishing the learning objectives and/or get stuck because of a failure to achieve the initial objective (in this case, for example, the diagnosis).

So, the next time you make decisions in the scenario’s design, take a breath and ask yourself, “Am I designing it this way because this is the way we always do it? Am I designing it this way because this is the way it appears in the real clinical environment?”

The important point is that one is asking themselves, “How can I stratify my design decisions so that the scenario is best crafted to accomplish the intended learning objectives?” If you do, you will be on the road to designing scenarios that are efficient and effective!

Filed under scenario design, simulation

Sherlock Holmes and the Students of Simulation

I want to make a comparison between Sherlock Holmes and the students of our simulations! It has important implications for our scenario design process. When you think about it, there is hypervigilance amongst our students, who are looking for clues during the simulation. They are doing so to figure out what we want them to do. Analyzing such clues is much like the process the venerable detective Sherlock Holmes follows when investigating a crime.

Video version of this post

This has important implications for our scenario design work because many times we get confused by the idea that our job is to create reality when, in fact, that is not our job at all. As simulation experts, our job is to create an environment with sufficient realism to allow a student to progress through various aspects of the provision of healthcare. We need to be able to make a judgment and say, “Hey, they need some work in this area,” and “Hey, they’re doing well in this area.”

To accomplish this, we create facsimiles of what they will experience in the actual clinical environment transported into the simulated environment to help them adjust their mindset so they can progress down the pathway of taking care of those (simulated) patient encounters.

We must be mindful that in the simulated environment, people engage their best Sherlock Holmes and, as the famous song goes, [they are] “looking for clues at the scene of the crime.”

Let’s explore this more practically.

Suppose I am working in the emergency department, and I walk into the room and see a knife sitting on the tray table next to a patient. I immediately think, “Wow, somebody didn’t clean this room up after the last patient, and there’s a knife on the tray.” I would probably apologize to the patient and their family.

Fast forward…..

Put me into a simulation as a participant, and I walk into the room. I see the knife on the tray next to the patient’s bed, and I immediately think, “Ah, I’m probably going to do a cric or some other invasive procedure on this patient.”

How does that translate to our scenario design work? We must be mindful that the students in our simulations are always hypervigilant and always looking for these clues. Sometimes we include things in the simulation merely as window dressing or to try to (re)create some reality. However, stop to think: the student can misinterpret them as items that must be incorporated into the simulation for success.

Suddenly, the student sees this thing sitting on the table and thinks it is essential to use it in the simulation, and now they are using it, and the simulation is going off the tracks! As the instructor, you’re saying that what happened is not what was supposed to happen!

At times, we must be able to go back and look objectively at the scenario design process and recognize that maybe, just maybe, something we did in the design of the scenario (which includes the setup of the environment) misled the participant(s). If we see multiple students making the same mistakes, we must go back and analyze our scenario design. I like to call the extra things we put into a simulation scenario design “noise.” It is noise, and the potential for that noise to blow up and drive the simulation off the tracks goes up exponentially with every component we include in the space. Be mindful of this, and be aware of the hypervigilance of students undergoing simulation.

We can mitigate some of these problems with a good orientation and by incorporating good practice into our simulation scenario design so that we include only items in the room that are germane to accomplishing the learning objectives.

Tip: If you see the same mistakes happening again and again, please introspect: go back, look at the design of your simulation scenario, and recognize there could be a flaw! Who finds such flaws in the story? Sherlock Holmes, that’s who!

Filed under Curriculum, design, scenario design, simulation

5 Tips to Improve Interrater Reliability During Healthcare Simulation Assessments

One of the most important concepts in simulation-based assessment is achieving reliability, and specifically interrater reliability. While I have discussed previously in this blog that every simulation is assessment, in this article I am speaking of the type of simulation assessment that requires one or more raters to record data associated with the performance, typically by completing an assessment tool.

Interrater reliability, simply put, means that if we have multiple raters watching a simulation and using a scoring rubric or tool, they will produce similar scores. Achieving interrater reliability is important for several reasons, including that we usually use more than one rater to evaluate simulations over time. Other times we are engaged in research or other high-stakes assessment and want to be certain that we are reaching correct conclusions.
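To make "similar scores" concrete, agreement beyond chance is commonly quantified with Cohen's kappa. Here is a minimal, self-contained sketch; the two raters, the ten encounters, and the "done/not" checklist item are hypothetical, for illustration only:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of encounters where the raters agreed
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Two raters scoring the same 10 simulated encounters on one checklist item
rater_1 = ["done", "done", "not", "done", "not", "done", "done", "not", "done", "done"]
rater_2 = ["done", "done", "not", "not", "not", "done", "done", "done", "done", "done"]
print(round(cohens_kappa(rater_1, rater_2), 2))  # → 0.52
```

A kappa near 1.0 indicates strong agreement beyond chance; a middling value like the one here is exactly the kind of result that should send you back to rater training or item clarification, as discussed in the tips that follow.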

Improving assessment capabilities for simulation requires a significant amount of effort. The amount of time and effort that goes into the assessment process should be directly proportional to the stakes of the assessment.

In this article I offer five tips to consider for improving interrater reliability when conducting simulation-based assessment.

1 – Train Your Raters

The most basic and most overlooked aspect of achieving interrater reliability is the training of the raters. The raters need to be trained on the process, the assessment tools, and each item of the assessment on which they are rendering an opinion. It is tempting to think of subject matter experts as knowledgeable enough to fill out simple assessments; however, you will find with detailed testing that the scoring of an item is often truly in the eye of the beholder. Even simple items like "asked medical history" may be difficult to score reliably if not defined prior to the assessment activity. Other things that may affect the assessment and require rater calibration/training include limitations of the simulation, how something is being simulated, and overall familiarity with the technology that may be used to collect the data.

2 – Modify Your Assessment Tool

Modifications to the assessment tool can enhance interrater reliability. Sometimes the change can be as extreme as removing an assessment item because you discover that you are unable to achieve reliability despite iterative attempts at improvement. Other, less drastic changes can come in the form of clarifying the text directives associated with the item. Sometimes removing qualitative wording such as "appropriately" or "correctly" can help to improve reliability. Adding descriptors of expected behavior, or behaviorally anchored statements, to items can also help. However, these modifications and qualifying statements should likewise be addressed in the training of the raters as described above.

3 – Make Things Assessable (Scenario Design)

An often-overlooked factor that can help to improve interrater reliability is making modifications to the simulation scenario to allow things to be more "assessable". We make a sizable number of decisions when creating simulation-based scenarios for education purposes. There are other decisions and functions that can be designed into the scenario to allow assessments to be more accurate and reliable. For example, if we want to know whether someone correctly interpreted wheezing in the lung sounds of the simulator, we can introduce design elements in the scenario that help us gather this information accurately and thus increase interrater reliability. We could embed a person in the scenario to play the role of another healthcare provider who simply asks the participant what they heard. Alternatively, we could have the participant fill out a questionnaire at the end of the scenario, or even complete an assessment form regarding the simulation encounter. Lastly, we could embed the assessment tool into the debriefing process and simply ask the participants during the debriefing what they heard when they auscultated the lungs. There is no single correct way to do this; I am articulating different solutions to the same problem that could work depending on the context of your scenario design.

4 – Assessment Tool Technology

Gathering assessment data electronically can help significantly. Compared to a paper-and-pencil collection scheme, technology-enhanced or "smart" scoring systems can assist. For example, if there are many items on a paper scoring tool, the page can become unwieldy to monitor. Electronic systems can continuously update and filter out data that does not need to be displayed at a given point during the unfolding of the simulation assessment. Simply having previously evaluated items disappear off the screen can reduce the clutter associated with scoring tools.
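As a minimal sketch of the "disappearing items" idea, assuming a hypothetical checklist data model, an electronic tool might simply filter the rater's display down to items not yet scored:

```python
# Hypothetical checklist for a single simulated encounter
checklist = [
    {"item": "Introduced self",        "scored": True},
    {"item": "Asked medical history",  "scored": False},
    {"item": "Auscultated lungs",      "scored": False},
]

def pending(items):
    """Return only unscored items, so completed ones drop off the screen."""
    return [i["item"] for i in items if not i["scored"]]

print(pending(checklist))  # → ['Asked medical history', 'Auscultated lungs']
```

A real scoring system would refresh this view continuously as the scenario unfolds, but the principle is the same: reduce what the rater must scan at any moment.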

5 – Consider Video Scoring

For high-stakes assessment and research purposes it is often wise to consider video scoring. By high stakes, I mean pass/fail criteria associated with advancement in a program, heavy weighting of a grade, licensure, or practice decisions. The ability to add multiple camera angles, as well as the functionality to rewind and play back things that occurred during the simulation, is valuable in improving the scoring accuracy of the collected data, which will subsequently improve interrater reliability. Video scoring requires considerable time and effort and thus should be reserved for the times when it is necessary.

I hope that you found these tips useful. Assessment during simulations can be an important part of improving the quality and safety of patient care!

If you found this post useful please consider subscribing to this blog!

Thanks and until next time! Happy Simulating.

Leave a comment

Filed under assessment, Curriculum, design, scenario design

Beware of the Educational Evangelist!

They are everywhere nowadays, like characters in Pokémon Go. They seem to hang out in high concentration around new simulation centers.

You know the type. Usually they start off by saying how terrible it is for someone to give a lecture. Then they go on to espouse the virtues and values of student-centered education, claiming active participation and small group learning is the pathway to the glory land. They often toss in terms like "flipped classroom". And just to ensure you don't question their educational expertise, they use a word ending with "-gogy" in the same paragraph as the phrase "evidence-based".

If you ask them where they have been in the last six months, you find out that they probably went to a weekend healthcare education reform retreat or something equivalent…

My principal concern with today's educational evangelist is that they are in search of a new way of doing everything. Oftentimes they recommend complete and total overhauls of existing curricula without a true understanding of how to efficiently and effectively improve them, or an analysis of the resources required to carry out such changes.

Further, the evangelist usually has a favorite methodology, such as "small group learning", "problem-based learning", or "simulation-based learning", that they are trying to convert everyone to through prophecy.

An easy target of all educational evangelists is the lecture, and often that is where the prophecy begins. They usually want to suggest that if lecture is happening, learning is not. As I discussed in a previous blog article, lecture is not dead, and when done well it can be quite engaging, create significant opportunities for learning, and be maximally efficient in terms of resources.

If you think about it critically, it is just as easy to do lousy small group facilitation as it is to do a lousy lecture, and the potential gains in learning will thus fall short of their maximal potential. The difference is that small group facilitation, like simulation, generally takes significantly more faculty resources.

The truth is the educational evangelist is a great person to have amongst the team. Their desire for change, generally infused with significant passion, is often a source of great energy. When harnessed, they can help advance and revise curricula to maximize and modernize various educational programs.

However, to be maximally efficient, all significant changes should undergo pre-analysis, hopefully derived from a needs assessment, whether formal or informal. Secondly, it is worth having more than one opinion to decide the prioritization of what needs to be changed in a given curriculum. While the evangelist will suggest that the entire curriculum is broken, oftentimes a more balanced review reveals that some areas of the curriculum would benefit from such an overhaul, and some aspects are performing just fine.

When you begin to change aspects of the curriculum, start small and measure the change if possible. Moving forward on a step-by-step basis will usually produce a far better revised curriculum than an approach that "throws out the baby with the bathwater". Mix the opinions of the stalwarts of the existing curriculum methods with those of the evangelists. Challenge existing axioms, myths, and entrenched beliefs like "Nothing can replace the real patient for learning…" If this process is led well, it will allow the decision-making group to reach a considerably more informed position that will lead to sound decisions, change strategies, and appropriately guided investments.

So if you’re the leader or a member of a team responsible for a given curriculum of healthcare instruction and confronted with the educational evangelist, welcome their participation. Include them in the discussions moving forward with a balanced team of people have them strive to create an objective prioritization of the needs for change. This will allow you to make excellent decisions with regard to new technologies and/or methods that you should likely embrace for your program. More importantly you will avoid tossing out the things that are working and are cost efficient.

Leave a comment

Filed under Curriculum, Uncategorized

Learning from Simulation – Far more than the Debriefing

Most people have heard someone say, "In simulation, debriefing is where all of the learning occurs." I frequently hear this when running faculty development workshops and programs, which isn't as shocking as hearing it espoused at national and international meetings in front of large audiences! What a ridiculous statement, without a shred of evidence or a realistic, common-sense reason to think it would be so. Sadly, I fear it represents an unfortunate instructor-centered perspective and/or a serious lack of appreciation for the potential learning opportunities provided by simulation-based education.

Many people academically toil over the technical definition of the word feedback and try to contrast it with a description of debriefing, as if the two are juxtaposed. They often present it as if one is good and the other is bad. There is a misguided notion that feedback is telling someone, or lecturing to someone, to get a point across. I believe that is a narrow interpretation of the word. I think there are tremendous opportunities for learning from many facets of simulation that may be considered feedback.

Well-designed simulation activities hopefully provide targeted learning opportunities, part of which is experiential, sometimes immersive, in some way. I like to think of debriefing as one form of feedback that a learner may encounter during simulation-based learning, commonly occurring after engaging in some sort of immersive learning activity or scenario. Debriefing can be special if done properly, and will actually allow the learner to "discover" new knowledge, perhaps reinforce existing knowledge, or maybe even have corrections made to inaccurate knowledge. No matter how you look at it, at the end of the day it is a form of feedback that can lead, or contribute, to learning. But to think that the debriefing is the only opportunity for learning is incredibly short-sighted.

There are many other forms of feedback and learning opportunities that learners may experience in the course of well-designed simulation-based learning. The experience of the simulation itself is ripe with opportunities for feedback. If a learner puts supplemental oxygen on a simulated patient that is demonstrating hypoxia on the monitor via the pulse oximetry measurements, and the saturations improve, that is a form of feedback. Conversely, if the learner(s) forgets to provide the supplemental oxygen and the saturations or other signs of respiratory distress continue to worsen, then that can be considered feedback as well. The latter two examples are what I refer to as intrinsic feedback, as they are embedded in the scenario design to provide clues to the learners, as well as to approximate what may happen to a real patient in a similar circumstance.

With regard to intrinsic feedback, it is only beneficial if it is recognized and properly interpreted by the learner(s), either while actively involved in the simulated clinical encounter or, if not, perhaps in the debriefing. The latter should be employed if the intrinsically designed feedback is important to accomplishing the learning objectives germane to the simulation.

There are still other forms of feedback that likely contribute to learning and are not part of the debriefing. In the setting of a simulated learning encounter involving several learners, the delineation of duties and the acceptance or rejection of treatment suggestions are all potentially ripe for learning. If a learner suggests a therapy that is embraced by the team, or perhaps stimulates a group discussion during the course of the scenario, the resultant conversation and ultimate decision can significantly add to the learning of the involved participants.

Continuing that same idea, perhaps the decision to provide, withhold, or check the dosage of a particular therapy prompts a learner to consult a reference that provides valuable information and solidifies a piece of knowledge in the mind of the learner. The learner may announce such findings to the team while the scenario is still underway, thereby sharing the knowledge with the rest of the treatment team. Voilà… more learning that may occur outside of the debriefing!

Finally, I believe there is an additional source of learning that occurs outside of the debriefing. Imagine when a learner experiences something or becomes aware of something during a scenario which causes them to realize they have a knowledge gap in that particular area. Maybe they forgot a critical drug indication, dosage or adverse interaction. Perhaps there was something that just stimulated their natural curiosity. It is possible that those potential learning items are not covered in the debriefing as they may not be core to the learning objectives. This may indeed stimulate the learner to engage in self-study to enhance their learning further to close that perceived area of a knowledge gap. What???? Why yes, more learning outside of the debriefing!

In fact, we hope that this type of stimulation occurs on a regular basis as part of the active learning prompted by the experiential aspects of simulation. Such individual stimulation of learning is identified in the seminal publication of Dr. Barry Issenberg et al. in Vol. 27 of Medical Teacher in 2005, describing key features of effective simulation.

So hopefully I have convinced you, or reinforced your belief, that the potential for learning from simulation-based education spans far beyond the debriefing. Please recognize that the statement quoted above likely reflects a serious misunderstanding and underappreciation of the learning that can and should be considered with the use of simulation. The implications of such short-sightedness can have huge impacts on the efficiency and effectiveness of simulation, beginning with curriculum and design.

So the next time you are incorporating simulation into your education endeavor, sit back and think of all of the points at which learning may occur. Of course, the debriefing is one such activity during which we hope learning occurs. Thinking beyond the debriefing, and designing for the bigger picture of potential learning that can be experienced by the participants, is likely to help you achieve positive outcomes from your overall efforts.

6 Comments

Filed under Uncategorized

Simulation Curriculum Integration via a Competency Based Model

One of the challenges for healthcare education is the reliance on random opportunity for clinical events to present themselves for a given group of learners to encounter as part of a structured learning curriculum. This uncertainty of exposure and eventual development of competency is part of what keeps our educational systems time-based, which is fraught with inefficiencies by its very nature.

Simulation curriculum design at present often embeds simulation in a rather immature development model in which there is an "everybody does all of the simulations" approach. If there is a collection of core topics that are part and parcel of a given program, combined with a belief, or perhaps proof, that simulation is the preferred modality for a topic, then those universal exposures make sense. Let's move beyond the topics or situations that are best experienced by everyone.

If you use a model of physician residency training, for example, curriculum planners "hope" that over the course of a year a given first-year resident will adequately manage an appropriate variety of cases. The types of cases, often categorized by primary diagnosis, are embedded in some curriculum accreditation document under the label "Year 1." For the purposes of this discussion, let's change the terminology from Year 1 to Level 1 as we look toward the future.

What if we had a way to know that a resident managed the cases, and managed them well, for Level 1? Perhaps one resident could accomplish the Level 1 goals in six months, and do it well. Let's call that resident Dr. Fast. This could then lead to a more appropriate advancement of the resident through the training program, as opposed to advancing by the date on the calendar.

Now let’s think about it from another angle. Another resident who didn’t quite see all of the cases, or the variety of cases needed, but they are managing things well when they do it. Let’s call them Dr. Slow. A third resident of the program is managing an adequate number and variety, but is having quality issues. Let’s refer to them as Dr. Mess. An honest assessment of the current system is that all three residents will likely be advanced to hire levels of responsibilities based on the calendar without substantial attempt at remediation of understanding of the underlying deficiencies.

What are the program or educational goals for Drs. Fast, Slow, and Mess? What are the differences? What are the similarities? What information does the program need to begin thinking in this competency-based model? Is that information available now? Will it likely be in the future? Does it make sense to spend time and resources putting all three residents through the same simulation curriculum?

While many operational, cultural, and historical models and work conditions present barriers to such a model, thinking about a switch to a competency-based model forces one to think more deeply about the details of the overall mission. The educational methods, assessment tools, and exposure to cases and environments should be explored for both efficiency and effectiveness. Ultimately, the outcomes we are trying to achieve for a given learner progressing through a program would be unveiled. Confidence in the underlying data will be a fundamental component of a competency-based system. In this simple model, the two functional data points are the quantity and the quality of given opportunities to learn and demonstrate competence.

This sets up intriguing possibilities for embedding simulation into the core curriculum to function in a more dynamic way and contribute mightily to program outcomes.

Now think of the needs of Dr. Slow and Dr. Mess. If we had insight combined with reliable data, we could customize the simulation pathway for each learner to maximally benefit their progression through the program. We may need to provide supplemental simulations to Dr. Slow to allow practice with a wider spectrum of cases, or with a specific diagnosis, category of patient, or situation for which they need exposure. Ideally, this additional exposure providing deliberate practice opportunities could also include learning objectives to help them increase their efficiency.

In the case of Dr. Mess, the customization of the simulation portion of the curriculum provides deliberate practice opportunities with targeted feedback directly relevant to their area(s) of deficiency, i.e., a remediation model. This exposure could be constructed around a certain category of patient, or perhaps a situation, that they are reported to handle poorly. The benefit in the case of Dr. Mess is that the simulated environment can often be used to tease out the details of the underlying deficiency in a way that learning in the actual patient care environment cannot.

Lastly, in our model, recall that Dr. Fast may not require any "supplemental" simulation, thus freeing up sparse simulation and human resources necessary to conduct it. This is part of the gains in efficiency that can be realized through a competency-based approach to incorporating simulation into a given curriculum.
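The routing logic described for Drs. Fast, Slow, and Mess can be sketched in a few lines. The thresholds, case counts, and quality scores here are hypothetical, purely to illustrate how the two functional data points, quantity and quality, could drive curriculum decisions:

```python
from dataclasses import dataclass

# Hypothetical Level 1 thresholds, for illustration only
REQUIRED_CASES = 40      # minimum number of case exposures
REQUIRED_QUALITY = 0.80  # minimum proportion of cases managed well

@dataclass
class Resident:
    name: str
    cases_managed: int    # quantity of opportunities
    quality_score: float  # proportion managed well, 0.0-1.0

def next_step(r: Resident) -> str:
    """Route a resident based on quantity and quality of performance."""
    if r.cases_managed >= REQUIRED_CASES and r.quality_score >= REQUIRED_QUALITY:
        return "advance to next level"          # Dr. Fast: frees up resources
    if r.quality_score >= REQUIRED_QUALITY:
        return "supplemental simulations"       # Dr. Slow: broaden exposure
    return "targeted remediation"               # Dr. Mess: deliberate practice

for r in [Resident("Dr. Fast", 45, 0.92),
          Resident("Dr. Slow", 25, 0.88),
          Resident("Dr. Mess", 42, 0.61)]:
    print(r.name, "->", next_step(r))
```

A real program would draw these two data points from validated assessments rather than simple counts, but even this toy model shows how reliable data could replace the calendar as the trigger for advancement, supplementation, or remediation.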

Considering a switch to a competency-based curriculum in healthcare education can be overwhelming simply based on the number of operational and administrative challenges. However, using competency-based implementation as a theoretical model can help envision a more thoughtful approach to the curricular integration of simulation. If we move forward in a deliberate attempt to utilize simulation in a more dynamic way, it will lead to increases in efficiency and effectiveness, along with better stewardship of scarce resources.

 

1 Comment

Filed under Uncategorized