Tag Archives: healthcare simulation

Not Every Simulation Scenario Needs to Have a Diagnostic Mystery!

It is quite common to believe, mistakenly, that a simulation scenario needs a diagnostic mystery attached to it. This could not be further from the truth.

Sometimes this arises from confusing our clinical hat with our educator hat (meaning we let our view of the actual clinical environment become the driving factor in the scenario's design). We must carefully consider the learning objectives and what we want to accomplish. One of the powerful things about simulation is that we get to pick where we start and where we stop, as well as the information given or withheld during the scenario.

Let us take the example of an Inferior Wall Myocardial Infarction (IWMI). Imagine that we want to assess a resident physician's ability to manage the case. Notice I said to manage the case, not to diagnose and then manage the case. This distinction matters for how we choose to begin the scenario. If the objectives were to diagnose and manage, we might start the case with a person complaining of undifferentiated chest pain and have the participant work toward the diagnosis and then demonstrate the treatment. Otherwise, if we were looking to have them demonstrate only proficiency in managing the case, we might hand them an EKG showing an IWMI (or perhaps not even hand them the EKG), start the case by saying, “your patient is having an IWMI,” and direct them to start the care.

What is the difference? Does it matter?

In the former approach, the participant has to work through the diagnostic conundrum of undifferentiated chest pain to arrive at the diagnosis of IWMI. Further, it is possible that the participant does not arrive at the proper diagnosis, in which case you would not be able to observe and assess their management of the case. Thus, your learning objectives have become dependent on one another. By the way, there’s nothing wrong with this as long as it is intended. We tend to set up cases like this because that is the way the sequence would unfold in the actual clinical environment (our clinical hat interfering). However, this takes up valuable minutes of simulation, which are expensive and should be planned judiciously. So, my underlying point is that if you are deliberately creating the scenario to observe both the diagnostic reasoning and the treatment, the former approach is appropriate.

The latter approach, however, should accomplish the learning objective of demonstrating the management of the patient. Thus, if that is truly the intended learning objective, the case should be fast-forwarded to eliminate the diagnostic reasoning portion of the scenario. Not only will this save valuable simulation time, it will also conceivably leave more time to carefully evaluate the treatment steps associated with managing the patient. Additionally, it eliminates the potential for prolonged simulation periods that do not contribute to accomplishing the learning objectives, or that get stuck because of a failure to achieve the initial objective (in this case, the diagnosis).

So, the next time you make decisions in the scenario’s design, take a breath and ask yourself, “Am I designing it this way because this is the way we always do it? Am I designing it this way because this is the way it appears in the real clinical environment?”

The important point is to ask yourself, “How can I stratify my design decisions so that the scenario is best crafted to accomplish the intended learning objectives?” If you do, you will be on the road to designing scenarios that are efficient and effective!


Filed under scenario design, simulation

Sherlock Holmes and the Students of Simulation

I want to make a comparison between Sherlock Holmes and the students of our simulations! It has important implications for our scenario design process. When you think about it, there’s hypervigilance amongst our students, who are looking for clues during the simulation. They are doing so to figure out what we want them to do. Their analysis of such clues resembles the process of the venerable detective Sherlock Holmes investigating a crime.


This has important implications for our scenario design work because many times we confuse our job with creating reality when, in fact, that is not our job at all. As simulation experts, our job is to create an environment with sufficient realism to allow a student to progress through various aspects of the provision of health care. We need to be able to make a judgment and say, “hey, they need some work in this area,” and “hey, they’re doing well in this area.”

To accomplish this, we create facsimiles of what they will experience in the actual clinical environment, transported into the simulated environment, to help them adjust their mindset so they can progress down the pathway of caring for those (simulated) patients.

We must be mindful that during the simulated environment, people engage their best Sherlock Holmes, and as the famous song goes, [they are] “looking for clues at the scene of the crime.”
Let’s explore this more practically.

Suppose I am working in the emergency department, and I walk into a room and see a knife sitting on the tray table next to a patient. I immediately think, “wow, somebody didn’t clean this room up after the last patient, and there’s a knife on the tray.” I would probably apologize to the patient and their family.

Fast forward…..

Put me into a simulation as a participant, and I walk into the room. I see the knife on the tray next to the patient’s bed, and I immediately think, “Ah, I’m probably going to do a cric (cricothyrotomy) or some other invasive procedure on this patient.”

How does this translate to our scenario design work? We must be mindful that the students of our simulations are always hypervigilant, always looking for these clues. Sometimes we include things in the simulation merely as window dressing, or to (re)create some sense of reality. However, stop to think: students can misinterpret these items as things they must incorporate into the simulation in order to succeed.

Suddenly, the student sees this thing sitting on the table and concludes it is essential to use it in the simulation; now they are using it, and the simulation is going off the tracks! As the instructor, you’re left saying that what happened is not what was supposed to happen!

At times we must be able to go back objectively and look at the scenario design process and recognize that maybe, just maybe, something we did in the design of the scenario, including the setup of the environment, misled the participant(s). If we see multiple students making the same mistakes, we must go back and analyze our scenario design. I like to call the extra things we put into a simulation scenario design “noise.” The potential for that noise to blow up and drive the simulation off the tracks rises sharply with every component we include in the space. Be mindful of this, and be aware of the hypervigilance of students undergoing simulation.

We can negate some of these things with a good orientation, and by incorporating good practice into our simulation scenario design so that we only include items in the room that are germane to accomplishing the learning objectives.

Tip: If you see the same mistakes happening again and again, please introspect, go back, look at the design of your simulation scenario, and recognize there could be a flaw! Who finds such flaws in the story?  Sherlock Holmes, that’s who!


Filed under Curriculum, design, scenario design, simulation

5 Tips to Improve Interrater Reliability During Healthcare Simulation Assessments

One of the most important concepts in simulation-based assessment is achieving reliability, specifically interrater reliability. While I have discussed previously in this blog that every simulation is an assessment, in this article I am speaking of the type of simulation assessment that requires one or more raters to record data associated with the performance, typically using an assessment tool.

Interrater reliability, simply put, means that if multiple raters watch a simulation and use the same scoring rubric or tool, they will produce similar scores. Achieving interrater reliability is important for several reasons, including that we usually use more than one rater to evaluate simulations over time. Other times we are completing assessment tools for research or other high-stakes purposes and want to be certain that we are reaching correct conclusions.
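To make "similar scores" concrete, here is a minimal sketch (with entirely hypothetical ratings) of two common agreement measures for a pair of raters scoring the same checklist items as done (1) or not done (0): raw percent agreement, and Cohen's kappa, which corrects for agreement expected by chance.

```python
def percent_agreement(r1, r2):
    # Fraction of items on which the two raters gave the same score
    matches = sum(1 for a, b in zip(r1, r2) if a == b)
    return matches / len(r1)

def cohens_kappa(r1, r2):
    n = len(r1)
    po = percent_agreement(r1, r2)  # observed agreement
    # Expected chance agreement, from each rater's marginal frequencies
    pe = 0.0
    for cat in set(r1) | set(r2):
        pe += (r1.count(cat) / n) * (r2.count(cat) / n)
    return (po - pe) / (1 - pe)

# Hypothetical scores from two raters watching the same simulation
rater_a = [1, 1, 0, 1, 0, 1, 1, 0]
rater_b = [1, 0, 0, 1, 0, 1, 1, 1]

print(round(percent_agreement(rater_a, rater_b), 2))  # 0.75
print(round(cohens_kappa(rater_a, rater_b), 2))       # 0.47
```

Note how the raters agree on 75% of items, yet the chance-corrected kappa is only about 0.47, which is why the tips below about rater training and item wording matter even when raw agreement looks respectable.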

Improving assessment capabilities for simulation requires a significant amount of effort. The amount of time and effort that goes into the assessment process should be directly proportional to the stakes of the assessment.

In this article I offer five tips to consider for improving interrater reliability when conducting simulation-based assessment.

1 – Train Your Raters

The most basic and most overlooked aspect of achieving interrater reliability is the training of the raters. Raters need to be trained on the process, the assessment tools, and each item of the assessment on which they are rendering an opinion. It is tempting to assume subject matter experts are knowledgeable enough to fill out simple assessments; however, detailed testing will often reveal that the scoring of an item is truly in the eye of the beholder. Even simple items like “asked medical history” may be difficult to score reliably if they are not defined prior to the assessment activity. Other factors requiring rater calibration/training can also affect the assessment, such as limitations of the simulation, how something is being simulated, and overall familiarity with the technology that may be used to collect the data.

2 – Modify Your Assessment Tool

Modifications to the assessment tool can enhance interrater reliability. Sometimes the change is as extreme as removing an assessment item because you discover you cannot achieve reliability despite iterative attempts at improvement. Less drastic changes can come in the form of clarifying the text directives associated with an item. Removing qualitative wording such as “appropriately” or “correctly” can help improve reliability, as can adding descriptors of expected behavior or behaviorally anchored statements to items. However, these modifications and qualifying statements should also be addressed in the training of the raters as described above.

3 – Make Things Assessable (Scenario Design)

An often-overlooked factor that can help improve interrater reliability is making modifications to the simulation scenario so that things are more “assessable.” We make a sizable number of decisions when creating simulation-based scenarios for educational purposes. Other decisions and functions can be designed into the scenario to make assessments more accurate and reliable. For example, if we want to know whether someone correctly interpreted wheezing in the lung sounds of the simulator, we can introduce design elements into the scenario that help us gather this information accurately and thus increase interrater reliability. We could embed a person in the scenario to play the role of another healthcare provider who simply asks the participant what they heard. Alternatively, we could have the participant fill out a questionnaire at the end of the scenario, or complete an assessment form regarding the simulation encounter. Lastly, we could embed the assessment tool into the debriefing process and simply ask the participants during the debriefing what they heard when they auscultated the lungs. There is no single correct way to do this; I am articulating different solutions to the same problem that may apply depending on the context of your scenario design.

4 – Assessment Tool Technology

Gathering assessment data electronically can help significantly. Compared to a paper-and-pencil collection scheme, technology-enhanced or “smart” scoring systems can assist. For example, if there are many items on a paper scoring tool, the page can become unwieldy to monitor. Electronic systems can continuously update and filter out data that does not need to be displayed at a given point during the unfolding of the simulation assessment. Simply having previously evaluated items disappear from the screen can reduce the clutter associated with scoring tools.
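The filtering idea above can be sketched in a few lines. This is a hypothetical illustration, not any particular product: each checklist entry carries a score that starts empty, and the rater's view shows only the items still awaiting a score.

```python
# Hypothetical checklist: score is None until the rater records 1 (done) or 0 (not done)
checklist = [
    {"item": "Checks patient identity", "score": None},
    {"item": "Applies supplemental oxygen", "score": 1},
    {"item": "Interprets the EKG", "score": None},
    {"item": "Calls for help", "score": 0},
]

def pending_items(checklist):
    # A "smart" display would render only these, hiding already-scored items
    return [entry["item"] for entry in checklist if entry["score"] is None]

print(pending_items(checklist))
# ['Checks patient identity', 'Interprets the EKG']
```

The design choice is simply that the data model, not the rater, tracks what has been completed, so the screen declutters itself as the scenario unfolds.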

5 – Consider Video Scoring

For high-stakes assessment and research purposes it is often wise to consider video scoring. “High stakes” here means pass/fail criteria associated with advancement in a program, heavy weighting of a grade, licensure, or practice decisions. The ability to add multiple camera angles, as well as the functionality to rewind and play back events that occurred during the simulation, is valuable in improving the scoring accuracy of the collected data, which subsequently improves interrater reliability. Video scoring of assessments requires considerable time and effort, and thus should be reserved for the times when it is necessary.

I hope that you found these tips useful. Assessment during simulations can be an important part of improving the quality and safety of patient care!

If you found this post useful please consider subscribing to this blog!

Thanks and until next time! Happy Simulating.


Filed under assessment, Curriculum, design, scenario design

Beware of the Educational Evangelist!

They are everywhere nowadays, like characters in Pokémon Go. They seem to hang out in high concentration around new simulation centers.

You know the type. Usually they start off by saying how terrible it is for someone to give a lecture. Then they go on to espouse the virtues and values of student-centered education, claiming active participation and small-group learning are the pathway to the glory land. They often toss in terms like “flipped classroom.” And just to ensure you don’t question their educational expertise, they use a word ending in “-gogy” in the same paragraph as the phrase “evidence-based.”

If you ask them where they have been in the last six months, you find out that they probably went to a weekend healthcare education reform retreat or something equivalent…

My principal concern with today’s educational evangelist is that they are in search of a new way of doing everything. Oftentimes they recommend complete and total overhauls of an existing curriculum without a true understanding of how to efficiently and effectively improve it, and without analyzing the resources required to carry out such changes.

Further, the evangelist usually has a favorite methodology, such as “small-group learning,” “problem-based learning,” or “simulation-based learning,” to which they are trying to convert everyone through prophecy.

An easy target of all educational evangelists is the lecture, and often that is where the prophecy begins. They usually want to suggest that if lecture is happening, learning is not. As I discussed in a previous blog article, lecture is not dead; when done well, it can be quite engaging, create significant opportunities for learning, and be maximally efficient in terms of resources.

If you think about it critically, it is just as easy to do lousy small-group facilitation as it is to deliver a lousy lecture, and in either case the potential gains in learning will not be realized. The difference is that small-group facilitation, like simulation, generally takes significantly more faculty resources.

The truth is, the educational evangelist is a great person to have amongst the team. Their desire for change, generally backed by significant passion, is often a source of great energy. When harnessed, they can help advance and revise curricula to maximize and modernize various educational programs.

However, to be maximally efficient, all significant changes should undergo pre-analysis, ideally derived from a needs assessment, whether formal or informal. Secondly, it is worth having more than one opinion when deciding the prioritization of what needs to be changed in a given curriculum. While the evangelist will suggest that the entire curriculum is broken, a more balanced review often reveals areas of the curriculum that would benefit from such an overhaul and other aspects that are performing just fine.

When you begin to change aspects of the curriculum, start small and measure the change if possible. Moving forward on a step-by-step basis will usually produce a far better revised curriculum than an approach that “throws the baby out with the bathwater.” Mix the opinions of the stalwarts of the existing curriculum methods with those of the evangelists. Challenge existing axioms, myths, and entrenched beliefs like “Nothing can replace the real patient for learning….” If this process is led well, it will allow the decision-making group to reach a considerably more informed position that leads to sound decisions, change strategies, and appropriately guided investments.

So if you’re the leader or a member of a team responsible for a given curriculum of healthcare instruction and confronted with the educational evangelist, welcome their participation. Include them in the discussions moving forward with a balanced team of people have them strive to create an objective prioritization of the needs for change. This will allow you to make excellent decisions with regard to new technologies and/or methods that you should likely embrace for your program. More importantly you will avoid tossing out the things that are working and are cost efficient.


Filed under Curriculum, Uncategorized

Learning from Simulation – Far more than the Debriefing

Most people have heard someone say, “In simulation, debriefing is where all of the learning occurs.” I frequently hear this when running faculty development workshops and programs, which isn’t as shocking as hearing it espoused at national and international meetings in front of large audiences! It is a ridiculous statement, without a shred of evidence or a common-sense reason to think it would be so. Sadly, I fear it represents an unfortunate instructor-centered perspective and/or a serious lack of appreciation for the potential learning opportunities provided by simulation-based education.

Many people academically toil over the technical definition of the word feedback and try to contrast it with a description of debriefing as if the two are juxtaposed. They often present it as if one is good and the other is bad. There is a misguided notion that feedback means telling someone, or lecturing at someone, to get a point across. I believe that is a narrow interpretation of the word. I think there are tremendous opportunities for learning from many facets of simulation that may be considered feedback.

Well-designed simulation activities provide targeted learning opportunities, part of which is experiential, and sometimes immersive, in some way. I like to think of debriefing as one form of feedback that a learner may encounter during simulation-based learning, commonly occurring after engaging in some sort of immersive learning activity or scenario. Debriefing can be special if done properly: it allows the learner to “discover” new knowledge, reinforce existing knowledge, or even have corrections made to inaccurate knowledge. But no matter how you look at it, at the end of the day it is a form of feedback that can lead to, or contribute to, learning. To think that the debriefing is the only opportunity for learning is incredibly short-sighted.

There are many other forms of feedback and learning opportunities that learners may experience in the course of well-designed simulation-based learning. The experience of the simulation itself is ripe with opportunities for feedback. If a learner puts supplemental oxygen on a simulated patient whose monitor shows hypoxia via the pulse oximetry measurements and the saturations improve, that is a form of feedback. Conversely, if the learner(s) forget to provide the supplemental oxygen and the saturations or other signs of respiratory distress continue to worsen, that can be considered feedback as well. The latter two examples are what I refer to as intrinsic feedback, as they are embedded in the scenario design to provide clues to the learners, as well as to approximate what might happen to a real patient in a similar circumstance.

Intrinsic feedback is only beneficial if it is recognized and properly interpreted by the learner(s), either while they are actively involved in the simulated clinical encounter or, failing that, in the debriefing. The latter should be employed when the intrinsically designed feedback is important to accomplishing the learning objectives germane to the simulation.

There are still other forms of feedback that likely contribute to learning and are not part of the debriefing. In a simulated learning encounter involving several learners, the delineation of duties and the acceptance or rejection of treatment suggestions are all potentially ripe for learning. If a learner suggests a therapy that is embraced by the team, or perhaps stimulates a group discussion during the course of the scenario, the resultant conversation and ultimate decision can significantly add to the learning of the involved participants.

Continuing that same idea, perhaps the decision to provide, withhold, or check the dosage of a particular therapy prompts a learner to consult a reference that provides valuable information and solidifies a piece of knowledge in the mind of the learner. The learner may announce such findings to the team while the scenario is still underway, thereby sharing the knowledge with the rest of the treatment team. Voilà… more learning that may occur outside of the debriefing!

Finally, I believe there is an additional source of learning that occurs outside of the debriefing. Imagine a learner experiences something, or becomes aware of something, during a scenario that causes them to realize they have a knowledge gap in that particular area. Maybe they forgot a critical drug indication, dosage, or adverse interaction. Perhaps something simply stimulated their natural curiosity. It is possible that those potential learning items are not covered in the debriefing because they are not core to the learning objectives. This may well stimulate the learner to engage in self-study to close that perceived knowledge gap. What? Why yes, more learning outside of the debriefing!

In fact, we hope that this type of stimulation occurs on a regular basis as part of the active learning prompted by the experiential aspects of simulation. Such individual stimulation of learning is identified in the seminal publication of Dr. Barry Issenberg et al. in Vol. 27 of Medical Teacher (2005) describing key features of effective simulation.

So hopefully I have convinced you, or reinforced your belief, that the potential for learning from simulation-based education spans far beyond the debriefing. Please recognize that the statement quoted above likely reflects a serious misunderstanding and underappreciation of the learning that can and should be considered with the use of simulation. Such short-sightedness can have huge impacts on the efficiency and effectiveness of simulation, beginning with curriculum and design.

So the next time you are incorporating simulation into your educational endeavor, sit back and think of all of the moments during which learning may occur. Of course, the debriefing is one such activity during which we hope learning will occur. Thinking beyond the debriefing and designing for the bigger picture of potential learning that participants can experience will likely help you achieve positive outcomes from your overall efforts.


Filed under Uncategorized

Simulation Curriculum Integration via a Competency Based Model

One of the challenges for healthcare education is its reliance on random opportunity: clinical events must happen to present themselves to a given group of learners as part of the pathway of a structured learning curriculum. This uncertainty of exposure, and of the eventual development of competency, is part of what keeps our educational systems time-based, which is fraught with inefficiencies by its very nature.

Simulation curriculum design at present often embeds simulation in a rather immature development model: an “everybody does all of the simulations” approach. If there is a collection of core topics that are part and parcel of a given program, combined with a belief, or perhaps proof, that simulation is the preferred modality for the topic, then those exposures make sense. But let’s move beyond the topics or situations that are best experienced by everyone.

In a model of physician residency training, for example, curriculum planners “hope” that over the course of a year a given first-year resident will adequately manage an appropriate variety of cases. The types of cases, often categorized by primary diagnosis, are embedded in some curriculum accreditation document under the label “Year 1.” For the purposes of this discussion, let’s change the terminology from Year 1 to Level 1 as we look toward the future.

What if we had a way to know that a resident managed the cases, and managed them well, for Level 1? Perhaps one resident could accomplish the Level 1 goals in six months, and do it well. Let’s call that resident Dr. Fast. This could lead to a more appropriate advancement of the resident through the training program, as opposed to advancing by the date on the calendar.

Now let’s think about it from another angle. Another resident didn’t quite see all of the cases, or the variety of cases needed, but manages things well when they do. Let’s call them Dr. Slow. A third resident of the program is managing an adequate number and variety of cases, but is having quality issues. Let’s refer to them as Dr. Mess. An honest assessment of the current system is that all three residents will likely be advanced to higher levels of responsibility based on the calendar, without a substantial attempt at understanding or remediating the underlying deficiencies.

What are the program or educational goals for Drs. Fast, Slow and Mess? What are the differences? What are the similarities? What information does the program need to begin thinking in this competency based model? Is that information available now? Will it likely be in the future? Does it make sense that we will spend time and resources to put all three residents through the same simulation curriculum?

While many operational, cultural, and historical models and working conditions present barriers to such a model, thinking about a switch to a competency-based model forces one to think more deeply about the details of the overall mission. The educational methods, assessment tools, and exposure to cases and environments should be explored for both efficiency and effectiveness. Ultimately, the outcomes we are trying to achieve for a given learner progressing through a program would be unveiled. Confidence in the underlying data will be a fundamental component of a competency-based system. In this simple model, the two functional data points are the quantity and the quality of a given learner’s opportunities to learn and demonstrate competence.

This sets up intriguing possibilities for the embedding of simulation into the core curriculum to function in a more dynamic way and contribute mightily to the program outcomes.

Now think of the needs of Dr. Slow and Dr. Mess. If we had insight combined with reliable data, we could customize the simulation pathway for each learner to maximally benefit their progression through the program. We may need to provide supplemental simulations to Dr. Slow, allowing practice with a wider spectrum of cases, or with a specific diagnosis, category of patient, or situation to which they need exposure. Ideally, this additional exposure providing deliberate practice opportunities could also include learning objectives to help them increase their efficiency.

In the case of Dr. Mess, the customization of the simulation portion of the curriculum provides deliberate practice opportunities with targeted feedback directly relevant to their area(s) of deficiency, i.e., a remediation model. This exposure for Dr. Mess could be constructed to present a certain category of patient, or perhaps a situation, that they are reported to handle poorly. The benefit in the case of Dr. Mess is that the simulated environment can often be used to tease out the details of the underlying deficiency in a way that learning in the actual patient care environment cannot.

Lastly, recall that in our model Dr. Fast may not require any “supplemental” simulation, thus freeing up scarce simulation resources and the human resources necessary to conduct it. This is part of the gain in efficiency that can be realized through a competency-based approach to incorporating simulation into a given curriculum.
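The decision logic for our three hypothetical residents can be sketched from the two data points identified earlier, quantity and quality. Everything here is illustrative: the names, thresholds, and rating scale are assumptions for the sketch, not a real program's criteria.

```python
# Assumed Level 1 targets (hypothetical values for illustration only)
REQUIRED_CASES = 20   # minimum number of cases managed
QUALITY_TARGET = 3.0  # minimum mean quality rating on an assumed 1-5 scale

def next_step(cases_managed, mean_quality):
    if cases_managed >= REQUIRED_CASES and mean_quality >= QUALITY_TARGET:
        return "advance"                   # Dr. Fast: both targets met
    if mean_quality >= QUALITY_TARGET:
        return "supplemental simulations"  # Dr. Slow: quality fine, exposure short
    return "targeted remediation"          # Dr. Mess: quality deficiency

print(next_step(22, 4.1))  # advance
print(next_step(12, 3.8))  # supplemental simulations
print(next_step(25, 2.2))  # targeted remediation
```

The point of the sketch is simply that once the program trusts those two data points, each resident's simulation pathway falls out of the data rather than out of the calendar.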

Considering a switch to a competency-based curriculum in healthcare education can be overwhelming simply because of the number of operational and administrative challenges. However, using competency-based implementation as a theoretical model can help envision a more thoughtful approach to curricular integration of simulation. If we move forward in a deliberate attempt to utilize simulation in a more dynamic way, it will lead to increases in efficiency and effectiveness, along with better stewardship of scarce resources.

 


Filed under Uncategorized

Great Debriefing Should Stimulate Active Reflection

Debriefing, in simulation as well as after clinical events, is a common method of continuing the learning process by helping participants garner insight from their participation in the activity. It is postulated, and I believe, that part of the power of this “conversation” we call debriefing comes when the participant engages in active reflection. The onus is on the debriefer to create an environment where active reflection occurs.

One of the most effective ways to achieve this goal is through questions. When participants are asked questions about the activity being debriefed, they are forced to replay the scenario or activity in their minds. I find it helpful to begin with rather open-ended, broader questions for two reasons. The first is to ensure the participant(s) are ready to proceed. Secondly, asking broader questions at the beginning, such as “Can you give me a recap of what you just experienced?”, helps force the participant to think about the activity in a longitudinal way. Gradually the questions become much more specific, allowing the participant to understand cause-and-effect relationships between their performance in the activity and the outcomes of the case.

Another thing to consider when debriefing multiple people simultaneously is that while one participant recounts the activity, the other participants are actively thinking about their own recollection of it. Thus active reflection is again triggered. It is quite natural for the other participants not only to think about the activity, but to actively form their own thoughts in a compare-and-contrast type of cognitive activity, weighing their own recollection of the activity against that of the person answering the initial question.

Questions should be focused in a way that lets the debriefer guide the conversation along a structured pathway that allows the learning objectives to be met. Further, when one develops good debriefing habits through the use of questioning, it limits the possibility of the debriefing devolving into a "mini-lecture."

I believe the Structured and Supported debriefing model, created by my colleague Dr. John O'Donnell along with collaborators, provides the best framework by which to structure the debriefing. His use of the GAS mnemonic has allowed the model to be introduced to novice and expert debriefers alike, giving them an easily learned, structured framework for their debriefing work. We have successfully introduced this model across many cultures and in at least five different languages.

Worksheets or job aids can be prepared prior to scenario commencement with example questions that parallel the learning objectives. Supplementing the job aid with additional notes during the performance of the scenario can help the debriefer recall the important points of discussion at the time of debriefing, and the preformed questions can serve as gentle reminders of topics that must be covered to achieve a successful learning outcome.

So a challenge to you: the next time you conduct a debriefing, keep asking yourself in the back of your mind, "How can I best prompt my participants to engage in active reflection on the activity being debriefed?" In addition, I would recommend that you practice debriefing as often as you can! Debriefing is an activity that improves over time with experience and deliberate practice.


The Contract Essential to the Parties of Simulation

If you think about it, an agreement needs to exist between those who facilitate simulation and those who participate. Facilitator, for the purposes of this discussion, refers to those who create and execute simulation-based learning encounters. Sometimes the agreement is more formal; other times it is more implied. This phenomenon has been described in many ways over the years, branded with such descriptors as the fiction contract, the psychological contract, or the learning contract.

Why does this need to be the case? A contract or agreement is generally called for when two or more parties engage in some sort of collaborative relationship to accomplish something. Oftentimes these types of contracts spell out the responsibilities of the parties involved. If you think about simulation at a high level, the facilitator side is agreeing to provide learning activities using simulation to help the participant(s) become better healthcare providers. The participants are engaged, at the highest level, because they want to become better healthcare providers. While not attempting a comprehensive discussion, let's explore this concept and the responsibilities of each party a bit further.

Facilitators design simulation activities with a variety of tools and techniques that are not perfect imitators of actual healthcare. They craft events in which participants, to a greater or lesser extent, immerse themselves, or at a minimum simply participate. Some of these activities are designed to contain a diagnostic mystery; some demand that specific knowledge, skills, and attitudes be known or developed to successfully complete the program. Facilitators also put participants in situations where they must perform in front of others, which can create feelings of vulnerability. So all told, the role of the facilitator comes with enormous responsibility.

Facilitators are also asking the participants to imagine that part of what they are engaging in is a reasonable facsimile of what one may encounter when providing actual healthcare. Therefore another tenet of the agreement is that the facilitator will provide an adequate orientation to the simulation environment, pointing out what is more and less real, including the role that the participant may be playing and how their role interacts with the environment outside of the simulation, if at all (i.e., defining any communications that may occur during the simulation between the participants and the facilitator).

Facilitators trained in simulation know that mistakes occur, sometimes due to a lack of knowledge, incorrect judgment, or unrelated issues such as a poorly designed simulation. Facilitators thereby commit to judging the participant on nothing other than their performance during the simulation. While diagnostic conundrums are inevitable in many types of simulations, the facilitator should not trick or mislead the participant in any way that does not directly contribute to helping the participant(s) improve. The facilitator must attempt to use the time of the participants wisely and responsibly.

The participant shares responsibilities as a part of the agreement as well. Participants agree to a commitment to become better healthcare providers through continuous learning and improvement. This is inherent in a professional, but there are likely good reasons to remind participants of this important premise.

Participants must agree to the use of their time to participate in the simulation. The participants are also agreeing that they understand the environment of the simulation is not real, and that varying levels of realism will be employed to help them perform in the simulation. But to be clear, they agree to this tenet predicated on the trust that the facilitators are having them experience simulations that are relevant to what they do, with an underlying commitment to help them get better. In simulations involving multiple participants, they must also agree not to judge others on what occurs in the simulation, and to keep the personal details of what they experience in the simulation confidential.

So in closing, successful simulation and other immersive learning environments require an agreement of sorts between those who create and execute the simulation-based learning environments and those who participate in them. Each party brings a set of responsibilities to the table to help ensure a rich learning environment with appropriate professional decorum and a commitment to improvement. The agreements range from implicit to explicit, but when they exist and are adhered to, they allow the value of simulation to be realized in improving the care ultimately delivered to our patients. After all, isn't that our highest goal?


Simulation Programs Should Stop Selling Simulation

Whatever do I mean? Many established simulation programs believe that their value lies in creating simulation programs through which people attain knowledge and skills, or perfect aspects of the care needed to effectively treat patients. All of that is true, obviously. However, I believe that the true value of many established simulation programs is in the deep educational infrastructure that they provide to the institution with which they are affiliated. Whether that expertise is in the project management of educational design, educational design itself, housing the cadre of people who are truly interested in education, or the operational scheduling and support needed to pull off a major program, these examples are the true understated value of many simulation programs.

Simulation programs tend to attract a variety of people who are truly interested in education. While I don't think that everyone who is passionate about teaching in healthcare needs to be an educational expert, I do believe it is important that the people involved in the development and deployment of innovative education are truly interested in teaching. Many hospitals and universities rely on subject matter experts to conduct their education programs who may or may not have the desire, interest, or capabilities needed for teaching.

Many people who are passionate about teaching in healthcare have a particular topic or two that they like to teach, but lack the skills of critical analysis and the deeper knowledge of educational design principles needed to parse their education efforts into the appropriate methods for maximal efficiency in the uptake of the subject matter. This very factor is likely why we still rely on good old-fashioned lecture as a cornerstone of healthcare education, whether in the schools or in the practicing healthcare arena. Not that I believe there is anything wrong with lecture; I just believe that it is often overused, often done poorly, and often done in a way that does not encourage active engagement or active learning between the lecturer and the participants.

Simulation programs are often the water cooler of many institutions, around which people who are truly interested in, and may have additional expertise in, education tend to congregate. The power of this proximity creates an environment rich for brainstorming, enthusiasm for pushing the envelope of capabilities, and continuous challenge to improve the methods by which we undertake healthcare education.

Simulation programs that have curricular development capabilities often have project management expertise as well as operational expertise to create complex educational solutions. This combination of skills can be exceptionally valuable to the development of any innovative education program in healthcare whether or not simulation is part of the equation.

Many times healthcare education endeavors are undertaken by one or two people who quickly become overwhelmed without the supporting infrastructure it takes to put on educational activities of higher complexity than a simple lecture. Oftentimes this supporting technology or set of resources resides inside the walls of "simulation centers" or programs. By not providing access to these para-simulation resources to the rest of the institution, I argue that simulation programs are selling themselves short.

If you consider educational outcomes from a leadership perspective (i.e., CEO, dean, etc.), leaders are much less concerned about how the educational endeavor occurred and far more focused on the outcomes. So while there are many topics and situations that are perfect for simulation proper, we all know there is a larger need for educational designs of greater complexity than a lecture that may not involve simulation.

If a given simulation program partners with those trying to create complex educational offerings that don't directly involve simulation, but that are good for the mission of the overall institution with which it is aligned, it is likely to endear the program to the senior leadership team and create awareness of the need to continue or expand its support.

If you sit back and think about it, isn’t that an example of great teamwork?


Ebola and Fidelity

Those of you who are used to my normal musings and rants against perfecting the "fidelity" and realism used in simulations might be surprised to hear me speak of examples of simulations where perfect or near-perfect fidelity does matter.

Various association social forums are abuzz with people talking about simulations involving personal protective equipment in light of the unfolding Ebola crisis. It is important to differentiate this type of simulation and recognize the importance of re-creating the aspects of the care environment that are the subject of the education in the most highly realistic way available. In this case we are probably talking about using the actual personal protective equipment (PPE) that will be used in the care of a patient suspected of having Ebola at any given facility.

This is a high-stakes simulation where interaction with the actual equipment that one will be using in the care environment is germane to a successful outcome. In this case the successful outcome is keeping the healthcare worker safe when caring for a patient with a communicable disease. More broadly, this falls under the umbrella of simulation for human factors.

Human factors in this context is defined as follows: "In industry, human factors (also known as ergonomics) is the study of how humans behave physically and psychologically in relation to particular environments, products, or services." (source: searchsoa.techtarget.com/definition/human-factors)

Other examples of when human factors types of simulation are employed include product testing, equipment familiarization, and environmental design testing. For instance, if we are evaluating the number of errors that occur in the programming of a specific IV pump in stressful situations, it would be important to have the actual IV pump or a highly realistic operational replica of it. This is in contrast to using the actual hospital IV pump in a scenario focused on the acute resuscitation of a sepsis patient, but not specifically on the programming of the IV pump. In the latter example the IV pump is included as a prop in the scenario, rather than being the subject of the learning objectives and of inquiry into the safety surrounding its programming.

So yes, world, even I fully believe that there are some examples of simulations where a re-creation of highly realistic items or elements is part and parcel of a successful simulation. The important thing is that we continuously match the learning objectives and educational outcomes to the elements included in our simulations, so that we remain most efficient and efficacious in our designs of simulation-based education encounters. What I continue to discourage is the simple habit of spending intense time and money on highly realistic re-creations of the care environment when they are not germane to the learning objectives and educational outcomes.
