Category Archives: scenario design

The First Four Steps of Healthcare Simulation Scenario Design

How can you make your scenario design process more consistent and efficient? One way is by following a step-by-step method to create your masterpieces!

In this post I cover the first four steps of a proven scenario design process.
There are four core steps that must be done in order. After the first four are accomplished you can branch out and be a little bit more variable in your approach to scenario design.


Step One: Pick A Topic

Picking a topic may seem like common sense but there is a lot to think about.

In healthcare simulation we have many topics to choose from. But in step one we want to be a little bit specific and figure out what the major topic is that will be covered. We may be covering physiologic concepts, diagnostics, or treatment, where people are going to be making critical decisions, ordering medications and other therapy. Or perhaps our primary focus is going to be on team training: teamwork, communications, team leadership. You get to pick!

Step Two: Define the Learner(s)

This is really important because in order to go to the next step, which is designing the learning objectives, we have to understand our learner population. For example, what do you expect of a fourth-year medical student in terms of being able to evaluate and treat a simulated patient complaining of chest pain? Now contrast that with learners who are in their second year of medical school and haven't had any clinical experience. In other words, when we take the same topic and apply it to two different populations, our expectations and what we are going to be evaluating are very different.

Step Three: Design the Learning Objectives

This is where you want to go into detail, great painstaking detail, about what you’re trying to accomplish with the simulation scenario. It is very important to take time on this step. Many people tend to gloss over this step which can create confusion later.

Let's take a topic example: asthma in the emergency department. When you think about asthma in the emergency department there could be many subtopics or areas from which to choose. The scenario could focus on competence in managing a minor asthma attack, a first-ever asthma attack, management of chronic asthma, or a major, life-threatening attack.

Carefully consider what you want from the learner group you defined in step two. Do you want them to diagnose? To treat? To critically compare and contrast the case with other causes of shortness of breath in an acute patient? You get to choose!

Perhaps we want to focus on the step-by-step history presentation or the physical exam or maybe we want to see the learners perform treatment. Or maybe we want to see the overall management or the critical thinking that goes on for managing asthma in the emergency department. There are many possibilities, largely driven by your intended learner group demographics.

So, in other words, we're taking the big topic of asthma and coning it down to answer the question of what exactly we want our learners to accomplish by the end of the scenario. We can't just assume that what is supposed to happen in the real clinical environment will or should happen in the simulation environment. That rarely works. We actually want to later engineer the story and situation to allow us to focus on the learning objectives.

Step Four: Define the Assessment Plan

How are you going to assess that each objective defined in step three was accomplished? That is the fundamental thought process for step four.

What are you, the creator of this simulation scenario, going to be watching for as the participants do their thing? What are you going to focus your attention on so that you can bring it into the debriefing? What are you picking up on as you fill out assessment tools?

Define your assessment plan with specificity about what you're looking for. This is different from designing the assessment tools, which can come later, or perhaps not at all. It is important to remember that every simulation is an assessment of sorts. See the previous blog post on this!

This doesn't mean that every simulation needs an assessment tool like a checklist, rating scale, or formal grading scheme. It simply refers to considering how to focus the facilitating faculty member, or teacher, or whatever you call them, who is observing the simulation. Remember that to help the learner(s) get better, the faculty need to be focused on certain things to ensure that the goals of the scenario, tied to the topic we chose in step one, are accomplished for our selected learner group.
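To tie the four steps together, here is a minimal sketch in Python, purely illustrative (the class and field names are my own invention, not part of any simulation standard or toolkit), of how the outputs of steps one through four could be captured in one structured template before any story is written:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LearningObjective:
    """One specific thing the learners should accomplish (step three)."""
    description: str   # what the learners should do or demonstrate
    how_assessed: str  # step four: what the observing faculty will watch for

@dataclass
class ScenarioDesign:
    """Captures the first four design steps before any story is written."""
    topic: str          # step one: the major topic (clinical or team-focused)
    learner_group: str  # step two: who the learners are and their level
    objectives: List[LearningObjective] = field(default_factory=list)

# Hypothetical example using the asthma-in-the-ED topic discussed above
asthma_scenario = ScenarioDesign(
    topic="Acute asthma in the emergency department",
    learner_group="Fourth-year medical students with prior clinical experience",
    objectives=[
        LearningObjective(
            description="Performs a focused history and exam for acute shortness of breath",
            how_assessed="Observer notes which exam elements occur and brings gaps to the debriefing",
        ),
        LearningObjective(
            description="Initiates appropriate bronchodilator therapy",
            how_assessed="Observer records whether and when the treatment is ordered",
        ),
    ],
)
```

The only point of a structure like this is that every objective carries its own assessment note, so step four is never an afterthought.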

Lastly, I want to point out that you should notice something missing: the story!

The story comes later. Everybody wants to focus on the story because the story is fun. It's often related to what we do clinically, and replicating those things brings in the theatrics of simulation! But what we really want is to use the theatrics of simulation to cause the actors on the stage (the participants) to do some activity. This activity gives us the situation on which to focus our observations and assess the performance. This in turn allows us to accomplish the learning objectives of the scenario and helps the participants improve for the future!

Until next time, Happy Simulating!


Filed under Curriculum, design, scenario design, simulation, Uncategorized

Don’t be Confused! Every Simulation is an Assessment

 

Recently, as I lecture and conduct workshops, I have been asking people who run simulations how often they do assessments with their simulations. The answers are astounding. Every time, a few too many people report that they perform assessments less than 100% of the time that they run their simulations. Then they are shocked when I tell them that they do assessments EVERY TIME they run their simulations.

While some of this may be a bit of a play on words, careful consideration should be given to the fact that each time we run a simulation scenario we must be assessing the student(s) who are the learners. If we are going to deliver feedback, whether intrinsic to the design of the simulation or by promoting discovery during a debriefing process, at some point we had to decide what we thought they did well and identify areas that need improvement. To be able to do this, we had to perform an assessment.


Now let's dissect a bit. Many people tend to equate the word assessment with some sort of grade assignment. Classically we think of a test that has some threshold for passing or failing, or that contributes in some way to determining whether someone has mastered certain material. Often this is part of the steps one needs to move on, graduate, or perhaps obtain a license to practice. The technical term for this type of assessment is summative. People in healthcare are all too familiar with it!

Other times, however, assessments are made periodically with the goal NOT of deciding whether someone has mastered something, but of figuring out what one needs to do to get better at what they are trying to learn. The technical term for this is formative assessment. Stated another way, formative assessment is used to promote more learning, while summative assessment evaluates whether something was learned.

Things can get even more confusing when assessment activities have components or traits of both types. Nonetheless, less important than the technical details is the self-realization and acceptance by simulation faculty members that every time you observe a simulation and then lead a debriefing, you are conducting an assessment.

Such realization should allow you to understand that there is really no such thing as a non-judgmental debriefing or non-judgmental observation of a simulation-based learning encounter. All goal-directed debriefing MUST be predicated upon someone's judgement of the performance of the participant(s) in the simulation. Otherwise you cannot provide, and optimally promote, discovery of the areas that require improvement, and/or understanding of the topic, skills, or decisions that were carried out correctly during the simulation.

So, if you are going to take the time and effort to conduct simulations, please be sure to understand that assessment, and rendering judgement on performance, is an integral part of the learning process. Once this concept is fully embraced by the simulation educator, greater clarity can be gained in ways to optimize assessment vantage points in the design of simulations. Deciding the assessment goals with some specificity early in the process of scenario design leads to better decisions about the associated design elements of the scenario. Optimizing scenario design to enhance "assess-ability" will help you whether you are applying your assessments in a formative or summative way!

So, go forth and create, facilitate and debrief simulation-based learning encounters with a keen fresh new understanding that every simulation is an assessment!

Until Next Time Happy Simulating!


Filed under assessment, Curriculum, design, scenario design, simulation

Three Things True Simulationists Should NEVER Say Again

From Wiktionary: Noun. simulationist (plural simulationists) An artist involved in the simulationism art movement. One who designs or uses a simulation. One who believes in the simulation hypothesis.


 

After attending, viewing or being involved in hundreds if not thousands of simulation lectures, webinars, workshops, briefings and conversations, there are a few things I hear that make me cringe more than others. In this post I am trying to simmer it down to the top three things that I think we should ban from the conversations and vocabularies of simulationists around the globe!

1. Simulation will never replace learning from real patients!: Of course it won't! That's not the goal. In fact, in some aspects simulation offers advantages over learning on real patients. And doubly in fact, real patients have some advantages too! STOP being apologetic for simulation as a methodology. When this is said, it essentially defers to real patients as some sort of holy grail or gold standard against which to measure. CRAAAAAAAZY…… Learning on real patients is but one methodology by which to attack the complex journey of teaching, learning and assessing the competence of a person or a team of people who are engaged in healthcare. All the methodologies associated with this goal of education have their own advantages, disadvantages, capabilities and limitations. When we agree with people and say simulation will never replace learning from real patients, or allow that notion to go unchallenged, we are doing a disservice to the big picture of creating a holistic education program for learners. See previous blog post on learning on real patients.

2. In simulation, debriefing is where all of the learning occurs!: You know you have heard this baloney before. Ahhhhhhhhhhhhh such statements are purely misinformed, not backed up by a shred of evidence, kind of contrary to COMMON SENSE, and demeaning to the participants as well as the staff and faculty who construct such simulations. The people who make this statement are still stuck in a world of instructor centricity. In other words, they are saying, "Go experience all of that… and then when I run the debriefing the learning will commence." The other group of people are trying to hard-sell you some training on debriefing and make you think it is some mystical power held by only a certain few people on the planet. Kinda cra' cra' (slang for crazy) if you think about it.

When someone articulates that learning cannot occur during the simulation itself, they are confirming that they are quite unthoughtful about how they construct the entire learning encounter. It also hints at the fact that they don't take the construct of the simulation itself very seriously. The immersive experience that people are exposed to during the simulation and before the debriefing can and should be constructed in a way that provides built-in feedback, observations, and experiences that contribute to a feeling of success and/or recognition of the need for improvement. See previous blog post on learning beyond debriefing.

3. Recreation of reality provides the best simulation! [or some variant of this statement]: When I hear this concept even alluded to, I get tachycardic and diaphoretic, and my pupils dilate. My fight-or-flight nervous system gets fully engaged, and trust me, I have no plans on running. 😊

[disclaimer on this one: I'm not talking about the type of simulation that is designed for human factors, critical environmental design decisions, or packaging/marketing, etc., which do depend upon a close replication of reality.]

This is one of the signs of a complete novice and/or a misinformed person, or sometimes groups of people! If you think it through, it is a rather ludicrous position. Further, I believe trying to conform to this principle is one of the biggest barriers to success for many simulation endeavors. People spend inordinate amounts of time trying to put their best theatrical foot forward to re-create reality. Often what is actually occurring is an expansion of the time to set up the simulation, the time to reset the simulation, and, dramatically, the time to clean up from the simulation. (All of the aforementioned time intervals increase the overall cost of the individual simulation, thereby reducing its efficiency.) While I am a huge fan of loosely modeling scenarios off of real cases in an attempt to create an environment with some sense of familiarity to the clinical analog, I frequently see people going to extremes trying to re-create details of reality.

We have hundreds and thousands of design decisions to make for even moderately complex scenarios. Every decision we make to include something that imitates reality has the potential to cause confusion if not carefully thought out. It is easy to introduce confusion in attempts to re-create reality, since learners engage in simulation with a sense of hyper-vigilance that likely does not occur in the same fashion in the real clinical learning environment. See previous blog post on the cognitive third space.

If you really think about it, the simulation is designed to have people perform something to allow them to learn, as well as to allow observers to form opinions about the things that the learner(s) did well and those areas that can be improved upon. Carefully selecting how a scenario unfolds, and/or the equipment that is used to allow this performance to occur, is part of the complex decision-making associated with creating simulations. The scenario should be engineered to exploit the areas, actions, situations or time frames that are the desired focal points of the learning and assessment objectives. Attention should be paid to the specifics of the learning and assessment objectives to ensure that the included cache of equipment and/or environmental accoutrements is selected to minimize the potential for confusion and to create the most efficient pathway for the assessment that contributes to improving the learning.

Lastly, let's put stock into the learning contract we are engaging in with our learners. We need to treat them like adult learners. (After all, everybody wants to throw in the phrase "adult learning principles"…. Haha).

Let’s face it: A half amputated leg of a trauma patient with other signs and symptoms of hemorrhagic shock that has a blood-soaked towel under it is probably good enough for our adult learners to get the picture and we don’t actually need blood shooting out of the wound and all over the room. While the former might not be as theatrically sexy, the latter certainly contributes to the overall cost (time and resource) of the simulation. We all need to realistically ask, “what’s the value?”

While my time is up for this post, and I promised to limit my comments to only three, I cannot resist sharing with you two other statements that were in the running for the top three. The first is "If you are not video recording your scenarios you cannot do adequate debriefing," and the second is "The simulator should never die." (Maybe I'll expand the rant about these and others in the future 😉).

Well… That’s a wrap. I’m off to a week of skiing with family and friends in Colorado!

Until next time,

Happy Simulating!


Filed under Curriculum, debriefing, scenario design, simulation

Recreating Reality is NOT the goal of Healthcare Simulation

Discussing the real goals of healthcare simulation as they relate to the education of individuals and teams. Avoid the tendency to put the primary focus on recreating reality; instead, focus on providing an experience adequate to allow deep reflection and learning. This will help you achieve more from your simulation efforts!

 


Filed under scenario design

Embedding Forcing Functions into Scenario Design to Enhance Assessment Capabilities

Many people design scoring instruments for simulation encounters as part of an assessment plan. They are used for various reasons, ranging from tools to help provide feedback, to research purposes, to high-stakes pass/fail criteria. Enhancing the ability of assessment tools to function as intended is often closely linked to scenario design.

Oftentimes checklists are employed. When designing checklists, it is critical that you ask the question "Can I accurately measure this?" It is easy to design checklists that seem intuitively simple and filled with common sense (from a clinical perspective) but that cannot actually measure what you think you are evaluating.

It is quite common to see checklists that have items such as "Observes Chest Rise"; "Identified Wheezing"; "Observed Heart Rate". During faculty training sessions focusing on assessment tool development, we routinely run scenarios that contain deliberate errors of omission. Some of these items are routinely scored, or "checked," as completed anyway. Why is this? Part of the answer is that we interject our own clinical bias into what we think the simulation participant is doing or thinking. This raises the possibility that we are not measuring what we intend to measure, or assess.

Consider two checklist items for an asthma scenario: one is "Auscultates Lung Sounds"; the other is "Correctly Interprets Wheezing". The former we can reasonably infer by watching the scenario and seeing the participant listen to the lung fields on the simulator. The latter, however, is more complicated. We don't know whether the participant recognized wheezing just by watching them listen to the lungs. Many people would check yes for "Correctly Interprets Wheezing" if the next thing the participant did was order a bronchodilator. This would be an incorrect assumption, but it could be rationalized in the mind of the evaluator because of a normal clinical sequence and context.

However, the assumption may be completely wrong: the participant may never have interpreted the sounds as wheezing, but ordered a treatment because of a history of asthma. Or what would happen if the bronchodilator was ordered before auscultation of the lungs? What you have, by itself, is an item on your checklist that seems simple enough but is practically unmeasurable through simple observation.

This is where linking scenario design and assessment tools comes in handy. If the item you are trying to assess is a critical element of the learning and assessment plan, perhaps something in the simulation, in the transition out of it, or during the debriefing can cause the information to be made available so that the item can be assessed more correctly, or accurately.

A solution enabling real-time assessment during the flow of the scenario is possible within the design of the scenario. Perhaps insert a confederate playing a nurse caring for the patient, scripted to ask "What did you hear?" after the participant auscultates the lung fields. This forces the data to become available during the scenario for the assessor to act upon. Hence the term, forcing function.

Another possibility would be to have the participant complete a patient note on the encounter and evaluate their recording of the lung sounds. Yet another would be to simply have the participant write down their interpretation of the lung sounds, or to embed the question into the context of the debriefing. Any of these methods would provide a more accurate evaluation of the assessment item "Correctly Interprets Wheezing".
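To make this design-review habit concrete, here is a small, purely hypothetical sketch in Python (the names are my own, not from any assessment framework) of how each checklist item could be flagged as either directly observable or needing a forcing function:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChecklistItem:
    """A single assessment item, flagged for how its evidence becomes observable."""
    behavior: str                           # what the item claims to measure
    directly_observable: bool               # can a rater score this just by watching?
    forcing_function: Optional[str] = None  # scenario or debriefing element that surfaces the data

items = [
    ChecklistItem(
        behavior="Auscultates lung sounds",
        directly_observable=True,  # the rater can see the participant listen to the lung fields
    ),
    ChecklistItem(
        behavior="Correctly interprets wheezing",
        directly_observable=False,  # ordering a bronchodilator does not prove interpretation
        forcing_function="Embedded nurse asks 'What did you hear?' after auscultation",
    ),
]

# Design-review pass: flag any item that cannot be scored by observation alone
for item in items:
    if not item.directly_observable and item.forcing_function is None:
        print(f"Needs a forcing function or debriefing probe: {item.behavior}")
```

A pass like this simply forces the designer to state, for every item, where the evidence will come from before the scenario is ever run.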

While not trying to create an exhaustive list of methods, I am trying to accomplish two things in this post. One is to have you critically evaluate your ability to accurately assess something that occurs within a scenario, with higher validity. The second is to recognize that the creation of successful, reliable and valid assessment instruments is linked directly to scenario design. This can happen during the creation of the scenario, or as a modification to an existing scenario to enhance assessment capabilities.

This auscultation item serves as a simple example. Recognizing the challenges of accurately assessing a participant's performance is important for developing robust, valid and reliable tools. The next time you see or design a checklist or scoring tool, ask yourself….. Can I really, truly evaluate that item accurately? If not, can I modify the scenario or debriefing to force the information to be made available?

 


Filed under scenario design