
Three Things True Simulationists Should NEVER Say Again

From Wiktionary: simulationist (noun; plural simulationists): 1. An artist involved in the simulationism art movement. 2. One who designs or uses a simulation. 3. One who believes in the simulation hypothesis.


After attending, viewing, or being involved in hundreds if not thousands of simulation lectures, webinars, workshops, briefings and conversations, there are a few things I hear that make me cringe more than others. In this post I am trying to simmer it down to the top three things that I think we should ban from the conversations and vocabularies of simulationists around the globe!

1. Simulation will never replace learning from real patients!: Of course it won't! That's not the goal. In fact, in some respects simulation offers advantages over learning on real patients. And doubly in fact, real patients have some advantages too! STOP being apologetic for simulation as a methodology. When this is said, it essentially defers to real patients as some sort of holy grail or gold standard against which to measure. CRAAAAAAAZY……   Learning on real patients is but one methodology by which to attack the complex journey of teaching, learning and assessing the competence of a person or a team of people engaged in healthcare. All the methodologies associated with this goal of education have their own advantages, disadvantages, capabilities and limitations. When we agree with people and say simulation will never replace learning from real patients, or allow that notion to go unchallenged, we are doing a disservice to the big picture of creating a holistic education program for learners. See previous blog post on learning on real patients.

2. In simulation, debriefing is where all of the learning occurs!: You know you have heard this baloney before. Ahhhhhhhhhhhhh such statements are purely misinformed, not backed up by a shred of evidence, kind of contrary to COMMON SENSE, and demeaning to the participants as well as the staff and faculty who construct such simulations. The people who still make this statement are stuck in a world of instructor centricity. In other words, they are saying, "Go experience all of that…… and then when I run the debriefing the learning will commence." The other group of people are trying to hard-sell you some training on debriefing, making you think it is some mystical power held by only a certain few people on the planet. Kinda cra' cra' (slang for crazy) if you think about it.

When someone asserts that learning cannot occur during the simulation itself, they are confirming that they are quite unthoughtful about how they construct the entire learning encounter. It also hints that they don't take the construct of the simulation itself very seriously. The immersive experience that people are exposed to during the simulation, and before the debriefing, can and should be constructed in a way that provides built-in feedback, observations, and experiences that contribute to a feeling of success and/or recognition of the need for improvement. See previous blog post on learning beyond debriefing.

3. Recreation of reality provides the best simulation! [or some variant of this statement]: When I hear this concept even alluded to, I become tachycardic and diaphoretic, and my pupils dilate. My fight-or-flight nervous system gets fully engaged and, trust me, I have no plans of running. 😊

[Disclaimer on this one: I'm not talking about the type of simulation that is designed for human factors, critical environmental design decisions, or packaging/marketing, etc., which depend upon a close replication of reality.]

This is one of the signs of a complete novice and/or misinformed person, or sometimes groups of people! If you think it through, it is a rather ludicrous position. Further, I believe trying to conform to this principle is one of the biggest barriers to success for many simulation endeavors. People spend inordinate amounts of time trying to put their best theatrical foot forward to re-create reality. Often what is actually occurring is expanding the time to set up the simulation, expanding the time to reset the simulation, and dramatically increasing the time to clean up from the simulation. (All of the aforementioned time intervals increase the overall cost of the individual simulation, thereby reducing its efficiency.) While I am a huge fan of loosely modeling scenarios off of real cases in an attempt to create an environment with some sense of familiarity to the clinical analog, I frequently see people going to extremes trying to re-create details of reality.

We have hundreds if not thousands of design decisions to make for even moderately complex scenarios. Every decision we make to include something that imitates reality has the potential to cause confusion if not carefully thought out. It is easy to introduce confusion in attempts to re-create reality, since learners engage in simulation with a sense of hyper-vigilance that likely does not occur in the same fashion in the real clinical learning environment. See previous blog post on cognitive third space.

If you really think about it, the simulation is designed to have people perform something in order to learn, as well as to allow observers to form opinions about the things the learner(s) did well and the areas that can be improved upon. Carefully selecting how a scenario unfolds, and/or the equipment used to allow this performance to occur, is part of the complex decision-making associated with creating simulations. The scenario should be engineered to exploit the areas, actions, situations or time frames that are the desired focal points of the learning and assessment objectives. Attention should be paid to the specifics of those objectives to ensure that the included cache of equipment and/or environmental accoutrements minimizes the potential for confusion and creates the most efficient pathway for the assessment that contributes to improving the learning.

Lastly, let's put stock in the learning contract we are engaging in with our learners. We need to treat them like adult learners. (After all, everybody wants to throw in the phrase "adult learning principles"…. Haha).

Let's face it: a half-amputated leg on a trauma patient, with other signs and symptoms of hemorrhagic shock and a blood-soaked towel under it, is probably good enough for our adult learners to get the picture; we don't actually need blood shooting out of the wound and all over the room. While the former might not be as theatrically sexy, the latter certainly adds to the overall cost (time and resources) of the simulation. We all need to realistically ask, "What's the value?"

While my time is up for this post, and I promised to limit my comments to only three, I cannot resist sharing two other statements that were in the running for the top three. The first is "If you are not video recording your scenarios you cannot do adequate debriefing," and the second is "The simulator should never die." (Maybe I'll expand the rant about these and others in the future 😉).

Well… That’s a wrap. I’m off to a week of skiing with family and friends in Colorado!

Until next time,

Happy Simulating!



Beware of the Educational Evangelist!

They are everywhere nowadays, like characters in Pokémon Go. They seem to hang out in high concentration around new simulation centers.

You know the type. Usually they start off by saying how terrible it is for someone to give a lecture. Then they go on to espouse the virtues and values of student-centered education, claiming active participation and small-group learning is the pathway to the glory land. They often toss in terms like "flipped classroom." And just to ensure you don't question their educational expertise, they use a word ending in "-gogy" in the same paragraph as the phrase "evidence-based."

If you ask them where they have been in the last six months you find out that they probably went to a weekend healthcare education reform retreat or something equivalent…….

My principal concern with today's educational evangelist is that they are in search of a new way of doing everything. Oftentimes they recommend complete and total overhauls of an existing curriculum without a true understanding of how to improve it efficiently and effectively, and without analyzing the resources required to carry out such changes.

Further, the evangelist usually has a favorite methodology, such as "small-group learning," "problem-based learning" or "simulation-based learning," that they are trying to convert everyone to through prophecy.

An easy target of all educational evangelists is the lecture, and often that is where the prophecy begins. They usually want to indicate that if lecture is happening, learning is not. As I discussed in a previous blog article, lecture is not dead; when done well, it can be quite engaging, create significant opportunities for learning, and be maximally efficient in terms of resources.

If you think about it critically, it is just as easy to do lousy small-group facilitation as it is to do a lousy lecture. In either case, the potential gains in learning will not reach their maximal potential. The difference is that small-group facilitation, like simulation, generally takes significantly more faculty resources.

The truth is, the educational evangelist is a great person to have amongst the team. Their desire for change, generally instilled with significant passion, is often a source of great energy. When harnessed, they can help advance and revise curricula to maximize and modernize various educational programs.

However, to be maximally efficient, all significant changes should undergo pre-analysis, ideally derived from a needs assessment, whether formal or informal. Secondly, it is worth having more than one opinion when deciding the prioritization of what needs to be changed in a given curriculum. While the evangelist will suggest that the entire curriculum is broken, oftentimes a more balanced review reveals that some areas of the curriculum would benefit from such an overhaul while other aspects are performing just fine.

When you begin to change aspects of the curriculum, start small and measure the change if possible. Moving forward on a step-by-step basis will usually produce a far better revised curriculum than an approach that "throws out the baby with the bathwater." Mix the opinions of the stalwarts of the existing curriculum methods with those of the evangelists. Challenge existing axioms, myths and entrenched beliefs like "Nothing can replace the real patient for learning…." If this process is led well, it will allow the decision-making group to reach a considerably more informed position, one that leads to sound decisions and change strategies and guides investments appropriately.

So if you're the leader or a member of a team responsible for a given curriculum of healthcare instruction and are confronted with the educational evangelist, welcome their participation. Include them in the discussions moving forward with a balanced team of people, and have them strive to create an objective prioritization of the needs for change. This will allow you to make excellent decisions with regard to new technologies and/or methods that you should likely embrace for your program. More importantly, you will avoid tossing out the things that are working and are cost-efficient.



Simulation Curriculum Integration via a Competency Based Model

One of the challenges for healthcare education is the reliance on random opportunity for clinical events to present themselves to a given group of learners as part of a structured learning curriculum. This uncertainty of exposure, and of the eventual development of competency, is part of what keeps our educational systems time-based, an approach that is fraught with inefficiencies by its very nature.

Simulation curriculum design at present often embeds simulation in a rather immature development model in which there is an "everybody does all of the simulations" approach. If there is a collection of core topics that are part and parcel of a given program, combined with a belief, or perhaps proof, that simulation is the preferred modality for those topics, then it makes sense for everyone to get those exposures. Let's move beyond the topics or situations that are best experienced by everyone.

If you use the model of physician residency training, for example, curriculum planners "hope" that over the course of a year a given first-year resident will adequately manage an appropriate variety of cases. The types of cases, often categorized by primary diagnosis, are embedded in some curriculum accreditation document under the label "Year 1." For the purposes of this discussion, let's change the terminology from Year 1 to Level 1 as we look toward the future.

What if we had a way to know that a resident had managed the cases, and managed them well, for Level 1? Perhaps one resident could accomplish the Level 1 goals in six months, and do it well. Let's call that resident Dr. Fast. This could then lead to a more appropriate advancement of the resident through the training program, as opposed to advancing them by the date on the calendar.

Now let's think about it from another angle. Another resident didn't quite see all of the cases, or the variety of cases needed, but manages things well when they do. Let's call them Dr. Slow. A third resident of the program is managing an adequate number and variety of cases, but is having quality issues. Let's refer to them as Dr. Mess. An honest assessment of the current system is that all three residents will likely be advanced to higher levels of responsibility based on the calendar, without a substantial attempt to understand or remediate the underlying deficiencies.

What are the program or educational goals for Drs. Fast, Slow and Mess? What are the differences? What are the similarities? What information does the program need to begin thinking in this competency-based model? Is that information available now? Will it likely be in the future? Does it make sense to spend time and resources putting all three residents through the same simulation curriculum?

While many operational, cultural, and historical models and working conditions present barriers to such a model, thinking about a switch to a competency-based model forces one to think more deeply about the details of the overall mission. The educational methods, assessment tools, and exposures to cases and environments should be explored for both efficiency and effectiveness. Ultimately, the outcomes we are trying to achieve for a given learner progressing through a program would be unveiled. Confidence in the underlying data will be a fundamental component of a competency-based system. In this simple model, the two functional data points are the quantity and quality of a given learner's opportunities to learn and demonstrate competence.

This sets up intriguing possibilities for the embedding of simulation into the core curriculum to function in a more dynamic way and contribute mightily to the program outcomes.

Now think of the needs of Dr. Slow and Dr. Mess. If we had insight combined with reliable data, we could customize the simulation pathway for each learner to maximally benefit their progression through the program. We may need to provide supplemental simulations to Dr. Slow to allow practice with a wider spectrum of cases, or exposure to a specific diagnosis, category of patient, or situation. Ideally this additional exposure, which provides deliberate practice opportunities, could also include learning objectives to help them increase their efficiency.

In the case of Dr. Mess, the customization of the simulation portion of the curriculum provides deliberate practice opportunities with targeted feedback directly relevant to their area(s) of deficiency, i.e., a remediation model. This exposure for Dr. Mess could be constructed to provide a certain category of patient, or perhaps situation, that they are reported to handle poorly. The benefit in the case of Dr. Mess is that the simulated environment can often be used to tease out the details of the underlying deficiency in a way that learning in the actual patient care environment cannot.

Lastly, in our model, recall that Dr. Fast may not require any "supplemental" simulation, thus freeing up scarce simulation resources and the human resources necessary to conduct it. This is part of the gains in efficiency that can be realized through a competency-based approach to incorporating simulation into a given curriculum.
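To make the model concrete, the routing of Drs. Fast, Slow, and Mess can be sketched in a few lines of code. This is purely illustrative: the thresholds, names, and data structure below are hypothetical, and a real program would draw its two functional data points (quantity and quality) from accreditation requirements and validated assessment instruments.

```python
from dataclasses import dataclass

@dataclass
class CaseLog:
    """Hypothetical record of a resident's Level 1 case exposure."""
    cases_managed: int    # quantity of learning opportunities
    quality_score: float  # 0.0-1.0 aggregate quality of management

# Illustrative thresholds only; a real program would set these
# from its own curriculum and assessment data.
REQUIRED_CASES = 50
QUALITY_BAR = 0.8

def simulation_plan(log: CaseLog) -> str:
    """Route a learner to a customized simulation pathway using
    the two functional data points: quantity and quality."""
    enough_cases = log.cases_managed >= REQUIRED_CASES
    good_quality = log.quality_score >= QUALITY_BAR
    if enough_cases and good_quality:
        return "none needed"            # Dr. Fast: advance early
    if good_quality:
        return "supplemental exposure"  # Dr. Slow: widen case variety
    return "targeted remediation"       # Dr. Mess: deliberate practice

print(simulation_plan(CaseLog(60, 0.9)))  # Dr. Fast
print(simulation_plan(CaseLog(30, 0.9)))  # Dr. Slow
print(simulation_plan(CaseLog(60, 0.5)))  # Dr. Mess
```

The point of the sketch is simply that once quantity and quality are reliably measured, the decision of who receives which simulation becomes an explicit, auditable rule rather than a calendar date.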

Considering a switch to a competency-based curriculum in healthcare education can be overwhelming simply based on the number of operational and administrative challenges. However, using competency-based implementation as a theoretical model can help us envision a more thoughtful approach to the curricular integration of simulation. If we move forward in a deliberate attempt to utilize simulation in a more dynamic way, it will lead to increases in efficiency and effectiveness, along with better stewardship of scarce resources.
