Don’t be Confused! Every Simulation is an Assessment

 

Recently, as I lecture and conduct workshops, I have been asking people who run simulations how often they perform assessments with their simulations. The answers are astounding. Every time, more than a few people report that they perform assessments less than 100% of the time they run their simulations. They are then shocked when I tell them that they perform an assessment EVERY TIME they run a simulation.

While some of this may be a bit of a play on words, careful consideration should be given to the fact that each time we run a simulation scenario we must be assessing the students who are the learners. If we are going to deliver feedback, whether intrinsic to the design of the simulation or through promoting discovery during a debriefing, at some point we had to decide what we thought the learners did well and identify areas needing improvement. To do this, we had to perform an assessment.

Now let’s dissect a bit. Many people tend to equate the word assessment with some sort of grade assignment. Classically we think of a test with a threshold for passing or failing, or one that contributes in some way to determining whether someone has mastered certain material. Often this is part of the steps one needs to complete to move on, graduate, or perhaps obtain a license to practice. The technical term for this type of assessment is summative. People in healthcare are all too familiar with this type of assessment!

At other times, however, assessments are made periodically with the goal NOT of determining whether someone has mastered something, but of figuring out what one needs to do to get better at what they are trying to learn. The technical term for this is formative assessment. Stated another way, formative assessment is used to promote further learning, while summative assessment evaluates whether something was learned.

Things can get even more confusing when assessment activities have components or traits of both types. Nonetheless, what matters less than the technical details is the self-realization and acceptance by simulation faculty members that every time you observe a simulation and then lead a debriefing, you are conducting an assessment.

Such a realization should allow you to understand that there is really no such thing as a non-judgmental debriefing or a non-judgmental observation of a simulation-based learning encounter. All goal-directed debriefing MUST be predicated upon someone’s judgment of the performance of the participant(s) in the simulation. Otherwise you cannot optimally promote discovery of the areas that require improvement, and/or understanding of the topics, skills, or decisions that were carried out correctly during the simulation.

So, if you are going to take the time and effort to conduct simulations, please be sure to understand that assessment, and rendering judgment of performance, is an integral part of the learning process. Once this concept is fully embraced by the simulation educator, greater clarity can be gained in ways to optimize assessment vantage points in the design of simulations. Deciding on the assessment goals with some specificity early in the process of scenario design can lead to better decisions about the associated design elements of the scenario. Optimizing scenario design to enhance “assess-ability” will help you whether you are applying your assessments in a formative or summative way!

So, go forth and create, facilitate and debrief simulation-based learning encounters with a keen fresh new understanding that every simulation is an assessment!

Until Next Time Happy Simulating!

Filed under assessment, Curriculum, design, scenario design, simulation

Don’t Let the Theory Wonks Slow Down the Progress of Healthcare Simulation

Those of us in the simulation community know well that, when used appropriately and effectively, simulation allows for amazing learning and helps students and providers of healthcare improve their craft. We also know there is very little published literature that conclusively demonstrates the “right way to do it.”

Yet in the scholarly literature there is still a struggle to define best practices and ways to move forward. I believe this struggle is becoming a rate-limiting step in helping people get started, grow, and flourish in the development of simulation efforts.

I believe that part of the struggle is the diversity of missions among simulation programs, which range from serving entry-level students to practicing professionals, with varying foci on individual competence versus teamwork and communication training. Part of the challenge is that in these types of scholarly endeavors people try to describe a “one-size-fits-all” approach to best practices. To me, this seems ridiculous when you consider the depth and breadth of possibilities for simulation in healthcare.

I believe another barrier (and FINALLY, the real point of this blog post 🙂 ) is the tendency to overly theorize everything that goes on in simulation, shooting down scholarly efforts to publish and disseminate successes based on some missing link to an often-esoteric learning theory. While I believe that connections to learning theory are important, I think it is ridiculous to expect that every decision, best practice, policy, or experimental design in simulation must be tied back to some learning theory to be effective.

As I have the good fortune to review a significant number of simulation papers, it concerns me to see many of my fellow reviewers shredding people’s efforts based on ties to learning theories, as well as on their own interpretations of how simulation should be conducted. They have decided that the literature that is out there (of which very little, if any, offers conclusive arguments on best practices) has become a standard.

My most recent example is a manuscript I reviewed describing an experimental design that compared conducting a simulation one way, with a certain technology, to conducting it another way, without the technology. The authors then reported the resulting differences. As long as the testing circumstances are clearly articulated, along with the intentions and limitations, this is the type of literature that needs to appear for the simulation community to evaluate, digest, and build upon.

Time after time recently, I am seeing arguments steeped in theory attachments that seem to indicate this type of experimental testing is irrelevant, or worse yet, inappropriate. There is a time and place for theoretical underpinnings, and separately there is a time and place for attempting to move things forward with good, solid implementation studies.

The theory wonks are holding up the valuable dissemination of information that could assist simulation efforts. Such information is crucial to collectively advance the community of practice of healthcare simulation and help improve healthcare globally. There is a time to theorize and a time to get work done.

While I invite the theorists to postulate new and better ways to do things based on their philosophies, let those in the operational world tell their stories of successes and opportunities as they are discovered.

Or perhaps it is time that we develop a high-quality forum or publication that provides a better vehicle for the dissemination of such information.

So, in the meantime, beware of the theory wonks. Try not to let them deter you from your efforts not only to move your own simulation investigations forward, but to disseminate and share them with the rest of the world!

Filed under Curriculum, design, patient safety, return on investment

FIVE TIPS on effectively engaging adult learners in healthcare simulation

Filed under Curriculum, design