Monthly Archives: January 2017

Value and Learning Propositions for Safety through Simulation – Don’t Sell Your Efforts Short

All too often it is easy to be stuck in a mindset that creates tunnel vision. In the simulation world, one source of this can be an overall short-sightedness about the usefulness, power, wisdom, and change that can result from well-run simulation efforts. Many people have heard the adage "with simulation, it is within the debriefing that all the learning occurs." While phrases like this are meant to underscore the importance of the debriefing that follows a simulation, if taken too literally they can result in a lack of recognition of the total value of a simulation program's investments and contributions.

This phenomenon is prevalent when evaluating the impact of simulation programs that are part of patient safety efforts in hospitals and health systems. In-situ simulation programs, or mock code evaluation programs, are of unquestionable value to those of us in patient safety leadership roles. Undoubtedly, learning can occur during the simulation itself, as I discussed in a previous blog post. Further, we all recognize the value of learning that can occur during well-run debriefing sessions. Lastly, and perhaps most importantly, great value can come from the information obtained during the simulation.

Scenario and debriefing sessions in in-situ and other simulation programs that involve practicing professionals as participants have their limitations. First, and most practically, is the operational recognition that healthcare professionals can only be kept "off-line" for a certain period of time to accomplish the simulation and debriefing. Secondly, some topics are more sensitive than others and are not appropriate to address directly with individuals during a debriefing that involves peers as well as other healthcare colleagues. This point should be considered when evaluating the political climate and staff perceptions surrounding your in-situ programs. Lastly, when you execute such a simulation, there is only so much that can be absorbed at one point in time before cognitive overload becomes a significantly limiting factor.

Thinking traditionally from a "simulationist" point of view, it is easy to assume that all of the learning comes from the performance of the simulation combined with debriefing. With structure, planning, and a systems-based approach to the simulation efforts, data can be gathered and analyzed to help a given hospital or health system understand the capabilities and limitations of its various clinical delivery systems. This can be invaluable learning for the system itself, which can then be incorporated into a plan of change to improve safety or, in other cases, efficiency in the delivery of care.

The given plan of change may incorporate additional educational efforts or policy, procedure, or process changes that can be made in a more informed way than if the data from the simulation were not available. To garner such useful information at a systems level, it is important that the curriculum integration be developed with consistent measurement strategies, objectives, and tools that allow meaningful information to accrue.

A well-planned, needs-based, targeted implementation strategy will create larger value than simulation efforts occurring in a silo, disconnected from a larger strategic plan of improvement. If you think about a simulation event, it is easy to picture small groups of people learning a great deal from participation in the scenario or program. Simulation has the unique capability to surface information that provides insight into the aspects of patient care that go smoothly, while simultaneously identifying opportunities for improvement and deploying useful learning.

Once these opportunities are catalogued and recognized, a transformation of greater scale can take place through careful planning and implementation of further patient safety efforts with defined targets. Partnering with your risk management or patient safety colleagues to work on the integration plan can be valuable for increasing leadership buy-in for supporting your simulation efforts.

So I challenge you! If you are running simulations in situ, keep in mind that your educational efforts during the simulation scenario are part of a bigger picture of increasing the safety and/or efficiency of providing care to patients, thus bringing a higher return on investment for the simulation efforts you are conducting.

Until next time… Happy Simulating!


Filed under return on investment

Embedding Forcing Functions into Scenario Design to Enhance Assessment Capabilities

Many people design scoring instruments for simulation encounters as part of an assessment plan. They are used for various reasons, ranging from providing feedback, to research purposes, to high-stakes pass/fail criteria. Enhancing the ability of assessment tools to function as intended is often closely linked to scenario design.

Oftentimes, checklists are employed. When designing checklists, it is critical to ask the question, "Can I accurately measure this?" It is easy to design checklists that seem intuitively simple and filled with common sense (from a clinical perspective) but are not actually able to accurately measure what you think you are evaluating.

It is quite common to see checklists with items such as "Observes Chest Rise," "Identifies Wheezing," or "Observes Heart Rate." During faculty training sessions focused on assessment tool development, we routinely run scenarios that contain deliberate errors of omission. Some of these items are nonetheless routinely scored, or "checked," as completed. Why is this? Part of the answer is that we interject our own clinical bias into what we think the simulation participant is doing or thinking. This raises the possibility that we are not measuring what we intend to measure, or assess.

Consider two checklist items for an asthma scenario: one is "Auscultates Lung Sounds"; another is "Correctly Interprets Wheezing." The former we can reasonably infer by watching the scenario and seeing the participant listen to lung fields on the simulator. The latter, however, is more complicated. We don't know whether the participant recognized wheezing just by watching them listen to the lungs. Many people would check yes for "Correctly Interprets Wheezing" if the next thing the participant did was order a bronchodilator. This would be an incorrect assumption, but one that could be rationalized in the mind of the evaluator because of a normal clinical sequence and context.

However, the assumption may be completely wrong: the participant may never have interpreted the sounds as wheezing, but instead ordered a treatment because of a history of asthma. Or what would happen if the bronchodilator were ordered before auscultation of the lungs? What you have, then, is an item on your checklist that seems simple enough but is practically unmeasurable through simple observation.

This is where linking scenario design and assessment tools comes in handy. If the item you are trying to assess is a critical element of the learning and assessment plan, perhaps something in the simulation, in the transition to the debriefing, or during the debriefing itself can cause the information to be made available so the item can be assessed more correctly and accurately.

A solution for real-time assessment during the flow of the scenario is possible within the design of the scenario itself. Perhaps insert a confederate playing a nurse caring for the patient, who is scripted to ask, "What did you hear?" after the participant auscultates the lung fields. This forces the data to become available during the scenario for the assessor to act upon. Hence the term forcing function.

Another possibility would be to have the participant complete a patient note on the encounter and evaluate their recording of the lung sounds. Alternatively, simply have the participant write down their interpretation of the lung sounds, or embed the question into the context of the debriefing. Any of these methods would provide a more accurate evaluation of the assessment item "Correctly Interprets Wheezing."
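To make the design principle concrete, here is a loose sketch of how one might catalogue checklist items so that every item is either directly observable or explicitly paired with the forcing function that makes it measurable. The class and field names are invented for illustration; they are not part of any simulation toolkit or of the checklists discussed above.

```python
# Illustrative sketch (hypothetical names): each checklist item records
# whether it can be scored by direct observation alone, and if not,
# which forcing function surfaces the needed data.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ChecklistItem:
    label: str
    directly_observable: bool
    # How the scenario or debriefing surfaces the data when the item
    # cannot be observed directly, e.g. a scripted confederate prompt
    # or a post-encounter patient note.
    forcing_function: Optional[str] = None

    def is_measurable(self) -> bool:
        """Measurable if we can see it happen, or if the scenario
        design forces the underlying information into the open."""
        return self.directly_observable or self.forcing_function is not None


items = [
    ChecklistItem("Auscultates lung sounds", directly_observable=True),
    ChecklistItem(
        "Correctly interprets wheezing",
        directly_observable=False,
        forcing_function="Confederate nurse asks: 'What did you hear?'",
    ),
    # Flagged during design review: no forcing function yet.
    ChecklistItem("Identifies wheezing", directly_observable=False),
]

# Items that still need a scenario-design fix before they can be scored.
unmeasurable = [i.label for i in items if not i.is_measurable()]
```

A pass like this over a draft instrument makes the review question from above ("Can I accurately measure this?") mechanical: any item that lands in `unmeasurable` needs either a redesign or a forcing function before the tool is used for assessment.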

While not trying to create an exhaustive list of methods, I am trying to provide two things in this post. One is to have you critically evaluate your ability to accurately assess something that occurs within a scenario, with higher validity. The second is to recognize that the creation of successful, reliable, and valid assessment instruments is linked directly to scenario design. This linking can occur during the creation of the scenario, or as a modification to an existing scenario to enhance assessment capabilities.

This auscultation item serves only as a simple example. Recognizing the challenges of accurately assessing a participant's performance is important for developing robust, valid, and reliable tools. The next time you see or design a checklist or scoring tool, ask yourself: Can I really, truly evaluate that item accurately? If not, can I modify the scenario or debriefing to force the information to be made available?


Filed under scenario design