Embedding Forcing Functions into Scenario Design to Enhance Assessment Capabilities

Many people design scoring instruments for simulation encounters as part of an assessment plan. They are used for various reasons, ranging from tools that help provide feedback, to research purposes, to high-stakes pass/fail criteria. Enhancing the ability of assessment tools to function as intended is often closely linked to scenario design.

Oftentimes checklists are employed. When designing checklists, it is critical that you ask the question, “Can I accurately measure this?” It is easy to design checklists that seem intuitively simple and filled with common sense (from a clinical perspective) but that are not actually able to accurately measure what you think you are evaluating.

It is quite common to see checklists that have items such as “Observes Chest Rise,” “Identifies Wheezing,” or “Observes Heart Rate.” During faculty training sessions focusing on assessment tool development, we routinely run scenarios that contain deliberate errors of omission. Some of these omitted items are nonetheless routinely scored, or “checked,” as completed. Why is this? Part of the answer is that we interject our own clinical bias into what we think the simulation participant is doing or thinking. This raises the possibility that we are not measuring what we intend to measure, or assess.

Consider two checklist items for an asthma scenario: one is “Auscultates Lung Sounds”; another is “Correctly Interprets Wheezing.” The former we can reasonably score by watching the scenario and seeing the participant listen to lung fields on the simulator. The latter, however, is more complicated. We don’t know whether the participant recognized wheezing just by watching them listen to the lungs. Many people would check yes for “Correctly Interprets Wheezing” if the next thing the participant did was order a bronchodilator. This would be an incorrect assumption, but it could be rationalized in the mind of the evaluator because of a normal clinical sequence and context.

However, it may be completely wrong: the participant may never have interpreted the sounds as wheezing, but ordered a treatment because of a history of asthma. Or what would happen if the bronchodilator was ordered before auscultation of the lungs? What you have is an item on your checklist that seems simple enough, but is practically unmeasurable through simple observation.

This is where linking scenario design and assessment tools can come in handy. If the item you are trying to assess is a critical element of the learning and assessment plan, perhaps something in the simulation, the transition out of it, or the debriefing can cause the information to be made available so the item can be assessed more correctly and accurately.

Real-time assessment during the flow of the scenario can be made possible within the design of the scenario itself. Perhaps insert a confederate playing a nurse caring for the patient who is scripted to ask “What did you hear?” after the participant auscultates the lung fields. This forces the data to become available during the scenario for the assessor to act upon. Hence the term, forcing function.

Another possibility would be to have the participant complete a patient note on the encounter and evaluate their recording of the lung sounds, or simply to have the participant write down their interpretation of the lung sounds. Or perhaps embed the question into the context of the debriefing. Any of these methods would provide a more accurate evaluation of the assessment item “Correctly Interprets Wheezing.”
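For those who track checklists electronically, the distinction between directly observable items and items that need a forcing function can be sketched as a small data model. This is an illustrative sketch only; the class, field names, and example items are my own, not from any particular assessment system:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ChecklistItem:
    text: str
    directly_observable: bool              # can a rater score this by watching alone?
    forcing_function: Optional[str] = None  # scenario element that surfaces the data

items: List[ChecklistItem] = [
    ChecklistItem("Auscultates lung sounds", directly_observable=True),
    ChecklistItem(
        "Correctly interprets wheezing",
        directly_observable=False,
        forcing_function="Confederate nurse asks: 'What did you hear?'",
    ),
]

# Flag items that cannot be scored by observation and have no forcing
# function -- these are candidates for scenario redesign.
needs_redesign = [i.text for i in items
                  if not i.directly_observable and i.forcing_function is None]
```

Running the check over a draft checklist would surface every “seems simple but unmeasurable” item before the tool is ever used on participants.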

While not trying to create an exhaustive list of methods, I am trying to provide two things in this post. One is to have you critically evaluate your ability to accurately assess something that occurs within a scenario with higher validity. The second is to recognize that the creation of successful, reliable, and valid assessment instruments is linked directly to scenario design. This can occur during the creation of the scenario, or as a modification to an existing scenario to enhance assessment capabilities.

This auscultation item serves as a simple example. Recognizing the challenges of accurately assessing a participant’s performance is important in allowing for the development of robust, valid, and reliable tools. The next time you see or design a checklist or scoring tool, ask yourself: Can I really, truly evaluate that item accurately? If not, can I modify the scenario or debriefing to force the information to be made available?

 


Unpacking of Expertise Contributes to Effective Simulation (Education) Design

Part of the challenge in creating any simulation-based learning encounter lies in the interactions with subject matter experts who serve as a source to help guide the design of the event. The challenge lies in the fact that as healthcare providers ascend to a position of expertise, many of their thoughts and approaches to the clinical situation at hand undergo automaticity in terms of the way decisions are made or procedures are executed. No longer does an experienced surgeon think step-by-step about how to tie a knot. They rely on muscle memory, experience, and packaged expertise to complete the task. Similarly, a skilled diagnostician will often identify a clinical condition, or stratify the level of criticality of a patient, seemingly by intuition in a brief encounter. But it is not luck or intuition. It is the honed art of observation, combined with a stepwise knowledge stratification process and experience, that has been integrated over time and bundled, or packaged, into what we call expertise.

Getting the experienced healthcare provider to unpack their expertise into tangible, stepwise learning events can be the key to creating effective educational encounters. More simply put, aligning the mind of the expert to walk in the shoes of the novice, and to recall their own experiences as a novice, will help create more effective learning encounters. It is quite difficult for experts in areas of complex cognition or psychomotor skill (healthcare) to relate to the needs of the junior learner. The junior learner on the journey to expertise has varying needs: granular application of individual pieces of learning, along with the experience and mentoring that allows the connection of seemingly disparate small chunks of information into a fluid situation that permits analysis and application of the final product (i.e., the delivery of healthcare).

This unpacking of expertise can be carried out effectively by ensuring that curricular activities address the needs of learners at multiple stages of progress. Similarly, it is often a successful practice to combine several different individuals, perhaps with different vantage points with regard to levels of proficiency and even core expertise. This creates a design environment that promotes a successful deconstruction of an expert situation into a series of tasks that require competence in component form, integration, practice, and implementation. This is especially true in healthcare, where there is great variability in the process of acquiring information, analyzing it, and effecting the treatment that will eventually be rendered for a given patient in a given situation. That is, in healthcare there are often many right answers.

There are several structured methods of Hierarchical Task Analysis (HTA) in the literature that are used in various forms in many different industries. The essential underlying element of HTA is breaking down complicated situations into their component forms. While time-consuming, this method can lead to effective strategies for building learning platforms and, in particular, help guide the creation of assessment tools in simulation to promote formative, step-wise learning toward expertise. While this discussion focuses on simulation, conceptually this applies to all aspects of education design in healthcare and will likely help us increase the efficiency and effectiveness of our programs. After all, isn’t simulation a subset of healthcare education? Now there’s a concept worth remembering!
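The core move of any HTA, decomposing an expert-level task into component subtasks, can be sketched as a simple tree. The structure and traversal below are illustrative; the task names are hypothetical placeholders, not a validated decomposition of the procedure:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    subtasks: List["Task"] = field(default_factory=list)

# Hypothetical, partial decomposition of an expert procedure.
hta = Task("Insert central line", [
    Task("Prepare equipment", [
        Task("Verify kit contents"),
        Task("Maintain sterile field"),
    ]),
    Task("Obtain access", [
        Task("Identify landmarks"),
        Task("Advance needle under ultrasound"),
    ]),
    Task("Confirm placement"),
])

def leaves(task: Task) -> List[str]:
    """Return the terminal component tasks -- the granular steps that
    become candidates for checklist items and formative assessment."""
    if not task.subtasks:
        return [task.name]
    return [name for sub in task.subtasks for name in leaves(sub)]
```

Walking the tree's leaves yields exactly the kind of component-level step list that the expert, relying on packaged automaticity, would otherwise never articulate.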


Assess-a-Phobia In Simulation – What Do We Tell the Patients?

We need to end the terror at the thought of using simulation for assessment. I am of the opinion that one of the many significant returns on investment to the healthcare community from simulation is our ability to develop tools to help objectively assess performance. Healthcare education has evolved with a strong reliance on knowledge-based tests, those that assess one’s cognitive knowledge. There is little doubt that, no matter what profession in healthcare one practices, a high degree of cognitive knowledge and intellectual competence are both necessary.

However, in healthcare it is important that one can apply that knowledge in the contextual circumstances one faces when caring for patients. Simulation is evolving as an excellent tool to help provide some insight into one’s performance competence. In other words, it gives us another vantage point to understand whether one can actually practice and put to good use the intellectual capital that resides in the brain, acquired through various methods of education as well as experience.

Having been an educator for well over 25 years, I have conducted my share of assessments. Admittedly, most of them were cognitive tests, i.e., written tests for some sort of knowledge-based assessment. In fact, I bet most people reading this have developed some sort of written test that they have given to students at some point in their career as an educator. We think nothing of creating a written test, handing it out to a room full of students, and assessing their ability to pass by whatever bar we have set as a passing score. Typically a written test has many, many items, to account for variability in the testing process, variability in the interpretation of a question, and the fact that not knowing a single fact should not on its own determine a pass or fail.

Depending on the stakes of the exam, we will apply more or less rigor in trying to statistically validate each item, or the test overall. Over time we become confident in our written testing instruments and use them over and over again. Eventually, we develop the confidence to say someone actually passes or fails. This can be high-stakes, such as passing or failing a course, or passing or failing an examination of competence that may lead to certification.

I believe some aspects of simulation have evolved to be just as good as that bubble sheet in one’s assessment. In fact, I suppose that’s not even a fair comparison, because the bubble sheet is going to assess something different than the simulation. Of course there might be overlap in what we are assessing with these two instruments, but they are different tools indeed. The bubble sheet best assesses cognitive knowledge, while the simulation can be engineered to assess the application of knowledge in practice.

There is a reluctance amongst many to engage in simulation assessment activities, and I am not sure why. If we think about the analogy to the written test I described above, we feel really comfortable going into a room by ourselves, or with several of our colleagues, and creating a multiple-choice written test. Why is it that we cannot go into a room with a number of our colleagues and develop some sort of assessment for the simulation environment, capitalizing on the ability to observe one’s application of knowledge as described above? Continuing with the same analogy, we can then collect data over time and evaluate the individual items that were assessed, or the validity of the test as a whole.

The origin of this reluctance is complex. In fact, the whole notion of performance assessment of humans is quite complex. However, we should not run from things because they’re complex. I think another part of the reluctance is that it challenges our own comfort zone. Arguably, it is more difficult to give direct feedback and let someone know face-to-face that they did something incorrect or unsafe than it is to let them go to the wall where grades are posted and see that they failed the written exam. Another piece is that it takes a lot of work to design such assessment tools. But again, the mission is so critically important that we can’t run from the hard work.

Another part of the reluctance comes from the fact that very few of us were actually trained in the creation of human performance examinations. But if you think that through, very few of us were formally trained in the science of creating written examinations either. Yet the comfort factor with the latter allows it to happen much more routinely.

Another way to think about it: if you are a clinician supervising a trainee doing a potentially dangerous procedure on a patient, it is your job to give them feedback that will allow them to improve in the future. Likely some of this conversation will reinforce the things that were done appropriately, while other aspects will require them to make changes for the things they did improperly. So in essence, you have created an assessment!

Some will read this and argue that assessment violates the principles of a safe learning environment. I fall back on the topic of one of my previous posts and say that we need to concentrate on Patient Centered Simulation. Likely, those who argue that point are not on the front lines of healthcare and do not understand the need for near-perfect performance in everything we do for and to patients. Nothing is quite as disconcerting as seeing a trainee make a mistake on an actual patient that has a high potential to cause harm, or actually causes it. Do we then turn to the patient or the family and say, “I’m sorry, we had the ability to assess their competency in the placement of a central line, but we thought we shouldn’t do that because it wouldn’t represent a safe learning environment”? Seriously? I think not!

Simulation has a largely untapped capacity to assist us in building an assessment pathway: helping to assess competence in newly acquired skills and critical thought processes, as well as to evaluate the maintenance of proficiency, the currency of knowledge, and the application of knowledge over time.

I truly believe that it is the assessment component that will help bring simulation to the next level. It can be an important tool in the migration from time-based objectives to a more rational system of performance-based objectives when considering things such as the acquisition of practice competence, advancement in training, or the measurement of competency in the maintenance of practice over time.
