Assess-a-Phobia In Simulation – What Do We Tell the Patients?

We need to end the terror at the thought of using simulation for assessment. I am of the opinion that one of the many significant returns on investment that simulation offers the healthcare community is our ability to develop tools that help objectively assess performance. Healthcare education has evolved with a strong reliance on the knowledge-based test, i.e., instruments that assess one's cognitive knowledge. There is little doubt that, no matter which healthcare profession one practices, a high degree of cognitive knowledge and intellectual competence is necessary.

However, in healthcare it is important that one can apply that knowledge within the contextual circumstances one faces when caring for patients. Simulation is evolving as an excellent tool to help provide some insight into one's performance competence. In other words, it gives us another vantage point for understanding whether one can actually practice, and put to good use, the intellectual capital that resides in the brain and was acquired through various methods of education as well as experience.

Having been an educator for well over 25 years, I have conducted my share of assessments. Admittedly, most of them have been cognitive tests, i.e., written tests performing some sort of knowledge-based assessment. In fact, I bet most people reading this have developed some sort of written test that they have given to students at some point in their career as an educator. We think nothing of creating a written test, handing it out to a room full of students, and assessing them on their ability to pass by whatever bar we have set as a passing score. Typically a written test has many, many items, to account for variability in the testing process and in the interpretation of individual questions, and to ensure that not knowing a single fact cannot, on its own, determine a pass or a fail.

Depending on the stakes of the exam, we will apply more or less rigor in trying to statistically validate each item, or the test overall. Over time we become confident in our written testing instruments and use them over and over again. Eventually, we develop the confidence to say someone actually passes or fails. This can be high-stakes, such as passing or failing a course, or passing or failing an examination of competence that may lead to certification.
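To make the idea of "statistically validating each item" concrete, here is a minimal sketch of classical item analysis. The statistics shown (item difficulty, point-biserial discrimination, and Cronbach's alpha) are standard psychometric measures; the scores and the Python implementation below are illustrative assumptions, not a tool described in this post.

```python
# A minimal sketch of classical item analysis, assuming binary-scored
# items (1 = correct, 0 = incorrect). All data here is invented.
from statistics import mean, pstdev

def item_difficulty(item_scores):
    """Proportion of examinees getting the item right (the p-value)."""
    return mean(item_scores)

def point_biserial(item_scores, total_scores):
    """Correlation between one item and the total score (discrimination)."""
    m = mean(total_scores)
    sd = pstdev(total_scores)
    p = mean(item_scores)
    if sd == 0 or p in (0, 1):
        return 0.0  # undefined for constant items or constant totals
    m1 = mean(t for i, t in zip(item_scores, total_scores) if i == 1)
    return (m1 - m) / sd * (p / (1 - p)) ** 0.5

def cronbach_alpha(score_matrix):
    """Internal-consistency reliability; rows = examinees, cols = items."""
    k = len(score_matrix[0])
    item_vars = [pstdev(col) ** 2 for col in zip(*score_matrix)]
    total_var = pstdev([sum(row) for row in score_matrix]) ** 2
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Example: 5 examinees, 4 items
scores = [[1, 1, 0, 1],
          [1, 0, 0, 1],
          [0, 1, 1, 1],
          [1, 1, 1, 1],
          [0, 0, 0, 1]]
totals = [sum(row) for row in scores]
for j, col in enumerate(zip(*scores)):
    print(f"item {j}: difficulty={item_difficulty(col):.2f}, "
          f"discrimination={point_biserial(col, totals):.2f}")
print(f"alpha={cronbach_alpha(scores):.2f}")
```

The same item-level and whole-test statistics apply whether the "items" are bubble-sheet questions or observed behaviors scored during a simulation, which is part of the argument of this post.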

I believe some aspects of simulation have evolved to be just as good as that bubble sheet for assessment. In fact, I suppose that's not even a fair comparison, because the bubble sheet assesses something different than the simulation does. Of course there may be overlap in what we are assessing with these two instruments, but they are different tools indeed. The bubble sheet is best suited to assessing cognitive knowledge, while the simulation can be engineered to assess the application of knowledge in practice.

There is a reluctance amongst many to engage in simulation assessment activities, and I am not sure why. If we think about the analogy to the written test I described above, we feel really comfortable going into a room by ourselves, or with several of our colleagues, and creating a multiple-choice written test. Why is it that we cannot go into a room with a number of our colleagues, develop some sort of assessment for the simulation environment, and capitalize on the ability to observe one's application of knowledge as described above? Continuing with the same analogy, we can then collect data over time and examine the individual items that were assessed, or the test as a whole, for validity.
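For illustration only, here is a minimal sketch of what such a colleague-built instrument might look like: a weighted, checklist-style rating form for a simulated procedure. Every item, weight, and threshold below is invented for the example; a real instrument would be drafted and validated by content experts over time, exactly as described above.

```python
# A hypothetical checklist-style rating form for a simulated procedure.
# All items, weights, and the pass threshold are invented assumptions.
CHECKLIST = [
    ("verifies patient identity",     1, False),
    ("performs hand hygiene",         1, False),
    ("maintains sterile technique",   2, True),   # critical item
    ("confirms placement before use", 2, True),   # critical item
    ("communicates plan to team",     1, False),
]
PASS_THRESHOLD = 0.80  # fraction of weighted points required to pass

def score(observed: dict) -> tuple:
    """Score one observed performance.

    observed maps item text -> True/False as marked by a trained rater.
    Returns (fraction of weighted points earned, pass/fail).
    A missed critical item fails the attempt regardless of total score.
    """
    earned = total = 0
    critical_missed = False
    for item, weight, critical in CHECKLIST:
        done = observed.get(item, False)
        total += weight
        earned += weight if done else 0
        if critical and not done:
            critical_missed = True
    fraction = earned / total
    return fraction, fraction >= PASS_THRESHOLD and not critical_missed

# Example rating of a single simulated attempt
rating = {
    "verifies patient identity": True,
    "performs hand hygiene": True,
    "maintains sterile technique": True,
    "confirms placement before use": False,  # critical miss -> fail
    "communicates plan to team": True,
}
print(score(rating))  # (0.71..., False)
```

A common design choice in such instruments is the "critical item," where missing a single safety-essential behavior fails the attempt regardless of the total score; the sketch includes two as an assumption.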

The origin of this reluctance is complex. In fact, the whole notion of performance assessment of humans is quite complex. However, we should not run from things because they're complex. I think another part of the reluctance is that it challenges our own comfort zone. Arguably it is more difficult to give direct feedback, letting someone know face-to-face that they did something incorrect or unsafe, than to let them walk to the wall where grades are posted and see that they failed the written exam. Another piece is that it takes a lot of work to design such assessment tools. But again, the mission is so critically important that we can't run from the hard work.

Another part of the reluctance comes from the fact that very few of us were actually trained in the creation of human performance examinations. However, if you think that through, very few of us were formally trained in the science of creating written examinations either. Yet the comfort factor with the latter allows it to happen much more routinely.

Another way to think about it: if you are a clinician supervising a trainee performing a potentially dangerous procedure on a patient, it is your job to give them feedback that will allow them to improve in the future. Likely some of that conversation will reinforce the things that were done appropriately, while other parts will require them to make changes for the things they did improperly. So in essence, you have created an assessment!

Some will read this and argue that assessment violates the principles of a safe learning environment. I fall back on the topic of one of my previous posts and say that we need to concentrate on Patient Centered Simulation. Likely those who argue that point are not on the front lines of healthcare and do not appreciate the need for near-perfect performance in everything we do as we do things for and to patients. Nothing is quite as disconcerting as seeing a trainee make a mistake on an actual patient that has a high potential to cause harm, or actually does. Do we then turn to the patient or the family and say, "I'm sorry, we had the ability to assess their competency in the placement of a central line, but we thought we shouldn't do that because it wouldn't represent a safe learning environment"? Seriously? I think not!

Simulation has a largely untapped capacity to assist us in building assessment pathways: to assess competence in newly acquired skills and critical thought processes, to evaluate the maintenance of proficiency and the currency of knowledge, and to measure the application of knowledge over time.

I truly believe that it is the assessment component that will help bring simulation to the next level. It can be an important tool in the migration from time-based objectives to a more rational system of performance-based objectives when considering things such as the acquisition of practice competence, advancement in training, or the measurement of competency in the maintenance of practice over time.
