Disclaimer (before you read on): This post is not referring to research projects that have been through an institutional review board or other ethics committee reviews.
What I am actually referring to is the very common practice among simulation programs of administering a routine written pretest followed by a written posttest to attempt to document a change in the learner’s knowledge as a result of participating in the simulation. The basis of such testing seems to be the eventual use of the anticipated increase in knowledge as justification for the effectiveness of the simulation-based training.
However, we must stop and wonder: is this ethical? As described in some of my previous posts, I believe a contract exists between the participants of simulation encounters and those who are the purveyors of such learning activities. As part of this contract, we agree to use the participants’ time in the way that is most advantageous to their educational efforts and that helps them become better healthcare providers.
With regard to pretesting, we could argue from an educational standpoint that we are customizing the simulation education, tailoring the learning to the needs of the learners as guided by the results of the pretest; i.e., using the pretest as a sort of needs analysis. But this argument requires that we actually use the results of said pretest in this fashion.
The second argument, and one that we employ in several of the programs I have designed, is that we are assessing baseline knowledge to evaluate the effectiveness of pre-course content, or the pre-course knowledge that participants are expected to either complete or possess prior to coming to the simulation center; i.e., a readiness assessment of sorts. In other words, the question is: is this person cognitively prepared to engage in the simulation endeavors I am about to ask them to participate in?
Finally, another educational argument for pretesting could be made: that we would like to point out to the simulation participants areas of opportunity to enhance their learning. We could essentially say that we are helping learners direct where they will pay close attention and focus during the simulation activities or their participation in the program. Again, this is predicated on there being a review of the pretest answers, or at least feedback to the intended participants on the topic areas, questions, or subjects they did not answer successfully.
The posttest argument becomes a bit more difficult from an ethical perspective, outside of the aforementioned justification of the simulation-based education. I suppose we could say that we are trying to continue advising the learner on areas where we believe there is opportunity for improvement, and hopefully inspire self-directed learning.
However, my underlying belief is that if we look at ourselves in the mirror, myself included, we are trying to collect data over time so that we can perform some sort of retrospective review and hopefully uncover a significant change from pretest to posttest scores that we can use to justify our simulation efforts, in whole or in part.
This becomes more and more concerning, if for no other reason than it can lead to sloppy educational design. What I mean is that if we are able to ADEQUATELY assess the objectives of a simulation program with a given pair of written tests, we are likely assessing knowledge-domain items, and we always have to question whether simulation is the most efficient and effective modality for that effort. If this is the case, then perhaps every time I give a lecture I should give a pretest and posttest (although this would make the CME industry happy) to determine the usefulness of my teaching and justify the time of the participants attending the session. Although, in this example, if I were lecturing and potentially enhancing knowledge, one could perhaps argue that a written test is the correct tool. However, the example is intended to point out the impracticality and limited usefulness of such an endeavor.
As we continue to attempt to make the case for the value of simulation and to overcome the hurdles both apparent and hidden, I think we owe it to ourselves to decide whether such ROUTINE use of pre- and post-testing is significantly beneficial to the participants of our simulations, or whether we are justifying the need to do so on behalf of the simulation entity. We owe it to our participants to ensure that, in an honest appraisal, the answer reflects the former.
I think that we have to measure performance…the “shows how” of Miller’s Pyramid. Pre- and posttests really only measure “knows” or “knows how.” You are absolutely right. I’m thinking we should measure objectives that relate to performance in repeated simulations: an initial sim would measure baseline performance, and after several learning experiences, we would measure the same performance again to see improvement. Pre-assessment before simulation events could measure confidence and self-rated performance, don’t you think?