Tag Archives: simulation

Why Waste Time Learning On Real Patients?

Okay, admit it, this title will be challenging for some, although the converse of the question is what some of the naysayers say about simulation. What’s the real deal? The real deal is that learning from real patients is an invaluable part of the healthcare education experience. At the risk of alienating some, we must think of the real-patient experience as a “tool” in the educational toolbox. But there are many tools in the toolbox!

We must also recognize the value of learning in the simulated environment as an additional tool in the toolbox. When we have many different tools by which to complete a goal, going through the risks and benefits of each will help us make the proper decisions to allow us to proceed with the most efficiency and effectiveness in our educational endeavors.

When I observe learning in the clinical environment, I become easily frustrated seeing colossal amounts of time wasted while waiting for some nugget of education to randomly appear. Paramedic and nursing students in the clinical environment who are changing bed linens over and over again are clearly being used as a service to someone, not functioning in a capacity that enhances their clinical learning. Similarly, interns on a specialty care service who are dictating their 30th discharge summary of the month are probably being used more in a service capacity than in one that enhances their education.

Some of the advantages of simulation include being able to structure the learning environment so that time can be accounted for in a more robust fashion, helping to ensure that valuable learning opportunities are presented, encountered, or participated in. Additionally, the ability to do and re-do exists in the simulated environment, whereas in most cases this is not possible in the actual clinical care environment. This is important for creating and enhancing programs of mastery learning with incorporated deliberate practice, and it applies whether we are talking about individual expertise or groups of people working on collaborative goals in a team training environment. Finally, many simulation program designs provide much closer oversight of what a learner or group of learners is accomplishing than occurs in most clinical learning environments.

Please don’t misunderstand; I am not trying to diminish the value of learning on real patients in the clinical environment. I am merely stating that there are pros and cons, limitations and capabilities, to all of the different modalities of learning as we bring people along the journey of becoming a practicing healthcare professional. It is a complex journey that requires multiple repetitions from different vantage points and perspectives, as well as varied opportunities for learning. Carefully evaluating those opportunities against the resources available in a given program is important to ensure that we continue to improve healthcare education for creating tomorrow’s healthcare providers.

Those charged with creating new curricula, or revamping and revising old ones, would do well to think broadly about the needs of the learner, the level of the learner, and what exposure would create the most efficient and effective learning at that point in time. We need to challenge the status quo so that we can truly move forward in revising healthcare education and continue to allow people to achieve excellence.


Are Routine Pre- and Post-Simulation Knowledge Tests Ethical? Useful? To Whom?

Disclaimer (before you read on): This post is not referring to research projects that have been through an institutional review board or other ethics committee reviews.

What I am actually referring to is the practice of many simulation programs of administering a routine written pre-test followed by a written post-test to attempt to document a change in the learners’ knowledge as a result of participating in the simulation. This is a very common practice among simulation programs. It seems the basis of such testing is to eventually be able to use the anticipated increase in knowledge as a justification for the effectiveness of the simulation-based training.

However, we must stop and wonder: is this ethical? As described in some of my previous posts, I believe there is a contract between the participants of simulation encounters and those who are the purveyors of such learning activities. As part of this contract we agree to utilize the participants’ time in a way that is most advantageous to their educational efforts and helps them become better healthcare providers.

With regard to pre-testing, we could argue from an educational standpoint that we are going to customize the simulation education, tailoring the learning to the needs of the learners as guided by the results of the pre-test; i.e., using the pre-test as a sort of needs analysis. But this argument requires that we actually use the results of said pre-test in this fashion.

The second argument, and one that we employ in several of the programs I have designed, is that we are assessing baseline knowledge to evaluate the effectiveness of pre-course content that participants are expected to complete, or pre-course knowledge they are expected to possess, prior to coming to the simulation center; i.e., a readiness assessment of sorts. In other words, the question being asked is whether this person is cognitively prepared to engage in the simulation endeavors I am about to ask them to participate in.

Finally, another educational argument for pre-testing could be made that we would like to point out to the participants of the simulation areas of opportunity to enhance their learning. We could essentially say that we are helping learners direct where they will pay close attention and focus during the simulation activities or their participation in the program. Again, this is predicated on there being a review of the pre-test answers, and/or at least feedback to the intended participants on the topic areas, questions, or subjects they did not answer successfully.

The post-test argument becomes a bit more difficult from an ethical perspective outside of the aforementioned justification of the simulation-based education. I suppose we could say that we are trying to continue to advise the learner on areas where we believe there is opportunity for improvement, and hopefully inspire self-directed learning.

However, my underlying belief is that if we look at ourselves in the mirror, myself included, we are collecting the data over time so that we can perform some sort of retrospective review and hopefully uncover a significant change in pre-test versus post-test scores that we can use to justify our simulation efforts in whole or in part.
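For what it’s worth, the retrospective review I’m describing usually boils down to a paired comparison of each participant’s pre- and post-test scores. Here is a purely illustrative sketch of that arithmetic; the scores and the `paired_gain` helper are hypothetical, not drawn from any actual program:

```python
from statistics import mean, stdev

def paired_gain(pre, post):
    """Summarize paired pre/post scores: mean gain, Cohen's d, and a paired t statistic."""
    diffs = [b - a for a, b in zip(pre, post)]  # per-participant score change
    n = len(diffs)
    m = mean(diffs)
    sd = stdev(diffs)                  # sample standard deviation of the differences
    d = m / sd                         # Cohen's d for paired samples (effect size)
    t = m / (sd / n ** 0.5)            # paired t statistic, df = n - 1
    return {"mean_gain": m, "cohens_d": d, "t": t, "df": n - 1}

# Hypothetical percent-correct scores for eight participants
pre  = [55, 60, 48, 72, 65, 58, 70, 62]
post = [70, 68, 60, 80, 75, 66, 82, 71]
print(paired_gain(pre, post))
```

Note that even a “significant” t statistic here tells us only that knowledge-test scores moved, which is exactly the point of the critique that follows: it says nothing about whether simulation was the right modality for producing that movement.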

This becomes more and more concerning, if for no other reason than that it can lead to sloppy educational design. What I mean is that if we are able to ADEQUATELY assess the objectives of a simulation program with a given pair of written tests, then we are likely assessing knowledge-domain items, and we always have to question whether simulation is the most efficient and effective modality for that effort. By that logic, maybe every time I give a lecture I should give a pre- and post-test (although this would make the CME industry happy) to determine the usefulness of my education and justify the time of the participants attending the session. Although in this example, if I was lecturing and potentially enhancing knowledge, perhaps one could argue that a written test is the correct tool; the example is intended to point out the impracticality and limited usefulness of such an endeavor.

As we continue to build arguments for the value of simulation and overcome hurdles both apparent and hidden, I think we owe it to ourselves to decide whether such ROUTINE use of pre- and post-testing is significantly beneficial to the participants of our simulation, or whether we are justifying the need to do so on behalf of the simulation entity. We owe it to our participants to ensure that an honest appraisal reflects the former.


Simulation Programs Should Stop Selling Simulation

Whatever do I mean? Many established simulation programs believe that their value lies in creating simulation programs through which people attain knowledge and skills, and/or perfect aspects of what is needed to effectively care for patients. All of that is true, obviously. However, I believe that the true value of many established simulation programs is in the deep educational infrastructure they provide to the institution with which they are affiliated. Whether that expertise is in the project management of educational design, educational design itself, housing the cadre of people who are truly interested in education, or the operational scheduling and support needed to pull off a major program, these examples are the true understated value of many simulation programs.

Simulation programs tend to attract a variety of people who are truly interested in education. While I don’t think that everyone who is passionate about teaching in healthcare needs to be an educational expert, I do believe it is important that the people involved in the development and deployment of innovative education be truly interested in teaching. Many hospitals and universities rely on subject matter experts to conduct their education programs who may or may not have the desire, interest, or capabilities needed for teaching.

Many people who are passionate about teaching in healthcare have a particular topic or two that they like to teach, but lack the skills of critical analysis and the deeper knowledge of educational design principles needed to parse their education efforts into the appropriate methods for maximal efficiency in the uptake of the subject matter. This very factor is likely why we still rely on good old-fashioned lecture as a cornerstone of healthcare education, whether we are evaluating that from the school perspective or the practicing healthcare arena. Not that I believe there is anything wrong with lecture; I just believe it is often overused, often done poorly, and often done in a way that does not encourage active engagement or active learning between the lecturer and the participants.

Simulation programs are often the water cooler in many institutions around which people who are truly interested in, and may have some additional expertise in, education will tend to congregate. The power of this proximity creates an environment rich for brainstorming, enthusiasm for pushing the envelope of capabilities, and continuous challenge to improve the methods by which we undertake healthcare education.

Simulation programs that have curricular development capabilities often have project management expertise as well as operational expertise to create complex educational solutions. This combination of skills can be exceptionally valuable to the development of any innovative education program in healthcare whether or not simulation is part of the equation.

Many times healthcare education endeavors are undertaken by one or two people who quickly become overwhelmed without the supporting infrastructure it takes to put on educational activities of higher complexity than a simple lecture. Oftentimes this supporting technology or set of resources resides inside the walls of “simulation centers” or programs. By not providing access to these para-simulation resources to the rest of the institution, I argue that simulation programs are selling themselves short.

If you consider educational outcomes from a leadership perspective (i.e., CEO, Dean, etc.), they are much less concerned about how the educational endeavor occurred and far more focused on the outcomes. So while there are many topics and situations that are perfect for simulation proper, we all know there is a larger need for educational designs of greater complexity than a lecture that may not involve simulation.

If a given simulation program partners with those trying to create complex educational offerings that don’t directly involve simulation but are good for the mission of the overall institution with which it is aligned, it is likely to endear the program to the senior leadership team, or at least create awareness of the need to continue or expand its support.

If you sit back and think about it, isn’t that an example of great teamwork?


The True Value of Simulation into the Future: Assessment. Still.

I recently had the honor of delivering the keynote address at the Annual Meeting of the Association for Simulated Practice in Healthcare (ASPiH) in Nottingham, England. ASPiH has established itself as the simulation association for the United Kingdom and is certainly one of the premier societies in all of Europe dedicated to simulation. I was asked to talk about incorporating simulation into the assessment of professional practice. The development of the talk gave me a good bit of time to introspect, reflect, and consider many of the possibilities.

One thing that became clear in my mind, though, is a reinforcement of the thought that the true value over time, the one that will provide the necessary return on investment for simulation, is assessment. The ability of simulation to provide opportunities to assess the competency of individuals, whether they are completing undergraduate programs leading to a certificate, degree, or some sort of license in healthcare, or they are practicing professionals on the front line, is critical to the future of healthcare as well as to the simulation community.

The next decade of global healthcare in the developed world will shift to a tremendous focus on improving quality, value, and safety like no other era in the past. Multiple factors are driving this agenda: a demand from the public to improve healthcare, a continually rising expectation of excellence, a realistic need to lower the cost of care, and the growing transparency of quality and safety data, just to name a few. Improving the demonstration of clinical competence among individuals as well as teams is linked to each and every effort to improve care. Yet despite hundreds of years of evolution in the teaching of the healthcare professions, we still have not developed widespread, valid, reliable performance exams that evaluate the application of knowledge.

So why aren’t more people using simulation for assessment? The answer is complicated. I believe part of it is the assess-o-phobia that I have mentioned in a previous blog post. Next, defining measures of clinical performance is, in general, difficult. In my opinion, developing assessment tools for simulation is much harder than any other facet of creating simulation scenarios and associated learning programs. This presents a formidable barrier. Lastly, there is a pervasive discomfort felt by many people associated with creating assessment tools that would assign a “grade” or something similar to a simulation.

It is rather interesting how comfortably we deal out a written examination, oftentimes made up of multiple-choice questions that we have developed either personally or with groups of people, and use it as a knowledge assessment tool. While I’m not disputing the ability of the written test to serve as an assessment of knowledge, the striking thing is the contrasting discomfort with developing such a measurement tool for simulation, or even for actual clinical operations and the care of real patients.

Some people profoundly advocate that simulation should not be used for assessment because they believe it is not an appropriate tool, and others feel that it violates the safe learning environment. I think as we shift to a patient-centric approach to simulation we should be able to reduce this reluctance and allow assessment to move forward. In fact, I always find it interesting to point out to people during debriefing training programs, particularly those who are vocal against concepts of assessment, that when they watch a simulation and then conduct and/or facilitate a debriefing they have actually already performed assessment in their minds. The very items on which they have formed an opinion, or “assessed,” will play a part in the educational strategy that reinforces what participants did well and encourages change in the areas where deficiencies were noted, leading to an effective debriefing and the accomplishment of the learning objectives.

Allowing participants to demonstrate competence could be one of the most important parts of the value equation for simulation. Managers and leaders of healthcare institutions all over the world are grappling with ways to improve quality and significantly improve patient safety. A patient-centric approach to simulation would certainly suggest that as well.

This inevitably will help us make stronger arguments for the case for simulation. At the moment many people try to sell the idea of simulation to their leadership, which creates thoughts and visions of expensive investments in technology and adds to the daily pains of leaders. If we shift the focus so that our sales pitch pivots to selling the concepts of excellence, improved patient care, and safer patient care, it will far better align with the pain points of those running healthcare systems. That becomes harder to deny!
