Shortcomings of Evidence Dissemination
As we welcome 2022, the healthcare landscape is replete with organizations whose main purpose is to disseminate empirically supported evidence to a wide range of knowledge users. To be sure, getting evidence to the right people, at the right time, and in the right format remains an important goal, and we are still building capacity on this front. People cannot make informed decisions in a vacuum, and as the world contends with rampant misinformation compounded by the pandemic, targeted and effective dissemination of evidence-based knowledge remains imperative for achieving optimal health and well-being.
Drivers for improved dissemination include voluntary health sector and philanthropic institutions and health funders who have backed the creation of intermediary and dissemination-focused organizations rooted in specific domains (e.g., pain, mental health, and injury prevention, to name but a few). These endeavours have gone a long way toward improving access to evidence-based knowledge. Evidence dissemination has been buoyed by targeted funding (e.g., the NCE-KM program, now retired), advances in social media and e-health technologies, and capacity building for a multi-skilled knowledge translation practitioner (KTP) workforce that brings the evidence to light. Evidence about a broad range of health topics is now widely accessible on the internet, although the problem of evidence quality and source credibility remains ubiquitous.
Shortcoming One
For some time now, I have observed two shortcomings in the evidence dissemination space. The first is that despite the volume of evidence dissemination, there appears to be very little meaningful evaluation of its outcomes and impacts. If intermediaries are indeed evaluating their dissemination efforts, they’re not sharing what they’re learning. The field seems content with disseminating well-crafted soundbites to targeted audiences, but it’s hard to discern to what effect.
Many intermediaries collect indicators of access or spread because, frankly, this is automated and easy to do. But I surmise, based on years of interacting with KT workshop trainees, that the impact of these efforts is not being optimized due to a lack of evaluation. Content about dissemination evaluation is something our workshop participants can’t get enough of. What appears to be the norm is that the effort is abandoned midway: once a dissemination output is launched, the task is taken to be complete. The result is a missed opportunity to learn whether the dissemination outputs are meaningful and impactful relative to their intended purpose.
The analogy that comes to mind is the dandelion. Some efforts to share evidence are characterized as passive diffusion: the dandelion seed flies with the wind and lands where it may, whether on sterile or fertile soil, to eke out an existence or not. Diffused evidence may travel far, but the effort is random, untargeted, and unintentional, and thus impacts are rare. We might as well not have dispersed the knowledge at all.
In the more recent era of evidence dissemination, we have taken care to plant the seeds of knowledge in a more systematic and intentional manner, row upon row, ripe for the picking. Yet, more often than not, we fail to return to the crop to ascertain how it grew, who picked it, and what they did with it. Did it grow as expected (the product), was it seen and by whom (access, reach), and, most importantly, how was it experienced (benefits for the knowledge user)?
The result is a somewhat routine productivity cycle that overemphasizes the volume of products (dissemination outputs) that are loosely tethered to their purpose and intended goal, and that provides little evidence of goal attainment and subsequent benefits. We are missing the ‘so what’ of the effort, often focusing solely on the production line without assessing the impacts of our dissemination products. This is a huge missed opportunity that leaves few insights as to whether the investment is yielding impact. Evaluation insights inform future dissemination efforts by highlighting strategies that were effective for specific audiences and aims. Evaluation also provides those all-important impact stories. Failure to evaluate leaves us shooting in the dark.
What to do?
Comprehensive frameworks for evidence dissemination lay out intentional, explicit, and systematic steps that, if followed, link each dissemination output to its audience, main message, and purpose, to appropriate dissemination strategies, and to an evaluation of the dissemination effort relative to its purpose: where the message landed, who saw it, what they thought of it, and how they benefitted. Sadly, the evaluation component is often unplanned (no identified purpose for the communication and no subsequent indicator of whether it was achieved), aborted, or captured solely as reach using web analytics.
It is common to share evidence-based knowledge on the web, as this offers greater visibility, interactivity, and access for a wider audience. Web analytics can tell you how many users are on your site right now, which cities and countries they are visiting from, what devices they use, which channels drive the most traffic to your site, and how users navigate your site and content. What web analytics can’t tell you is how users experienced and subsequently benefitted from the content, and isn’t this also what we want to know?
A couple of years ago, the editorial advisory board at AboutKidsHealth (AKH) gathered to review annual analytics. AKH is a web-based health education resource for children, youth and caregivers that is approved by healthcare providers at The Hospital for Sick Children. The resource aims to empower families to partner in their own health care by equipping them with reliable, evidence-based health information. It does this by making complex health information easy for families to understand and immediately available whenever and wherever they need it, in Canada or around the world.
I was new to the editorial board and wondered aloud whether we couldn’t do more to explore how users experienced and benefitted from the content. It was a lightbulb moment that subsequently led to connecting with Dr Pierre Pluye at McGill University, who had developed and piloted a survey methodology that captures this type of information about web content. We subsequently adopted the Information Assessment Method as a pop-up feedback survey on AKH content pages and are now tracking users’ impressions of content relevance and comprehension, usefulness and intended use of the information, and anticipated benefits. These data will inform content revisions and new content, and help us better understand user impacts. Pretty neat.
In short, dissemination organizations need to evaluate their efforts more, and they need more opportunities to share what they learn with the field. Some conferences offer appropriate venues for this interchange, but these are few and far between. Years ago, I proposed a practice-based magazine (not a journal) where KTPs and their organizations could share their dissemination and implementation practice-based evaluation work, but I didn’t get any traction. I’m still on the lookout for opportunities to realize this vision.
Shortcoming Two
Most intermediary organizations focus solely on dissemination and do not venture into the implementation space. Dissemination is most definitely an important effort, and for some types of evidence use (conceptual and symbolic[1]), this is sufficient if done well. Of late, however, many of my workshop participants have questioned how they can support the implementation of empirically supported innovations when they are not mandated, equipped, or resourced to do so. What, they ask, do we do when we want to go beyond dissemination to support how evidence is applied in practice settings or when it has the potential to inform policy (instrumental use[1])? This seems to be beyond the mandate of many dissemination-focused organizations and points to a need to build knowledge and capacity in implementation science (IS) within the bounds of organizational mandates, resources, and workforce capacity.
What to do?
The first solution to this predicament is to ramp up KTPs’ familiarity with implementation science and practice: what it is, what it entails, and how it is related to but different from dissemination[2]. KTPs have an opportunity to disseminate both the evidence-based intervention and the evidence-based guidance on how it can be taken up, yet this rarely happens. Without implementation guidance, we’re essentially disseminating interventions with the intention that they will be taken up but without the necessary supports to facilitate this.
This is akin to IKEA sending you home with the flat box of bits and pieces only to discover there are no instructions on how to put it together. Exasperation ensues and the box then sits in the corner of a room, untouched. And so goes the outcome potential for reams of evidence-based interventions left languishing because no one thought to include instructions for use. Tsk, tsk.
The intention here is not necessarily to transform KTPs into implementation facilitators[3], though this might be feasible for some organizations that are prepared to build capacity in this area. Herein lies solution two. Dissemination organizations could broaden their mandate to include implementation facilitation. Doing so would be a significant undertaking in workforce development and would take time to do well, but kudos to those who can steer their organization in this direction, for this is what we need. For some types of evidence, namely that which is ready for use, dissemination is only part of the journey. We can’t arrive at our destination of improved health and well-being without explicitly attending to implementation.
The third solution is to build IS capacity in health research. Researchers can and should do more to consider how their innovations will be used in practice and to incorporate new research designs (hybrid effectiveness-implementation studies), equitable research and engagement practices, and practical implementation facilitation whilst establishing effectiveness, rather than after years of randomized trials. Building IS knowledge among health researchers will yield research innovations that are more amenable to application because implementation considerations will be embedded alongside the intervention.
In other words, innovations are only as effective as their complementary implementation guidance, so both must travel the dissemination pathway hand in hand. Effective interventions that are not effectively implemented and optimally used will fail to reap value from existing investments (aka research waste). Implementation considerations are fundamental to intervention development. Leave them too late, and implementation will remain an afterthought that has you playing catch-up for several more years, further delaying optimal outcomes for the population.
So, there you have it, my thoughts on the shortcomings of evidence dissemination occurring across the world. These are modifiable, and I believe making the recommended shifts will move the field ahead and improve the perceived value of dissemination work and the organizations dedicated to it.
[1] J. M. Beyer (1997) summarizes the three types of research use in the following way: “Research on the utilization of research findings has revealed three types of use: instrumental, conceptual, and symbolic. Instrumental use involves applying research results in specific, direct ways. Conceptual use involves using research results for general enlightenment; results influence actions but more indirectly and less specifically than in instrumental use. Symbolic use involves using research results to legitimate and sustain predetermined positions.” (p. 17). Quoted from Amara N, Ouimet M, Landry R. New Evidence on Instrumental, Conceptual, and Symbolic Utilization of University Research in Government Agencies. Science Communication. 2004;26(1):75-106. doi:10.1177/1075547004267491
[2] Dissemination refers to the processes and strategies that make scientific findings accessible and understandable to the knowledge user. Implementation is the use of implementation processes and strategies that promote the adoption, integration, and scale-up of evidence-based interventions and practice changes within specific settings. The two are related but distinct.
[3] Implementation facilitators are specifically trained to apply implementation science knowledge and interventions to enable others to understand what they need to change, to plan and execute changes, and to address barriers to change efforts. They work with implementation teams within implementing organizations to select innovations, adapt them to the local context, and steer implementation processes, strategies, and evaluation to support implementation. Adapted from Ritchie MJ, Parker LE, Edlund CN, Kirchner JE. BMC Health Serv Res. 2017;17:294. doi:10.1186/s12913-017-2217-0