As we welcome 2022, the healthcare landscape is replete with organizations whose main purpose is to disseminate empirically supported evidence to a wide range of knowledge users. To be sure, getting evidence to the right people, at the right time, and in the right format remains an important goal, and we are still building capacity on this front. People cannot make informed decisions in a vacuum, and as the world contends with rampant misinformation compounded by the pandemic, targeted and effective dissemination of evidence-based knowledge remains imperative for achieving optimal health and well-being.
Drivers for improved dissemination include voluntary health sector and philanthropic institutions and health funders that have backed the creation of intermediary and dissemination-focused organizations rooted in specific domains (e.g., see pain, mental health, and injury prevention, to name but a few). These endeavours have gone a long way toward improving access to evidence-based knowledge. Evidence dissemination has also been buoyed by targeted funding (i.e., NCE-KM, now retired), advances in social media and e-health technologies, and capacity building for a multi-skilled knowledge translation practitioner (KTP) workforce that brings the evidence to light. Evidence about a broad range of health topics is now widely accessible on the internet, although the problem of evidence quality and source credibility remains ubiquitous.

Shortcoming One

For some time now, I have observed two shortcomings in the evidence dissemination space. The first is that despite the volume of evidence dissemination, there appears to be very little meaningful evaluation of its outcomes and impacts. If intermediaries are indeed evaluating their dissemination efforts, they're not sharing what they're learning. The field seems content with disseminating well-crafted soundbites to targeted audiences, but it's hard to discern to what effect.

Many intermediaries collect indicators of access or spread because, frankly, this is automated and easy to do. But I surmise, based on years of interacting with KT workshop trainees, that the impact of these efforts is not being optimized due to a lack of evaluation. Content about dissemination evaluation is something our workshop participants can't get enough of. What appears to be the norm is that the effort is abandoned midway, after a dissemination output is launched and the task is taken to be complete. The result is a missed opportunity to learn whether the dissemination outputs are meaningful and impactful relative to their intended purpose.