Background. The Agency for Healthcare Research and Quality (AHRQ) through its Effective Health Care (EHC) Program partners with networks of researchers and clinical teams across North America, using input from stakeholders throughout the process of comparative effectiveness research, translation, dissemination, and implementation of research findings. The Evidence-based Practice Centers (EPCs) perform in-depth reviews of existing evidence. An important part of these reviews is not only to synthesize the evidence, but also to identify the gaps in evidence that limit the ability to answer the systematic review questions. AHRQ supports EPCs to work with various stakeholders to further develop and prioritize the future research needed by decisionmakers. AHRQ has commissioned a series of methods papers to inform this activity. Objective. Clearly defined criteria are integral to the future research needs (FRN) prioritization process. The objective of this paper is to propose preliminary criteria and a model worksheet that EPCs and stakeholders could use when identifying, developing, and prioritizing FRNs. Methods/Approach. The EHC Program topic selection criteria were used as a starting point. The experiences and reports of eight EPCs that conducted pilot projects for FRN prioritization were then utilized to refine the criteria. A draft proposal for FRN prioritization criteria and methodology was developed and circulated to the eight EPCs; feedback further informed a series of iterations, leading to this document. Results. The 18 EHC Program topic selection criteria were modified by the eight EPCs as part of their FRN pilot projects. Criteria that did not apply to future research needs were dropped. Criteria that were already met by default, due to requirements for the selection of the topic for the comparative effectiveness reviews and systematic reviews, were set aside.
The remaining criteria were separated into two domains: potential value and probability of success (feasibility, likelihood, capacity). The process for FRN projects was refined. The potential value criteria would be utilized for stakeholder prioritization of FRNs. The probability of success criteria would be applied after the priority FRNs underwent study design consideration by the EPC. EPCs could work with stakeholders to prioritize research gaps that are not being or have not been addressed but are of high potential value. After identifying these high-priority research needs, the EPC would consider the feasibility and capacity criteria when developing potential study designs.
A Framework to Facilitate the Use of Systematic Reviews and Meta-analyses in the Design of Primary Research Studies
Author:
Thompson, M., Tiwari, A., Fu, R., Moe, E. and Buckley, D. I.
Objectives. Systematic reviews are currently used by only a minority of researchers to inform the design of research studies. This may lead to inefficient and potentially wasteful research. We aimed to develop a framework which clinical researchers can apply to existing systematic reviews in order to effectively inform the design of proposed new clinical research studies. Data Sources. Published frameworks or models designed to use results of systematic reviews or meta-analyses in new research study design. Review Methods. A multiphase iterative process was used to develop the framework. Phase 1 involved a focused literature search to identify existing frameworks and processes that have been proposed as methods to identify research gaps from systematic reviews. In phase 2, we convened a multidisciplinary group with varied expertise to develop a stepwise framework. In phase 3, we identified two systematic reviews and applied this framework to their results. Phase 4 invited external opinions from additional experts to further refine the framework. Results. We developed a four-step framework designed to be usable by primary researchers: Step 1 involves clearly laying out the crucial design elements of the proposed study using PICOTS (populations, interventions, comparators, outcomes, timing, and setting) elements. Step 2 provides a simple method to identify an existing systematic review which is current, valid, and relevant enough to the proposed research study to inform its design. In Step 3, the details of the systematic review are examined to determine the extent to which it has already addressed the questions proposed by the new study, and the PICOTS elements of the primary studies included in the review are used to modify the design of the proposed study. Finally, Step 4 establishes the need (or otherwise) for the proposed study, and prioritizes modifications to the research design. Conclusions.
The four-step framework proposes a practical method which can be used by clinical researchers who are not experts in systematic reviews to determine whether further research studies are needed, and suggests ways that the primary literature identified by the systematic review can be used to modify the design of further research studies. Further research is needed to determine how useful and practical this proposed framework is for researchers, and to measure its value in modifying research designs and optimizing research efficiency.
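Steps 1 and 3 of the framework amount to laying out a proposed study's PICOTS elements and comparing them against the elements already covered by an existing systematic review. A minimal sketch of that comparison is below; all study details (population, drug, and outcome names) are hypothetical, invented only to illustrate the mechanics.

```python
# Sketch of PICOTS comparison: which proposed design elements does an
# existing systematic review NOT already cover? All values are hypothetical.
from dataclasses import dataclass, field

PICOTS_FIELDS = ("populations", "interventions", "comparators",
                 "outcomes", "timing", "setting")

@dataclass
class Picots:
    populations: set = field(default_factory=set)
    interventions: set = field(default_factory=set)
    comparators: set = field(default_factory=set)
    outcomes: set = field(default_factory=set)
    timing: set = field(default_factory=set)
    setting: set = field(default_factory=set)

def uncovered_elements(proposed: Picots, review: Picots) -> dict:
    """For each PICOTS field, list proposed elements the review did not cover."""
    return {f: getattr(proposed, f) - getattr(review, f) for f in PICOTS_FIELDS}

proposed = Picots(populations={"adults", "older adults"},
                  interventions={"drug A"},
                  comparators={"placebo"},
                  outcomes={"mortality", "quality of life"},
                  timing={"12 months"},
                  setting={"primary care"})
review = Picots(populations={"adults"},
                interventions={"drug A"},
                comparators={"placebo"},
                outcomes={"mortality"},
                timing={"6 months"},
                setting={"primary care"})

gaps = uncovered_elements(proposed, review)
print({f: e for f, e in gaps.items() if e})
```

In this toy case the review already covers the intervention, comparator, and setting, so the remaining gaps (an older-adult population, a quality-of-life outcome, longer follow-up) are the candidates for Step 4's prioritized design modifications.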
Stakeholder Involvement in Improving Comparative Effectiveness Reviews: AHRQ and the Effective Health Care Program
Author:
Balshem, H., Curtis, P., Joplin, L., Justman, R. A. and Rosenberg, A. B.
The Agency for Healthcare Research and Quality (AHRQ) Effective Health Care (EHC) Program has noted the challenge of decisionmaking when evidence of safety and effectiveness is in dispute and, along with others, has identified the importance of values, preferences, and other contextual factors as part of the decision-making process. Stakeholders have played a significant role in identifying ways to make comparative effectiveness reviews more useful to decisionmakers by providing input on the needs of the diverse audience of stakeholders when evidence of safety and effectiveness is weak or uncertain. This paper discusses the valuable input that members of the EHC Program Product Development Workgroup provided regarding several key programmatic and content areas, including: report enhancements designed to support decision making when evidence of safety and effectiveness is limited, weak, or conflicting; improvements to the readability and accessibility of program reports; and enhancements to the future research sections of reports.
Defining an Optimal Format for Presenting Research Needs
Author:
Trikalinos, T. A., Dahabreh, I. J., Lee, J. and Moorthy, D.
Systematic reviews and other secondary research reports that are based on data from multiple sources, such as decision or cost-effectiveness analyses, often conclude by noting gaps in the available evidence and make recommendations for future research. Potential users of these recommendations include policy makers and funders, as well as healthcare researchers. The purpose of this project is to determine an optimal format for presenting a new type of product of the Evidence-based Practice Center (EPC) Program, the Future Research Needs documents. In particular, we address the following questions: What level of specificity is needed by various funders or researchers for a research needs document to be useful? How can one categorize and present research needs? What are the specific barriers to making a research needs document useful to researchers and funders? To answer these questions, we performed an empirical assessment of the literature to understand how future research needs have been presented in the published literature, and sought feedback from healthcare researchers, research funders, or payers in the form of open-ended interviews. Based on the results of the empirical assessment and the qualitative interviews, we outline preliminary recommendations. Future research needs documents for the EPC program should provide a succinct yet adequate description of methods and results, following guidelines for reporting for qualitative research and modeling, as applicable. It is important to justify the selection of the stakeholders who participate in identifying or prioritizing research needs, and to be clear on their degree of engagement. It may be useful to report results of future research needs assessments at two levels of detail: the more abstract level would mention general areas of future research without details on potential research designs or specific details on, e.g., populations, interventions, and outcomes, which could be elaborated in the second level.
It may be preferable to avoid explicit prioritization of research needs when there are no clear differences in the perceived strength of alternative recommendations. Overall, future research needs recommendations are projections and therefore should not be prescriptive.
AHRQ series paper 3: identifying, selecting, and refining topics for comparative effectiveness systematic reviews: AHRQ and the effective health-care program
Author:
Whitlock, E. P., Lopez, S. A., Chang, S., Helfand, M., Eder, M. and Floyd, N.
OBJECTIVE: This article discusses the identification, selection, and refinement of topics for comparative effectiveness systematic reviews within the Agency for Healthcare Research and Quality's Effective Health Care (EHC) program. STUDY DESIGN AND SETTING: The EHC program seeks to align its research topic selection with the overall goals of the program, impartially and consistently apply predefined criteria to potential topics, involve stakeholders to identify high-priority topics, be transparent and accountable, and continually evaluate and improve processes. RESULTS: A topic prioritization group representing stakeholder and scientific perspectives evaluates topic nominations that fit within the EHC program (are "appropriate") to determine how "important" topics are as considered against seven criteria. The group then judges whether a new comparative effectiveness systematic review would be a duplication of existing research syntheses, and if not duplicative, if there is adequate type and volume of research to conduct a new systematic review. Finally, the group considers the "potential value and impact" of a comparative effectiveness systematic review. CONCLUSION: As the EHC program develops, ongoing challenges include ensuring the program addresses truly unmet needs for synthesized research because national and international efforts in this arena are uncoordinated, as well as engaging a range of stakeholders in program decisions while also achieving efficiency and timeliness.
Identifying, Selecting, and Refining Topics
Author:
Whitlock, E. P., Lopez, S. A., Chang, S., Helfand, M., Eder, M. and Floyd, N.
Year:
2009. Source: Methods Guide for Effectiveness and Comparative Effectiveness Reviews.
Key Points
AHRQ's Effective Health Care (EHC) Program seeks to:
- Align its research topic selection with the overall goals of the program.
- Impartially and consistently apply predefined criteria to potential topics.
- Involve stakeholders to identify high-priority topics.
- Be transparent and accountable.
- Continually evaluate and improve processes.
A topic prioritization group representing stakeholder and scientific perspectives evaluates topic nominations for:
- Appropriateness (fit within the EHC Program).
- Importance.
- Potential for duplication of existing research.
- Feasibility (adequate type and volume of research for a new comparative effectiveness systematic review).
- Potential value and impact of a comparative effectiveness systematic review.
As the EHC Program develops, ongoing challenges include:
- Ensuring the program addresses truly unmet needs for synthesized research, since national and international efforts in this arena are uncoordinated.
- Engaging a range of stakeholders in program decisions while also achieving efficiency and timeliness.
Introduction
Globally, people are struggling with the reality of limited resources to address the breadth of health and health care needs. Evidence has been recognized as the "new anchor for medical decisions,"1 and many consider systematic reviews to be the best source of information for making clinical and health policy decisions.2 These research products rigorously summarize existing research studies so that health and health care decisions by practitioners, policymakers, and patients are more evidence based. Yet, dollars for research—whether for systematic reviews, trials, or observational studies—are constrained, and are likely to be constrained in the future. Effective prioritization is clearly necessary in order to identify the most important topics for synthesized research investment that may help the U.S. health care system realize powerful and meaningful improvements in health status.
This paper discusses the identification, selection, and refinement of topics for comparative effectiveness systematic reviews within the Effective Health Care (EHC) Program of the Agency for Healthcare Research and Quality (AHRQ), which has been described in more detail elsewhere.3 In 2003, the U.S. Congress authorized AHRQ's Effective Health Care Program to conduct and support research on the outcomes, comparative clinical effectiveness, and appropriateness of pharmaceuticals, devices, and health care services. This program utilizes the AHRQ Evidence-based Practice Center (EPC) Program, with 14 designated centers throughout North America that conduct comparative effectiveness systematic reviews, among other research products of the program. AHRQ has designated a Scientific Resource Center (SRC), currently housed at the Oregon EPC, to support the EHC Program as a whole. The SRC has specific responsibilities, including assisting AHRQ with all aspects of research topic development (Figure 1), providing scientific and technical support for systematic reviews and outcomes research, and collaborating with EHC stakeholder and program partners. It is not a simple process to select and develop good topics for research. Researchers' success depends in large part on their ability to identify meaningful questions, while funding agencies continually seek to maximize the return on their investment by funding research on important, answerable questions relevant to significant portions of priority populations. Some have criticized how well funders have actually achieved these results.4 However, there is little guidance for successfully developing a research program that generates the type of evidence necessary to improve the public's health.
CADTH
Methodology for the Development of a Canadian National EMS Research Agenda
Author:
Jensen, J. L., Blanchard, I. E., Bigham, B. L., Dainty, K. N., Socha, D., Carter, A., Brown, L. H., Craig, A. M., Travers, A. H., Brown, R., Cain, E. and Morrison, L. J.
BACKGROUND: Many health care disciplines use evidence-based decision making to improve patient care and system performance. While the amount and quality of emergency medical services (EMS) research in Canada has increased over the past two decades, there has not been a unified national plan to enable research, ensure efficient use of research resources, guide funding decisions and build capacity in EMS research. Other countries have used research agendas to identify barriers and opportunities in EMS research and define national research priorities. The objective of this project is to develop a national EMS research agenda for Canada that will: 1) explore what barriers to EMS research currently exist, 2) identify current strengths and opportunities that may be of benefit to advancing EMS research, 3) make recommendations to overcome barriers and capitalize on opportunities, and 4) identify national EMS research priorities. METHODS/DESIGN: Paramedics, educators, EMS managers, medical directors, researchers and other key stakeholders from across Canada will be purposefully recruited to participate in this mixed methods study, which consists of three phases: 1) qualitative interviews with a selection of the study participants, who will be asked about their experience and opinions about the four study objectives, 2) a facilitated roundtable discussion, in which all participants will explore and discuss the study objectives, and 3) an online Delphi consensus survey, in which all participants will be asked to score the importance of each topic discovered during the interviews and roundtable as they relate to the study objectives. Results will be analyzed to determine the level of consensus achieved for each topic. DISCUSSION: A mixed methods approach will be used to address the four study objectives. 
We anticipate that the keys to success will be: 1) ensuring a representative sample of EMS stakeholders, 2) fostering an open and collaborative roundtable discussion, and 3) adhering to a predefined approach to measure consensus on each topic. Steps have been taken in the methodology to address each of these a priori concerns.
Priority setting for health technology assessment at CADTH
OBJECTIVES: The aim of this study was to describe a current practical approach to priority setting for health technology assessment (HTA) research that involves multi-criteria decision analysis and a deliberative process. METHODS: Criteria related to HTA prioritization were identified and grouped through a systematic review and consultation with a selection committee. Criteria were scored through a pair-wise comparison approach. Criteria were pruned based on the average weights obtained from consistent (consistency index < 0.2) responders and consensus. HTA proposals are ranked based on available information and a weighted criteria score. The rank, along with additional contextual information and discussion among committee members, is used to achieve consensus on HTA research priorities. RESULTS: Six of eleven criteria represented > 75 percent of the weight behind committee member decisions to conduct an HTA. These criteria were disease burden, clinical impact, alternatives, budget impact, economic impact, and available evidence. Since May 2006, committees have considered 102 proposals at sixteen biannual in-person advisory committee meetings. These meetings selected twenty-nine research priorities for the HTA program. CONCLUSIONS: The approach works well and was easy to implement. Feedback from committee members has been positive. This approach may assist HTA and other research agencies in better priority setting by informing the selection of the most important and policy-relevant topics in the presence of a wide variety of research proposals. This may in turn lead to efficiently allocating resources available for HTA research.
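The core quantitative step described above, ranking proposals by a weighted criteria score, can be sketched as follows. The six criterion names come from the abstract, but the numeric weights and proposal scores are invented for illustration; in the actual process the weights would come from the pair-wise comparison exercise, not from this example.

```python
# Hypothetical sketch of weighted-criteria ranking for HTA proposals.
# Criterion names follow the abstract; weights and scores are invented.

# Normalized criterion weights (assumed values; would be derived from
# the committee's pair-wise comparisons)
WEIGHTS = {
    "disease_burden": 0.20,
    "clinical_impact": 0.18,
    "alternatives": 0.15,
    "budget_impact": 0.15,
    "economic_impact": 0.12,
    "available_evidence": 0.20,
}

def weighted_score(proposal_scores: dict) -> float:
    """Weighted sum of per-criterion scores (each scored 0-10 here)."""
    return sum(WEIGHTS[c] * proposal_scores[c] for c in WEIGHTS)

def rank_proposals(proposals: dict) -> list:
    """Proposal names ordered from highest to lowest weighted score."""
    return sorted(proposals,
                  key=lambda name: weighted_score(proposals[name]),
                  reverse=True)

proposals = {
    "HTA-A": {"disease_burden": 9, "clinical_impact": 7, "alternatives": 4,
              "budget_impact": 6, "economic_impact": 5, "available_evidence": 8},
    "HTA-B": {"disease_burden": 5, "clinical_impact": 6, "alternatives": 8,
              "budget_impact": 4, "economic_impact": 6, "available_evidence": 3},
}
print(rank_proposals(proposals))
```

As the abstract notes, the resulting rank is only an input to a deliberative step: committee members discuss contextual information before reaching consensus, so the numeric ordering is advisory rather than final.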
Cochrane
The Cochrane Collaboration review prioritization projects show that a variety of approaches successfully identify high-priority topics
Background: The National Institute for Health Research (NIHR) is a strong supporter of The Cochrane Collaboration in the UK. With a planned investment of £21 million over the five years up to March 2015, it is the largest single contributor to the infrastructure costs of Cochrane entities. However, despite this financial support, the potential review workload is always likely to be greater than the resource available to support it, creating the need to prioritise review topics. Objectives: To share methods for prioritising review topics as reported to their funders by UK-based Cochrane Review Groups. Methods:
Galician Health Technology Assessment Agency
Identification, prioritisation and assessment of obsolete health technologies. A methodological guideline
Author:
Ruano-Raviña, A., Velasco González, M., Varela-Lema, L., Cerdá Mota, T., Ibargoyen-Roteta, N. and Gutiérrez Ibarluzea, I.
Introduction: At present there is a growing interest in obsolete health technology identification and assessment. A number of institutions are initiating activities targeted in this direction because the classification of given technologies as obsolete would amount to an important benefit for patients and health systems, in that patients would stop being treated with less effective or less safe technologies. Despite this growing interest, a negligible amount of literature has been published on the topic, as this is a little-developed field in health technology assessment, and so a great part of the content matter of this guide has been based on expert opinion. The working group defined obsolete health technology as any health technology in use for one or more indications whose clinical benefit, safety or cost-effectiveness has been significantly superseded by other available alternatives. Objectives: To propose a methodology for identification, prioritisation and assessment of obsolete health technologies. Methods: We conducted a review of the scientific literature until April 2009 in specialised systematic review databases, such as HTA (Health Technology Assessment), DARE (Database of Abstracts of Reviews of Effectiveness), NHS EED (National Health Service Economic Evaluation Database) and the Cochrane Plus Library; and in general databases, such as Medline, Embase, IME (Índice Médico Español-Spanish Medical Index) and IBECS (Índice Bibliográfico en Ciencias de la Salud). Furthermore, a number of databases and Internet search engines were reviewed, with special emphasis on the web pages of various national health technology assessment agencies and government bodies, particularly in the area of health services research. For perusal of the complete text, we selected records in which any type of obsolete technology was assessed or which contained opinions, ideas, advantages or limitations concerning any aspect linked to obsolete health technologies.
There were no inclusion or exclusion criteria per se: instead, these records were selected on a consensus basis by two authors. In addition to the systematic review, a specific methodology was developed for each of the guide's 3 sections. Results: This methodological guide proposes three differentiated sections for identification, prioritisation and assessment of potentially obsolete health technologies. For the first of these sections (identification), five potential detection sources, classified as active or proactive, were established. Active sources include: 1) direct consultation of medical literature (in Medline-type databases); 2) consultation of new and emerging technology databases (EuroScan, GENTecS, Hayes, ECRI, ASERNIP-S); 3) consultation of systematic reviews published in the literature or by assessment agencies; and, 4) consultation with secretariats tasked with updating National Health System, hospital or regional service portfolios. Insofar as proactive systems are concerned, networks of health professionals would submit reports on potentially obsolete technologies to health technology assessment agencies or units. After potentially obsolete health technologies had been detected by means of any of the above channels, the assessment agencies would then use a standardised procedure to confirm that the identified technology could be classified as potentially obsolete and be prioritised or, alternatively, assessed in cases where it had already been duly prioritised for the purpose. To prioritise potentially obsolete health technologies for subsequent assessment, a prioritisation tool (PriTec tool) and a web application were created. This tool consists of three domains (population/end-users; risk/benefit; and costs, organisation and other implications) with a total of ten criteria. These domains have weights of 36.7%, 36.7% and 26.6%, respectively.
Clinicians, managers and end-users participated in the weighting of the scale and selection of criteria. Using these results, a web application in Spanish and English, which is available and usable free of charge, can be accessed via the avalia-t web page (http://avalia-t.sergas.es/) or directly at www.pritectools.com or www.pritectools.es, and enables up to 50 potentially obsolete health technologies to be compared and prioritised for assessment purposes. To assess a potentially obsolete technology, an assessment-document structure has been proposed, with different sections, centred on comparison of the benefits (in terms of efficacy and of safety, efficiency, cost or other implications) of the potentially obsolete versus the proposed alternative technology. The technology assessment section is based on a systematic review and should meet the requirements of being straightforward, methodical and reproducible. Discussion: The guide can be used by different bodies interested in obsolete health technology assessment. All sections of the guide have advantages and limitations. The identification section should be used on a pilot basis to ascertain which sources of detection are most appropriate or efficient for identification of potentially obsolete health technologies. The prioritisation section enables a range of potentially obsolete technologies to be compared. This is an initial version which can be improved over the course of time. It will be interesting to see how it performs and the degree to which it is used in settings other than Spain. Conclusions and recommendations: To assess any obsolete health technology, a standardised process that enables identification, prioritisation and assessment of such technologies must be established. It is essential to determine the impact to be expected a priori from defining any given technology as obsolete, since the greater the impact, the more the health system will benefit from its assessment and subsequent exclusion.
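The PriTec-style scoring described above can be sketched as a domain-weighted sum. The three domain names and their weights (36.7%, 36.7%, 26.6%) come from the abstract; the per-criterion scores, the 0-10 scale, and the within-domain averaging are assumptions made for illustration, not the tool's actual scheme.

```python
# Illustrative sketch of domain-weighted prioritization of a potentially
# obsolete technology. Domain weights are from the guideline's abstract;
# everything else (scale, scores, averaging) is a hypothetical assumption.

DOMAIN_WEIGHTS = {
    "population_end_users": 0.367,
    "risk_benefit": 0.367,
    "costs_organisation_other": 0.266,
}

def priority_score(technology: dict) -> float:
    """Average the criterion scores within each domain, then apply domain weights."""
    total = 0.0
    for domain, weight in DOMAIN_WEIGHTS.items():
        scores = technology[domain]
        total += weight * (sum(scores) / len(scores))
    return total

tech = {
    "population_end_users": [8, 6, 7],      # e.g. size of affected population, demand
    "risk_benefit": [9, 5, 6, 4],           # e.g. safety concerns, superseded benefit
    "costs_organisation_other": [3, 7, 5],  # e.g. budget impact, organisational change
}
print(round(priority_score(tech), 3))
```

Scoring each candidate technology this way and sorting by the result would reproduce the guide's idea of comparing up to 50 technologies to decide which to assess first.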
IOM
Prioritizing Comparative-Effectiveness Research -- IOM Recommendations
Author:
Iglehart, J. K.
Year:
2009. Source: New England Journal of Medicine, Vol. 361, Issue 4, pp. 325-328.
Clinical research presents health care providers with information on the natural history and clinical presentations of disease as well as diagnostic and treatment options. Consumers, patients, and caregivers also require this information to decide how to evaluate and treat their conditions. All too often, the information necessary to inform these medical decisions is incomplete or unavailable, resulting in more than half of the treatments delivered today lacking clear evidence of effectiveness. Comparative effectiveness research (CER) identifies what works best for which patients under what circumstances. Congress, in the American Recovery and Reinvestment Act (ARRA) of 2009, tasked the IOM to recommend national priorities for research questions to be addressed by CER and supported by ARRA funds. In its 2009 report, Initial National Priorities for Comparative Effectiveness Research, the authoring committee establishes a working definition of CER, develops a priority list of research topics to be undertaken with ARRA funding using broad stakeholder input, and identifies the necessary requirements to support a robust and sustainable CER enterprise. The full list of priorities and recommendations can be found in the report brief.
Institute of Medicine Outlines Priorities for Comparative Effectiveness Research
Author:
Kuehn, B. M.
Year:
2009. Source: JAMA: The Journal of the American Medical Association, Vol. 302, Issue 9, pp. 936-937.
DOI: 10.1001/jama.2009.1186
Priority areas for national action : transforming health care quality
This report follows several studies spearheaded by the Institute of Medicine (IOM) and other groups that document disturbing shortfalls in the quality of health care in the United States. The following statement prepared for the National Roundtable on Health Care Quality captures the magnitude and scope of the problem: Serious and widespread quality problems exist throughout American medicine….[They] occur in small and large communities alike, in all parts of the country and with approximately equal frequency in managed care and fee-for-service systems of care. Very large numbers of Americans are harmed as a result (Chassin and Galvin, 1998:1000). Likewise, two subsequent IOM studies—To Err is Human: Building a Safer Health System (Institute of Medicine, 2000) and Crossing the Quality Chasm: A New Health System for the 21st Century (Institute of Medicine, 2001a)—focus national attention on patient safety concerns surrounding the high incidence of medical errors and sizable gaps in health care quality, respectively. In addition to the IOM, many others have assumed leadership roles in the movement to address and improve health care safety and quality. These efforts have included both large-scale national initiatives, such as the President's Advisory Commission on Consumer Protection and Quality in the Health Care Industry (1998) and Healthy People 2010 (United States Department of Health and Human Services, 2000), and private efforts such as the work of the RAND Corporation, which resulted in a call for mandatory tracking and reporting of health care quality (Schuster et al., 1998).
The newly released chart book from the Commonwealth Fund, which examines the current status of quality of health care in the United States, confirms that quality problems persist (Leatherman and McCarthy, 2002): Fewer than half of adults aged 50 and over were found to have received recommended screening tests for colorectal cancer (Centers for Disease Control and Prevention, 2001; Leatherman and McCarthy, 2002). Inadequate care after a heart attack results in 18,000 unnecessary deaths per year (Chassin, 1997). In a recent survey, 17 million people reported being told by their pharmacist that the drugs they were prescribed could cause an interaction (Harris Interactive, 2001). Problems such as those cited above have now been noted so frequently that we risk becoming desensitized even as we pursue change. Our technical lexicon of performance improvements and system interventions can obscure the stark reality that we invest billions in research to find appropriate treatments (National Institutes of Health, 2002), we spend more than $1 trillion on health care annually (Heffler et al., 2002), and we have extraordinary knowledge and capacity to deliver the best care in the world, but we repeatedly fail to translate that knowledge and capacity into clinical practice.
Setting Priorities for Clinical Practice Guidelines
Author:
IOM, Committee on Methods for Setting Priorities for Guidelines Development
In the Omnibus Budget Reconciliation Act of 1989 (P.L. 101-239), Congress created the Agency for Health Care Policy and Research (AHCPR). One mission of the agency—through its Forum for Quality and Effectiveness in Health Care—was to sponsor and encourage the development, dissemination, and evaluation of clinical practice guidelines. Reflecting concerns about the Forum's initial choice of guidelines topics, the 1992 legislation that reauthorized the agency directed it to report to Congress in June 1995 on "optimal methods for setting priorities for guidelines topics" (P.L. 102-410). The AHCPR, in turn, requested guidance from the Institute of Medicine (IOM). This report presents the Institute's analyses and recommendations as developed by a formally appointed study committee.
NICE
How NICE clinical guidelines are developed: an overview for stakeholders, the public and the NHS
Clinical research has been driven traditionally by investigators, from generating research questions and outcomes through analysis and release of study results. Building on the work of others, the Patient-Centered Outcomes Research Institute (PCORI) is tapping into its broad-based stakeholder community--especially patients, caregivers, and their clinicians--to generate topics for research, help the institute prioritize those topics, select topics for funding, and ensure patients' involvement in the design of research projects. This article describes PCORI's approach, which is emblematic of the organization's mandate under the Affordable Care Act to seek meaningful ways to integrate the patient's voice into the research process, and describes how it is being used in the selection of research that PCORI will fund. We also describe challenges facing our approach, including a lack of common language and training on the part of patients and resistance on the part of researchers to questions that are not researcher generated. Faced with the reality that PCORI will not be able to fund all research questions posed to it, there will also be difficult decisions to make when selecting those that have the highest priority for funding.
US Federal Agencies
Comparative effectiveness research priorities at federal agencies: the view from the Department of Veterans Affairs, National Institute on Aging, and Agency for Healthcare Research and Quality
Author:
O'Leary, T. J., Slutsky, J. R. and Bernard, M. A.
Year:
2010
Source: Journal of the American Geriatrics Society, Vol. 58, Issue 6, pp. 1187-1192
In the last year, attention has been focused on translating federally sponsored health research into better health for Americans. Since the passage of the American Recovery and Reinvestment Act (ARRA) on February 17, 2009, ARRA funds to support Comparative Effectiveness Research (CER) have increased this focus. A large proportion of topical areas of interest in CER affects the older segment of the population. The Department of Veterans Affairs (VA), the National Institute on Aging (NIA), and the Agency for Healthcare Research and Quality (AHRQ) have supported robust research portfolios focused on aging populations that meet the varying definitions of CER. This short article briefly describes the research missions of the AHRQ, NIA, and VA. The various definitions of CER put forward by the Congressional Budget Office, the Institute of Medicine, and the ARRA-established Federal Coordinating Council, as well as important topics for which CER is particularly needed, are then reviewed. Finally, the approaches by which the three agencies support CER involving the aging population are set forth, and opportunities for future CER research are outlined.
Description of current practice
Comparative effectiveness topics from a large, integrated delivery system
Author:
Danforth, K. N., Patnode, C. D., Kapka, T. J., Butler, M. G., Collins, B., Compton-Phillips, A., Baxter, R. J., Weissberg, J., McGlynn, E. A. and Whitlock, E. P.
OBJECTIVE: To identify high-priority comparative effectiveness questions directly relevant to care delivery in a large, US integrated health care system. METHODS: In 2010, a total of 792 clinical and operational leaders in Kaiser Permanente were sent an electronic survey requesting nominations of comparative effectiveness research questions; most recipients (83%) had direct clinical roles. Nominated questions were divided into 18 surveys of related topics that included 9 to 23 questions for prioritization. The next year, 648 recipients were electronically sent 1 of the 18 surveys to prioritize nominated questions. Surveys were assigned to recipients on the basis of their nominations or specialty. High-priority questions were identified by comparing the frequency a question was selected to an "expected" frequency, calculated to account for the varying number of questions and respondents across prioritization surveys. High-priority questions were those selected more frequently than expected. RESULTS: More than 320 research questions were nominated from 181 individuals. Questions most frequently addressed cardiovascular and peripheral vascular disease; obesity, diabetes, endocrinology, and metabolic disorders; or service delivery and systems-level questions. Ninety-five high-priority research questions were identified, encompassing a wide range of health questions that ranged from prevention and screening to treatment and quality of life. Many were complex questions from a systems perspective regarding how to deliver the best care. CONCLUSIONS: The 95 questions identified and prioritized by leaders on the front lines of health care delivery may inform the national discussion regarding comparative effectiveness research. Additionally, our experience provides insight into engaging real-world stakeholders in setting a health care research agenda.
A model for incorporating patient and stakeholder voices in a learning health care network: Washington State's Comparative Effectiveness Research Translation Network
Author:
Devine, E. B., Alfonso-Cristancho, R., Devlin, A., Edwards, T. C., Farrokhi, E. T., Kessler, L., Lavallee, D. C., Patrick, D. L., Sullivan, S. D., Tarczy-Hornoch, P., Yanez, N. D. and Flum, D. R.
Objective To describe the inaugural comparative effectiveness research (CER) cohort study of Washington State's Comparative Effectiveness Research Translation Network (CERTAIN), which compares invasive with noninvasive treatments for peripheral artery disease, and to focus on the patient centeredness of this cohort study by describing it within the context of a newly published conceptual framework for patient-centered outcomes research (PCOR). Study Design and Setting The peripheral artery disease study was selected because of clinician-identified uncertainty in treatment selection and differences in desired outcomes between patients and clinicians. Patient centeredness is achieved through the “Patient Voices Project,” a CERTAIN initiative through which patient-reported outcome (PRO) instruments are administered for research and clinical purposes, and a study-specific patient advisory group where patients are meaningfully engaged throughout the life cycle of the study. A clinician-led research advisory panel follows in parallel. Results Primary outcomes are PRO instruments that measure function, health-related quality of life, and symptoms, the latter developed with input from the patients. Input from the patient advisory group led to revised retention procedures, which now focus on short-term (3–6 months) follow-up. The research advisory panel is piloting a point-of-care, patient assessment checklist, thereby returning study results to practice. The cohort study is aligned with the tenets of one of the new conceptual frameworks for conducting PCOR. Conclusion CERTAIN's inaugural cohort study may serve as a useful model for conducting PCOR and creating a learning health care network.
A practice-based tool for engaging stakeholders in future research: a synthesis of current practices
Author:
Guise, J. M., O'Haire, C., McPheeters, M., Most, C., Labrant, L., Lee, K., Barth Cottrell, E. K. and Graham, E.
Background. Research gaps prevent systematic reviewers from making conclusions and, ultimately, limit our ability to make informed health care decisions. While there are well-defined methods for conducting a systematic review, there has been no explicit process for the identification of research gaps from systematic reviews. In a prior project we developed a framework to facilitate the systematic identification and characterization of research gaps from systematic reviews. This framework uses elements of PICOS (Population, Intervention, Comparison, Outcomes, Setting) to describe the gaps and categorizes the reasons for the gaps as (A) insufficient or imprecise information, (B) biased information, (C) inconsistent or unknown consistency results, and/or (D) not the right information. Objective. To further develop and evaluate a framework for the identification and characterization of research gaps from systematic reviews. Methods. We conducted two types of evaluation: (1) we applied the framework to existing systematic reviews, and (2) Evidence-based Practice Centers (EPCs) applied the framework either during a systematic review or during a future research needs (FRN) project. EPCs provided feedback on the framework using an evaluation form. Results. Our application of the framework to 50 systematic reviews identified about 600 unique research gaps. Key issues emerging from this evaluation included the need to clarify instructions for dealing with multiple comparisons (lumping vs. splitting) and the need for guidance on applying the framework retrospectively. We received evaluation forms from seven EPCs. EPCs applied the framework in eight projects, five of which were FRN projects. Challenges identified by the EPCs led to revisions in the instructions, including guidance for teams to decide a priori whether to limit the use of the framework to questions for which strength of evidence has been assessed, and the level of detail needed for the characterization of the gaps. Conclusions. Our team evaluated a revised framework and developed guidance for its application. A final version is provided that incorporates revisions based on use of the framework across existing systematic reviews and feedback from other EPCs on their use of the framework. Future research is needed to evaluate the relative costs and benefits of using the framework, for review authors and for users of the systematic reviews.
Methods of evidence mapping: A systematic review
Author:
Schmucker, C., Motschall, E., Antes, G. and Meerpohl, J. J.
BACKGROUND: Evidence mapping is an increasingly popular approach to systematically evaluate published research. While there are methodological standards for systematic reviews, discrepancies exist between the terminology and methods used within evidence mapping. AIM: The aim of this systematic review is to describe the methodology and terminology used in evidence mapping and to demonstrate the continuum between evidence mapping and traditional systematic reviews. METHODS: A systematic literature search was conducted in 10 databases in order to obtain a comprehensive picture of the state of the research standards for evidence mapping. In addition, websites of institutions which are already conducting evidence mapping were searched. RESULTS: The included study pool (n = 12) shows that the terms 'evidence map' and 'scoping review' are widely used within evidence mapping. Evidence maps depict in tabular form the number and characteristics of existing studies, as well as evidence gaps, based on primary studies and systematic reviews of broad clinical questions. Scoping reviews likewise summarize the literature in tabular form but also provide a descriptive narrative summary of the results. A quality assessment of the studies is generally not included. CONCLUSION: Evidence mapping allows the identification of research gaps. This aspect is particularly important for interventions which are used without sufficient evidence. In contrast, systematic reviews are mainly used to estimate effects for interventions and evaluate whether the included studies are reliable.
What Comparative Effectiveness Research Is Needed? A Framework for Using Guidelines and Systematic Reviews to Identify Evidence Gaps and Research Priorities
Author:
Li, T., Vedula, S. S., Scherer, R. and Dickersin, K.
The authors developed and tested a framework for identifying evidence gaps and prioritizing comparative effectiveness research by using a combination of clinical practice guidelines and systematic reviews. In phase 1 of the project, reported elsewhere, 45 clinical questions on the management of primary open-angle glaucoma were derived from practice guidelines and prioritized by using a 2-round Delphi survey of clinicians. On the basis of the clinicians' responses, 9 questions were classified as high-priority. In phase 2, reported here, systematic reviews that addressed the 45 clinical questions were identified. The reviews were classified as at low, high, or unclear risk of bias, and evidence gaps (in which no systematic review was at low risk of bias) were identified. The following comparative effectiveness research agenda is proposed: Two of the 9 high-priority questions require new primary research (such as a randomized, controlled trial) and 4 require a new systematic review. The utility and limitations of the framework and future adaptations are discussed.
Planning future studies based on the conditional power of a meta-analysis
Systematic reviews often provide recommendations for further research. When meta-analyses are inconclusive, such recommendations typically argue for further studies to be conducted. However, the nature and amount of future research should depend on the nature and amount of the existing research. We propose a method based on conditional power to make these recommendations more specific. Assuming a random-effects meta-analysis model, we evaluate the influence of the number of additional studies, of their information sizes and of the heterogeneity anticipated among them on the ability of an updated meta-analysis to detect a prespecified effect size. The conditional powers of possible design alternatives can be summarized in a simple graph which can also be the basis for decision making. We use three examples from the Cochrane Database of Systematic Reviews to demonstrate our strategy. We demonstrate that if heterogeneity is anticipated, it might not be possible for a single study to reach the desirable power no matter how large it is.
Choosing health technology assessment and systematic review topics: the development of priority-setting criteria for patients' and consumers' interests
Author:
Bastian, H., Scheibler, F., Knelangen, M., Zschorlich, B., Nasser, M. and Waltering, A.
Year:
2011
Source: International Journal of Technology Assessment in Health Care, Vol. 27, Issue 4, pp. 348-356
BACKGROUND: The Institute for Quality and Efficiency in Health Care (IQWiG) was established in 2003 by the German parliament. Its legislative responsibilities are health technology assessment, mostly to support policy making and reimbursement decisions. It also has a mandate to serve patients' interests directly, by assessing and communicating evidence for the general public. OBJECTIVES: To develop a priority-setting framework based on the interests of patients and the general public. METHODS: A theoretical framework for priority setting from a patient/consumer perspective was developed. The process of development began with a poll to determine level of lay and health professional interest in the conclusions of 124 systematic reviews (194 responses). Data sources to identify patients' and consumers' information needs and interests were identified. RESULTS: IQWiG's theoretical framework encompasses criteria for quality of evidence and interest, as well as being explicit about editorial considerations, including potential for harm. Dimensions of "patient interest" were identified, such as patients' concerns, information seeking, and use. Rather than being a single item capable of measurement by one means, the concept of "patients' interests" requires consideration of data and opinions from various sources. CONCLUSIONS: The best evidence to communicate to patients/consumers is right, relevant and likely to be considered interesting and/or important to the people affected. What is likely to be interesting for the community generally is sufficient evidence for a concrete conclusion, in a common condition. More research is needed on characteristics of information that interest patients and consumers, methods of evaluating the effectiveness of priority setting, and methods to determine priorities for disinvestment.
Health technology prioritization: which criteria for prioritizing new technologies and what are their relative weights?
Author:
Golan, O., Hansen, P., Kaplan, G. and Tal, O.
Year:
2011
Source: Health Policy, Vol. 102, Issue 2-3, pp. 126-135
BACKGROUND: The sustainability of healthcare systems worldwide is threatened by a growing demand for services and expensive innovative technologies. Decision makers struggle in this environment to set priorities appropriately, particularly because they lack consensus about which values should guide their decisions. One way to approach this problem is to determine what all relevant stakeholders understand successful priority setting to mean. The goal of this research was to develop a conceptual framework for successful priority setting. METHODS: Three separate empirical studies were completed using qualitative data collection methods (one-on-one interviews with healthcare decision makers from across Canada; focus groups with representation of patients, caregivers and policy makers; and Delphi study including scholars and decision makers from five countries). RESULTS: This paper synthesizes the findings from three studies into a framework of ten separate but interconnected elements germane to successful priority setting: stakeholder understanding, shifted priorities/reallocation of resources, decision making quality, stakeholder acceptance and satisfaction, positive externalities, stakeholder engagement, use of explicit process, information management, consideration of values and context, and revision or appeals mechanism. CONCLUSION: The ten elements specify both quantitative and qualitative dimensions of priority setting and relate to both process and outcome components. To our knowledge, this is the first framework that describes successful priority setting. The ten elements identified in this research provide guidance for decision makers and a common language to discuss priority setting success and work toward improving priority setting efforts.
Priority setting for health technology assessments: A systematic review of current practical approaches
Author:
Noorani, H. Z., Husereau, D. R., Boudreau, R. and Skidmore, B.
Year:
2007
Source: International Journal of Technology Assessment in Health Care, Vol. 23, Issue 3, pp. 310-315
OBJECTIVES: This study sought to identify and compare various practical and current approaches of health technology assessment (HTA) priority setting. METHODS: A literature search was performed across PubMed, MEDLINE, EMBASE, BIOSIS, and Cochrane. Given an earlier review conducted by European agencies (EUR-ASSESS project), the search was limited to literature indexed from 1996 onward. We also searched Web sites of HTA agencies as well as HTAi and ISTAHC conference abstracts. Agency representatives were contacted for information about their priority-setting processes. Reports on practical approaches selected through these sources were identified independently by two reviewers. RESULTS: A total of twelve current priority-setting frameworks from eleven agencies were identified. Ten countries were represented: Canada, Denmark, England, Hungary, Israel, Scotland, Spain, Sweden, The Netherlands, and United States. Fifty-nine unique HTA priority-setting criteria were divided into eleven categories (alternatives; budget impact; clinical impact; controversial nature of proposed technology; disease burden; economic impact; ethical, legal, or psychosocial implications; evidence; interest; timeliness of review; variation in rates of use). Differences across HTA agencies were found regarding procedures for categorizing, scoring, and weighing of policy criteria. CONCLUSIONS: Variability exists in the methods for priority setting of health technology assessment across HTA agencies. Quantitative rating methods and consideration of cost benefit for priority setting were seldom used. These study results will assist HTA agencies that are re-visiting or developing their prioritization methods.
Setting priorities in health care organizations: criteria, processes, and parameters of success
BACKGROUND: Hospitals and regional health authorities must set priorities in the face of resource constraints. Decision-makers seek practical ways to set priorities fairly in strategic planning, but find limited guidance from the literature. Very little has been reported from the perspective of Board members and senior managers about what criteria, processes and parameters of success they would use to set priorities fairly. DISCUSSION: We facilitated workshops for board members and senior leadership at three health care organizations to assist them in developing a strategy for fair priority setting. Workshop participants identified 8 priority setting criteria, 10 key priority setting process elements, and 6 parameters of success that they would use to set priorities in their organizations. Decision-makers in other organizations can draw lessons from these findings to enhance the fairness of their priority setting decision-making. SUMMARY: Lessons learned in three workshops fill an important gap in the literature about what criteria, processes, and parameters of success Board members and senior managers would use to set priorities fairly. Hide