

Using meta-analyses for comparative effectiveness research

Vicki S. Conn, PhD, RN, FAAN*, Todd M. Ruppar, PhD, RN, GCNS-BC, Lorraine J. Phillips, PhD, RN, Jo-Ana D. Chase, MN, APRN-BC

Meta-Analysis Research Center, School of Nursing, University of Missouri, Columbia, MO

ARTICLE INFO

Article history:
Received 30 December 2011
Revised 16 April 2012
Accepted 22 April 2012

Keywords:
Comparative effectiveness research
Meta-analysis

* Corresponding author: Dr. Vicki S. Conn, Associate Dean & Potter-Brinton Distinguished Professor, Director, Meta-Analysis Research Center, S317 School of Nursing, University of Missouri, Columbia, MO 65211.

E-mail address: conn@missouri.edu (V.S. Conn).

0029-6554/$ - see front matter © 2012 Elsevier Inc. All rights reserved. doi:10.1016/j.outlook.2012.04.004

ABSTRACT

Comparative effectiveness research seeks to identify the most effective interventions for particular patient populations. Meta-analysis is an especially valuable form of comparative effectiveness research because it emphasizes the magnitude of intervention effects rather than relying on tests of statistical significance among primary studies. Overall effects can be calculated for diverse clinical and patient-centered variables to determine the outcome patterns. Moderator analyses compare intervention characteristics among primary studies by determining whether effect sizes vary among studies with different intervention characteristics. Intervention effectiveness can be linked to patient characteristics to provide evidence for patient-centered care. Moderator analyses often answer questions never posed by primary studies because neither multiple intervention characteristics nor populations are compared in single primary studies. Thus, meta-analyses provide unique contributions to knowledge. Although meta-analysis is a powerful comparative effectiveness strategy, methodological challenges and limitations in primary research must be acknowledged to interpret findings.

Cite this article: Conn, V. S., Ruppar, T. M., Phillips, L. J., & Chase, J.-A. D. (2012, August). Using meta-analyses for comparative effectiveness research. Nursing Outlook, 60(4), 182-190. doi:10.1016/j.outlook.2012.04.004.

Despite remarkable scientific advances over recent decades, the effectiveness of many health interventions remains unclear. The Institute of Medicine noted that evidence of effectiveness exists for less than half of the interventions in use today.1 Scant evidence exists comparing multiple possible interventions for the same health problem.2 Newer or more costly interventions may not be linked with better outcomes, and variations in health care expenditure may be unrelated to changes in health outcomes.3-5 The troubling lack of information about interventions' relative effectiveness led to comparative effectiveness research (CER) initiatives.



CER can be defined as research designed to discover which interventions work best, under what circumstances, for whom, and at what cost.1,6 CER methods include randomized, controlled trials; nonrandomized comparison studies; prospective and retrospective observational studies; analyses of registry and practice datasets; practice-based evidence studies; and meta-analyses.6-9 This paper examines the use of meta-analytic approaches for CER. Examples of nurse-led meta-analyses are used to demonstrate key points. The paper begins with an explanation of meta-analytic overall effect size estimates for CER, especially in situations with inconsistent findings among primary studies. The value of statistically quantifying the magnitude of effects for both clinical and patient-centered outcomes is described, along with the unique contributions of meta-analysis for specifying temporal patterns of outcomes and for examining adverse outcomes. The importance of including diverse studies that represent clinical heterogeneity is then explained. Moderator analyses are presented as a way to accomplish CER goals: patient characteristic moderators identify which interventions work best for which subjects, intervention characteristic moderators determine whether intervention features are linked with outcomes, and setting characteristic moderators determine whether context is associated with outcomes. The potential use of moderator analyses to explore intervention worth is briefly addressed. Finally, selected limitations of meta-analytic methods and primary studies are discussed to provide a context for interpreting meta-analytic CER. Full details of meta-analysis methods, including limitations, are available in other sources.10-15

Application of Overall Effect Sizes to Comparative Effectiveness Research

CER includes determining effectiveness of interventions on clinical and patient-centered outcomes. CER can involve performing a meta-analysis of primary studies to quantify intervention outcomes. Meta-analyses can synthesize results of head-to-head comparisons of 2 interventions in primary studies or compare 2 interventions tested in different primary studies. Meta-analytic statistical procedures generate a unitless effect size for each study. Thus, outcomes reported using different measures of the same construct in primary studies may be combined. Each effect size is weighted by the inverse of its sampling variance so studies with larger samples have more influence in aggregate effect-size estimates.11
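To make the weighting concrete, the following minimal sketch pools a handful of hypothetical standardized mean differences using inverse-variance weights; the effect sizes and variances are illustrative values, not data from any cited study.

```python
# Illustrative sketch of inverse-variance weighting (hypothetical effect sizes).
# Each study contributes a standardized mean difference d with sampling variance v;
# larger studies have smaller v and therefore more weight in the pooled estimate.
import math

effects = [0.30, 0.55, 0.10, 0.42]        # hypothetical per-study effect sizes
variances = [0.040, 0.015, 0.090, 0.025]  # hypothetical sampling variances

weights = [1.0 / v for v in variances]
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
se_pooled = math.sqrt(1.0 / sum(weights))
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

print(f"pooled effect = {pooled:.3f}, 95% CI {ci[0]:.3f} to {ci[1]:.3f}")
```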

The meta-analytic approach of estimating an effect size for each primary study does not depend on P values in original studies, which makes it valuable in areas of science where underpowered studies are common. Some areas have multiple small primary studies without statistical power to detect important changes. Reviews of such work conducted without meta-analysis, such as those relying on vote counting of the proportion of studies with statistically significant findings, might conclude that the primary studies did not support the effectiveness of the tested intervention because they reported statistically nonsignificant differences between treatment and comparison groups. However, meta-analytic strategies can combine the magnitude of differences between treatment groups across primary studies to discover a clinically important intervention effect. For example, we retrieved 10 studies testing the effects of physical activity behavior self-monitoring as an intervention to increase physical activity.16-25 Four of the studies reported statistically significant findings in favor of self-monitoring. Six other studies reported that self-monitoring did not significantly improve physical activity behavior. A review without meta-analysis would conclude that the evidence is mixed, inconclusive, or did not support the efficacy of self-monitoring. In contrast, a meta-analysis of the same studies documented an overall effect size of 0.435 (standardized mean difference), which is significantly different from no effect (P < 0.001, 95% confidence interval 0.278 to 0.592). Thus the meta-analysis concluded that self-monitoring increased physical activity. Figure 1 includes a forest plot that demonstrates these findings.

CER aims to determine the extent to which interventions are effective, not merely whether they are better than control conditions. Meta-analysis calculates and emphasizes the magnitude of the effect, rather than the tests of statistical significance reported in primary studies. The emphasis on effect size, instead of tests of statistical significance, also aids interpretation of findings from overpowered primary studies with statistically significant findings that may not be clinically important. For example, a study of an intervention to reduce pain may have a statistically significant P value if hundreds of subjects are included, whereas the average reduction in pain between the treatment and control group might be from 6.5 to 6.2 on a pain scale of 0 to 10. Meta-analysis findings emphasize the magnitude of effects; thus, overpowered studies are interpreted in the context of the effect size they achieved.
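The pain example can be made concrete with a small sketch. Assuming a pooled standard deviation of about 2 points and 400 subjects per group (both assumptions made here for illustration; the scenario above does not specify them), the standardized mean difference is small even though the comparison would be statistically significant.

```python
# Hypothetical illustration: mean pain 6.5 (control) vs 6.2 (treatment) on a 0-10 scale.
# Pooled SD of 2.0 and 400 subjects per group are assumed for illustration only.
import math

mean_control, mean_treatment = 6.5, 6.2
sd_pooled = 2.0          # assumed
n_per_group = 400        # assumed

d = (mean_control - mean_treatment) / sd_pooled      # standardized mean difference
se_diff = sd_pooled * math.sqrt(2 / n_per_group)     # SE of the mean difference
z = (mean_control - mean_treatment) / se_diff        # approximate test statistic

print(f"standardized mean difference d = {d:.2f}")   # 0.15: a small effect
print(f"approximate z = {z:.2f}")                    # > 1.96, so statistically significant
```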

Because CER results are intended to improve clinical practice, outcomes need to be interpretable by practitioners. The meta-analysis overall effect size, which quantifies the magnitude of effects, can be converted to the original clinical metric to enhance interpretation. For example, a meta-analysis of metabolic outcomes of diabetes self-management programs reported an overall mean difference effect size of 0.26. The conversion to the original metric depicted findings in clinically meaningful terms: HbA1c of 7.38 for treatment subjects compared with HbA1c of 7.83 for control subjects.26 Clinical practice can be further supported by making comparisons across meta-analyses to determine consistency of findings. Such comparisons are possible because effect sizes can be converted from one metric to another (eg, odds ratios to standardized mean differences).27
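A minimal sketch of both conversions mentioned here, using illustrative numbers rather than values from the cited meta-analyses: a standardized mean difference can be re-expressed in the original clinical unit by multiplying by a representative standard deviation (the HbA1c standard deviation below is assumed), and an odds ratio can be converted to a standardized mean difference with the commonly used logistic-distribution approximation.

```python
# Sketch of converting meta-analytic effect sizes for interpretation (illustrative values).
import math

# 1. Back-transform a standardized mean difference to the clinical metric.
d = 0.26                 # hypothetical standardized mean difference
sd_hba1c = 1.7           # assumed typical SD of HbA1c in the study populations
raw_difference = d * sd_hba1c
print(f"difference in HbA1c units: {raw_difference:.2f}")

# 2. Convert an odds ratio to a standardized mean difference (logistic approximation).
odds_ratio = 1.8         # hypothetical
d_from_or = math.log(odds_ratio) * math.sqrt(3) / math.pi
print(f"equivalent standardized mean difference: {d_from_or:.2f}")
```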

CER aims to examine intervention effects on multiple clinical and patient-centered outcomes. Meta-analyses compute separate effect sizes for diverse outcomes that are reported in primary research. Although a main health outcome may be considered most important, other outcomes may be summarized separately to estimate intervention effects for multiple outcomes. For example, a meta-analysis comparing passive descent to immediate pushing during second-stage labor in nulliparous women with epidural anesthesia examined multiple outcomes: spontaneous vaginal birth, instrument-assisted delivery, cesarean birth, lacerations, and episiotomies.28 Varied patterns of findings among related outcomes can be interesting. For example, a meta-analysis of exercise interventions among older adults found improvement in objective physical performance measures but no improvement in the ability to perform activities of daily living.29

Figure 1 - Forest plot of 10 studies that tested self-monitoring interventions. The horizontal line adjacent to each study on the forest plot reflects the confidence interval for that study's effect size. Studies with horizontal lines crossing 0 did not report a statistically significant outcome in the individual studies. The meta-analysis standardized mean difference effect size, the final row in the figure marked "Effect size," is represented by the diamond whose width corresponds to the confidence interval.

Patient-centered outcomes research emphasizes outcomes of importance to patients such as quality of life, symptoms, or functional status. Patient-centered outcomes can be synthesized in addition to other outcomes health providers typically value.30 For example, a meta-analysis of silver-releasing wound dressings included pain-related symptoms and quality of life measures, as well as typical clinical outcomes of wound healing, exudate, and dressing wearing time.31

Analyzing multiple outcomes is important because the definition of "success" for interventions varies.32 Comparisons between interventions may reveal small or negligible differences in main outcome effect sizes. In these cases, comparisons of other nonprimary outcomes, such as patient convenience, may provide valuable information about complex tradeoffs for making decisions about patient care.33

Providers are interested in CER that documents persisting health benefits of interventions, not just immediate improvements. Effect sizes calculated for multiple time points can provide information about the temporal pattern of effects. Some primary studies report outcomes over multiple time points. Others report only one outcome assessment, though its timing may vary across studies. These data can be used in meta-analyses to identify interventions whose effects are transient or those showing limited immediate impact but long-term positive outcomes.32 These patterns may reveal themselves as interventions first become effective, peak in effectiveness, and then decay. For example, Van Kuiken documented changes in the effects of guided imagery on outcomes over 5 to 18 weeks.34

CER is intended to provide information to providers and patients about both positive and negative outcomes of interventions so advantages and disadvantages may be considered in making treatment decisions. Adverse or negative events are important sequelae that CER meta-analyses can address. Many adverse events are rare, which makes it difficult to assess incidence in individual primary studies. Combining adverse event rates across multiple primary studies with thousands of subjects provides more stable estimates of incidence than are available in single studies. For example, Lo et al documented no increased incidence of adverse events when using silver-releasing dressings over alternative dressings by aggregating findings across many patients in multiple primary studies.31 Although primary research tends to emphasize positive outcomes in research reports, providers need accurate information about negative events or neutral outcomes to weigh the advantages and disadvantages of interventions for practice.

Heterogeneity in Meta-Analyses for Comparative Effectiveness Research

CER values real-world tests of interventions. Heterogeneity is expected in CER meta-analyses because primary studies (1) include samples of diverse, real-world populations; (2) commonly have planned and unplanned variations in interventions; and (3) test interventions in varied clinical settings that may influence their effectiveness or patient responsiveness. Meta-analysts' decisions regarding inclusion and exclusion of potential primary studies with diverse samples and interventions should be directed by conceptually clear definitions about what kinds of interventions should be combined and for which types of subjects. CER meta-analyses generally use random-effects model analyses, which assume diversity in samples, interventions, and study methods. (Methodological challenges related to inclusion criteria and primary study quality are addressed in the Limitations section.)

Heterogeneity is valuable because CER includes studies conducted with diverse populations and varied methods to provide strong evidence about interventions' effectiveness. CER expects variations in patients, interventions, and outcomes. This approach stands in contrast to efficacy findings commonly established in tightly controlled randomized controlled trials.8,35

The emphasis on randomized, controlled trials in some Cochrane Collaboration reviews is one reason these reviews may have limited CER impact. A strength of meta-analysis is its ability to estimate heterogeneity and examine potential moderating variables that contribute to it. Even when testing identical interventions, heterogeneity of outcome effects is common because patients vary in their response to treatments, and treatment effects may vary by setting.35 Heterogeneity offers the opportunity to conduct moderator analyses to explore how primary studies differ by examining sample, intervention, and setting characteristics that may be linked to outcomes. CER meta-analysis facilitates discovery of best practices by identifying interventions that are the most effective overall and for certain populations once sufficient primary research has accumulated.8
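As a rough illustration of how heterogeneity is typically quantified, the sketch below computes Cochran's Q, I-squared, and a DerSimonian-Laird random-effects estimate from hypothetical effect sizes; it is a simplified sketch, not the procedure used in any particular cited meta-analysis.

```python
# Hypothetical sketch: quantify heterogeneity and fit a random-effects model
# (DerSimonian-Laird) across a handful of effect sizes.
import math

effects = [0.10, 0.45, 0.30, 0.70, 0.25]
variances = [0.02, 0.03, 0.015, 0.05, 0.025]

w = [1 / v for v in variances]
fixed = sum(wi * di for wi, di in zip(w, effects)) / sum(w)

# Cochran's Q and I-squared describe how much effects vary beyond sampling error.
q = sum(wi * (di - fixed) ** 2 for wi, di in zip(w, effects))
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Between-study variance (tau^2), then random-effects weights and pooled estimate.
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)
w_re = [1 / (v + tau2) for v in variances]
pooled_re = sum(wi * di for wi, di in zip(w_re, effects)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.0f}%")
print(f"random-effects estimate = {pooled_re:.2f} (SE {se_re:.2f})")
```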

Patient Characteristic Moderator Analyses

One focus of CER is identifying differential intervention effectiveness for specific populations. CER subgroup moderator analyses can focus on demographic features such as ethnicity or gender, or they can examine health characteristics such as disease severity or functional status. Meta-analysis moderator analyses can examine whether intervention effectiveness varies by patient subgroups. For example, a meta-analysis of interventions to increase medication adherence among older adults found that interventions were most effective for those with 3 to 5 prescription medications.36 This could be because those taking fewer medications needed little assistance with medication adherence and those taking more than 5 might need more intense interventions than those typically tested.36 Rice reported that smoking cessation interventions were more effective for cardiac patients than for other populations.37
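One common way to conduct such a subgroup moderator analysis is to pool effect sizes within each patient subgroup and then test whether the subgroup estimates differ. The sketch below uses made-up effect sizes and a hypothetical two-level patient characteristic loosely modeled on the cardiac versus other-population contrast above; it is illustrative only.

```python
# Hypothetical subgroup moderator analysis: pool within subgroups, then test
# whether the subgroup estimates differ (fixed-effect form).
import math

# (effect size, variance, subgroup label) -- illustrative values only
studies = [
    (0.50, 0.03, "cardiac"), (0.62, 0.04, "cardiac"), (0.45, 0.02, "cardiac"),
    (0.20, 0.03, "other"),   (0.15, 0.05, "other"),   (0.28, 0.02, "other"),
]

def pool(rows):
    """Inverse-variance pooled estimate and its variance for one subgroup."""
    w = [1 / v for _, v, _ in rows]
    est = sum(wi * d for wi, (d, _, _) in zip(w, rows)) / sum(w)
    return est, 1 / sum(w)

groups = {}
for row in studies:
    groups.setdefault(row[2], []).append(row)

pooled = {g: pool(rows) for g, rows in groups.items()}
for g, (est, var) in pooled.items():
    print(f"{g}: pooled effect {est:.2f} (SE {math.sqrt(var):.2f})")

# Two-group comparison: z test on the difference between subgroup estimates.
(e1, v1), (e2, v2) = pooled["cardiac"], pooled["other"]
z = (e1 - e2) / math.sqrt(v1 + v2)
print(f"difference between subgroups: z = {z:.2f}")
```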

The increased CER emphasis on patient-level attributes linked with better or worse outcomes may lead to more personalized care.38 Findings that intervention effects do not vary by sample characteristics may mean that a range of patients can experience similar benefit from the intervention. For example, a meta-analysis of respiratory rehabilitation interventions on exercise capacity found similar benefits across sample age and initial forced expiratory volume.39

Intervention Characteristic Moderator Analyses

CER aims to provide clinical guidance by comparing interventions to determine which are most effective.

Intervention Moderators

In a few situations, meta-analysis can prove useful in determining whether an intervention is better than no intervention, such as a watchful waiting approach.38

For some interventions, it can be valuable to synthesize comparisons between new interventions and usual care. If usual care is standardized, these analyses provide information comparing 2 interventions. However, usual care is often not standardized, and such comparisons cannot yield clear recommendations for practice. More commonly, providers need to know which interventions are most effective.

Meta-analyses can address comparisons between interventions either by synthesizing extant primary research with head-to-head comparisons of treatments or by using moderator analyses on primary studies that test different interventions. Using meta-analysis, researchers can directly compare interventions from multiple primary studies that compare the same 2 interventions. The effect sizes for the difference between the 2 interventions provide information about the most effective intervention when methodological quality is similar between studies and valid outcome measures are used. For example, Lo et al synthesized findings of primary studies that each compared silver-releasing dressings with other dressings.31

Unfortunately, many primary studies of nursing interventions are not compared against other interventions. Head-to-head comparisons of multiple interventions in the same primary studies are unusual because of funding, feasibility, and very large sample size challenges. Rather, interventions are generally compared with usual care or a control group. Using meta-analysis, interventions not directly compared in primary studies can be indirectly compared to accomplish the CER goal of comparing interventions.7 The effect of one intervention compared with a control group can be contrasted with the effect of a second intervention compared with a control group.38 Two interventions each compared with usual care in separate primary studies can be compared using meta-analysis.38 An effect size is computed for the first intervention compared with control subjects. A separate effect size is calculated for the second intervention compared with control groups. The difference in the effect sizes is tested statistically to determine whether the first or second intervention was more effective. Because no primary studies directly compared the 2 interventions, this indirect comparison is a unique contribution of meta-analysis. For example, a meta-analysis by Jung et al compared exercise-only interventions with exercise-and-education interventions to reduce fear of falling in older adults.40 Primary studies did not compare the 2 interventions but rather compared each one with a control group. Their meta-analysis statistically compared the interventions despite the absence of any primary studies making this direct comparison.40
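A minimal sketch of this kind of adjusted indirect comparison, under the usual assumption that each intervention's pooled effect versus control comes from a separate set of studies: the difference between the two pooled effects is the indirect estimate, and its variance is the sum of the two variances. The numbers are hypothetical, not those reported by Jung et al.

```python
# Hypothetical indirect comparison of two interventions, each tested only against control.
import math

# Pooled effect (vs. control) and its variance for each intervention,
# e.g., taken from two separate sets of primary studies (illustrative values).
effect_a, var_a = 0.55, 0.010   # intervention A vs. control
effect_b, var_b = 0.35, 0.012   # intervention B vs. control

diff = effect_a - effect_b                  # indirect estimate of A vs. B
se_diff = math.sqrt(var_a + var_b)          # variances add for the indirect contrast
z = diff / se_diff
ci = (diff - 1.96 * se_diff, diff + 1.96 * se_diff)

print(f"A vs. B (indirect): {diff:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, z = {z:.2f}")
```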

Nurses often use common labels to describe variable interventions. For example, patient education could describe work to change knowledge and attitudes about exercise, or it could describe behavioral strategies to change exercise (eg, self-monitoring, prompts, contracts). Meta-analysis adds clarity in such cases with its ability to compare characteristics of interventions to determine the best one. For example, a recent meta-analysis of physical activity interventions found that behavioral interventions (eg, self-monitoring, cues, rewards, behavioral goals) were more effective than cognitive interventions (ie, changing knowledge, attitudes, beliefs) at increasing physical activity behavior.41 These comparative analyses provide evidence about best practices to achieve desired outcomes.42

Moderator analyses can examine intervention features that may vary along dimensions beyond content.43 Dose variations include individual dose amount, dose frequency, and total number of doses. Intervention timing may be linked to index events or other determining factors. Mode of delivery can include face-to-face or mediated mechanisms (eg, email, telephone). Interventions may be delivered to the target, who is expected to benefit from the intervention, or to other recipients (eg, family members of patients, health care providers). Moderator analyses can compare standardized interventions to those tailored to an individual (ie, intervention features matched to individual subject characteristics) or targeted to groups (eg, different interventions for subgroups such as women vs. men). Unplanned intervention variations (eg, unanticipated content or dose variations) can relate to outcomes. Moderator analyses on such characteristics can provide information to help design interventions that improve health and well-being outcomes.
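Continuous intervention characteristics such as dose are often examined with weighted meta-regression. The sketch below regresses hypothetical effect sizes on a made-up total number of intervention sessions, weighting each study by inverse variance; it is a simplified fixed-effect sketch rather than a full mixed-effects meta-regression.

```python
# Hypothetical meta-regression: does the effect size vary with intervention dose
# (total number of sessions)? Weighted least squares with inverse-variance weights.
doses     = [4, 8, 8, 12, 16, 20]                 # hypothetical sessions per study
effects   = [0.15, 0.25, 0.30, 0.35, 0.42, 0.40]  # hypothetical effect sizes
variances = [0.03, 0.02, 0.04, 0.02, 0.03, 0.05]

w = [1 / v for v in variances]
sw = sum(w)
x_bar = sum(wi * x for wi, x in zip(w, doses)) / sw
y_bar = sum(wi * y for wi, y in zip(w, effects)) / sw

num = sum(wi * (x - x_bar) * (y - y_bar) for wi, x, y in zip(w, doses, effects))
den = sum(wi * (x - x_bar) ** 2 for wi, x in zip(w, doses))
slope = num / den
intercept = y_bar - slope * x_bar

print(f"estimated change in effect size per additional session: {slope:.3f}")
print(f"predicted effect at 10 sessions: {intercept + slope * 10:.2f}")
```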

Setting and Context Moderator Analyses

CER aims to discover the best interventions in specific situations. Meta-analyses can compare interventions' setting and context characteristics using moderator analyses to discover circumstances in which interventions are most effective. For example, interventionist characteristics that vary among primary studies (eg, advanced practice nurses vs. physicians) can be compared statistically. Setting features, such as home vs. clinic or individual patient vs. group of patients, also can be examined to determine the most effective setting. For example, Conn et al's meta-analysis of physical activity behavior outcomes compared interventions delivered to groups versus individuals and compared interventions delivered face-to-face versus mediated mechanisms (eg, telephone).41 Modifications in health care delivery are important potential moderators in health services research. For example, Kim and Soeken examined how hospital-based case management affected length of stay and readmission rates.44

Intervention Worth

Although current national CER discussions have not emphasized cost analyses, an examination of cost issues is relevant. Meta-analysis methods can address relationships between intervention costs and outcomes. Ideal primary intervention reports contain adequate data about intervention costs and outcomes to estimate the amount of improvement in outcome variables per unit cost. It is important that the full range of outcomes be compared with costs to provide a complete cost-benefit picture. Unfortunately, few existing intervention studies provide adequate cost data to include this important variable in meta-analyses. As cost information takes on greater importance in primary research, such analyses will become possible.

Interpreting Meta-Analysis Results for Comparative Effectiveness Research

Meta-analysis is a powerful CER tool. Valid interpretation of meta-analysis results requires researchers to consider limitations of both meta-analysis methods and primary studies. In-depth explanation of meta-analysis methods is beyond the scope of this paper. Other excellent resources provide detailed information.10-15 Two checklists with criteria for evaluating meta-analyses are available online (PRISMA: http://www.prisma-statement.org/statement.htm; MOOSE: http://www.editorialmanager.com/jognn/account/MOOSE.pdf). This discussion will focus on CER meta-analysis.


The findings of meta-analyses may be generalized to situations similar to the primary studies included in the analyses. Thus, if only randomized, controlled trials are included in meta-analyses, they may provide limited information about effectiveness while providing excellent estimates of efficacy. Because CER does not seek to determine whether interventions are efficacious under highly controlled conditions, CER meta-analyses should include primary trials with varied populations and broad clinical practice, as well as tightly controlled efficacy trials, so findings are generalizable to practice settings.45,46

Limitations and Challenges of Meta-Analysis CER

Meta-analysis inclusion criteria determine which primary studies to include in aggregate analyses. Excessively narrow inclusion criteria may exclude studies conducted in the practice setting, which might provide the most valuable evidence for changing practice. For example, the Cochrane Collaboration emphasis on randomized, controlled trials and exclusion of patient-centered outcomes may limit the usefulness of some reviews for CER.14

Including studies with varied methodological difficulties can be both valuable and challenging. Meta-analysts manage primary study quality in 3 ways.47 First, meta-analysts may set inclusion criteria that address methodological quality. This approach can be effective for CER if it does not exclude the very field studies that provide the best evidence about effectiveness. Second, a meta-analysis could weight effect sizes by quality scores. This approach is fraught with problems because no valid measures of primary study quality exist and the importance of specific quality attributes may differ by scientific topic.47 Third, meta-analysts may consider quality features as an empirical question. Conducting moderator analyses to examine associations between effect sizes and methods characteristics (eg, allocation, masked outcome assessment, attrition) can be informative. For example, Lee, Soeken, and Picot compared effect sizes of studies with strong internal validity with those with significant weaknesses.48 Combination approaches may be most effective if CER research is to ensure that studies conducted in realistic clinical settings are included while testing linkages between methods and effect sizes.

Primary study limitations profoundly influence meta-analyses. Poorly described interventions are a persistent problem.49-52 Studies that describe interventions as patient education or social support, without additional details, provide insufficient information about intervention content. Other studies use well-known labels for interventions but provide insufficient evidence about intervention content or delivery. For example, studies may claim "motivational interviewing" without conducting an intervention entirely consistent with motivational interviewing principles. Inadequate details about interventions and outcomes make valid coding difficult for some primary studies and may necessitate exclusion from meta-analyses.

Reporting bias, the tendency for articles to report statistically significant findings and not report findings that are not statistically significant, and publication bias, the tendency for studies with statistically significant findings to be published, alter meta-analysis findings in unknown ways.53 Inadequate statistical information in primary studies, such as not reporting sample sizes, means, and measures of variability, is frustratingly common.54,55 Some primary studies may use outcome measures with no recognized standards for clinically relevant differences, hindering meaningful interpretation.

Perhaps the most common limitation in published meta-analyses is inadequate searching for primary studies. This is important because easier-to-find studies generally have larger effect sizes than obscure studies.56,57 Publication bias is a persistent problem that thwarts scientific progress.57,58 Considerable resources must be devoted to adequate searching to ensure valid CER meta-analyses.56
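Searching and publication bias concerns are often probed with a funnel plot and an Egger-style regression, which regresses each study's standardized effect on its precision; an intercept far from zero suggests small-study effects. The sketch below uses hypothetical effect sizes and standard errors and omits the formal significance test of the intercept.

```python
# Hypothetical Egger-style check for funnel plot asymmetry: regress each study's
# standardized effect (d / SE) on its precision (1 / SE); an intercept far from zero
# suggests small-study effects such as publication bias.
effects = [0.60, 0.45, 0.30, 0.25, 0.20, 0.18]
std_errors = [0.30, 0.25, 0.15, 0.12, 0.08, 0.06]

y = [d / se for d, se in zip(effects, std_errors)]   # standardized effects
x = [1 / se for se in std_errors]                    # precision

n = len(x)
x_bar, y_bar = sum(x) / n, sum(y) / n
num = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
den = sum((xi - x_bar) ** 2 for xi in x)
slope = num / den
intercept = y_bar - slope * x_bar   # Egger intercept: asymmetry indicator

print(f"Egger intercept = {intercept:.2f} (values far from 0 suggest asymmetry)")
```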

Meta-analysts can only synthesize existing information. For example, some populations may be underrepresented in research.59 The comprehensive searching completed for valid meta-analyses allows investigators to identify missing populations.

Individual studies are the unit of analysis in meta-analyses. To ensure independent data (subjects do not enter any one meta-analysis statistical procedure multiple times), meta-analysts must make principled decisions regarding which measures to use, or create an index score, when studies report multiple measures of the same construct. Procedures also must be in place to ensure that the same subjects do not enter meta-analysis effect sizes multiple times when more than one article reports on the same subjects.
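One simple, conservative way to handle multiple measures of the same construct is to average them into a single index per study before pooling, so each sample enters the analysis once. The sketch below uses hypothetical values and deliberately ignores the covariance among measures, which a full analysis would model.

```python
# Hypothetical sketch: collapse multiple effect sizes per study into one entry
# so each sample contributes a single, independent effect to the meta-analysis.
study_effects = {
    "Study A": [(0.30, 0.04), (0.40, 0.05)],              # two measures, same construct
    "Study B": [(0.20, 0.03)],
    "Study C": [(0.55, 0.06), (0.45, 0.05), (0.50, 0.07)],
}

independent_rows = []
for study, rows in study_effects.items():
    mean_d = sum(d for d, _ in rows) / len(rows)
    mean_v = sum(v for _, v in rows) / len(rows)  # crude; ignores covariance between measures
    independent_rows.append((study, mean_d, mean_v))
    print(f"{study}: combined effect {mean_d:.2f} (variance {mean_v:.2f})")

# independent_rows can now be passed to the pooling routines sketched earlier.
```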

Use of CER Meta-Analysis Results

In some CER meta-analyses, moderator analyses may be more important than overall effect sizes. Researchers should place less emphasis on overall effects in meta-analyses that include significant clinical and methodological diversity. Researchers should use caution when interpreting overall effect sizes of small meta-analyses with significant heterogeneity and no explanatory moderator analyses.42

CER meta-analysis results may be conclusive regarding best practices if primary studies offer strong and consistent evidence. In these situations, no further research comparing interventions may be necessary. Primary research often yields less conclusive findings when few studies are available, all studies have significant methodological weaknesses, or extensive heterogeneity cannot be explored through moderator analyses. In these situations, meta-analysis may contribute most by identifying comparisons that further research should address. Rather than simply suggesting additional research on a topic, meta-analyses usually can specify the nature of the comparisons that should be made (eg, intervention characteristics, samples).

Comprehensive meta-analyses can provide evidence for practice. Consistent findings across multiple meta-analyses that address the same fundamental research question provide powerful evidence for practice. For example, 3 meta-analyses have documented that behavioral interventions are more powerful than cognitive interventions to change physical activity behavior among healthy, chronically ill, and older adults.41,60,61 Contradictory findings across multiple meta-analyses should be evaluated carefully. Considerations include differences in search strategies, inclusion criteria, and outcome variables to identify potential sources of discrepancies before making practice recommendations.

Meta-analyses must be updated with newly available evidence. The shelf life of a meta-analysis depends on the amount of new evidence that could change findings.59 A meta-analysis may suggest comparisons to make in primary studies, the findings of which could require updates to the original meta-analysis. Newer studies may include populations that older studies included infrequently. Important methodological advances may affect the results of more recent studies. Emerging data should be included in updated meta-analyses.7 Meta-analyses may also need to be updated as new methods of meta-analyzing data become available.62

Conclusions

Meta-analyses can address central CER questions of which interventions work best, for whom, in what situations, and at what cost. Moderator analyses that compare intervention characteristics, patient attributes, and clinical circumstances on clinical outcomes make the largest CER contribution to knowledge for practice. These moderator analyses typically answer questions that primary studies never ask; meta-analyses can make unique contributions to scientific knowledge of health interventions. Methodological challenges and weaknesses in extant primary research should provide the context for interpreting findings. Rigorously conducted meta-analyses are a useful method for conducting valid CER.

Acknowledgments

Financial support was provided by grants from the National Institutes of Health (R01NR009656 & R01NR011990) to Vicki Conn, principal investigator. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

References

1. Institute of Medicine. Roundtable on evidence-based medicine. Learning what works best: the nation's need for evidence on comparative effectiveness in health care. Available at: http://www.iom.edu/~/media/Files/Activity%20Files/Quality/VSRT/ComparativeEffectivenessWhitePaperESF.pdf. Accessed May 29, 2012.

2. Donnelly J, Garber AM, Wilensky GR, Dentzer S, Agres T. Health policy brief: comparative effectiveness research. 2010. Health Aff. Available at: http://www.healthaffairs.org/healthpolicybriefs/brief.php?brief_id=28. Accessed May 29, 2012.

3. Fisher ES, Bynum JP, Skinner JS. Slowing the growth of health care costs - lessons from regional variation. N Engl J Med 2009;360(9):849-52.

4. Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The implications of regional variations in Medicare spending. Part 2: health outcomes and satisfaction with care. Ann Intern Med 2003;138(4):288-98.

5. Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The implications of regional variations in Medicare spending. Part 1: the content, quality, and accessibility of care. Ann Intern Med 2003;138(4):273-87.

6. Clancy C. Patient-centered outcomes research: what is it and why do we need it? Presented at Council for the Advancement of Nursing Science special topics conference, October 12, 2011; Washington, DC.

7. DuBois RW, Kindermann SL. Demystifying comparative effectiveness research: a case study learning guide. National Pharmaceutical Council. 2009. Available at: http://www.npcnow.org/Public/Research___Publications/Publications/pub_ebm/Demystifying_Comparative_Effectiveness_Research__A_Case_Study_Learning_Guide_.aspx. Accessed May 29, 2012.

8. Horn SD, Gassaway J. Practice based evidence: Incorporating clinical heterogeneity and patient-reported outcomes for comparative effectiveness research. Med Care 2010; 48(6 Suppl):S17-22.

9. Manchikanti L, Falco FJ, Boswell MV, Hirsch JA. Facts, fallacies, and politics of comparative effectiveness research: Part I. Basic considerations. Pain Physician 2010; 13(1):E23-54.

10. Cooper H, Hedges LV, Valentine JC, editors. The handbook of research synthesis and meta-analysis. 2nd ed. New York: Russell Sage Foundation; 2009.

11. Cooper H. Research synthesis and meta-analysis: a step by step approach. 4th ed. Los Angeles, CA: Sage Publications, Inc.; 2010.

12. Borenstein M, Hedges L, Higgins J, Rothstein H. Introduction to meta-analysis. West Sussex: John Wiley & Sons; 2009.

13. Lipsey M, Wilson D. Practical meta-analysis. Los Angeles, CA: Sage Publications, Inc.; 2000.

14. Higgins JPT, Green S. Cochrane handbook for systematic reviews of interventions. United Kingdom: Cochrane Collaboration. Available at: http://www.cochrane.org/training/cochrane-handbook. Accessed May 29, 2012.

15. Campbell Collaboration. Oslo, Norway: Campbell Collaboration. Available at: http://www.campbellcollaboration.org/. Accessed May 29, 2012.

16. Napolitano MA, Fotheringham M, Tate D, Sciamanna C, Leslie E, Owen N, et al. Evaluation of an internet-based physical activity intervention: a preliminary investigation. Ann Behav Med 2003;25(2):92-9.

17. Furukawa F, Kazuma K, Kawa M, Miyashita M, Niiro K, Kusukawa R, et al. Effects of an off-site walking program on energy expenditure, serum lipids, and glucose metabolism in middle-aged women. Biol Res Nurs 2003; 4(3):181-92.

18. Hubball HT. Development and evaluation of a worksite health promotion program: application of critical self-directed learning for exercise behaviour change (Unpublished dissertation). The University of British Columbia, Vancouver; 1996.

19. Nichols GJ. Testing a culturally consistent behavioral outcomes strategy for cardiovascular disease risk reduction and prevention in low income African-American women (Unpublished dissertation). University of Maryland, Baltimore; 1995.

20. Blanchard CM, Fortier M, Sweet S, O'Sullivan T, Hogg W, Reid RD, et al. Explaining physical activity levels from a self-efficacy perspective: the physical activity counseling trial. Ann Behav Med 2007;34(3):323-8.

21. Annesi JJ. Effects of music, television, and a combination entertainment system on distraction, exercise adherence, and physical output in adults. Canadian J Behav Sci 2001; 33(3):193-202.

22. King AC, Baumann K, O’Sullivan P, Wilcox S, Castro C. Effects of moderate-intensity exercise on physiological, behavioral, and emotional responses to family caregiving: A randomized controlled trial. J Gerontol A Biol Sci Med Sci 2003;57(1):M26-36.

23. Bennett JA, Young HM, Nail LM, Winters-Stone K, Hanson G. A telephone-only motivational intervention to increase physical activity in rural adults: a randomized controlled trial. Nurs Res 2008;57(1):24-32.

24. Raber AC. Empowering women: a health promotion program for weight-related problems (Unpublished dissertation). Bowling Green State University, Ohio; 2004.

25. King AC, Friedman R, Marcus B, Castro C, Napolitano M, Ahn D, Baker L. Ongoing physical activity advice by humans versus computers: The Community Health Advice by Telephone (CHAT) trial. Health Psychol 2007; 26(6):718-27.

26. Conn VS, Hafdahl AR, Mehr DR, LeMaster JW, Brown SA, Nielsen PJ. Metabolic effects of interventions to increase exercise in adults with type 2 diabetes. Diabetologia 2007; 50(5):913-21.

27. Borenstein M. Effect sizes for continuous data. In: Cooper H, Hedges L, Valentine J, editors. The handbook of research synthesis and meta-analysis. 2nd ed. New York: Russell Sage Foundation; 2009. p. 221-35.

28. Brancato RM, Church S, Stone PW. A meta-analysis of passive descent versus immediate pushing in nulliparous women with epidural analgesia in the second stage of labor. J Obstet Gynecol Neonatal Nurs 2008;37(1):4-12.

29. Gu MO, Conn VS. Meta-analysis of the effects of exercise interventions on functional status in older adults. Res Nurs Health 2008;31(6):594-603.

30. Navathe AS, Clancy C, Glied S. Advancing research data infrastructure for patient-centered outcomes research. JAMA 2011;306(11):1254-5.

31. Lo SF, Chang CJ, Hu WY, Hayter M, Chang YT. The effectiveness of silver-releasing dressings in the management of non-healing chronic wounds: a meta-analysis. J Clin Nurs 2009;18(5):716-28.

32. Lohr KN. Comparative effectiveness research methods: symposium overview and summary. Med Care 2010; 48(6 Suppl):S3-6.

33. Atkins D, Kupersmith J. Implementation research: A critical component of realizing the benefits of comparative effectiveness research. Am J Med 2010;123(12 Suppl. 1):e38-45.

34. Van Kuiken D. A meta-analysis of the effect of guided imagery practice on outcomes. J Holist Nurs 2004;22(2):164-79.

35. Kaplan SH, Billimek J, Sorkin DH, Ngo-Metzger Q, Greenfield S. Who can respond to treatment? Identifying patient characteristics related to heterogeneity of treatment effects. Med Care 2010;48(6 Suppl):S9-16.

36. Conn VS, Hafdahl AR, Cooper PS, Ruppar TM, Mehr DR, Russell CL. Interventions to improve medication adherence among older adults: meta-analysis of adherence outcomes among randomized controlled trials. Gerontologist 2009;49(4): 447-62.

37. Rice VH, Stead L. Nursing intervention and smoking cessation: meta-analysis update. Heart Lung 2006;35(3):147-63.

38. Committee on Comparative Effectiveness Research Prioritization, Institute of Medicine. Initial national priorities of comparative effectiveness research. Washington DC: National Academies Press; 2009.

39. Oh H, Seo W. Meta-analysis of the effects of respiratory rehabilitation programmes on exercise capacity in accordance with programme characteristics. J Clin Nurs 2007;16(1):3-15.

40. Jung D, Lee J, Lee SM. A meta-analysis of fear of falling treatment programs for the elderly. West J Nurs Res 2009; 31(1):6-16.

41. Conn VS, Hafdahl AR, Mehr DR. Interventions to increase physical activity among healthy adults: meta-analysis of outcomes. Am J Public Health 2011;101(4):751-8.

42. Fu R, Gartlehner G, Grant M, Shamliyan T, Sedrakyan A, Wilt TJ, et al. Conducting quantitative synthesis when comparing medical interventions: AHRQ and the Effective Health Care Program. J Clin Epidemiol 2011;64(11):1187-97.

43. Conn VS, Groves P. Protecting the power of interventions through proper reporting. Nurs Outlook 2011;59(6):318-25.

44. Kim Y-J, Soeken KL. A meta-analysis of the effect of hospital-based case management on hospital length-of-stay and readmission. Nurs Res 2005;54(4):255-64.

45. Gibbons RJ, Gardner TJ, Anderson JL, Goldstein LB, Weintraub WS, Yancy CW. The American Heart Association’s principles for comparative effectiveness research: A policy statement from the American Heart Association. Circulation 2009;119(22):2955-62.

46. Hadler NM, McNutt RA. The illusory side of "comparative effectiveness research." 2011. Health Beat. Available at: http://www.healthbeatblog.com/2011/04/the-illusory-side-of-comparative-effectiveness-research-.html. Accessed May 29, 2012.

47. Conn VS, Rantz MJ. Research methods: managing primary study quality in meta-analyses. Res Nurs Health 2003;26(4): 322-33.

48. Lee J, Soeken K, Picot SJ. A meta-analysis of interventions for informal stroke caregivers. West J Nurs Res 2007;29(3):344-56. discussion 357-64.

49. Conn VS, Cooper PS, Ruppar TM, Russell CL. Searching for the intervention in intervention research reports. J Nurs Scholarsh 2008;40(1):52-9.

50. McGilton KS, Boscart V, Fox M, Sidani S, Rochon E, Sorin-Peters R. A systematic review of the effectiveness of communication interventions for health care providers caring for patients in residential care settings. Worldviews Evid Based Nurs 2009;6(3):149-59.

51. Forbes A. Clinical intervention research in nursing. Int J Nurs Stud 2009;46(4):557-68.

52. Conn VS. Intervention? What intervention? West J Nurs Res 2007;29(5):521-2.

53. Smyth RMD, Kirkham JJ, Jacoby A, Altman DG, Gamble C, Williamson PR. Frequency and reasons for outcome reporting bias in clinical trials: interviews with trialists. Brit Med J 2011;342:c7153.

54. Orwin R, Vevea J. Evaluating coding decisions. In: Cooper H, Hedges L, Valentine J, editors. The handbook of research synthesis and meta-analysis. 2nd ed. New York: Russell Sage Foundation; 2009. p. 177-203.

55. Pigott T. Handling missing data. In: Cooper H, Hedges L, Valentine J, editors. The handbook of research synthesis and meta-analysis. 2nd ed. New York: Russell Sage Foundation; 2009. p. 399-416.

56. Conn V, Isaramalai S, Rath S, Jantarakupt P, Wadhawan R, Dash Y. Beyond MEDLINE for literature searches. J Nurs Scholarsh 2003;35(2):177-82.

57. Conn VS, Valentine JC, Cooper HM, Rantz MJ. Grey literature in meta-analyses. Nurs Res 2003;52(4):256-61.

58. Dickersin K. Publication bias: recognizing the problem, understanding its origins and scope, and preventing harm. In: Rothstein HR, Sutton AJ, Borenstein M, editors. Publication bias in meta-analysis. West Sussex, United Kingdom: John Wiley & Sons, Ltd.; 2006. p. 9-33.

59. Jones JB, Blecker S, Shah NR. Meta-analysis 101: what you want to know in the era of comparative effectiveness. 2008. Am Health Drug Benefits. Available at: http://www.ahdbonline.com/feature/meta-analysis-101-what-you-want-know-era-comparative-effectiveness. Accessed May 29, 2012.

60. Conn VS, Hafdahl AR, Brown SA, Brown LM. Meta-analysis of patient education interventions to increase physical activity among chronically ill adults. Patient Educ Couns 2008; 70(2):157-72.

61. Conn VS, Valentine JC, Cooper HM. Interventions to increase physical activity among aging adults: a meta-analysis. Ann Behav Med 2002;24(3):190-200.

62. Berry D, Wathen JK, Newell M. Bayesian model averaging in meta-analysis: vitamin E supplementation and mortality. Clin Trials 2009;6(1):28-41.
