CONSENSUS STATEMENT
Meta-analysis of Observational Studies
in Epidemiology
A Proposal for Reporting
Donna F. Stroup, PhD, MSc
Jesse A. Berlin, ScD
Sally C. Morton, PhD
Ingram Olkin, PhD
G. David Williamson, PhD
Drummond Rennie, MD
David Moher, MSc
Betsy J. Becker, PhD
Theresa Ann Sipe, PhD
Stephen B. Thacker, MD, MSc
for the Meta-analysis Of
Observational Studies in
Epidemiology (MOOSE) Group
Because of pressure for timely
and informed decisions in public health and medicine and the
explosion of information in the
scientific literature, research results must
be synthesized to answer urgent questions.1-4 Principles of evidence-based
methods to assess the effectiveness of
health care interventions and set policy
are cited increasingly.5 Meta-analysis, a
systematic approach to identifying, appraising, synthesizing, and (if appropriate) combining the results of relevant
studies to arrive at conclusions about a
body of research, has been applied with
increasing frequency to randomized controlled trials (RCTs), which are considered to provide the strongest evidence
regarding an intervention.6,7
However, in many situations randomized controlled designs are not feasible, and only data from observational
2008 JAMA, April 19, 2000, Vol 283, No. 15
Objective Because of the pressure for timely, informed decisions in public health and
clinical practice and the explosion of information in the scientific literature, research
results must be synthesized. Meta-analyses are increasingly used to address this problem, and they often evaluate observational studies. A workshop was held in Atlanta,
Ga, in April 1997, to examine the reporting of meta-analyses of observational studies
and to make recommendations to aid authors, reviewers, editors, and readers.
Participants Twenty-seven participants were selected by a steering committee, based
on expertise in clinical practice, trials, statistics, epidemiology, social sciences, and biomedical editing. Deliberations of the workshop were open to other interested scientists. Funding for this activity was provided by the Centers for Disease Control and Prevention.
Evidence We conducted a systematic review of the published literature on the conduct and reporting of meta-analyses in observational studies using MEDLINE, Educational Resources Information Center (ERIC), PsycLIT, and the Current Index to Statistics.
We also examined reference lists of the 32 studies retrieved and contacted experts in
the field. Participants were assigned to small-group discussions on the subjects of bias,
searching and abstracting, heterogeneity, study categorization, and statistical methods.
Consensus Process From the material presented at the workshop, the authors
developed a checklist summarizing recommendations for reporting meta-analyses of observational studies. The checklist and supporting evidence were circulated to all conference attendees and additional experts. All suggestions for revisions were addressed.
Conclusions The proposed checklist contains specifications for reporting of meta-analyses of observational studies in epidemiology, including background, search strategy, methods, results, discussion, and conclusion. Use of the checklist should improve
the usefulness of meta-analyses for authors, reviewers, editors, readers, and decision
makers. An evaluation plan is suggested and research areas are explored.
JAMA. 2000;283:2008-2012
studies are available.8 Here, we define an
observational study as an etiologic or effectiveness study using data from an existing database, a cross-sectional study,
a case series, a case-control design, a design with historical controls, or a cohort design.9 Observational designs may
lack the experimental element of a random allocation to an intervention and
rely on studies of association between
changes or differences in 1 characteristic (eg, an exposure or intervention) and
changes or differences in an outcome of
Author Affiliations: Centers for Disease Control and
Prevention, Atlanta, Ga (Drs Stroup, Williamson, and
Thacker); University of Pennsylvania School of Medicine, Philadelphia (Dr Berlin); RAND Corporation, Santa
Monica (Dr Morton), University of California, San Francisco (Dr Rennie), Stanford University, Stanford (Dr
Olkin), Calif; JAMA, Chicago, Ill (Dr Rennie); Thomas
C. Chalmers Centre for Systematic Reviews, Children's Hospital of Eastern Ontario Research Institute, Ottawa (Mr Moher); Michigan State University, East Lansing (Dr Becker); and Georgia State
University, Atlanta (Dr Sipe).
A complete list of members of the MOOSE Group appears at the end of this article.
Corresponding Author and Reprints: Donna F. Stroup,
PhD, MSc, Centers for Disease Control and Prevention, 1600 Clifton Rd NE, Mail Stop C08, Atlanta, GA
30333 (e-mail: dfs2@cdc.gov).
©2000 American Medical Association. All rights reserved.
REPORTING META-ANALYSES OF OBSERVATIONAL STUDIES
interest. These designs have long been
used in the evaluation of educational
programs10 and exposures that might
cause disease or injury.11 Studies of risk
factors generally cannot be randomized because they relate to inherent human characteristics or practices, and exposing subjects to harmful risk factors
is unethical.12 At times, clinical data may
be summarized in order to design a randomized comparison.13 Observational
data may also be needed to assess the
effectiveness of an intervention in a
community as opposed to the special
setting of a controlled trial.14 Thus, a
clear understanding of the advantages
and limitations of statistical syntheses
of observational data is needed.15
Although meta-analysis restricted to
RCTs is usually preferred to meta-analyses of observational studies,16-18 the
number of published meta-analyses
concerning observational studies in
health has increased substantially during the past 4 decades (678 in 1955-1992, 525 in 1992-1995, and more than 400 in 1996 alone).19
While guidelines for meta-analyses
have been proposed, many are written
from the meta-analysts' (authors') rather than from the reviewers', editors', or readers' perspective20 and restrict attention to reporting of meta-analyses of
RCTs.21,22 Meta-analyses of observational studies present particular challenges because of inherent biases and
differences in study designs23; yet, they
may provide a tool for helping to understand and quantify sources of variability in results across studies.24
We describe here the results of a
workshop held in Atlanta, Ga, in April
1997, to examine concerns regarding the
reporting of Meta-analysis Of Observational Studies in Epidemiology
(MOOSE). This article summarizes deliberations of 27 participants (the
MOOSE group) of evidence leading to
recommendations regarding the reporting of meta-analyses. Meta-analysis of individual-level data from different studies, sometimes called pooled analysis
or meta-analysis of individual patient
data,25,26 has unique challenges that we
will not address here. We propose a
checklist of items for reporting that
builds on similar activities for RCTs22
and is intended for use by authors, reviewers, editors, and readers of meta-analyses of observational studies.
METHODS
We conducted a systematic review of
the published literature on the conduct and reporting of meta-analyses
in observational studies. Databases
searched included MEDLINE, Educational Resources Information Center,
PsycLIT (http://www.wesleyan.edu/libr), and the Current Index to Statistics. In addition, we examined reference lists and contacted experts in the
field. We used the 32 articles retrieved
to generate the conference agenda and
set topics of bias, searching and abstracting, heterogeneity, study categorization, and statistical methods. We invited experts in meta-analysis from the
fields of clinical practice, trials, statistics, epidemiology, social sciences, and
biomedical editing.
The workshop included an overview
of the quality of reporting of meta-analyses in education and the social
sciences. Plenary talks were given on the
topics set by the conference agenda. For
each of 2 sessions, workshop participants were assigned to 1 of 5 small discussion groups, organized around the
topic areas. For each group, 1 of the
authors served as facilitator, and a
recorder summarized points of discussion for issues to be presented to all
participants. Time was provided for the
2 recorders and 2 facilitators for each
topic to meet and prepare plenary presentations given to the entire group.
We proposed a checklist for meta-analyses of observational studies based
on the deliberation of the independent
groups. Finally, we circulated the checklist for comment to all conference attendees and representatives of several constituencies who would use the checklist.
RESULTS
The checklist resulting from workgroup deliberations is organized
around recommendations for reporting background, search strategy,
methods, results, discussion, and conclusions (TABLE).
Background
Reporting of the background should
include the definition of the problem
under study, statement of hypothesis,
description of the study outcome(s)
considered, type of exposure or intervention used, type of study design used,
and complete description of the study
population. When combining observational studies, heterogeneity of populations (eg, US vs international studies), design (eg, case-control vs cohort
studies), and outcome (eg, different
studies yielding different relative risks
that cannot be accounted for by sampling variation) is expected.8
Search
Reporting of the search strategy should
include qualifications of the searchers, specification of databases used,
search strategy and index terms, use of
any special features (eg, explosion),
search software used, use of hand
searching and contact with authors, use
of materials in languages other than English, use of unpublished material, and
exclusion criteria used. Published research shows that use of electronic databases may find only half of all relevant studies, and contacting authors
may be useful,27 although this result
may not be true for all topic areas.28
For example, a meta-analysis of depression in elderly medical inpatients29
used 2 databases for the search. In
addition, bibliographies of retrieved
papers were searched. However, the authors did not report their search strategy in enough detail to allow replication. An example of a thorough reject
log can be found in the report of a meta-analysis of electrical and magnetic field
exposure and leukemia.30 Examples of
a table characterizing studies included
can be found in Franceschi et al31 and
Saag et al.32 Complete specification of
search strategy is not uniform; a review
of 103 published meta-analyses in education showed that search procedures
were described inadequately in the majority of the articles.10
Methods
Items in this checklist section are concerned with the appropriateness of any
quantitative summary of the data; degree to which coding of data from the articles was specified and objective; assessment of confounding, study quality, and
heterogeneity; use of statistical methods; and display of results. Empirical evidence shows that reporting of procedures for classification and coding and
quality assessment is often incomplete:
fewer than half of the meta-analyses reported details of classifying and coding
the primary study data, and only 22% assessed quality of the primary studies.10
We recognize that the use of quality
scoring in meta-analyses of observational studies is controversial, as it is for
RCTs,16,33 because scores constructed in
an ad hoc fashion may lack demonstrated validity, and results may not be
associated with quality.34 Nevertheless,
some particular aspects of study quality have been shown to be associated
with effect: eg, adequate concealment of
allocation in randomized trials.35 Thus,
key components of design, rather than
aggregate scores themselves, may be important. For example, in a study of blinding (masking) of readers participating
in meta-analyses, masking essentially
made no difference in the summary
odds ratios across the 5 meta-analyses.36 We recommend the reporting of quality scoring if it has been done and also recommend subgroup or sensitivity analysis rather than using quality scores as weights in the analysis.37,38

Table. A Proposed Reporting Checklist for Authors, Editors, and Reviewers of Meta-analyses of Observational Studies

Reporting of background should include
  Problem definition
  Hypothesis statement
  Description of study outcome(s)
  Type of exposure or intervention used
  Type of study designs used
  Study population
Reporting of search strategy should include
  Qualifications of searchers (eg, librarians and investigators)
  Search strategy, including time period included in the synthesis and keywords
  Effort to include all available studies, including contact with authors
  Databases and registries searched
  Search software used, name and version, including special features used (eg, explosion)
  Use of hand searching (eg, reference lists of obtained articles)
  List of citations located and those excluded, including justification
  Method of addressing articles published in languages other than English
  Method of handling abstracts and unpublished studies
  Description of any contact with authors
Reporting of methods should include
  Description of relevance or appropriateness of studies assembled for assessing the hypothesis to be tested
  Rationale for the selection and coding of data (eg, sound clinical principles or convenience)
  Documentation of how data were classified and coded (eg, multiple raters, blinding, and interrater reliability)
  Assessment of confounding (eg, comparability of cases and controls in studies where appropriate)
  Assessment of study quality, including blinding of quality assessors; stratification or regression on possible predictors of study results
  Assessment of heterogeneity
  Description of statistical methods (eg, complete description of fixed or random effects models, justification of whether the chosen models account for predictors of study results, dose-response models, or cumulative meta-analysis) in sufficient detail to be replicated
  Provision of appropriate tables and graphics
Reporting of results should include
  Graphic summarizing individual study estimates and overall estimate
  Table giving descriptive information for each study included
  Results of sensitivity testing (eg, subgroup analysis)
  Indication of statistical uncertainty of findings
Reporting of discussion should include
  Quantitative assessment of bias (eg, publication bias)
  Justification for exclusion (eg, exclusion of non-English-language citations)
  Assessment of quality of included studies
Reporting of conclusions should include
  Consideration of alternative explanations for observed results
  Generalization of the conclusions (ie, appropriate for the data presented and within the domain of the literature review)
  Guidelines for future research
  Disclosure of funding source
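The checklist's call for a complete description of fixed- or random-effects models can be illustrated with a minimal sketch. The study estimates below are hypothetical, and the DerSimonian-Laird method is only one common choice of random-effects model, standing in for whatever model a given meta-analysis actually specifies:

```python
import math

def pool_fixed(log_rr, se):
    """Inverse-variance fixed-effect pooled estimate of log relative risks."""
    w = [1.0 / s ** 2 for s in se]
    est = sum(wi * y for wi, y in zip(w, log_rr)) / sum(w)
    return est, math.sqrt(1.0 / sum(w))

def pool_random(log_rr, se):
    """DerSimonian-Laird random-effects pooled estimate.

    tau2 is the method-of-moments estimate of between-study variance;
    it inflates each study's variance, widening the pooled interval.
    """
    w = [1.0 / s ** 2 for s in se]
    fixed, _ = pool_fixed(log_rr, se)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rr))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)
    w_star = [1.0 / (s ** 2 + tau2) for s in se]
    est = sum(wi * y for wi, y in zip(w_star, log_rr)) / sum(w_star)
    return est, math.sqrt(1.0 / sum(w_star))

# Hypothetical log relative risks and standard errors from 4 studies
log_rr = [math.log(1.8), math.log(1.2), math.log(2.4), math.log(0.9)]
se = [0.25, 0.15, 0.40, 0.20]

fe_est, fe_se = pool_fixed(log_rr, se)
re_est, re_se = pool_random(log_rr, se)
```

Because the between-study variance can only widen the interval, the random-effects standard error is never smaller than the fixed-effect one, which is the statistical expression of the heterogeneity concerns discussed throughout this statement.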
While some control over heterogeneity of design may be accomplished
through the use of exclusion rules, we
recommend using broad inclusion criteria for studies, and then performing
analyses relating design features to outcome.8 In cases when heterogeneity of
outcomes is particularly problematic,
a single summary measure may well be
inappropriate.39 Analyses that stratify
by study feature or regression analysis
with design features as predictors can
be useful in assessing whether study
outcomes indeed vary systematically
with these features.40
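An analysis that stratifies by study feature, as recommended above, can be sketched as follows; the studies and their design labels are hypothetical:

```python
import math

# Hypothetical studies: (log relative risk, standard error, design feature)
studies = [
    (math.log(1.9), 0.30, "cohort"),
    (math.log(2.1), 0.25, "cohort"),
    (math.log(1.2), 0.20, "case-control"),
    (math.log(1.1), 0.35, "case-control"),
]

def pooled_by_stratum(studies):
    """Inverse-variance pooled log relative risk within each design stratum."""
    strata = {}
    for y, s, design in studies:
        strata.setdefault(design, []).append((y, 1.0 / s ** 2))
    return {
        design: sum(w * y for y, w in pairs) / sum(w for _, w in pairs)
        for design, pairs in strata.items()
    }

by_design = pooled_by_stratum(studies)
# A large gap between stratum-specific estimates suggests that results
# vary systematically with study design.
```

The same comparison can be framed as a meta-regression with the design feature as a predictor; stratification is simply its most transparent special case.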
Investigating heterogeneity was a key
feature of a meta-analysis of observational studies of asbestos exposure and
risk of gastrointestinal cancer.41 The authors of the meta-analysis hypothesized that studies allowing for a latent
period between the initiation of exposure and any increases in risk should
show, on average, appropriately higher
standardized mortality ratios than studies that ignored latency. In other words,
the apparent effect of exposure would
be attenuated by including the latent
period in the calculation of time at risk
(the denominator), since exposure-related deaths (the numerator) would,
by definition, not occur during that latent period (FIGURE).
In fact, the data suggested that studies allowing for latent periods found
on average somewhat higher standardized mortality ratios than studies
ignoring latency. This example shows
that sources of bias and heterogeneity
can be hypothesized prior to analysis
and subsequently confirmed by the
analysis.
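The attenuation described above is a matter of arithmetic: a sketch with hypothetical cohort numbers shows how counting latent person-years in the denominator lowers the standardized mortality ratio:

```python
# Hypothetical cohort: 12 exposure-related deaths, all after a 10-year
# latent period; 5000 person-years accrue during latency, 20000 after.
# The comparison population's rate implies 8 expected deaths per
# 25000 person-years.
deaths = 12.0
expected_rate = 8.0 / 25000.0           # expected deaths per person-year
py_latent = 5000.0
py_after = 20000.0

# Ignoring latency: latent person-years inflate the denominator
smr_ignoring = deaths / (expected_rate * (py_latent + py_after))   # 1.5

# Allowing for latency: only post-latency person-years count
smr_allowing = deaths / (expected_rate * py_after)                 # 1.875
```

Since the exposure-related deaths cannot occur during the latent period, every latent person-year added to the denominator dilutes the ratio, exactly as the meta-analysts hypothesized.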
Results
Recommendations for reporting of results include graphical summaries of
study estimates and any combined estimate, a table listing descriptive information for each study, results of sensitivity testing and any subgroup
analysis, and an indication of statistical uncertainty of findings.
Discussion
The discussion should include issues related to bias, including publication bias,
confounding, and quality. Bias can occur in the original studies (resulting from
flaws in the study design that tend to distort the magnitude or direction of associations in the data) or from the way in
which studies are selected for inclusion.42 Publication bias, the selective
publication of studies based on the magnitude (usually larger) and direction of
their findings, represents a particular
threat to the validity of meta-analysis of
observational studies.43-45 Thorough
specifications of quality assessment can
contribute to understanding some of the
variations in the observational studies
themselves. Methods should be used to
aid in the detection of publication bias,
eg, fail-safe procedures or funnel plots.46
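As one concrete instance of a fail-safe procedure, Rosenthal's fail-safe N estimates how many unpublished null studies would be needed to overturn a pooled result; the z statistics below are hypothetical:

```python
import math

def fail_safe_n(z_scores, z_alpha=1.645):
    """Rosenthal's fail-safe N: how many unpublished studies averaging
    z = 0 would pull the Stouffer combined test above one-tailed alpha."""
    n = (sum(z_scores) / z_alpha) ** 2 - len(z_scores)
    return max(0, math.ceil(n))

# Hypothetical z statistics from 5 published studies
z = [2.1, 1.8, 2.5, 1.2, 2.9]
n_null = fail_safe_n(z)   # 36 null studies needed
```

A small fail-safe N relative to the number of included studies is a warning that a modest file drawer of unpublished null results could erase the finding; a funnel plot of effect size against precision serves the same diagnostic purpose graphically.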
Schlesselman47 comments on such biases in assessing the possible association between endometrial cancer and
oral contraceptives. This meta-analysis combined both cohort and case-control studies and used a sensitivity
analysis to illustrate the influence of
specific studies, such as those published in English.
Conclusion
Due to these biases in observational
studies, the conclusion of the report
should contain consideration of alternative explanations for observed results and appropriate generalizations of
the conclusion. A carefully conducted
meta-analysis can reveal areas warranting further research. Finally, since funding source has been shown to be an important source of heterogeneity,48 the
sponsoring organization should be disclosed and any effect on analysis should
be examined.
COMMENT
Taking stock of what is known in any
field involves reviewing the existing literature, summarizing it in appropriate ways, and exploring the implications of heterogeneity of population and
study for heterogeneity of study results. Meta-analysis provides a systematic way of performing this research
synthesis, while indicating when more
research is necessary.
The application of formal meta-analytic methods to observational studies has been controversial.42 One reason
for this has been that potential biases in
the original studies, relative to the biases
in RCTs, make the calculation of a single
summary estimate of effect of exposure
potentially misleading. Similarly, the
extreme diversity of study designs and
populations in epidemiology makes the
interpretation of simple summaries problematic, at best. In addition, methodologic issues related specifically to meta-analysis, such as publication bias, could
have particular impact when combining results of observational studies.44,47
Despite these challenges, meta-analyses of observational studies continue to be one of the few methods for
assessing efficacy and effectiveness and
are being published in increasing numbers. Our goal is to improve th…