Refer to Slide 7 of the Week 3 slide deck for this reply.
If you had to make an informed decision, as a health professional, in moving forward on a treatment, intervention, or policy, which 1 or 2 of these considerations would be important for you, and why?
Evidence-Based Public Health: A Fundamental Concept for Public Health Practice

Ross C. Brownson,1 Jonathan E. Fielding,2 and Christopher M. Maylahn3

1 Prevention Research Center in St. Louis, George Warren Brown School of Social Work, Department of Surgery and Alvin J. Siteman Cancer Center, Washington University School of Medicine, Washington University in St. Louis, St. Louis, Missouri 63110; email: rbrownson@wustl.edu
2 Los Angeles Department of Health Services, Los Angeles, California 90012; School of Public Health, University of California, Los Angeles, California 90095-1772; email: jfieldin@ucla.edu
3 Office of Public Health Practice, New York State Department of Health, Albany, New York 12237; email: cmm05@health.state.ny.us
Annu. Rev. Public Health 2009. 30:175–201. First published online as a Review in Advance on January 14, 2009.

The Annual Review of Public Health is online at publhealth.annualreviews.org. This article's doi: 10.1146/annurev.publhealth.031308.100134

Copyright © 2009 by Annual Reviews. All rights reserved. 0163-7525/09/0421-0175$20.00

Key Words: disease prevention, evidence-based medicine, intervention, population-based
Abstract
Despite the many accomplishments of public health, greater attention to evidence-based approaches is warranted. This article reviews
the concepts of evidence-based public health (EBPH), on which formal
discourse originated about a decade ago. Key components of EBPH
include making decisions on the basis of the best available scientific
evidence, using data and information systems systematically, applying program-planning frameworks, engaging the community in decision making, conducting sound evaluation, and disseminating what is
learned. Three types of evidence have been presented on the causes of
diseases and the magnitude of risk factors, the relative impact of specific interventions, and how and under which contextual conditions interventions were implemented. Analytic tools (e.g., systematic reviews,
economic evaluation) can be useful in accelerating the uptake of EBPH.
Challenges and opportunities (e.g., political issues, training needs) for
disseminating EBPH are reviewed. The concepts of EBPH outlined in
this article hold promise to better bridge evidence and practice.
INTRODUCTION
Public health research and practice are credited
with many notable achievements, including
much of the 30-year gain in life expectancy in
the United States over the twentieth century
(124). A large part of this increase can be
attributed to provision of safe water and
food, sewage treatment and disposal, tobacco
use prevention and cessation, injury prevention, control of infectious diseases through
immunization and other means, and other
population-based interventions (34).
Despite these successes, many additional
opportunities to improve the public’s health
remain. To achieve state and national objectives for improved population health, more
widespread adoption of evidence-based strategies has been recommended (19, 57, 64, 109,
119). Increased focus on evidence-based public health (EBPH) has numerous direct and indirect benefits, including access to more and
higher-quality information on what works, a
higher likelihood of successful programs and
policies being implemented, greater workforce
productivity, and more efficient use of public
and private resources (19, 77, 95).
Ideally, public health practitioners should always incorporate scientific evidence in selecting
and implementing programs, developing policies, and evaluating progress (23, 107). Society pays a high opportunity cost when interventions that yield the highest health return
on an investment are not implemented (55). In
practice, intervention decisions are often based
on perceived short-term opportunities, lacking
systematic planning and review of the best evidence regarding effective approaches. These
concerns were noted two decades ago when
the Institute of Medicine determined that decision making in public health is often driven
by “crises, hot issues, and concerns of organized interest groups” (p. 4) (82). Barriers to
implementing EBPH include the political environment and deficits in relevant and timely
research, information systems, resources, leadership, and the required competencies (4, 7, 23,
78).
It is difficult to estimate how widely evidence-based approaches are being applied. In a survey of 107 U.S. public health practitioners, an estimated 58% of programs in their agencies were deemed evidence-based (i.e., using the most current evidence from peer-reviewed research) (51). This finding in public health settings appears to mirror the use of evidence-based approaches in clinical care. A study of a random sample of adults living in selected metropolitan areas within the United States found that 55% of overall medical care was based on what is recommended in the medical literature (108). Thacker and colleagues (159) found that the preventable fraction (i.e., how much of a reduction in the health burden is estimated to occur if an intervention is carried out) was known for only 4.4% of 702 population-based interventions. Similarly, cost-effectiveness data are reported for a low proportion of public health interventions.
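To make the preventable fraction concrete, the following minimal Python sketch computes it from two incidence estimates. The function name and all figures are hypothetical illustrations, not values from Thacker and colleagues.

def preventable_fraction(incidence_without: float, incidence_with: float) -> float:
    """Estimated proportion of the health burden averted if the
    intervention is carried out: (I0 - I1) / I0."""
    if incidence_without <= 0:
        raise ValueError("baseline incidence must be positive")
    return (incidence_without - incidence_with) / incidence_without

# Hypothetical figures: baseline incidence of 50 cases per 100,000
# person-years; modeled incidence with the intervention of 35 per 100,000.
pf = preventable_fraction(50.0, 35.0)
print(f"Preventable fraction: {pf:.0%}")  # -> Preventable fraction: 30%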
Several concepts are fundamental to achieving a more evidence-based approach to public
health practice. First, we need scientific information on the programs and policies that are
most likely to be effective in promoting health
(i.e., undertake evaluation research to generate sound evidence) (14, 19, 45, 77). An array
of effective interventions is now available from
numerous sources including the Guide to Community Preventive Services (16, 171), the Guide
to Clinical Preventive Services (2), Cancer Control PLANET (29), and the National Registry
of Evidence-Based Programs and Practices (142).
Second, to translate science to practice, we need
to marry information on evidence-based interventions from the peer-reviewed literature with
the realities of a specific real-world environment (19, 69, 96). To do so, we need to better define processes that lead to evidence-based
decision making. Finally, wide-scale dissemination of interventions of proven effectiveness
must occur more consistently at state and local
levels (91). This article focuses particularly on
state and local public health departments because of their responsibilities to assess public
health problems, develop appropriate programs
or policies, and assure that programs and policies are effectively implemented in states and
local communities (81, 82).
We review EBPH in four major sections that
describe (a) relevant background issues, including concepts underlying EBPH and definitions
of evidence; (b) key analytic tools to enhance the
adoption of evidence-based decision making;
(c) challenges and opportunities for implementation in public health practice; and (d) future
issues.
EVOLUTION OF THE TENETS OF EVIDENCE-BASED PUBLIC HEALTH
Formal discourse on the nature and scope of
EBPH originated about a decade ago. Several
authors have attempted to define EBPH. In
1997, Jenicek defined EBPH as the “conscientious, explicit, and judicious use of current best
evidence in making decisions about the care
of communities and populations in the domain
of health protection, disease prevention, health
maintenance and improvement (health promotion)” (84). In 1999, scholars and practitioners in Australia (64) and the United States (23)
elaborated further on the concept of EBPH.
Glasziou and colleagues posed a series of questions to enhance uptake of EBPH (e.g., “Does
this intervention help alleviate this problem?”)
and identified 14 sources of high-quality evidence (64). Brownson and colleagues described
a six-stage process by which practitioners can
take a more evidence-based approach to decision making (19, 23). Kohatsu and colleagues
broadened earlier definitions of EBPH to include the perspectives of community members,
fostering a more population-centered approach
(96). In 2004, Rychetnik and colleagues summarized many key concepts in a glossary for EBPH
(141). There appears to be a consensus among
investigators and public health leaders that a
combination of scientific evidence and values,
resources, and context should enter into decision making (Figure 1) (19, 119, 141, 151, 152).
Figure 1. Domains that influence evidence-based decision making [from Spring et al. (151, 152)]. Decision making sits at the intersection of the best available research evidence; population characteristics, needs, values, and preferences; resources, including practitioner expertise; and the environment and organizational context.
In summarizing these various attributes of EBPH, key characteristics include:
• making decisions using the best available peer-reviewed evidence (both quantitative and qualitative research),
• using data and information systems systematically,
• applying program-planning frameworks (that often have a foundation in behavioral science theory),
• engaging the community in assessment and decision making,
• conducting sound evaluation, and
• disseminating what is learned to key stakeholders and decision makers.
Accomplishing these activities in EBPH is
likely to require a synthesis of scientific skills,
enhanced communication, common sense, and
political acumen.
Defining Evidence
At the most basic level, evidence involves “the
available body of facts or information indicating whether a belief or proposition is true or
valid” (85). The idea of evidence often derives
from legal settings in Western societies. In law,
evidence comes in the form of stories, witness accounts, police testimony, expert opinions, and forensic science (112). For a public health professional, evidence is some form
of data—including epidemiologic (quantitative)
data, results of program or policy evaluations,
and qualitative data—for use in making judgments or decisions (Figure 2). Public health
evidence is usually the result of a complex cycle of observation, theory, and experiment (114,
138). However, the value of evidence is in the
eye of the beholder (e.g., usefulness of evidence
may vary by stakeholder type) (92). Medical evidence includes not only research but characteristics of the patient, a patient’s readiness to
undergo a therapy, and society’s values (122).
Policy makers seek out distributional consequences (i.e., who has to pay, how much, and
who benefits) (154), and in practice settings,
anecdotes sometimes trump empirical data (26).
Evidence is usually imperfect and, as noted by
Muir Gray, “[t]he absence of excellent evidence
does not make evidence-based decision making
impossible; what is required is the best evidence
available not the best evidence possible” (119).
Figure 2. Different forms of evidence, arrayed from objective to subjective. Adapted from Chambers & Kerner (37). Objective: scientific literature in systematic reviews; scientific literature in one or more journal articles; public health surveillance data; program evaluations; qualitative data (from community members and other stakeholders). Subjective: media/marketing data; word of mouth; personal experience.

Table 1. Comparison of the types of scientific evidence

Type One — Typical data/relationship: size and strength of the preventable risk—disease relationship (measures of burden, etiologic research). Common setting: clinic or controlled community setting. Example: smoking causes lung cancer. Quantity: more. Action: something should be done.

Type Two — Typical data/relationship: relative effectiveness of a public health intervention. Common setting: socially intact groups or community wide. Example: price increases with a targeted media campaign reduce smoking rates. Quantity: less. Action: this particular intervention should be implemented.

Type Three — Typical data/relationship: information on the adaptation and translation of an effective intervention. Common setting: socially intact groups or community wide. Example: understanding the political challenges of price increases or targeting media messages to particular audience segments. Quantity: less. Action: how an intervention should be implemented.
Several authors have defined types of scientific evidence for public health practice
(Table 1) (19, 23, 141). Type 1 evidence defines the causes of diseases and the magnitude, severity, and preventability of risk factors and diseases. It suggests that “something
should be done” about a particular disease or
risk factor. Type 2 evidence describes the relative impact of specific interventions that do
or do not improve health, adding “specifically,
this should be done” (19). There are different
sources of Type 2 evidence (Table 2). These
categories build on work from Canada, the
United Kingdom, Australia, the Netherlands,
and the United States on how to recast the
strength of evidence, emphasizing the weight
of evidence and a wider range of considerations beyond efficacy. We define four categories
within a typology of scientific evidence for
decision making: evidence-based, efficacious,
promising, and emerging interventions. Adherence to a strict hierarchy of study designs may
reinforce an inverse evidence law by which interventions most likely to influence whole populations (e.g., policy change) are least valued
in an evidence matrix emphasizing randomized
designs (125, 127). Type 3 evidence (of which
we have the least) shows how and under which
contextual conditions interventions were implemented and how they were received, thus
informing “how something should be done”
(141). Studies to date have tended to overemphasize internal validity (e.g., well-controlled
efficacy trials) while giving sparse attention to
external validity (e.g., the translation of science to the various circumstances of practice)
(62, 71).
Table 2. Typology for classifying interventions by level of scientific evidence

Evidence-based — How established: peer review via systematic or narrative review. Considerations for the level of scientific evidence: based on study design and execution; external validity; potential side benefits or harms; costs and cost-effectiveness. Data source examples: the Community Guide; Cochrane reviews; narrative reviews based on published literature.

Effective — How established: peer review. Considerations: based on study design and execution; external validity; potential side benefits or harms; costs and cost-effectiveness. Data source examples: articles in the scientific literature; research-tested intervention programs (123); technical reports with peer review.

Promising — How established: written program evaluation without formal peer review. Considerations: summative evidence of effectiveness; formative evaluation data; theory-consistent, plausible, potentially high-reach, low-cost, replicable. Data source examples: state or federal government reports (without peer review); conference presentations.

Emerging — How established: ongoing work, practice-based summaries, or evaluation works in progress. Considerations: formative evaluation data; theory-consistent, plausible, potentially high-reach, low-cost, replicable; face validity. Data source examples: evaluability assessments (a pre-evaluation activity that assesses whether a program or policy can be evaluated and what the barriers to its evaluation might be) (145); pilot studies; the NIH CRISP database; projects funded by health foundations.
Understanding the context for evidence.
Type 3 evidence derives from the context of
an intervention (141). Although numerous authors have written about the role of context in
informing evidence-based practice (32, 60, 77,
90, 92, 93, 140, 141), there is little consensus
on its definition. When moving from clinical
interventions to population-level and policy interventions, context becomes more uncertain,
variable, and complex (49). One useful definition of context highlights information needed
to adapt and implement an evidence-based intervention in a particular setting or population
(141). The context for Type 3 evidence specifies five overlapping domains (Table 3). First,
characteristics of the target population for an
intervention are defined such as education level
and health history (104). Next, interpersonal
variables provide important context. For example, a person with a family history of cancer
might be more likely to undergo cancer screening. Third, organizational variables should be
considered. For example, whether an agency
is successful in carrying out an evidence-based
program will be influenced by its capacity (e.g.,
a trained workforce, agency leadership) (51, 77).
Fourth, social norms and culture are known to
shape many health behaviors. Finally, larger political and economic forces affect context. For
example, a high rate for a certain disease may
influence a state’s political will to address the
issue in a meaningful and systematic way. Particularly for high-risk and understudied populations, there is a pressing need for evidence
on contextual variables and ways of adapting
programs and policies across settings and population subgroups. Contextual issues are being
addressed more fully in the new realist review,
which is a systematic review process that seeks
to examine not only whether an intervention
works but also how interventions work in real-world settings (134).
Table 3. Contextual variables for intervention design, implementation, and adaptation

Individual: education level; basic human needs (food, shelter, warmth, safety) (104); personal health history.
Interpersonal: family health history; support from peers; social capital.
Organizational: staff composition; staff expertise; physical infrastructure; organizational culture.
Sociocultural: social norms; values; cultural traditions; history.
Political and economic: political will; political ideology; lobbying and special interests; costs and benefits.
Triangulating evidence. Triangulation involves the accumulation of evidence from a variety of sources to gain insight into a particular
topic (164) and often combines quantitative and
qualitative data (19). It generally uses multiple
methods of data collection and/or analysis to
determine points of commonality or disagreement (47, 153). Triangulation is often beneficial because of the complementary nature of
information from different sources. Although
quantitative data provide an excellent opportunity to determine how variables are related
for large numbers of people, these data provide
little understanding of why these relationships
exist. Qualitative data, on the other hand, can
help provide information to explain quantitative findings, or what has been called “illuminating meaning” (153). One can find many examples of the use of triangulation of qualitative
and quantitative data to evaluate health programs and policies including AIDS-prevention
programs (50), occupational health programs
and policies (79), and chronic disease prevention programs in community settings (66).
Audiences for EBPH
There are four overlapping user groups for
EBPH (56). The first includes public health
practitioners with executive and managerial responsibilities who want to know the scope and
quality of evidence for alternative strategies
(e.g., programs, policies). In practice, however,
public health practitioners frequently have a
relatively narrow set of options. Funds from
federal, state, or local sources are most often
earmarked for a specific purpose (e.g., surveillance and treatment of sexually transmitted diseases, inspection of retail food establishments).
Still, the public health practitioner has the opportunity, even the obligation, to carefully review the evidence for alternative ways to achieve
the desired health goals. The next user group
is policy makers at local, regional, state, national, and international levels. They are faced
with macrolevel decisions on how to allocate
the public resources of which they are stewards.
This group has the additional responsibility of
making policies on controversial public issues.
The third group is composed of stakeholders
who will be affected by any intervention. This
includes the public, especially those who vote,
as well as interest groups formed to support or
oppose specific policies, such as the legality of
abortion, whether the community water supply
should be fluoridated, or whether adults must
be issued handgun licenses if they pass background checks. The final user group is composed of researchers on population health issues, such as those who evaluate the impact of a
specific policy or program. They both develop
and use evidence to answer research questions.
Similarities and Differences between EBPH and Evidence-Based Medicine
The concept of evidence-based practice is well
established in numerous disciplines including psychology (136), social work (58), and
nursing (115). It is probably best established
in medicine. The doctrine of evidence-based
medicine (EBM) was formally introduced in
1992 (53). Its origins can be traced back to
the seminal work of Cochrane that noted many
medical treatments lacked scientific effectiveness (41).
A basic tenet of EBM is to deemphasize unsystematic clinical experience and place
greater emphasis on evidence from clinical research. This approach requires new skills, such
as efficient literature searching and an understanding of types of evidence in evaluating the
clinical literature (73). The literature on EBM
has grown rapidly, contributing to the formal
recognition of EBM. Using the search term
“evidence-based medicine,” there were 0 citations in 1991, rising to 4040 citations in 2007
(Figure 3). Even though the formal terminology of EBM is relatively recent, its concepts
are embedded in earlier efforts such as the
Canadian Task Force for the Periodic Health
Examination (28) and the Guide to Clinical Preventive Services (167).
Figure 3. Citations for evidence-based medicine.
Important distinctions can be made between
evidence-based approaches in medicine and
public health. First, the type and volume of evidence differ. Medical studies of pharmaceuticals and procedures often rely on randomized controlled trials of individuals, the most
scientifically rigorous of epidemiologic studies. In contrast, public health interventions
usually rely on cross-sectional studies, quasiexperimental designs, and time-series analyses. These studies sometimes lack a comparison
group and require more caveats when interpreting the results. Over the past 50 years, there
have been more than one million randomized
controlled trials of medical treatments (157).
Many fewer studies have been performed on
the effectiveness of public health interventions
(19, 128) because they are difficult to design,
and often results derive from natural experiments (e.g., a state adopting a new policy compared with other states). EBPH has borrowed
the term intervention from clinical disciplines,
implying specificity and discreteness. However, in public health, we seldom have a single
“intervention,” but rather a program that involves a blending of several interventions within
a community. Large community-based trials
can be more expensive to conduct than randomized experiments in a clinic. Population-based studies generally require a longer time
period between intervention and outcome. For
example, a study on the effects of smoking cessation on lung cancer mortality would require
decades of data collection and analysis. Contrast that with treatment of a medical condition (e.g., an antibiotic for symptoms of pneumonia), which is likely to produce effects in
days or weeks, or even a surgical trial for cancer with endpoints of mortality within a few
years.
The formal training of persons working in
public health is much more variable than that
in medicine or other clinical disciplines (161).
Unlike medicine, public health relies on a variety of disciplines, and there is not a single academic credential that certifies a public health
practitioner, although efforts to establish credentials (via an exam) are now underway. Fewer
than 50% of public health workers have any formal training in a public health discipline such
as epidemiology or health education (166). This
higher level of heterogeneity means that multiple perspectives are involved in a more complicated decision-making process. It also suggests
that effective public health practice places a premium on routine, on-the-job training.
ANALYTIC TOOLS AND APPROACHES TO ENHANCE THE UPTAKE OF EBPH
Several analytic tools and planning approaches
can help practitioners answer questions such as
the following:
• What is the size of the public health problem?
• Are there effective interventions for addressing the problem?
• What information about the local context and this particular intervention is helpful in deciding its potential use in the situation at hand?
• Is a particular program or policy worth doing (i.e., is it better than alternatives?), and will it provide a satisfactory return on investment, measured in monetary terms or in health impacts?
Public Health Surveillance
Public health surveillance is a critical tool for
those using EBPH. This process involves the
ongoing systematic collection, analysis, and
interpretation of specific health data, closely
integrated with the timely dissemination of
these data to those responsible for preventing
and controlling disease or injury (158). Public health surveillance systems should be able
to collect and analyze data, disseminate data
to public health programs, and regularly evaluate the effectiveness of the use of the disseminated data (160). For example, documentation
of the prevalence of elevated levels of lead (a
known toxicant) in blood in the U.S. population
was used as the justification for eliminating lead
from paint and then gasoline and for documenting the effects of these actions (5). In tobacco
control, agreement on a common metric for tobacco use enabled comparisons across the states
and an early recognition of the doubling and
then tripling of the rates of decrease in smoking in California after passage of its Proposition
99 (163), as well as a subsequent quadrupling of
the rate of decline in Massachusetts compared
with the other 48 states (11).
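As an illustration of the kind of comparison a common surveillance metric makes possible, here is a minimal Python sketch contrasting average annual rates of change in smoking prevalence before and after a hypothetical policy change. All numbers are invented for illustration, not taken from the California or Massachusetts data cited above.

def annual_rate_of_change(prev_start: float, prev_end: float, years: int) -> float:
    """Average annual (compounded) percent change in prevalence;
    negative values indicate decline."""
    return (prev_end / prev_start) ** (1 / years) - 1

# Hypothetical prevalence (%) over two four-year periods
pre = annual_rate_of_change(26.0, 24.0, 4)    # before the policy
post = annual_rate_of_change(24.0, 19.5, 4)   # after the policy
print(f"Pre-policy: {pre:.1%} per year; post-policy: {post:.1%} per year")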
Systematic Reviews and Evidence-Based Guidelines
Systematic reviews are syntheses of comprehensive collections of information on a particular topic (see examples in Table 4). Reading
a good review can be one of the most efficient
ways to become familiar with state-of-the-art
research and practice on many specific topics
in public health (80, 117, 121). The use of explicit, systematic methods (i.e., decision rules)
in reviews limits bias and reduces chance effects,
thus providing more reliable results upon which
to make decisions (132). One of the most useful
sets of reviews for public health interventions is
the Guide to Community Preventive Services (the
Community Guide) (120, 171), which provides an
overview of current scientific literature through
a well-defined, rigorous method in which available studies themselves are the units of analysis. The Community Guide seeks to answer the
following: (a) Which interventions have been
evaluated, and what have been their effects?
(b) Which aspects of interventions can help
Guide users select from among the set of interventions of proven effectiveness? And finally,
(c) What might this intervention cost, and how
do these costs compare with the likely health
impacts?
Several authors have provided checklists for
assessing the quality of a systematic review article (Table 5) (74, 88, 131). A good systematic
review should allow the practitioner to understand the local contextual conditions necessary
for successful implementation (168).
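Where a review reports pooled results (see the checklist in Table 5 below), the pooling is often an inverse-variance weighted average of study effect estimates. The following minimal Python sketch shows a fixed-effect version with hypothetical effect sizes; it is an illustration of the general technique, not the Community Guide's method, which uses the available studies themselves as the units of analysis.

import math

# Hypothetical study results: (log relative risk, standard error)
studies = [(-0.22, 0.10), (-0.15, 0.08), (-0.30, 0.15)]

# Fixed-effect, inverse-variance pooling
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled RR: {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f})")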
Table 4. Examples of systematic reviews and evidence-based guidelines

Guide to Community Preventive Services (http://www.thecommunityguide.org) — The Guide to Community Preventive Services (the Community Guide) summarizes what is known about the effectiveness, economic efficiency, and feasibility of population-based interventions. The Task Force on Community Preventive Services makes recommendations for the use of various interventions on the basis of evidence gathered in the rigorous and systematic scientific reviews of published studies conducted by the review teams of the Community Guide. The findings from the reviews are published in peer-reviewed journals and are also made available on the Web site.

Guide to Clinical Preventive Services (http://www.ahrq.gov/clinic/prevenix.htm) — The U.S. Preventive Services Task Force (USPSTF) conducts rigorous and systematic reviews of the scientific evidence for the effectiveness of a broad range of clinical preventive services, including screening, counseling, and preventive medications. The mission of the USPSTF is to evaluate the benefits of individual services on the basis of age, gender, and risk factors for disease; make recommendations about which preventive services should be incorporated routinely into primary medical care and for which populations; and identify a research agenda for clinical preventive care.

Cochrane Collaboration (http://www.cochrane.org/) — The Cochrane Collaboration is an international organization dedicated to making up-to-date, accurate information about the effects of health care readily available. It produces and disseminates systematic reviews of health care interventions and promotes the search for evidence in the form of clinical trials and other studies of interventions. The Cochrane Collaboration was founded in 1993 and named after the British epidemiologist Archie Cochrane. The major product of the Collaboration is the Cochrane Database of Systematic Reviews, which is published quarterly as part of the Cochrane Library.

Cochrane Public Health Group (http://www.ph.cochrane.org/) — The Cochrane Public Health Group (PHRG), formerly the Health Promotion and Public Health Field, aims to work with contributors to produce and publish Cochrane reviews of the effects of population-level public health interventions. The PHRG undertakes systematic reviews of the effects of public health interventions to improve health and other outcomes at the population level, not those targeted at individuals. Thus, it covers interventions seeking to address macroenvironmental and distal social environmental factors that influence health. In line with the underlying principles of public health, these reviews seek to have a significant focus on equity and aim to build the evidence to address the social determinants of health.

Center for Reviews and Dissemination (http://www.york.ac.uk/inst/crd/index.htm) — The Center for Reviews and Dissemination (CRD) is part of the National Institute for Health Research and is a department of the University of York. CRD, which was established in 1994, is one of the largest groups in the world engaged exclusively in evidence synthesis in the health field. CRD undertakes systematic reviews evaluating the research evidence on health and public health questions of national and international importance.

Campbell Collaboration (http://www.campbellcollaboration.org/) — The Campbell Collaboration, named after Donald Campbell, was founded on the principle that systematic reviews on the effects of interventions will inform and help improve policy and services. The Collaboration strives to make the best social science research available and accessible. Campbell reviews provide high-quality evidence of what works to meet the needs of service providers, policy makers, educators and their students, professional researchers, and the general public. Areas of interest include crime, justice, education, and social welfare.
Table 5 Checklist for evaluating the methodologic quality of a systematic review. Adapted from
Kelsey et al. (88), Oxman et al. (131), Guyatt & Rennie (74), and Briss et al. (16, 17)
What are the methods?
• Are decision rules for the systematic review explicit, transparent, and clearly described?
• Do the methods account for study design?
• Is study execution considered?
Are the results valid?
• Were the results similar from study to study?
• How precise were the results?
• Do the pooled results allow me to examine subgroup differences?
• Did the review explicitly address a focused and answerable question?
• On the basis of the search process, is it likely that important, relevant studies were missed?
• Were the primary studies of high methodologic quality?
• Were assessments of studies reproducible?
• Can a causal association be inferred from the available data?
How can I apply the results to population health and/or patient care?
• How can I best interpret the results to apply them to the populations that I serve in my public health
agency or to the care of patients in my practice?
• Were all outcomes of clinical and public health importance considered?
• Are the benefits worth the costs and potential risks?
• Did the authors provide explicit consideration of external validity?
Economic Evaluation

Economic evaluation is an important component of evidence-based practice (65). It can provide information to help assess the relative value
of alternative expenditures on public health
programs and policies. In cost-benefit analysis,
all the costs and consequences of the decision
options are valued in monetary terms. More often, the economic investment associated with
an intervention is compared with the health impacts, such as cases of disease prevented or years
of life saved. This technique, cost-effectiveness
analysis (CEA), can suggest the relative value of
alternative interventions (i.e., health return on
dollars invested) (65). CEA has become an increasingly important tool for researchers, practitioners, and policy makers. However, relevant
data to support this type of analysis are not
always available, especially for possible public
policies designed to improve health (26, 30).
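To make the CEA logic concrete, here is a minimal Python sketch comparing cost per case prevented for two interventions and computing an incremental cost-effectiveness ratio (ICER). All program names and figures are hypothetical, invented solely for illustration.

# Hypothetical programs: total cost and cases of disease prevented
programs = {
    "community campaign": {"cost": 250_000, "cases_prevented": 40},
    "clinic counseling":  {"cost": 400_000, "cases_prevented": 55},
}

# Average cost-effectiveness of each option
for name, p in programs.items():
    print(f"{name}: ${p['cost'] / p['cases_prevented']:,.0f} per case prevented")

# ICER: extra cost per extra case prevented by the costlier option
a, b = programs["community campaign"], programs["clinic counseling"]
icer = (b["cost"] - a["cost"]) / (b["cases_prevented"] - a["cases_prevented"])
print(f"ICER of counseling vs. campaign: ${icer:,.0f} per additional case prevented")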
Health Impact Assessment
Health impact assessment (HIA) is a relatively
new method that seeks to estimate the probable
impact of a policy or intervention in nonhealth
sectors, such as agriculture, transportation, and
economic development, on population health
(76). Some HIAs have focused on ensuring the
involvement of relevant stakeholders in the development of a specific project. This latter approach, the basis of environmental impact assessment required by law for many large place-based projects, is similar to the nonregulatory
approach that has been adopted for some HIAs.
Overall, HIA, in both its forms, has been gaining acceptance as a tool because of mounting evidence that social and physical environments are
important determinants of population health
and health disparities. It is now being used to
help assess the potential effects of many policies
and programs on health status and outcomes
(44, 89, 118).
Recently, Dannenberg and colleagues (46)
reviewed 27 HIAs completed in the United
States from 1999 to 2007. Topics studied ranged
from policies about living wages and afterschool programs to projects about power plants
and public transit. Within this group of 27
HIAs, an excellent illustration is the assessment
of a Los Angeles living wage ordinance (43). Researchers used estimates of the effects of health
insurance and income on mortality to project
and compare potential mortality reductions attributable to wage increases and changes in
health insurance status among workers covered
by the Los Angeles City living wage ordinance
(43). Estimates demonstrated that the health insurance provisions of the ordinance would have
a much larger health benefit than the wage increases, thus providing valuable information for
policy makers who may consider adopting living wage ordinances in other jurisdictions or
modifying existing ordinances.
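The logic of such a projection can be sketched in a few lines. The following Python example uses entirely hypothetical parameters, not those of the Los Angeles study, to show how deaths averted might be compared across two provisions of an ordinance.

# Hypothetical inputs for a living wage ordinance HIA-style projection
workers = 10_000                 # workers covered by the ordinance
baseline_mortality = 0.003       # annual deaths per worker (hypothetical)

# Hypothetical relative risk reductions attributable to each provision
provisions = {
    "wage increase":    0.01,    # small mortality benefit from higher income
    "health insurance": 0.05,    # larger benefit from gaining coverage
}

deaths_baseline = workers * baseline_mortality
for name, rrr in provisions.items():
    print(f"{name}: ~{deaths_baseline * rrr:.1f} deaths averted per year")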
Participatory Approaches
Participatory approaches that actively involve
community members in research and intervention projects (31, 70, 83) show promise in
engaging communities in EBPH (96). Practitioners, academicians, and community members collaboratively define issues of concern,
develop strategies for intervention, and evaluate
the outcomes. This approach relies on stakeholder input (72), builds on existing resources,
facilitates collaboration among all parties, and
integrates knowledge and action that hopefully
will lead to a fair distribution of the benefits of
an intervention or project for all partners (83,
99). Stakeholders, or key players, are individuals or agencies that have a vested interest in
the issue at hand (150). In the development of
health policies, for example, policy makers are
especially important stakeholders (144). Stakeholders should include those who would potentially receive, use, and benefit from the program
or policy being considered. In particular, three
groups of stakeholders are relevant (36):
1. Those involved in program operations,
such as sponsors, coalition partners, administrators, and staff;
2. Those served or affected by the program,
including clients, family members, neighborhood organizations, and elected officials; and
3. Primary users of the evaluation—that is,
people who are in a position to do or decide something regarding the program.
Participatory approaches may also present
challenges in adhering to EBPH principles, especially in reaching agreement on which approaches are most appropriate for addressing a
particular health problem (75).
DISSEMINATION AND IMPLEMENTATION OF EBPH
Although the concept of EBPH is likely to resonate with most public health professionals, the
dissemination and implementation (D&I) of effective intervention strategies remains a significant challenge (61, 67). Drawing on experience
in clinical practice, D&I of evidence-based clinical guidelines using passive methods (e.g., publication of consensus statements in professional
journals, mass mailings) has been largely ineffective, resulting in only small changes in the
uptake of a new practice (10), and single-source
prevention messages are generally ineffective
(100).
Effective D&I of an evidence-based program often calls for time-efficient approaches,
ongoing training, and placement of high organizational value on research-informed practice (48). Furthermore, translation of research
to practice among organizations, practitioner
groups, or the general public is likely to occur in stages (139), suggesting that the decision to adopt, accept, and utilize an innovation in EBPH is a process rather than a single
act.
Active Ingredients
EBPH relies on the transferability of evidence
about effective interventions to new community settings. Practitioners need to identify the
most important components or “active ingredients” of an intervention. The active ingredients of an effective intervention are the essential elements that produce the desired results.
The concept of active ingredients in clinical interventions is exemplified by mental health interventions (116) and smoking cessation counseling (87). This is analogous to the concept
of best processes needed when generalizing research to other populations, places, and times
(68). Understanding these essential factors and
how the context for a proposed replication may
differ from the original is critical. Often, constraints require some modification of the original intervention. In these situations there is an
inherent tension between fidelity (maintaining
the original program design) and reinvention
(changes needed for replication or adoption in
a new setting or for a different population) (9).
Organizational Culture
EBPH often relies on strong advocates of
the evidence or evidence champions, who are
willing to challenge the status quo within an
organization and promote new ways of making decisions. Governmental institutions, including public health agencies, are key users of
EBPH, yet they are not known for their organizational or budgetary flexibility. These agencies are typically bound to rigid civil service and
union-bargained requirements about how staff
can be hired, remunerated, evaluated, and terminated, as well as how money can be spent.
As an example, in Los Angeles County, the pay
scale for nutritionists and health educators is so
low that it is very difficult to attract even entry-level individuals. Once hired by the county, they
are often attracted to higher-paying administrative positions that do not use their primary
expertise.
An organizational climate that supports
changes is required for innovation (148). Rigid
personnel systems often make it difficult to effectively implement new programs and keep up
with rapidly evolving technology. For example,
in many health agencies, there are no suitable
job classifications for a health economist or for
a Web designer, making it virtually impossible
to hire at competitive salaries. Relatively secure
employment and attractive rewards for long-term service (e.g., pensions, other retirement
benefits) also tend to attract individuals who
value job security more than the excitement of
new ideas and approaches. Within a hierarchical bureaucracy, few incentives exist to press superiors for changes in programs or approaches
even when the evidence is compelling. This
self-selecting candidate pool and stable employment environment often result in the attitude that the key to a successful career is to stay under the radar, avoiding negative performance evaluations or jeopardizing advancement opportunities. In short, unlike some private-sector organizations that encourage risk taking and provide substantial monetary rewards for success, most public-sector organizations have a culture that discourages out-of-the-box thinking and entrepreneurship (42).
The tendency to continue doing what has
been done in the past is a powerful impediment to change. In many bureaucracies, when
change occurs, it is usually in small incremental
steps (130). Continuing past practices requires
less effort than working through all the implications of a different approach based on newer evidence. Public health agency staff who propose
new policies or programs can encounter opposition from colleagues who may feel threatened
by the unfamiliar or from supervisors who feel
a challenge to their authority to decide on program directions.
Leadership
The attitude toward EBPH among agency leadership is important because it helps to determine the organizational culture and use of finite resources. In a survey of 152 city and
county health departments in the United States,
one of the main predictors of strong public health system performance was the attention of organizational leadership to the science
base, quality, and performance (143). However, even public health leaders who understand and embrace EBPH have challenges
in choosing and implementing innovative approaches. How should they choose priority opportunities for programs and policies among
all those recommended based on evidence reviews? As in clinical medicine (102), there are
more recommendations than are practical for
any department to introduce. Which criteria
should leaders consider when selecting among
options? Some worthy considerations include
population-attributable disease/illness burden,
preventable fraction, relative cost-effectiveness,
skills of key staff, prior experience with other
approaches, opportunities for leverage through
partnerships with other stakeholders, and consistency with an agency’s strategic plan, goals,
and objectives.
Political Challenges
Having good scientific evidence is often insufficient to convince policy makers (e.g., Congress,
state governors, boards of county supervisors,
city councils) to initiate changes based on
EBPH (39, 40). Researchers rely on experimental and observational studies to test specific hypotheses in a deliberate and systematic way (94,
97), and their influence derives from having
specialized knowledge. However, policy making happens quickly and is built on generalized
knowledge and demands from stakeholders (10,
40). Policy makers have to sell, argue, advocate,
and get reelected in light of their available political capital (26). The evidence for a particular
action does not necessarily lead to policy change
(3, 147). Public health agencies often face obstacles from other stakeholders in proposing or
implementing new evidence-based practices.
Programmatic and policy changes often result in winners and losers who can be at odds
in the EBPH process (1). A contractor who financially supports an elected decision maker
may have more clout than the agency, regardless of its merits. Public health agencies, because
of their mission to improve the population’s
health, often seek to advance measures that
expand the power and reach of government,
raising objections from those who want less
government. For example, in the debates surrounding public smoking ban proposals, public
health agencies were forced to combat arguments that the smoking bans were simply a way
for the government to limit personal freedoms.
Overcoming this resistance often requires that
public health leaders create coalitions of partners that extend well beyond public health.
The prevailing political ideology may be
contrary to what science recommends, such as
for water fluoridation or needle exchange programs. In other cases, those without a background in scientific methods may be skeptical
that a systematic review process yields a better
idea of what to do about a problem and may simply follow advice of a trusted individual, even
when the trusted advice contradicts the best
available evidence (98). Lack of skill in forming
coalitions of partners who support a particular
EBPH intervention can also reduce the likelihood of convincing policy makers to act.
Public health leaders occasionally encounter
situations in which the political will to implement a particular intervention exists before
there is evidence to support it. A prime example is the Drug Abuse Resistance Education (D.A.R.E.) program, which is the most
widely used school-based drug use prevention program in the United States, reaching
more than 70% of elementary school children
(52). The program costs ∼$130 per student
(in 2004 dollars) to implement. Systematic reviews of methodologically sound D.A.R.E. program evaluations have shown the program to be
ineffective (169).
Funding Challenges
Another challenge to implementing EBPH is
the need to adhere to the requirements of the
funding agencies. Most public health funding
at all levels of government is categorical and
restricted with respect to how the money may
be spent. This was described over a decade ago
as “hardening of the categories” (170) and limits the flexible use of funds to implement new
evidence-based programs. Public health leaders are beginning to recognize the benefits of
program integration and have articulated principles to enhance integration efforts (149). In
addition, appropriating legislation or voter initiatives may contain explicit language about restrictions, which is in turn often influenced by
key stakeholders. For example, in California,
no more than 20% of funding coming from
voter-initiated Proposition 99 can be used for
antitobacco education in schools and communities (15). We are not aware of any legislation or executive branch guidance that limits
expenditures to evidence-based recommendations or that requires that these expenditures be
used whenever available. However, more governmental agencies appear to be referencing the
best sources of evidence-based recommendations, including the Community Guide (171), as
important inputs into the state and local planning processes (21).
Workforce Training Needs and Approaches
Strengthening EBPH competencies needs to
take into account the diverse education and
training backgrounds of the workforce. Principles of EBPH are not uniformly taught across the disciplines represented
in the public health workforce. For example,
a public health nurse is likely to have had less
training than an epidemiologist in how to locate
the most current evidence and interpret alternatives. A recently graduated health educator with
an MPH is more likely to have gained an understanding of the importance of EBPH than an
environmental health specialist holding a bachelor’s degree. Probably fewer than 50% of public health workers have any formal training in a
public health discipline such as epidemiology or
health education (166). Even fewer of these professionals have formal graduate training from a
school of public health or other public health
program. Currently, it appears that few public
health departments have made continuing education about EBPH mandatory.
Although the formal concept of EBPH is relatively new, the underlying skills are not. For example, reviewing the scientific literature for evidence and evaluating a program intervention are
skills often taught in graduate programs in public health or other academic disciplines and are
building blocks of public health practice. The
most commonly applied framework in EBPH
is probably that of Brownson and colleagues
(Figure 4), which uses a seven-stage process
(19, 22, 51). The process used in applying this
framework is nonlinear and entails numerous
iterations (165). Competencies for more effective public health practice are becoming
clearer (12, 13, 59). For example, to carry out
the EBPH process, the skills needed to make
evidence-based decisions require a specific set
of competencies (Table 6).
Figure 4. Training approach for evidence-based public health (19, 22).
To address these and similar competencies,
EBPH training programs have been developed
in the United States for public health professionals in state health agencies (6, 51), local
health departments, and community-based organizations (105, 106), and similar programs
have been developed in other countries (22,
129, 133). Some programs show evidence of
effectiveness (51, 106). The most common
format uses didactic sessions, computer labs,
and scenario-based exercises taught by a faculty team with expertise in EBPH. The reach
of these training programs can be increased
by emphasizing a train-the-trainer approach
(22). Other formats have been used including Internet-based self-study (101, 105), CD-ROMs (20), distance and distributed learning networks, and targeted technical assistance.
Training programs may have greater impact
when delivered by change agents, who are perceived as experts yet share common characteristics and goals with trainees (137). A commitment from leadership and staff to life-long
learning is also an essential ingredient for success in training (38).
Implementation of training to address
EBPH competencies should take into account
principles of adult learning. These issues were
recently articulated by Bryan and colleagues
(27), who highlighted the need to (a) know why
the audience is learning; (b) tap into an underlying motivation to learn by the need to solve
problems; (c) respect and build on previous experience; (d) design learning approaches that
match the background and diversity of recipients; and (e) actively involve the audience in the
learning process.
Cultural and Geographic Differences
The tenets of EBPH have largely been developed in a western, European-American context (111, 113). The conceptual approach arises
from the epistemological underpinnings of logical positivism (156), which finds meaning
through rigorous observation and measurement. This is reflected in a professional preference among clinicians for research designs
such as the randomized controlled trial. In addition, most studies in the EBPH literature are
academic-based research, usually with external
funding for well-established investigators. In
contrast, in developing (110) countries and in
impoverished areas of developed countries, the
evidence base for how best to address common
public health problems is often limited, even
though the scope of the problem may be enormous. Cavill compared evidence-based interventions across countries, showing that much
of the evidence base in several areas is limited to
empirical observations (33). Even in more developed countries (including the United States),
information published in peer-reviewed journals or data available through Web sites and
official organizations may not adequately represent all populations of interest.
THE FUTURE
The United States spends nearly $30 billion annually on health-related research (126). A small
portion of these expenditures is dedicated to research relevant to the practice of public health.
Nonetheless, evidence for addressing a number
of priority public health problems now exists.
Unfortunately, the translation from research to
clinical or community applications often occurs
only after a delay of many years (8, 19, 91). Accelerating the production of new evidence and
the adoption of evidence-based interventions
to protect and improve health requires several
actions.
Expanding the Evidence Base
The growing literature on the effectiveness of
preventive interventions in clinical and community settings (2, 171) does not provide equal
coverage of health problems. For example, the
evidence base on how to increase immunization levels is much stronger than that on preventing HIV infection or reducing alcohol abuse.

Table 6  Competencies in evidence-based public health, adapted from Brownson et al. (18). Each entry gives the competency, its domain and level in parentheses, and the competency statement.

1. Community input (C; B): Understand the importance of obtaining community input before planning and implementing evidence-based interventions.
2. Etiologic knowledge (E; B): Understand the relationship between risk factors and diseases.
3. Community assessment (C; B): Understand how to define the health issue according to the needs and assets of the population/community of interest.
4. Partnerships at multiple levels (P/C; B): Understand the importance of identifying and developing partnerships to address the issue with evidence-based strategies at multiple levels.
5. Development of a concise statement of the issue (EBP; B): Understand the importance of developing a concise statement of the issue to build support for it.
6. Grant writing need (T/T; B): Recognize the importance of grant-writing skills, including the steps involved in the application process.
7. Literature searching (EBP; B): Understand the process for searching the scientific literature and summarizing search-derived information on the health issue.
8. Leadership and evidence (L; B): Recognize the importance of strong leadership from public health professionals regarding the need and importance of evidence-based public health interventions.
9. Role of behavioral science theory (T/T; B): Understand the role of behavioral science theory in designing, implementing, and evaluating interventions.
10. Leadership at all levels (L; B): Understand the importance of commitment from all levels of public health leadership to increase the use of evidence-based interventions.
11. Evaluation in plain English (EV; I): Recognize the importance of translating the impacts of programs or policies into language that can be understood by communities, practice sectors, and policy makers.
12. Leadership and change (L; I): Recognize the importance of effective leadership from public health professionals when making decisions in the midst of ever-changing environments.
13. Translating evidence-based interventions (EBP; I): Recognize the importance of translating evidence-based interventions to unique real-world settings.
14. Quantifying the issue (T/T; I): Understand the importance of descriptive epidemiology (concepts of person, place, time) in quantifying the public health issue.
15. Developing an action plan for program or policy (EBP; I): Understand the importance of developing a plan of action that describes how the goals and objectives will be achieved, which resources are required, and how responsibility for achieving objectives will be assigned.
16. Prioritizing health issues (EBP; I): Understand how to choose and implement appropriate criteria and processes for prioritizing program and policy options.
17. Qualitative evaluation (EV; I): Recognize the value of qualitative evaluation approaches, including the steps involved in conducting qualitative evaluations.
18. Collaborative partnerships (P/C; I): Understand the importance of collaborative partnerships between researchers and practitioners when designing, implementing, and evaluating evidence-based programs and policies.
19. Nontraditional partnerships (P/C; I): Understand the importance of traditional partnerships as well as those considered nontraditional, such as those with planners, departments of transportation, and others.
20. Systematic reviews (T/T; I): Understand the rationale, uses, and usefulness of systematic reviews that document effective interventions.
21. Quantitative evaluation (EV; I): Recognize the importance of quantitative evaluation approaches, including the concepts of measurement validity and reliability.
22. Grant-writing skills (T/T; I): Demonstrate the ability to create a grant, including an outline of the steps involved in the application process.
23. Role of economic evaluation (T/T; A): Recognize the importance of using economic data and strategies to evaluate costs and outcomes when making public health decisions.
24. Creating policy briefs (P; A): Understand the importance of writing concise policy briefs to address the issue using evidence-based interventions.
25. Evaluation designs (EV; A): Comprehend the various designs useful in program evaluation, with a particular focus on quasi-experimental (nonrandomized) designs.
26. Transmitting evidence-based research to policy makers (P; A): Understand the importance of developing creative ways to transmit what we know works (evidence-based interventions) to policy makers to gain interest, political support, and funding.

Domains: C, community-level planning; E, etiology; P/C, partnerships and collaboration; EBP, evidence-based process; T/T, theory and analytic tools; L, leadership; EV, evaluation; P, policy. Levels: B, beginner; I, intermediate; A, advanced.
A greater investment of resources to expand
the evidence base is therefore essential. Even
where we have interventions of proven effectiveness, the populations in which they have
been tested often do not include subpopulations with the greatest disease and injury burden. Expanding the evidence base requires reliance on well-tested conceptual frameworks,
especially those that pay close attention to D&I.
For example, RE-AIM helps program planners and evaluators to pay explicit attention
to Reach, Efficacy/Effectiveness, Adoption,
Implementation, and Maintenance (63, 86).
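To make the framework concrete, the sketch below is a rough illustration only, not drawn from this article or the RE-AIM papers: the class name, the numeric ratings, and the multiplicative index are all hypothetical assumptions, offered in the spirit of reach-times-effectiveness impact indices.

    from dataclasses import dataclass

    @dataclass
    class REAIMProfile:
        """Hypothetical 0-1 ratings for one intervention on each RE-AIM dimension."""
        reach: float           # share of the target population actually engaged
        effectiveness: float   # impact among those who participate
        adoption: float        # share of eligible settings that take the program up
        implementation: float  # fidelity and consistency of delivery
        maintenance: float     # persistence of effects and delivery over time

        def impact_index(self) -> float:
            # A crude planning heuristic, not a published score: multiplying the
            # dimensions means a low rating on any one of them limits overall impact.
            return (self.reach * self.effectiveness * self.adoption
                    * self.implementation * self.maintenance)

    # Illustrative numbers only: an efficacious program with modest reach and
    # poor maintenance still yields a small population-level index.
    program = REAIMProfile(reach=0.40, effectiveness=0.60, adoption=0.50,
                           implementation=0.70, maintenance=0.30)
    print(f"Illustrative RE-AIM impact index: {program.impact_index():.3f}")  # 0.025

The only point of such a sketch is the framework's core intuition: D&I-oriented planning forces explicit attention to dimensions beyond efficacy, because weakness on any one of them sharply limits population impact.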
Overcoming Barriers to Dissemination and Implementation
More knowledge is needed on effective mechanisms to translate evidence-based practice to
public health settings. Several important questions deserve answers:
Why have some types of evidence languished while others have been quickly adopted?
Which D&I strategies appear to be most cost-effective?
How can funding agencies accelerate the replication and adaptation of evidence-based interventions in a variety of settings and populations?
Which specific processes best integrate community health assessment and improvement activities into health system planning efforts?
How can we harness new tools, such as the Internet, to improve intervention effectiveness and dissemination?
Which changes in organizational culture that promote innovation and adoption of EBPH are feasible?
How can we increase attention to external validity in the production and systematic reviews of evidence?
Engaging Leadership
As noted earlier, leadership is essential to promote adoption of EBPH as a core part of public health practice (143). This includes an expectation that decisions will be made on the
basis of the best science, the needs of the target population, and what will work locally. In
some cases, additional funding may be required;
however, in many circumstances, the major impediment is a lack of will to change rather than a lack of dollars. Use of EBPH should be incorporated as part of performance reviews for key
public health personnel and as part of explicit
goals and objectives for all program directors.
Expanding Training Opportunities
More practitioner-focused training is needed
on the rationale for EBPH: how to select interventions, how to adapt them to particular
circumstances, and how to monitor their implementation. The Task Force on Workforce Development has recommended that the essential
public health services (35) be used as a framework to build the basic cross-cutting and technical competencies required to address public
health problems. As outlined in this article, we
would supplement this recommendation by including an EBPH framework and competencies (18, 19). Because many of the health issues
needing urgent attention in local communities
will require the involvement of other organizations (e.g., nonprofit groups, hospitals, employers), their participation in training efforts
is essential.
Enhancing Accountability for Public Expenditures
Public funds should be targeted to support
evidence-based strategies. Grants made by public health agencies to outside organizations
should contain language explicitly requiring use
of such strategies, when they exist, to justify expenditure of funds. Although the science base
for many topics is still evolving, it is irresponsible not to use existing evidence when designing
and implementing proven public health interventions. Evaluations of such efforts can thus
contribute to a better understanding of what
works in different settings. At the same time,
the adoption of EBPH by the public health
system as a whole and its impact on the community’s health should be tracked. A central criterion in the accreditation of public health departments, soon to be implemented (162), must
be the use of best evidence in every effort to improve health and health equity.
Understanding How to Use EBPH Better to Address Disparities
To what degree do specific evidence-based
approaches reduce disparities while improving overall current and/or future health? For
many interventions, there is not a clear answer
to this question. Despite the Healthy People
2010 goal of eliminating health disparities, recent data show large and growing differences
in disease burden and health outcomes between high- and low-income groups (54). Most
of the existing intervention research has been
conducted among higher-income populations,
and programs focusing on eliminating health
disparities have often been short-lived (146).
Yet, in both developed and developing countries, poverty is strongly correlated with poor
health outcomes (155). When enough evidence
exists, systematic reviews should focus specifically on interventions that show promise in
eliminating health disparities (103, 135). Policy interventions hold the potential to influence health determinants more broadly and
could significantly reduce the growing disparities across a wide range of health problems
(24).
CONCLUSION
The successful implementation of EBPH in
public health practice is both a science and
an art. The science is built on epidemiologic,
behavioral, and policy research showing the
size and scope of a public health problem and
which interventions are likely to be effective
in addressing the problem. The art of decision making often involves knowing which information is important to a particular stakeholder at the right time. Unlike solving a math
problem, significant decisions in public health
must balance science and art because rational,
evidence-based decision making often involves
choosing one alternative from among a set of
rational choices. By applying the concepts of
EBPH outlined in this article, decision making
and, ultimately, public health practice can be
improved.
SUMMARY POINTS
1. To achieve state and national objectives for improved population health, more widespread
adoption of evidence-based strategies is recommended.
2. Key components of evidence-based public health (EBPH) include making decisions on
the basis of the best available, peer-reviewed evidence, using data and information systems systematically, applying program-planning frameworks, engaging the community
in decision making, conducting sound evaluation, and disseminating what is learned.
3. Three types of evidence focus on the causes of diseases and the magnitude of risk factors, the relative impact of specific interventions, and how and under which contextual
conditions interventions were implemented.
4. Evidence is imperfect, and practitioners should seek the best evidence available, not the
best evidence possible.
5. Audiences for EBPH are public health practitioners, policy makers, stakeholders affected
by a health issue, and researchers.
6. Several important distinctions between EBPH and evidence-based medicine include the
volume of evidence, study designs used to inform research and practice, the setting
or context in which the intervention is applied, and the training and certification of
professionals.
7. Numerous analytic tools and approaches that can enhance the greater use of EBPH
include public health surveillance, systematic reviews, economic evaluation, health impact
assessment, and participatory approaches.
8. To increase the dissemination and implementation of EBPH in practice settings (e.g., health departments), several important issues should be considered: organizational culture, the role of leadership, political challenges, funding challenges, workforce training needs, and cultural and geographic differences. Any of these could justify or demand some adaptation of evidence-based interventions to fit contextual conditions.
DISCLOSURE STATEMENT
C.M. has received nominal payment as a consultant to the St. Louis University School of Public
Health.
ACKNOWLEDGMENTS
This work was partially funded through the Centers for Disease Control and Prevention award
U48 DP00060 (Prevention Research Centers Program) and the National Association of Chronic
Disease Directors. The authors appreciate input from Dr. C. Tracy Orleans of the Robert Wood
Johnson Foundation and Dr. Laura Brennan-Ramirez of Transtria LLC.
LITERATURE CITED
1. Abney G. 1988. Lobbying by the insiders: parallels of state agencies and interest groups. Public Adm. Rev.
48:911–17
2. Agency for Healthc. Res. and Quality. 2005. Guide to Clinical Preventive Services, 3rd Ed., Period. Updates.
Agency for Healthc. Res. and Quality
3. Almeida C, Bascolo E. 2006. Use of research results in policy decision-making, formulation, and implementation: a review of the literature. Cad. Saude Publica 22(Suppl.):S7–19; discussion S20–33
4. Anderson J. 1999. “Don’t confuse me with facts. . .”: evidence-based practice confronts reality. Med. J.
Aust. 170:465–66
5. Annest JL, Pirkle JL, Makuc D, Neese JW, Bayse DD, Kovar MG. 1983. Chronological trend in blood
lead levels between 1976 and 1980. N. Engl. J. Med. 308:1373–77
6. Baker E, Brownson R, Dreisinger M, McIntosh L, Karamehic A. 2008. Examining the role of training
in evidence-based public health: a qualitative study in the United States. Health Promot. Pract. In press
7. Baker EL, Potter MA, Jones DL, Mercer SL, Cioffi JP, et al. 2005. The public health infrastructure and
our nation’s health. Annu. Rev. Public Health 26:303–18
8. Balas EA. 1998. From appropriate care to evidence-based medicine. Pediatr. Ann. 27:581–84
9. Bauman LJ, Stein RE, Ireys HT. 1991. Reinventing fidelity: the transfer of social technology among
settings. Am. J. Community Psychol. 19:619–39
10. Bero LA, Jadad AR. 1998. How consumers and policy makers can use systematic reviews for decision
making. In Systematic Reviews. Synthesis of Best Evidence for Health Care Decisions, ed. C Mulrow, D Cook,
pp. 45–54. Philadelphia, PA: Am. Coll. Phys.
11. Biener L, Harris JE, Hamilton W. 2000. Impact of the Massachusetts tobacco control programme:
population based trend analysis. BMJ 321:351–54
12. Birkhead GS, Davies J, Miner K, Lemmings J, Koo D. 2008. Developing competencies for applied
epidemiology: from process to product. Public Health Rep. 123(Suppl. 1):67–118
13. Birkhead GS, Koo D. 2006. Professional competencies for applied epidemiologists: a roadmap to a more
effective epidemiologic workforce. J. Public Health Manag. Pract. 12:501–4
14. Black BL, Cowens-Alvarado R, Gershman S, Weir HK. 2005. Using data to motivate action: the need for
high quality, an effective presentation, and an action context for decision-making. Cancer Causes Control
16(Suppl. 1):15–25
15. Breslow L, Johnson M. 1993. California’s Proposition 99 on tobacco, and its impact. Annu. Rev. Public
Health 14:585–604
16. Briss PA, Brownson RC, Fielding JE, Zaza S. 2004. Developing and using the Guide to Community
Preventive Services: lessons learned about evidence-based public health. Annu. Rev. Public Health 25:281–
302
17. Briss PA, Zaza S, Pappaioanou M, Fielding J, Wright-De Aguero L, et al. 2000. Developing an evidence-based Guide to Community Preventive Services—methods. The Task Force on Community Preventive
Services. Am. J. Prev. Med. 18:35–43
18. Brownson R, Ballew P, Kittur N, Elliott M, Haire-Joshu D, et al. 2009. Developing competencies for
training practitioners in evidence-based cancer control. J. Cancer Educ. In press
19. Brownson RC, Baker EA, Leet TL, Gillespie KN. 2003. Evidence-Based Public Health. New York: Oxford
Univ. Press
20. Brownson RC, Ballew P, Brown KL, Elliott MB, Haire-Joshu D, et al. 2007. The effect of disseminating
evidence-based interventions that promote physical activity to health departments. Am. J. Public Health
97:1900–7
21. Brownson RC, Ballew P, Dieffenderfer B, Haire-Joshu D, Heath GW, et al. 2007. Evidence-based
interventions to promote physical activity: what contributes to dissemination by state health departments.
Am. J. Prev. Med. 33:S66–73; quiz S4–8
22. Brownson RC, Diem G, Grabauskas V, Legetic B, Potemkina R, et al. 2007. Training practitioners in
evidence-based chronic disease prevention for global health. Promot. Educ. 14:159–63
23. Brownson RC, Gurney JG, Land G. 1999. Evidence-based decision making in public health. J. Public
Health Manag. Pract. 5:86–97
24. Brownson RC, Haire-Joshu D, Luke DA. 2006. Shaping the context of health: a review of environmental
and policy approaches in the prevention of chronic diseases. Annu. Rev. Public Health 27:341–70
25. Brownson RC, Petitti DB, eds. 1998. Applied Epidemiology: Theory to Practice. New York: Oxford Univ.
Press
26. Brownson RC, Royer C, Ewing R, McBride TD. 2006. Researchers and policymakers: travelers in parallel
universes. Am. J. Prev. Med. 30:164–72
27. Bryan RL, Kreuter MW, Brownson RC. 2008. Integrating adult learning principles into training for
public health practice. Health Promot. Pract. In press
28. Can. Task Force on the Period. Health Exam. 1979. The periodic health examination. Canadian Task
Force on the Periodic Health Examination. Can. Med. Assoc. J. 121:1193–254
29. Cancer Control PLANET. 2008. Cancer Control PLANET. Links resources to comprehensive cancer control.
Atlanta, GA: Natl. Cancer Inst./Cent. Dis. Control Prev./Am. Cancer Soc./Subst. Abuse Mental Health
Serv./Agency for Healthc. Res. Quality
30. Carande-Kulis VG, Maciosek MV, Briss PA, Teutsch SM, Zaza S, et al. 2000. Methods for systematic
reviews of economic evaluations for the Guide to Community Preventive Services. Task Force on Community
Preventive Services. Am. J. Prev. Med. 18:75–91
31. Cargo M, Mercer SL. 2008. The value and challenges of participatory research: strengthening its practice.
Annu. Rev. Public Health 29:325–50
32. Castro FG, Barrera M Jr, Martinez CR Jr. 2004. The cultural adaptation of prevention interventions:
resolving tensions between fidelity and fit. Prev. Sci. 5:41–45
33. Cavill N, Foster C, Oja P, Martin BW. 2006. An evidence-based approach to physical activity promotion
and policy development in Europe: contrasting case studies. Promot. Educ. 13:104–11
34. Cent. Dis. Control Prev. 1993. Public Health in the New American Health System. Discussion Paper. Atlanta,
GA: Cent. Dis. Control Prev.
35. Cent. Dis. Control Prev. 1999. CDC Taskforce on Public Health Workforce Development. Atlanta, GA: Cent.
Dis. Control Prev.
36. Cent. Dis. Control Prev. 1999. Framework for program evaluation in public health. MMWR 48:1–40
37. Chambers D, Kerner J. 2007. Closing the gap between discovery and delivery. In Dissemination and
Implementation Research Workshop: Harnessing Science to Maximize Health. Rockville, MD: Natl. Inst.
Health
38. Chambers LW. 1992. The new public health: do local public health agencies need a booster (or organizational “fix”) to combat the diseases of disarray? Can. J. Public Health 83:326–28
39. Choi BC. 2005. Twelve essentials of science-based policy. Prev. Chronic. Dis. 2:A16
40. Choi BC, Pang T, Lin V, Puska P, Sherman G, et al. 2005. Can scientists and policy makers work
together? J. Epidemiol. Community Health 59:632–37
41. Cochrane A. 1972. Effectiveness and Efficiency: Random Reflections on Health Services. London: Nuffield
Provincial Hospital Trust
42. Cohen S, Eimicke W. 1996. Understanding and applying innovation strategies in the public sector. Presented
at Annu. Natl. Conf. Am. Soc. Public Admin., 57th, Atlanta, Ga.
43. Cole B, Shimkhada R, Morgenstern H, Kominski G, Fielding J, Wu S. 2005. Projected health impact of
the Los Angeles City living wage ordinance. J. Epidemiol. Commun. Health 59:645–50
44. Cole BL, Wilhelm M, Long PV, Fielding JE, Kominski G, Morgenstern H. 2004. Prospects for health
impact assessment in the United States: new and improved environmental impact assessment or something different? J. Health Polit. Policy Law 29:1153–86
45. Curry S, Byers T, Hewitt M, eds. 2003. Fulfilling the Potential of Cancer Prevention and Early Detection.
Washington, DC: Natl. Acad. Press
46. Dannenberg AL, Bhatia R, Cole BL, Heaton SK, Feldman JD, Rutt CD. 2008. Use of health impact
assessment in the U.S.: 27 case studies, 1999–2007. Am. J. Prev. Med. 34:241–56
47. Denzin NK. 1970. The Research Act in Sociology. London, UK: Butterworth
48. Dobbins M, Cockerill R, Barnsley J, Ciliska D. 2001. Factors of the innovation, organization, environment, and individual that predict the influence five systematic reviews had on public health decisions.
Int. J. Technol. Assess. Health Care 17:467–78
49. Dobrow MJ, Goel V, Upshur RE. 2004. Evidence-based health policy: context and utilisation. Soc. Sci.
Med. 58:207–17
50. Dorfman LE, Derish PA, Cohen JB. 1992. Hey girlfriend: an evaluation of AIDS prevention among
women in the sex industry. Health Educ. Q. 19:25–40
51. Dreisinger M, Leet TL, Baker EA, Gillespie KN, Haas B, Brownson RC. 2008. Improving the public
health workforce: evaluation of a training course to enhance evidence-based decision making. J. Public
Health Manag. Pract. 14:138–43
52. Drug Policy Found. 2003. Public Policy: Youth Drug Education/The D.A.R.E. Program. Washington, DC:
Drug Policy Found.
53. Evidence-Based Med. Work. Group. 1992. Evidence-based medicine. A new approach to teaching the
practice of medicine. JAMA 268:2420–25
54. Ezzati M, Friedman AB, Kulkarni SC, Murray CJ. 2008. The reversal of fortunes: trends in county
mortality and cross-county mortality disparities in the United States. PLoS Med. 5:e66
55. Fielding JE. 2001. Where is the evidence? Annu. Rev. Public Health 22:v–vi
56. Fielding JE. 2003. Foreword. In Evidence-Based Public Health, ed. RC Brownson, EA Baker, TL Leet,
KN Gillespie, pp. v–vii. New York: Oxford Univ. Press
57. Fielding JE, Briss PA. 2006. Promoting evidence-based public health policy: Can we have better evidence
and more action? Health Aff. (Millwood) 25:969–78
58. Gambrill E. 2003. Evidence-based practice: sea change or the emperor’s new clothes? J. Social Work Educ.
39:3–23
59. Gebbie K, Merrill J, Hwang I, Gupta M, Btoush R, Wagner M. 2002. Identifying individual competency
in emerging areas of practice: an applied approach. Qual. Health Res. 12:990–99
60. Glasgow RE. 2008. What types of evidence are most needed to advance behavioral medicine? Ann. Behav.
Med. 35:19–25
61. Glasgow RE, Emmons KM. 2007. How can we increase translation of research into practice? Types of
evidence needed. Annu. Rev. Public. Health. 28:413–33
62. Glasgow RE, Green LW, Klesges LM, Abrams DB, Fisher EB, et al. 2006. External validity: We need
to do more. Ann. Behav. Med. 31:105–8
63. Glasgow RE, Vogt TM, Boles SM. 1999. Evaluating the public health impact of health promotion
interventions: the RE-AIM framework. Am. J. Public Health 89:1322–27
64. Glasziou P, Longbottom H. 1999. Evidence-based public health practice. Aust. N. Z. J. Public Health
23:436–40
65. Gold MR, Siegel JE, Russell LB, Weinstein MC. 1996. Cost-Effectiveness in Health and Medicine. New
York: Oxford Univ. Press
66. Goodman RM, Wheeler FC, Lee PR. 1995. Evaluation of the Heart to Heart Project: lessons from a
community-based chronic disease prevention project. Am. J. Health Promot. 9:443–55
67. Green L, Ottoson J, Hiatt R, Garcia C. 2009. Diffusion, dissemination and implementation of evidence-based public health. Annu. Rev. Public Health 30:151–74
68. Green LW. 2001. From research to “best practices” in other settings and populations. Am. J. Health
Behav. 25:165–78
69. Green LW. 2006. Public health asks of systems science: To advance our evidence-based practice, can you
help us get more practice-based evidence? Am. J. Public Health 96:406–9
70. Green LW, George MA, Daniel M, Fankish CJ, Herbert CJ, et al. 1995. Review and Recommendations for
the Development of Participatory Research in Health Promotion in Canada. Vancouver, BC: R. Soc. Can.
71. Green LW, Glasgow RE. 2006. Evaluating the relevance, generalization, and applicability of research:
issues in external validation and translation methodology. Eval. Health Prof. 29:126–53
72. Green LW, Mercer SL. 2001. Can public health researchers and agencies reconcile the push from funding
bodies and the pull from communities? Am. J. Public Health 91:1926–29
73. Guyatt G, Cook D, Haynes B. 2004. Evidence based medicine has come a long way. BMJ 329:990–91
74. Guyatt G, Rennie D, eds. 2002. Users’ Guides to the Medical Literature. A Manual for Evidence-Based Clinical
Practice. Chicago, IL: Am. Med. Assoc. Press. 706 pp.
75. Hallfors D, Cho H, Livert D, Kadushin C. 2002. Fighting back against substance abuse: Are community
coalitions winning? Am. J. Prev. Med. 23:237–45
76. Harris P, Harris-Roxas B, Harris E, Kemp L. 2007. Health Impact Assessment: A Practical Guide. Sydney,
Aust.: Cent. Health Equity Train., Res. Eval. (CHETRE)
77. Hausman AJ. 2002. Implications of evidence-based practice for community health. Am. J. Community
Psychol. 30:453–67
78. Haynes B, Haines A. 1998. Barriers and bridges to evidence based clinical practice. BMJ 317:273–76
79. Hugentobler M, Israel BA, Schurman SJ. 1992. An action research approach to workplace health: integrating methods. Health Educ. Q. 19:55–76
80. Hutchison BG. 1993. Critical appraisal of review articles. Can. Fam. Physician 39:1097–102
81. Inst. of Med. 2003. The Future of the Public’s Health in the 21st Century. Washington, DC: Natl. Acad.
Press
82. Inst. Med. Comm. for the Study of the Fut. Public Health. 1988. The Future of Public Health. Washington,
DC: Natl. Acad. Press
83. Israel BA, Schulz AJ, Parker EA, Becker AB. 1998. Review of community-based research: assessing
partnership approaches to improve public health. Annu. Rev. Public Health 19:173–202
84. Jenicek M. 1997. Epidemiology, evidence-based medicine, and evidence-based public health. J. Epidemiol.
Commun. Health 7:187–97
85. Jewell EJ, Abate F, eds. 2001. The New Oxford American Dictionary. New York: Oxford Univ. Press
86. Jilcott S, Ammerman A, Sommers J, Glasgow RE. 2007. Applying the RE-AIM framework to assess the
public health impact of policy change. Ann. Behav. Med. 34:105–14
87. Kelley K, Bond R, Abraham C. 2001. Effective approaches to persuading pregnant women to quit
smoking: a meta-analysis of intervention evaluation studies. Br. J. Health Psychol. 6:207–28
88. Kelsey JL, Petitti DB, King AC. 1998. Key methodologic concepts and issues. See Ref. 25, pp. 35–69
89. Kemm J. 2001. Health impact assessment: a tool for healthy public policy. Health Promot. Int. 16:79–85
90. Kemm J. 2006. The limitations of ‘evidence-based’ public health. J. Eval. Clin. Pract. 12:319–24
91. Kerner J, Rimer B, Emmons K. 2005. Introduction to the special section on dissemination: dissemination
research and research dissemination: How can we close the gap? Health Psychol. 24:443–46
92. Kerner JF. 2008. Integrating research, practice, and policy: What we see depends on where we stand.
J. Public Health Manag. Pract. 14:193–98
93. Kerner JF, Guirguis-Blake J, Hennessy KD, Brounstein PJ, Vinson C, et al. 2005. Translating research
into improved outcomes in comprehensive cancer control. Cancer Causes Control 16(Suppl. 1):27–40
94. Koepsell TD, Weiss NS. 2003. Epidemiologic Methods. Studying the Occurrence of Illness. New York: Oxford
Univ. Press
95. Kohatsu ND, Melton RJ. 2000. A health department perspective on the Guide to Community Preventive
Services. Am. J. Prev. Med. 18:3–4
96. Kohatsu ND, Robinson JG, Torner JC. 2004. Evidence-based public health: an evolving concept. Am.
J. Prev. Med. 27:417–21
97. Last JM, ed. 2001. A Dictionary of Epidemiology. New York: Oxford Univ. Press. 196 pp.
98. Lavis JN, Robertson D, Woodside JM, McLeod CB, Abelson J. 2003. How can research organizations
more effectively transfer research knowledge to decision makers? Milbank Q. 81:221–48, 171–72
99. Leung MW, Yen IH, Minkler M. 2004. Community based participatory research: a promising approach
for increasing epidemiology’s relevance in the 21st century. Int. J. Epidemiol. 33:499–506
100. Lewin Group I. 2001. Factors Influencing Effective Dissemination of Prevention Research Findings by the
Department of Health and Human Services. Final Report. Washington, DC: Lewin Group
101. Linkov F, LaPorte R, Lovalekar M, Dodani S. 2005. Web quality control for lectures: Supercourse and
Amazon.com. Croat. Med. J. 46:875–78
102. Maciosek MV, Coffield AB, Edwards NM, Flottemesch TJ, Goodman MJ, Solberg LI. 2006. Priorities
among effective clinical preventive services: results of a systematic review and analysis. Am. J. Prev. Med.
31:52–61
103. Masi CM, Blackman DJ, Peek ME. 2007. Interventions to enhance breast cancer screening, diagnosis,
and treatment among racial and ethnic minority women. Med. Care Res. Rev. 64:195S–242
104. Maslov A. 1943. A theory of human motivation. Psychol. Rev. 50:370–96
105. Maxwell ML, Adily A, Ward JE. 2007. Promoting evidence-based practice in population health at the
local level: a case study in workforce capacity development. Aust. Health Rev. 31:422–29
106. Maylahn C, Bohn C, Hammer M, Waltz E. 2008. Strengthening epidemiologic competencies among
local health professionals in New York: teaching evidence-based public health. Public Health Rep. 123:35–
43
107. McGinnis JM. 2001. Does proof matter? Why strong evidence sometimes yields weak action. Am. J.
Health Promot. 15:391–96
108. McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, et al. 2003. The quality of health care delivered to
adults in the United States. N. Engl. J. Med. 348:2635–45
109. McMichael C, Waters E, Volmink J. 2005. Evidence-based public health: What does it offer developing
countries? J. Public Health (Oxf.) 27:215–21
110. McQueen D. 2000. Strengthening the evidence base for health promotion. Presented at Global Conf. Health
Promot. Health Promot.: Bridging the Equity Gap, 5th, Mexico
111. McQueen D. 2007. Evidence and theory. Continuing debates on evidence and effectiveness. In Global
Perspectives on Health Promotion Effectiveness, ed. D McQueen, C Jones, pp. 281–303. New York, NY:
Springer
112. McQueen DV. 2001. Strengthening the evidence base for health promotion. Health Promot. Int. 16:261–
68
113. McQueen DV. 2002. The evidence debate. J. Epidemiol. Community Health 56:83–84
114. McQueen DV, Anderson LM. 2001. What counts as evidence? Issues and debates. In Evaluation in
Health Promotion: Principles and Perspectives, ed. Rootman, pp. 63–81. Copenhagen, Denmark: World
Health Organ.
115. Melnyk BM, Fineout-Overholt E, Stone P, Ackerman M. 2000. Evidence-based practice: the past, the
present, and recommendations for the millennium. Pediatr. Nurs. 26:77–80
116. Miller CL, Druss BG, Rohrbaugh RM. 2003. Using qualitative methods to distill the active ingredients
of a multifaceted intervention. Psychiatr. Serv. 54:568–71
117. Milne R, Chambers L. 1993. Assessing the scientific quality of review articles. J. Epidemiol. Community
Health 47:169–70
118. Mindell J, Sheridan L, Joffe M, Samson-Barry H, Atkinson S. 2004. Health impact assessment as an
agent of policy change: improving the health impacts of the mayor of London’s draft transport strategy.
J. Epidemiol. Community Health 58:169–74
119. Muir Gray JA. 1997. Evidence-Based Healthcare: How to Make Health Policy and Management Decisions. New
York/Edinburgh: Churchill Livingstone
120. Mullen PD, Ramirez G. 2006. The promise and pitfalls of systematic reviews. Annu. Rev. Public Health
27:81–102
121. Mulrow CD. 1987. The medical review article: state of the science. Ann. Intern. Med. 106:485–88
122. Mulrow CD, Lohr KN. 2001. Proof and policy from medical research evidence. J. Health Polit. Policy
Law 26:249–66
123. Natl. Cancer Inst. 2008. Research-tested intervention programs (RTIPs). Atlanta, GA: Natl. Cancer
Inst./Cent. Dis. Control Prev./Am. Cancer Soc./Subst. Abuse Mental Health Serv./Agency for Healthc.
Res. Quality
124. Natl. Cent. for Health Stat. 2000. Health, United States, 2000 With Adolescent Health Chartbook. Hyattsville,
MD: Cent. Dis. Control Prev., Natl. Cent. Health Stat.
125. Nutbeam D. 2003. How does evidence influence public health policy? Tackling health inequalities in
England. Health Promot. J. Aust. 14:154–58
126. Off. of Manag. and Budget. 2008. Budget: Department of Health and Human Services. Washington, DC:
Exec. Off. Pres.
127. Ogilvie D, Egan M, Hamilton V, Petticrew M. 2005. Systematic reviews of health effects of social
interventions: 2. Best available evidence: How low should you go? J. Epidemiol. Community Health 59:886–
92
128. Oldenburg BF, Sallis JF, French ML, Owen N. 1999. Health promotion research and the diffusion and
institutionalization of interventions. Health Educ. Res. 14:121–30
129. Oliver KB, Dalrymple P, Lehmann HP, McClellan DA, Robinson KA, Twose C. 2008. Bringing evidence
to practice: a team approach to teaching skills required for an informationist role in evidence-based clinical
and public health practice. J. Med. Libr. Assoc. 96:50–57
130. Oliver TR. 2006. The politics of public health policy. Annu. Rev. Public Health 27:195–233
131. Oxman AD, Cook DJ, Guyatt GH. 1994. Users’ guides to the medical literature. VI. How to use an
overview. Evidence-Based Medicine Working Group. JAMA 272:1367–71
132. Oxman AD, Guyatt GH. 1993. The science of reviewing research. Ann. N. Y. Acad. Sci. 703:125–33;
discussion 33–34
133. Pappaioanou M, Malison M, Wilkins K, Otto B, Goodman RA, et al. 2003. Strengthening capacity in
developing countries for evidence-based public health: the data for decision-making project. Soc. Sci.
Med. 57:1925–37
134. Pawson R, Greenhalgh T, Harvey G, Walshe K. 2005. Realist review—a new method of systematic
review designed for complex policy interventions. J. Health Serv. Res. Policy 10(Suppl. 1):21–34
135. Peek ME, Cargill A, Huang ES. 2007. Diabetes health disparities: a systematic review of health care
interventions. Med. Care Res. Rev. 64:101S–56
136. Presidential Task Force on Evidence-Based Practice. 2006. Evidence-based practice in psychology. Am.
Psychol. 61:271–85
137. Proctor EK. 2004. Leverage points for the implementation of evidence-based practice. Brief Treat. Crisis
Interv. 4:227–42
138. Rimer BK, Glanz DK, Rasband G. 2001. Searching for evidence about health education and health
behavior interventions. Health Educ. Behav. 28:231–48
139. Rogers EM. 2003. Diffusion of Innovations. New York: Free Press
140. Rychetnik L, Frommer M, Hawe P, Shiell A. 2002. Criteria for evaluating evidence on public health
interventions. J. Epidemiol. Community Health 56:119–27
141. Rychetnik L, Hawe P, Waters E, Barratt A, Frommer M. 2004. A glossary for evidence based public
health. J. Epidemiol. Community Health 58:538–45
142. SAMHSA. 2008. SAMHSA’s National Registry of Evidence-Based Programs and Practices. Washington, DC:
US Dep. Health Hum. Serv. Subst. Abuse Mental Health Serv. Admin.
143. Scutchfield FD, Knight EA, Kelly AV, Bhandari MW, Vasilescu IP. 2004. Local public health agency
capacity and its relationship to public health system performance. J. Public Health Manag. Pract. 10:204–15
144. Sederburg WA. 1992. Perspectives of the legislator: allocating resources. MMWR 41(Suppl.):37–48
145. Shadish W, Cook T, Leviton L. 1995. Foundations of Program Evaluation: Theories of Practice. Newbury
Park, CA: Sage
146. Shaya FT, Gu A, Saunders E. 2006. Addressing cardiovascular disparities through community interventions. Ethn. Dis. 16:138–44
147. Shulock N. 1999. The paradox of policy analysis: If it is not used, why do we produce so much? J. Policy
Anal. Manage. 18:226–44
148. Simpson DD. 2002. A conceptual framework for transferring research to practice. J. Subst. Abuse Treat.
22:171–82
149. Slonim AB, Callaghan C, Daily L, Leonard BA, Wheeler FC, et al. 2007. Recommendations for integration of chronic disease programs: Are your programs linked? Prev. Chronic Dis. 4:A34
150. Soriano FI. 1995. Conducting Needs Assessments. A Multidisciplinary Approach. Thousand Oaks, CA: Sage
151. Spring B. 2007. OBSSR award supports training in evidence-based behavioral practice. Outlook: Q. Newsl.
Soc. Behav. Med. Winter, p. 10
152. Spring B, Walker B, Brownson R, Mullen E, Newhouse R, et al. 2008. Definition and competencies for
evidence-based behavioral practice. White paper prepared by the Counc. on Evidence-Based Behav. Pract.,
Northwestern Univ., Chicago, IL
153. Steckler A, McLeroy KR, Goodman RM, Bird ST, McCormick L. 1992. Toward integrating qualitative
and quantitative methods: an introduction. Health Educ. Q. 19:1–8
154. Sturm R. 2002. Evidence-based health policy versus evidence-based medicine. Psychiatr. Serv. 53:1499
155. Subramanian SV, Belli P, Kawachi I. 2002. The macroeconomic determinants of health. Annu. Rev.
Public Health 23:287–302
156. Suppe F. 1977. The Structure of Scientific Theories. Urbana, IL: Univ. Ill. Press
157. Taubes G. 1996. Looking for the evidence in medicine. Science 272:22–24
158. Thacker SB, Berkelman RL. 1988. Public health surveillance in the United States. Epidemiol. Rev. 10:164–
90
159. Thacker SB, Ikeda RM, Gieseker KE, Mendelsohn AB, Saydah SH, et al. 2005. The evidence base for
public health informing policy at the Centers for Disease Control and Prevention. Am. J. Prev. Med.
29:227–33
160. Thacker SB, Stroup DF. 2006. Public health surveillance. See Ref. 25, pp. 30–67
161. Tilson H, Gebbie KM. 2004. The public health workforce. Annu. Rev. Public Health 25:341–56
162. Tilson HH. 2008. Public health accreditation: progress on national accountability. Annu. Rev. Public
Health 29:xv–xxii
163. Tobacco Educ. Res. Oversight Comm. Calif. 2006. Toward a tobacco-free California, 2006–2008:
confronting a relentless adversary, a plan for success. http://www.cdph.ca.gov/services/boards/teroc/
Documents/TEROCMasterPlan06-08.pdf
164. Tones K. 1997. Beyond the randomized controlled trial: a case for ‘judicial review.’ Health Educ. Res.
12:i–iv
165. Tugwell P, Bennett KJ, Sackett DL, Haynes RB. 1985. The measurement iterative loop: a framework
for the critical appraisal of need, benefits and costs of health interventions. J. Chronic Dis. 38:339–51
166. Turnock BJ. 2001. Public Health: What It Is and How It Works. Gaithersburg, MD: Aspen. 354 pp.
167. U.S. Prev. Services Task Force. 1989. Guide to Clinical Preventive Services: An Assessment of the Effectiveness
of 169 Interventions. Baltimore: Williams & Wilkins
168. Waters E, Doyle J. 2002. Evidence-based public health practice: improving the quality and quantity of
the evidence. J. Public Health Med. 24:227–29
169. West SL, O’Neal KK. 2004. Project D.A.R.E. outcome effectiveness revisited. Am. J. Public Health
94:1027–29
170. Wiesner PJ. 1993. Four diseases of disarray in public health. Ann. Ep…