Official reprint from UpToDate®


www.uptodate.com ©2019 UpToDate, Inc. and/or its affiliates. All Rights Reserved.

Evidence-based medicine
Authors: Arthur T Evans, MD, MPH; Gregory Mints, MD, FACP
Section Editor: Mark D Aronson, MD
Deputy Editor: Carrie Armsby, MD, MPH

All topics are updated as new evidence becomes available and our peer review process is complete.

Literature review current through: Mar 2019. | This topic last updated: Jul 06, 2018.

INTRODUCTION

Evidence-based medicine (EBM) is the care of patients using the best available research evidence to guide clinical decision making
(figure 1) [1,2]. The value of EBM is heightened in light of the following considerations:

● The volume of evidence available to guide clinical decisions continues to grow at a rapid pace (figure 2).

● Improvements in research design, clinical measurements, and methods for analyzing data have led to a better understanding of
how to produce valid clinical research.

● Despite advances in research methods, many published study results are false or draw misleading conclusions [3].

● Many clinicians, even those in good standing, do not practice medicine according to the best current research evidence.

The basic elements of EBM are reviewed here. They include [1]:

● Formulating a clinical question
● Finding the best available evidence
● Assessing the validity of the evidence (including internal and external validity)
● Applying the evidence in practice, in conjunction with clinical expertise and patient preferences

The focus is upon applying the results of research involving patients and clinical outcomes, such as death, disease, symptoms, and
loss of function. Other kinds of evidence, such as those obtained by personal experience and laboratory studies of the pathogenesis of
disease, are also useful in the care of patients but are not usually included under "evidence-based medicine." EBM is meant to
complement, not replace, clinical judgment in tailoring care to individual patients. Similarly, EBM and the delivery of culturally, socially,
and individually sensitive and effective care are complementary, not contradictory (figure 1).

FORMULATING A CLINICAL QUESTION

Clinical questions are frequently complex, but it is usually wise to sharpen the focus by answering simpler questions (table 1). The
question must be explicitly defined before searching for the answer [4].

The search for the best answers to clinical questions begins with a tight, explicit formulation of the question [4]. For example, the
question "what is the best treatment for type 2 diabetes?" is too general and broad to be answered well. For evaluating the
effectiveness of an intervention, four questions should be considered (commonly referred to as "PICO") (table 2):

● What is the relevant patient population?
● What intervention is being considered?
● What is the comparison intervention or patient population?
● What outcomes are of interest?

For example, an answerable relevant question may be: "Among obese adults with type 2 diabetes, is metformin more effective than
sulfonylurea drugs in preventing death?"

The approach is similar for clinical questions involving diagnosis or prognosis (table 2).
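
For readers who think in code, the following is a minimal sketch (my own illustration, not part of UpToDate's content) of recording the example question above as explicit PICO components; the field names are assumptions chosen for clarity, not a standard schema.

# Minimal sketch: the metformin example above broken into explicit PICO components.
# Field names are illustrative assumptions, not a standard schema.
pico_question = {
    "patient_population": "obese adults with type 2 diabetes",
    "intervention": "metformin",
    "comparison": "sulfonylurea drugs",
    "outcomes": ["death"],
}

for component, value in pico_question.items():
    print(f"{component}: {value}")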

Patient population — The ultimate goal of EBM is to inform clinical decisions regarding individual patients. Ideally, therefore, one
would seek answers from studies that enrolled research subjects who were very similar to one's patient. If the target population is
defined too broadly, the study results may not apply to patients whose characteristics differ substantially from the typical study subject.

However, there is also some danger in defining the target population too narrowly. High-quality research on very specific groups of
patients is often unavailable, and the alternative, subgroup analysis of larger, more inclusive studies, can be problematic because of
serious methodologic concerns [5-14]. (See 'External validity' below.)

Intervention — In formulating the PICO question, it is important to specify the intervention being considered. A similar approach is
used for questions regarding diagnosis or prognosis, in which case the question should clearly identify the diagnostic test or risk factor
of interest.

As with the patient population, it is important to avoid overly narrow or broad definitions of the intervention (or test or risk factor). For
questions that involve drug therapy, the dose, timing, and duration of treatment need to be considered. For example, for a middle-aged
man with hypertension, one may want to know whether 81 mg of aspirin taken daily and indefinitely prevents strokes. However, good
data for narrowly defined treatment schedules may be unavailable, leading to a perilous reliance on subgroup analyses. Under such
circumstances it may be worthwhile relaxing the definition of intervention to something broader, for example, "low-dose aspirin."

Comparison — In randomized treatment trials, the comparison group can be a placebo, usual care, or active treatment. Placebo-
controlled trials have two distinct advantages: they facilitate blinding and control for the placebo effect (non-specific treatment effect).
However, they do not allow one to compare the effects among real-world choices [15]. It is important that the comparison intervention
be clinically appropriate (ie, an alternative intervention that would realistically be under consideration).

Outcomes — It is important to consider all patient-important outcomes (including benefits and harms). It is not sufficient to think of
benefit (or harm) in general terms; one must be specific about the outcomes of interest. In particular, outcomes should be well-defined,
measurable, reliable, sensitive to change, and should actually assess clinically relevant aspects of a patient's health.

Particular issues related to the types of outcomes measured in clinical studies include:

● Composite endpoints – The use of a composite of multiple combined endpoints has the advantage of increasing the study's
statistical power but can be difficult to interpret. Interpretation is easy if the intervention affects all component outcomes to the
same extent. But when intervention effects are not consistent across the different outcomes, and the outcomes are valued
differently, then interpretation of the composite is difficult. For this reason, studies that have a composite endpoint for the primary
outcome should also report the results for each of the individual outcomes that make up the composite.

For example, in many cardiovascular treatment studies, the outcome of death is combined with other adverse outcomes (eg,
myocardial infarction, need for a revascularization procedure). Since these outcomes may be valued differently, the use of a
composite outcome is confusing and inappropriate when the intervention's effect is inconsistent across the various endpoints,
especially if the effects are in opposite directions. In a study comparing coronary bypass surgery with percutaneous angioplasty
and stenting for severe coronary artery disease, the main study outcome was a composite of death, stroke, myocardial infarction,
or need for repeat revascularization [16]. Compared with bypass surgery, percutaneous intervention had a significantly lower risk
of stroke but a significantly higher risk of repeat revascularization. The use of a combined endpoint under these circumstances
would be nonsensical. (A simplified numeric sketch of this problem appears after this list of outcome types.)

● "Soft" outcomes – Much of clinical research focuses on objective outcomes, which include the "hard" outcomes of death and
disease (for example, myocardial infarction, stroke, and loss of limb). The "softer" outcomes that measure function, pain, and
quality of life are less common, but, for many questions, are the key outcomes of interest. It is usually easy to measure the hard
outcomes without the need for special instruments. On the other hand, outcomes that require subjective interpretation by patients or clinicians demand a carefully developed and validated measurement tool. Subjective outcomes are usually more susceptible to
the placebo effect or expectation bias. Strategies to mitigate these errors, such as proper blinding, become critically important. But
even the hard, objective outcomes are prone to bias.

● Surrogate outcomes – Sometimes the most clinically important outcomes are difficult to measure and a surrogate outcome
becomes an easier and cheaper substitute. Surrogate outcomes are expected to predict clinical benefit or harm based on
epidemiologic, pathophysiologic, or other scientific evidence [17]. The advantage of using surrogate outcomes rather than clinical
outcomes is that studies can generally be done with fewer subjects and completed more quickly at lower cost. These advantages
account for the prevalent use of surrogate outcomes in clinical research (45 percent of the new medications approved by the US
Food and Drug Administration [FDA] between 2005 and 2012 were based on studies with surrogate outcomes) [18]. Common
examples include blood pressure in trials evaluating antihypertensives and hemoglobin A1c level in trials evaluating diabetes
medications.

However, the use of surrogate endpoints can lead to erroneous conclusions [19]. Furthermore, research using surrogates can be
difficult to incorporate into an overall assessment of risks and benefits because these outcomes, by definition, are only indirectly
important to patients. The 2010 Institute of Medicine (IOM) recommendations state that surrogate endpoints should only be used if
their ability to predict clinically important outcomes is conclusively documented [17].

Even well-qualified surrogates that appear to meet the IOM standards can be problematic. A sobering example is the use of
hemoglobin A1c as a surrogate, or substitute, for the outcomes of diabetes treatment that are clinically important (death, disease,
and dysfunction). Several therapies that demonstrated impressive reductions in hemoglobin A1c were later found to have no
effect on, or even to worsen, clinically relevant outcomes. (See "Glycemic control and vascular complications in type 2 diabetes mellitus".)
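
As a numeric illustration of the composite-endpoint problem discussed above, the following sketch (in Python) uses invented event rates; the numbers and treatment labels are assumptions for illustration only and are not data from the trial cited above.

# Illustrative (invented) event counts per 1000 patients for two treatments, A and B.
events_a = {"stroke": 6, "repeat revascularization": 59, "death": 35}
events_b = {"stroke": 22, "repeat revascularization": 26, "death": 35}

composite_a = sum(events_a.values())   # 100 per 1000
composite_b = sum(events_b.values())   # 83 per 1000
print(f"Composite endpoint: A = {composite_a}/1000, B = {composite_b}/1000")

# The composite favors B, yet A causes fewer strokes and B causes fewer repeat
# revascularizations. If patients value avoiding stroke far more than avoiding a
# repeat procedure, the single composite number points in the wrong direction,
# which is why the component outcomes should be reported individually.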

FINDING THE EVIDENCE

EBM resources — Most medical information is now rapidly accessible from computers and hand-held devices. However, skill is
required to quickly find the desired information, while limiting irrelevant "noise." Different approaches are required depending on the
reason for seeking the information:

● Rapidly answering a specific clinical question, a cornerstone of EBM, requires a strategy that is fast and accurate and can be
mastered by most physicians without the need for technical sophistication.

● Keeping current with developments in one's field ("knowledge management") is challenging and is generally not feasible without
the use of a curated resource. Answering all important clinical questions by reading, appraising, and summarizing evidence would
be overwhelming and simply impossible for the individual clinician. Therefore, the bulk of these tasks must be delegated to
trustworthy sources. UpToDate is a resource for this purpose; many other resources are available online. However, the fact that a
resource is electronic and easily accessible does not mean it is evidence-based. Wikipedia, for example, is commonly used to
answer clinical questions [20]. However, Wikipedia entries can have major omissions and have been judged inadequate for the
practice of EBM [21-24].

● Conducting a systematic review requires an exhaustive search of the primary data using multiple search tools. This is discussed
separately. (See "Systematic review and meta-analysis".)

Qualities of useful information sources for clinicians include:

● Rapid access (within minutes) so the information can guide clinical decisions as they arise
● Targeted to the specific clinical question
● Evidence-based, current research information
● Portable
● Easy to use

Within the domain of information technology, a distinction is made between a database, which is a collection of bibliographic references to medical articles (eg, Medical Literature Analysis and Retrieval System Online [MEDLINE], Cumulative Index to Nursing
and Allied Health Literature [CINAHL], Excerpta Medica database [EMBASE], Cochrane databases) and an access portal, which is a
user interface with a built-in search engine (eg, PubMed, Ovid).

Each access portal may search more than one database. Access portals also may provide options for citation management
and citation maps. Citation maps are networks of citation links between various articles in a database. These may be outgoing (articles
cited in the bibliography of a particular paper) or incoming (other, more recent, reports that cite the index article). Exploring citation
maps is thus a legitimate method of searching the literature, occasionally producing novel and helpful results.

Search filters (also called "hedges," "limits," "strategies," and "clinical queries") are predefined search terms designed for a specific
purpose (eg, limiting searches to guidelines, or randomized controlled trials). These are both portal- and database-specific. Because
the filters are platform-specific, results may be very different for seemingly identical searches.
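
As one hedged illustration of applying a search filter, the sketch below (in Python) queries PubMed through the NCBI E-utilities "esearch" endpoint and restricts results to randomized controlled trials using the publication-type tag. The query string, result limit, and JSON field names are assumptions for illustration; verify the exact behavior against the current E-utilities documentation.

# Minimal sketch: searching PubMed with a publication-type filter via NCBI E-utilities.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {
    "db": "pubmed",
    "term": "type 2 diabetes AND metformin AND randomized controlled trial[pt]",
    "retmax": 20,        # return at most 20 PubMed IDs (illustrative limit)
    "retmode": "json",
}

response = requests.get(ESEARCH_URL, params=params, timeout=30)
response.raise_for_status()
id_list = response.json()["esearchresult"]["idlist"]   # field names as returned by the E-utilities JSON output
print("PubMed IDs:", id_list)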

Categories of evidence — Evidence can be summarized at three levels of complexity (figure 3) [25]:

● Primary (original) research – Primary research involves data that are collected from individuals or clusters of individuals, with
clusters defined by physician, clinic, geographic region, or other factors. Within primary research, EBM practitioners should
consider the hierarchy of evidence to minimize the risk of bias (figure 3). For studies evaluating therapy or harm, well-conducted
randomized clinical trials are superior to observational studies, which are superior to unsystematic clinical observations [25].
Appropriate study design depends on the question being investigated (figure 4). Questions regarding benefits (and harms) of an
intervention are best answered with randomized controlled trials; whereas questions regarding risk factors for disease and
prognosis are best answered with prospective cohort studies.

● Systematic reviews – Systematic reviews are best for answering single questions (eg, the effectiveness of tight glucose control
on microvascular complications of diabetes). They are more scientifically structured than traditional reviews, being explicit about
how the authors attempted to find all relevant articles, judge the scientific quality of each study, and weigh evidence from multiple
studies with conflicting results. These reviews pay particular attention to including all strong research, whether or not it has been
published, to avoid publication bias (positive studies are preferentially published). Systematic reviews and meta-analyses are
discussed in greater detail separately. (See "Systematic review and meta-analysis".)

● Summaries and guidelines – Summaries and guidelines represent the highest level of complexity. Ideally, guidelines are a
synthesis of systematic reviews, original research, clinical expertise, and patient preferences. At their best, summaries and
guidelines are a comprehensive synthesis of the best available evidence, from which the guidelines themselves follow. Guidelines
should therefore be based on a critical appraisal of the relevant original research and systematic reviews. The quality of published
guidelines is highly variable, even among those sponsored by professional organizations, with several examples of multiple
guidelines on the same topic making contradictory recommendations [26]. Standards for guideline development have been put
forth by several organizations including the Grading of Recommendations Assessment, Development, and Evaluation (GRADE)
Working Group, the Institute of Medicine, and the Appraisal of Guidelines, Research, and Evaluation (AGREE) Collaboration [27-
32]. These standards are endorsed by numerous organizations, including the United States National Heart, Lung, and Blood
Institute (NHLBI); the British National Institute for Health and Care Excellence (NICE) [33]; the American College of Physicians;
the Cochrane Collaboration; and UpToDate [34].

The accepted standards for guideline development include:

• Rely on systematic reviews
• Grade the quality of available evidence
• Grade the strength of recommendations
• Make an explicit connection between evidence and recommendations

UpToDate uses the GRADE Working Group's approach to making recommendations. Further details are provided on our Editorial
Policies Webpage.

ASSESSING THE VALIDITY OF THE EVIDENCE

Clinicians should have the skills necessary to critically evaluate research articles that are important to their practice. Critical appraisal
skills enhance mastery and autonomy in the practice of medicine. In addition, critical appraisal skills can help clinicians choose more
wisely which information sources they use, favoring sources with explicit standards for weighing evidence. These skills can also make
informal reading more efficient by making it easier to concentrate on especially strong articles and to skip weak ones. There are many
opportunities to learn critical reading skills from books [35], journal articles, courses, and special sessions of professional meetings.

A number of guidelines are available that describe standards for conducting and reporting different types of studies. The set of
guidelines endorsed by the International Committee of Medical Journal Editors can facilitate the critical appraisal of individual studies
based on the type of study:

● Systematic reviews and meta-analyses – Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) [36] and
PRISMA Protocols (PRISMA-P) [37]

● Randomized controlled trials – Consolidated Standards of Reporting Trials (CONSORT) [38] and Standard Protocol Items:
Recommendations for Interventional Trials (SPIRIT) [39]

● Observational studies – Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) [40]

● Diagnostic and prognostic studies – Standards for Reporting of Diagnostic Accuracy (STARD) [41] and Transparent Reporting of a
multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) [42,43]

The focus of critical appraisal is judging both internal validity and generalizability (external validity) (figure 5).

Internal validity — Internal validity refers to the question of whether the results of clinical research are correct for the patients studied.
Threats to internal validity include bias and chance:

● Bias – Bias is any systematic error that can produce a misleading impression of the true effect. Randomized trials are performed
with the aim of reducing bias, and well-conducted trials usually have a low risk of bias. However, flaws in the conduct of clinical
trials can produce biased results.

Potential sources of bias in randomized trials include:

• Failure to conceal random assignment to those enrolling study subjects

• Failure to blind relevant individuals (including study participants, clinicians, data collectors, outcome adjudicators, and data
analysts) to group assignment

• Loss to follow-up (missing outcome data)

• Failure to adhere to the assigned intervention

• Stopping early for benefit

• Preferentially publishing small (underpowered) studies with statistically significant results (publication bias)

● Chance – Chance is random error, inherent in all observations. The probability of chance producing erroneous results can be
minimized by studying a large number of patients. P-values are commonly misinterpreted as the probability that the findings are
merely due to chance. Instead, the p-value is the probability that, if the null hypothesis were true, the study would find a
difference as large as, or larger than, the one observed. (See "Proof, p-values, and hypothesis testing".)
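
A small simulation can make this definition concrete. The sketch below (in Python, with invented numbers) generates many studies in which the null hypothesis is true and counts how often chance alone produces a difference at least as large as a hypothetical observed difference; the sample size, standard deviation, and observed difference are assumptions for illustration.

# Minimal sketch: the p-value as the probability, under a true null hypothesis, of a
# difference at least as large as the one observed.
import numpy as np

rng = np.random.default_rng(0)
observed_diff = 4.0      # hypothetical observed difference in means (eg, mmHg)
n_per_group = 50         # assumed subjects per group
sd = 15                  # assumed standard deviation in both groups

n_sim = 50_000
extreme = 0
for _ in range(n_sim):
    a = rng.normal(0, sd, n_per_group)   # both groups drawn from the same distribution,
    b = rng.normal(0, sd, n_per_group)   # ie, the null hypothesis is true
    if abs(a.mean() - b.mean()) >= observed_diff:
        extreme += 1

print(f"Simulated two-sided p-value: {extreme / n_sim:.2f}")   # roughly 0.18 with these assumed numbers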

External validity — External validity refers to the question of whether the results of the study apply to patients outside of the study,
particularly the specific patient (or population) being considered by the EBM practitioner. Study patients are typically highly selected,
unlike patients in usual practice. Often, they have been referred to academic medical centers, meet stringent inclusion criteria, are free of potentially confounding conditions or disorders, and are willing to countenance the rigorous demands of study protocols. As a result,
they may be systematically different from the patients most doctors see in practice. In particular, study subjects in treatment trials are
often at low risk for the adverse study outcome of interest (death, disease, dysfunction, dissatisfaction). Because treatment benefits
are typically concentrated among patients at higher risk, study results often apply to only the minority of study subjects
who have a sufficiently high baseline risk [44,45]. Although treatment effect size is often related to baseline risk, many studies do not
measure this relationship, making it more difficult to judge whether, and how, study results apply to a particular individual patient.

Indirect evidence — When a study involves a somewhat different population than the one of interest to the EBM practitioner (eg, older,
younger, sicker, healthier), some may be inclined to reject the evidence altogether, claiming that it "doesn't apply to my patient." In reality,
this type of indirect evidence can help inform medical decision-making, particularly in the absence of direct evidence. Our confidence
in the expected results, however, is generally lower than it would be with direct evidence.

Subgroup analyses — When the study does not address the specific patient population of interest, one strategy is to rely on
subgroup analyses that evaluate results according to different patient characteristics (eg, age, sex, severity of illness). However,
caution should be used in interpreting the results of subgroup analyses to avoid drawing false conclusions. Potential problems include:

● Reporting bias – Subgroup analyses that are included in published reports may represent a select subset of all the analyses
performed. The "interesting" subgroup analyses are preferentially presented, producing a positive reporting bias [9,46].

● Multiple comparisons – Whether examining a multiplicity of different outcomes or different patient subgroups, the probability of
finding at least one spurious statistically significant finding increases as the number of analyses increases [47]. Perhaps the most
celebrated illustration of this effect was a report of a randomized trial comparing streptokinase, aspirin, both, or neither in the
treatment of acute myocardial infarction [48]. Authors, tongue-in-cheek, reported subgroup analysis by astrologic birth sign.
Subjects who were born under the signs of Gemini or Libra experienced slightly higher mortality from aspirin, whereas subjects
born under the other astrologic signs enjoyed a large reduction in mortality (p<0.00001). (A brief calculation illustrating this multiplicity problem appears after this list.)

● Lower statistical power – Subgroup analyses always involve fewer subjects than the main analysis, meaning that many are
underpowered to detect a true effect (false-negative results). When underpowered subgroup analyses are coupled with selective
reporting, it becomes more likely that positive subgroup results are false positives, and that the magnitude of effect is exaggerated
[3,49-52]. (See "Proof, p-values, and hypothesis testing", section on 'Power in a negative study'.)
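
The multiplicity problem noted above can be quantified with a one-line calculation. The sketch below (my own illustration) assumes independent subgroup tests, each at a significance threshold of 0.05, with no true subgroup effects; the probability of at least one spurious "significant" finding is then 1 - (1 - 0.05)^k.

# Minimal sketch: family-wise probability of at least one false-positive subgroup finding,
# assuming k independent tests at alpha = 0.05 and no true subgroup effects.
alpha = 0.05
for k in (1, 5, 10, 20):
    p_any_false_positive = 1 - (1 - alpha) ** k
    print(f"{k:>2} subgroup tests -> P(at least one spurious finding) = {p_any_false_positive:.2f}")
# Prints approximately 0.05, 0.23, 0.40, and 0.64.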

To minimize the risk of drawing false conclusions from subgroup analyses, EBM practitioners should ask the following questions
[11,12,14]:

● Were the subgroup analyses specified a priori, including hypotheses for the direction of the differences?

● Were subgroups defined by baseline risk (an approach that is frequently useful) or by post-randomization events, such as
treatment adherence or changes in variables that might be affected by treatment (an approach that is usually misleading)?

● Was the number of subgroup analyses limited to only a few?

● Were subgroup differences analyzed by testing for effect modification (interaction) rather than separate statistical tests for each
subgroup?

● If multiple analyses were performed, was an appropriate statistical technique used to account for multiple comparisons (Bayesian
techniques or raising the threshold for statistical significance)?

● Were subgroup analyses limited to primary outcomes? Exploring subgroup differences across all secondary outcomes
exacerbates the multiplicity problem.

● Was an effect seen only in the subgroup analyses when the main study results were negative? This is particularly likely to
represent a false-positive finding and should be regarded with great skepticism.

Subgroup differences reported in randomized controlled trials often have deficiencies in one or more of these categories, particularly failing to specify subgroup analyses a priori and failing to test for effect modification (interaction), and very few are corroborated in
subsequent meta-analysis or randomized controlled trials [53]. Nevertheless, for treatment studies, high-value clinical research should
almost always include a prespecified subgroup analysis of how absolute treatment benefit varies along a continuum of a risk score
determined by multiple baseline variables considered together [54]. This is because, for many treatments, the absolute treatment effect
is much bigger in a high-risk minority and smaller in the lower-risk majority.
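
A worked example (with invented numbers) illustrates why absolute benefit depends so strongly on baseline risk: even if the relative risk reduction is identical in every subgroup, the absolute risk reduction and number needed to treat differ severalfold between high-risk and low-risk patients.

# Minimal sketch: constant relative risk, very different absolute benefit.
relative_risk = 0.75   # assumed treatment effect: 25 percent relative risk reduction

for baseline_risk in (0.20, 0.05, 0.02):          # hypothetical baseline event risks
    risk_with_treatment = baseline_risk * relative_risk
    arr = baseline_risk - risk_with_treatment      # absolute risk reduction
    nnt = 1 / arr                                  # number needed to treat
    print(f"Baseline risk {baseline_risk:.0%}: ARR = {arr:.1%}, NNT = {nnt:.0f}")
# Prints roughly: 20% -> ARR 5.0%, NNT 20; 5% -> ARR 1.2%, NNT 80; 2% -> ARR 0.5%, NNT 200.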

APPLYING THE EVIDENCE IN PRACTICE

There is often a gap between recommendations based on the best available evidence and actual practice. The reasons for the gap are
numerous, including uncertainty about whether results of large studies apply to individual patients, lack of awareness or misunderstanding of
the evidence, and failure to organize care in a way that fosters use of the evidence [55].

EBM is not intended to replace clinical judgment [56]. Individual patients should be cared for in light of the best available research
evidence, but with care tailored to their individual circumstances, including genetic makeup, past and concurrent illnesses, health-
related behaviors, and personal preferences. Several studies clearly demonstrate that many clinical decisions are not based on
the best research evidence or on relevant individual patient characteristics, but instead seem most consistent with the practice habits or
style of the clinician.

A substantial body of research, as well as practical experience, has demonstrated that all of us, as we care for patients, engage in
systematic errors of omission or commission relative to the best available research evidence. Prominent examples include the widespread
prescription of antibiotics for acute cough and the use of radiologic tests for uncomplicated acute low back pain.

In some cases, failure to practice according to the best current evidence is due to a knowledge deficit. But knowledge alone rarely
changes behavior [56]. Behavior change usually requires a combination of interventions and influences, including time for rethinking
practice habits. The table lists the possible influences on clinicians' behavior, roughly in descending order of strength, based on a
growing research literature on physician behavior change and on common sense (table 3). Usually, no single influence is strong
enough to make important changes; combinations are necessary.

SUMMARY AND RECOMMENDATIONS

● Evidence-based medicine (EBM) is the care of patients using the best available research evidence to guide clinical decision
making (figure 1). The basic elements of evidence-based medicine include (see 'Introduction' above):

• Formulating a clinical question
• Finding the best available evidence
• Assessing the validity of the evidence (including internal and external validity)
• Applying the evidence in practice, in conjunction with clinical expertise and patient preferences

● The search for the best answer to a clinical question begins with a tight definition of the question. In formulating questions
regarding the effectiveness of an intervention, four components (PICO: patient population, intervention, comparison, outcomes)
should be considered (table 2). (See 'Formulating a clinical question' above.)

● Evidence can be summarized at three levels of complexity: primary research, systematic reviews, and summaries and guidelines
(figure 3). Appropriate research study design depends on the question being investigated (figure 4). (See 'Categories of evidence'
above.)

● Clinicians should have the skills necessary to critically evaluate research articles that are important to their practice. Critical
appraisal skills enhance mastery and autonomy in the practice of medicine. The focus of critical appraisal is judging both internal
validity and generalizability (external validity) (figure 5). (See 'Assessing the validity of the evidence' above.)

● Full implementation of EBM should include a realistic plan for changing clinical behavior as needed (table 3). This implementation must include, but is not limited to, access to information. (See 'Applying the evidence in practice' above.)

ACKNOWLEDGMENT

The editorial staff at UpToDate would like to acknowledge Robert H Fletcher, MD, MSc, who contributed to an earlier version of this
topic review.

Use of UpToDate is subject to the Subscription and License Agreement.

REFERENCES

1. Sackett DL, Straus SE, Richardson WS, et al. Evidence-based medicine: How to practice and teach EBM, 2nd ed, Churchill Livingstone, Edinburgh 2000.

2. Sackett DL, Rosenberg WM, Gray JA, et al. Evidence based medicine: what it is and what it isn't. BMJ 1996; 312:71.

3. Ioannidis JP. Why most published research findings are false. PLoS Med 2005; 2:e124.

4. Richardson WS, Wilson MC, Nishikawa J, Hayward RS. The well-built clinical question: a key to evidence-based decisions. ACP
J Club 1995; 123:A12.

5. Assmann SF, Pocock SJ, Enos LE, Kasten LE. Subgroup analysis and other (mis)uses of baseline data in clinical trials. Lancet
2000; 355:1064.

6. Sun X, Briel M, Busse JW, et al. Credibility of claims of subgroup effects in randomised controlled trials: systematic review. BMJ
2012; 344:e1553.

7. Fernandez Y Garcia E, Nguyen H, Duan N, et al. Assessing Heterogeneity of Treatment Effects: Are Authors Misinterpreting
Their Results? Health Serv Res 2010; 45:283.

8. Head SJ, Kaul S, Tijssen JG, et al. Subgroup analyses in trial reports comparing percutaneous coronary intervention with
coronary artery bypass surgery. JAMA 2013; 310:2097.

9. Kasenda B, Schandelmaier S, Sun X, et al. Subgroup analyses in randomised controlled trials: cohort study on trial protocols and
journal publications. BMJ 2014; 349:g4539.

10. Zhang S, Liang F, Li W, Hu X. Subgroup Analyses in Reporting of Phase III Clinical Trials in Solid Tumors. J Clin Oncol 2015;
33:1697.

11. Sun X, Ioannidis JP, Agoritsas T, et al. How to use a subgroup analysis: users' guide to the medical literature. JAMA 2014;
311:405.

12. Fletcher J. Subgroup analyses: how to avoid being misled. BMJ 2007; 335:96.

13. Aronson D. Subgroup analyses with special reference to the effect of antiplatelet agents in acute coronary syndromes. Thromb
Haemost 2014; 112:16.

14. Rothwell PM. Treating individuals 2. Subgroup analysis in randomised controlled trials: importance, indications, and
interpretation. Lancet 2005; 365:176.

15. Vickers AJ, de Craen AJ. Why use placebos in clinical trials? A narrative review of the methodological literature. J Clin Epidemiol
2000; 53:157.

16. Serruys PW, Morice MC, Kappetein AP, et al. Percutaneous coronary intervention versus coronary-artery bypass grafting for
severe coronary artery disease. N Engl J Med 2009; 360:961.

17. Institute of Medicine (US) Committee on Qualification of Biomarkers and Surrogate Endpoints in Chronic Disease; Micheel CM, Ball JR (Eds). National Academies Press, Washington, DC 2010. Available at: https://www.ncbi.nlm.nih.gov/pubmedhealth/PMH0079490/ (Accessed on September 26, 2016).

18. Downing NS, Aminawung JA, Shah ND, et al. Clinical trial evidence supporting FDA approval of novel therapeutic agents, 2005-
2012. JAMA 2014; 311:368.

19. Yudkin JS, Lipska KJ, Montori VM. The idolatry of the surrogate. BMJ 2011; 343:d7995.

20. Allahwala UK, Nadkarni A, Sebaratnam DF. Wikipedia use amongst medical students - new insights into the digital revolution.
Med Teach 2013; 35:337.

21. Azer SA, AlSwaidan NM, Alshwairikh LA, AlShammari JM. Accuracy and readability of cardiovascular entries on Wikipedia: are
they reliable learning resources for medical students? BMJ Open 2015; 5:e008187.

22. Kräenbring J, Monzon Penza T, Gutmann J, et al. Accuracy and completeness of drug information in Wikipedia: a comparison
with standard textbooks of pharmacology. PLoS One 2014; 9:e106930.

23. Kupferberg N, Protus BM. Accuracy and completeness of drug information in Wikipedia: an assessment. J Med Libr Assoc 2011;
99:310.

24. Hasty RT, Garbalosa RC, Barbato VA, et al. Wikipedia vs peer-reviewed medical literature for information about the 10 most
costly medical conditions. J Am Osteopath Assoc 2014; 114:368.

25. Agoritsas T, Vandvik PO, Neumann I, et al. Finding Current Best Evidence. In: Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, 3rd ed, Guyatt G, Rennie D, Meade MO, Cook DJ (Eds), McGraw-Hill Education, 2015. p.29.

26. Burda BU, Norris SL, Holmer HK, et al. Quality varies across clinical practice guidelines for mammography screening in women
aged 40-49 years as assessed by AGREE and AMSTAR instruments. J Clin Epidemiol 2011; 64:968.

27. Institute of Medicine Committee on Standards for Developing Trustworthy Clinical Practice Guidelines; Board on Health Care Services. Clinical Practice Guidelines We Can Trust, Graham R, Mancher M, Wolman DM, Greenfield S, Steinberg E (Eds), The National Academies Press, Washington, DC 2011. Available at: https://www.ncbi.nlm.nih.gov/pubmedhealth/PMH0079468/ (Accessed on September 28, 2016).

28. Brouwers MC, Kho ME, Browman GP, et al. Development of the AGREE II, part 1: performance, usefulness and areas for
improvement. CMAJ 2010; 182:1045.

29. Brouwers MC, Kho ME, Browman GP, et al. Development of the AGREE II, part 2: assessment of validity of items and tools to
support application. CMAJ 2010; 182:E472.

30. Neumann I, Santesso N, Akl EA, et al. A guide for health professionals to interpret and use recommendations in guidelines
developed with the GRADE approach. J Clin Epidemiol 2016; 72:45.

31. Guyatt G, Oxman AD, Akl EA, et al. GRADE guidelines: 1. Introduction-GRADE evidence profiles and summary of findings
tables. J Clin Epidemiol 2011; 64:383.

32. Andrews JC, Schünemann HJ, Oxman AD, et al. GRADE guidelines: 15. Going from evidence to recommendation-determinants
of a recommendation's direction and strength. J Clin Epidemiol 2013; 66:726.

33. Thornton J, Alderson P, Tan T, et al. Introducing GRADE across the NICE clinical guideline program. J Clin Epidemiol 2013; 66:124.

34. GRADE working group. Organizations that have endorsed or that are using GRADE. Available at: http://www.gradeworkinggroup.org (Accessed on September 28, 2016).

35. Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, 3rd ed, Guyatt G, Drummond R, Meade MO, Cook DJ (Eds), McGraw-Hill Education, 2015.

36. Moher D, Liberati A, Tetzlaff J, et al. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement.
Ann Intern Med 2009; 151:264.

37. Moher D, Shamseer L, Clarke M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P)
2015 statement. Syst Rev 2015; 4:1.

38. Schulz KF, Altman DG, Moher D, CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group
randomized trials. Ann Intern Med 2010; 152:726.

39. Chan AW, Tetzlaff JM, Altman DG, et al. SPIRIT 2013 statement: defining standard protocol items for clinical trials. Ann Intern
Med 2013; 158:200.

40. Vandenbroucke JP, von Elm E, Altman DG, et al. Strengthening the Reporting of Observational Studies in Epidemiology
(STROBE): explanation and elaboration. Ann Intern Med 2007; 147:W163.

41. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: The
STARD Initiative. Ann Intern Med 2003; 138:40.

42. Collins GS, Reitsma JB, Altman DG, Moons KG. Transparent Reporting of a multivariable prediction model for Individual
Prognosis or Diagnosis (TRIPOD): the TRIPOD statement. Ann Intern Med 2015; 162:55.

43. Moons KG, Altman DG, Reitsma JB, et al. Transparent Reporting of a multivariable prediction model for Individual Prognosis or
Diagnosis (TRIPOD): explanation and elaboration. Ann Intern Med 2015; 162:W1.

44. Kent DM, Hayward RA. Limitations of applying summary results of clinical trials to individual patients: the need for risk
stratification. JAMA 2007; 298:1209.

45. Vickers AJ, Kent DM. The Lake Wobegon Effect: Why Most Patients Are at Below-Average Risk. Ann Intern Med 2015; 162:866.

46. Dwan K, Altman DG, Clarke M, et al. Evidence for the selective reporting of analyses and discrepancies in clinical trials: a
systematic review of cohort studies of clinical trials. PLoS Med 2014; 11:e1001666.

47. Schulz KF, Grimes DA. Multiplicity in randomised trials II: subgroup and interim analyses. Lancet 2005; 365:1657.

48. Randomised trial of intravenous streptokinase, oral aspirin, both, or neither among 17,187 cases of suspected acute myocardial
infarction: ISIS-2. ISIS-2 (Second International Study of Infarct Survival) Collaborative Group. Lancet 1988; 2:349.

49. Wittes J. On looking at subgroups. Circulation 2009; 119:912.

50. Brookes ST, Whitley E, Peters TJ, et al. Subgroup analyses in randomised controlled trials: quantifying the risks of false-positives
and false-negatives. Health Technol Assess 2001; 5:1.

51. Reinhart A. Statistics Done Wrong: The Woefully Complete Guide. No Starch Press, San Francisco, CA 2015.

52. Lauer MS. From hot hands to declining effects: the risks of small numbers. J Am Coll Cardiol 2012; 60:72.

53. Wallach JD, Sullivan PG, Trepanowski JF, et al. Evaluation of Evidence of Statistical Support and Corroboration of Subgroup
Claims in Randomized Clinical Trials. JAMA Intern Med 2017; 177:554.

54. Kent DM, Nelson J, Dahabreh IJ, et al. Risk and treatment effect heterogeneity: re-analysis of individual participant data from 32
large clinical trials. Int J Epidemiol 2016; 45:2075.

55. Haynes B, Haines A. Barriers and bridges to evidence based clinical practice. BMJ 1998; 317:273.

56. Davis DA, Thomson MA, Oxman AD, Haynes RB. Changing physician performance. A systematic review of the effect of
continuing medical education strategies. JAMA 1995; 274:700.

Topic 2763 Version 23.0

GRAPHICS

Schematic diagram of components of evidence-based medicine

Evidence-based medicine is the care of patients using the best available research evidence to guide clinical decision making. The focus is upon applying the results of research involving patients and important clinical outcomes (eg, death, symptoms). Evidence-based medicine is meant to complement, not replace, clinical judgment tailored to individual patients. Similarly, evidence-based medicine and the delivery of culturally, socially, and individually sensitive and effective care are complementary, not contradictory.

EBM: evidence-based medicine.

Graphic 110429 Version 1.0

Exponential growth of the medical literature from 1946 to 2015

(Panel A) Total number of new publications listed in MEDLINE by year. (Panel B) Number of publications of randomized trials listed in MEDLINE by year. Data were derived from searching Ovid MEDLINE® 1946 to February Week 4 2015 (searched on March 2, 2015) using the "Find Citation" tab for each year 1946 to 2013. The total number of citations was recorded. This was then limited to randomized controlled trials.

Graphic 110428 Version 1.0

Clinical questions in daily practice

Is the finding abnormal?

What is the diagnosis?

How often does it occur?

What are the risk factors for the disease?

What is the pathogenesis?

What is the natural history?

How effective (and harmful) is treatment?

How effective (and harmful) are preventive interventions?

Graphic 67164 Version 1.0

Components of the "PICO" question

Topic of the clinical question

Intervention Diagnosis Prognosis

P Relevant Patient population Relevant Patient population Relevant Patient population

I Intervention being considered Test being applied Risk factor being evaluated

C Comparison intervention – Comparison patient population

O Outcomes Outcome/diagnosis Outcomes

PICO: Patient population, Intervention, Comparison, Outcome.

Graphic 110430 Version 1.0

From evidence to evidence-based resources

Hierarchy of evidence and level of processing that contribute to evidence-based resources.

EBM: evidence-based medicine.

Reproduced with permission from: Agoritsas T, Vandvik PO, Neumann I, et al. Finding Current Best Evidence. In: Users' Guides to the Medical Literature: A Manual for
Evidence-Based Clinical Practice, 3rd Ed, Guyatt G, Rennie D, Meade MO, Cook DJ. (Eds), McGraw-Hill Education, New York 2015. Copyright © 2015 McGraw-Hill
Education. McGraw-Hill Education makes no representations or warranties as to the accuracy of any information contained in the McGraw-Hill Education Material, including
any warranties of merchantability or fitness for a particular purpose. In no event shall McGraw-Hill Education have any liability to any party for special, incidental, tort, or
consequential damages arising out of or in connection with the McGraw-Hill Education Material, even if McGraw-Hill Education has been advised of the possibility of such
damages.

Graphic 110435 Version 3.0

Clinical research designs, questions and measures

Adapted from Fletcher SW. Principles of Epidemiology. In: Textbook of Internal Medicine, Kelley WN (Ed), JB Lippincott, Philadelphia 1988.

Graphic 60667 Version 3.0

Internal and external validity

Internal validity addresses the question of whether the results of clinical research are
correct for the patients in the study and is threatened by bias and chance. External validity
addresses the question of whether the results of the research apply to patients outside of
the study population.

Reproduced with permission from: Fletcher RH, Fletcher SW. Clinical Epidemiology: The Essentials,
4th ed, Lippincott Williams & Wilkins, Philadelphia 2005. Copyright © 2005 Lippincott Williams &
Wilkins.

http://www.lww.com
Graphic 62952 Version 4.0

Options for changing clinicians' practice behavior*

Education
  Passive: Providing information by CME, journals, books, electronic media
  Active: Provide answers to questions as they arise ("just-in-time learning")

Change the work environment
  Make it easier to do the right thing
  Case managers

Feedback on performance
  Practice pattern reports for physician's use only
  Public report cards: HEDIS and others

Clinical practice guidelines
  Authoritative recommendations, tailored to the local situation
  Should include an evidence-based rationale

Local opinion leaders
  Informal, disease-specific network of respected colleagues

Persuasion
  "Academic detailing" methods similar to advertisements

Economic incentives/disincentives
  Capitation, risk contracts
  Tying compensation to performance

Change patients' expectations
  Popular media, books, patient help hotlines
  Direct-to-patient marketing

CME: Continuing Medical Education; HEDIS: Healthcare Effectiveness Data and Information Set.
* In descending order of strength.

Graphic 58403 Version 2.0

Contributor Disclosures
Arthur T Evans, MD, MPH: Nothing to disclose
Gregory Mints, MD, FACP: Nothing to disclose
Mark D Aronson, MD: Nothing to disclose
Carrie Armsby, MD, MPH: Nothing to disclose

Contributor disclosures are reviewed for conflicts of interest by the editorial group. When found, these are addressed by vetting through a multi-level
review process, and through requirements for references to be provided to support the content. Appropriately referenced content is required of all
authors and must conform to UpToDate standards of evidence.

Conflict of interest policy
