
REVIEW
CLINICIAN'S CORNER

Internet-Based Learning
in the Health Professions
A Meta-analysis
David A. Cook, MD, MHPE; Anthony J. Levinson, MD, MSc; Sarah Garside, MD, PhD; Denise M. Dupras, MD, PhD; Patricia J. Erwin, MLS; Victor M. Montori, MD, MSc

Context The increasing use of Internet-based learning in health professions education may be informed by a timely, comprehensive synthesis of evidence of effectiveness.

Objectives To summarize the effect of Internet-based instruction for health professions learners compared with no intervention and with non-Internet interventions.

Data Sources Systematic search of MEDLINE, Scopus, CINAHL, EMBASE, ERIC, TimeLit, Web of Science, Dissertation Abstracts, and the University of Toronto Research and Development Resource Base from 1990 through 2007.

Study Selection Studies in any language quantifying the association of Internet-based instruction and educational outcomes for practicing and student physicians, nurses, pharmacists, dentists, and other health care professionals compared with a no-intervention or non-Internet control group or a preintervention assessment.

Data Extraction Two reviewers independently evaluated study quality and abstracted information including characteristics of learners, learning setting, and intervention (including level of interactivity, practice exercises, online discussion, and duration).

Data Synthesis There were 201 eligible studies. Heterogeneity in results across studies was large (I² ≥ 79%) in all analyses. Effect sizes were pooled using a random-effects model. The pooled effect size in comparison to no intervention favored Internet-based interventions and was 1.00 (95% confidence interval [CI], 0.90-1.10; P < .001; n = 126 studies) for knowledge outcomes, 0.85 (95% CI, 0.49-1.20; P < .001; n = 16) for skills, and 0.82 (95% CI, 0.63-1.02; P < .001; n = 32) for learner behaviors and patient effects. Compared with non-Internet formats, the pooled effect sizes (positive numbers favoring Internet) were 0.10 (95% CI, -0.12 to 0.32; P = .37; n = 43) for satisfaction, 0.12 (95% CI, 0.003 to 0.24; P = .045; n = 63) for knowledge, 0.09 (95% CI, -0.26 to 0.44; P = .61; n = 12) for skills, and 0.51 (95% CI, -0.24 to 1.25; P = .18; n = 6) for behaviors or patient effects. No important treatment-subgroup interactions were identified.

Conclusions Internet-based learning is associated with large positive effects compared with no intervention. In contrast, effects compared with non-Internet instructional methods are heterogeneous and generally small, suggesting effectiveness similar to traditional methods. Future research should directly compare different Internet-based interventions.

JAMA. 2008;300(10):1181-1196 www.jama.com

CME available online at www.jamaarchivescme.com and questions on p 1245.

Author Affiliations: College of Medicine (Drs Cook, Dupras, and Montori and Ms Erwin), Office of Education Research (Dr Cook), and Knowledge and Encounter Research Unit (Dr Montori), Mayo Clinic, Rochester, Minnesota; and McMaster University, Hamilton, Ontario (Drs Levinson and Garside).

Corresponding Author: David A. Cook, MD, MHPE, Division of General Internal Medicine, Mayo Clinic College of Medicine, Baldwin 4-A, 200 First St SW, Rochester, MN 55905 (cook.david33@mayo.edu).

© 2008 American Medical Association. All rights reserved. (Reprinted) JAMA, September 10, 2008, Vol 300, No. 10

The advent of the World Wide Web in 1991 greatly facilitated the use of the Internet,1 and its potential as an instructional tool was quickly recognized.2,3 Internet-based education permits learners to participate at a time and place convenient to them, facilitates instructional methods that might be difficult in other formats, and has the potential to tailor instruction to individual learners' needs.4-6 As a result, Internet-based learning has become an increasingly popular approach to medical education.7,8

However, concerns about the effectiveness of Internet-based learning have stimulated a growing body of research. In the first decade of the Web's existence 35 evaluative articles on Web-based learning were published,9 whereas at least 32 were published in 2005 alone.10 Synthesis of this evidence could inform educators and learners about the extent to which these products are effective and what makes them more or less effective.6

Since 2001, several reviews (some of which also included non-Internet-based computer-assisted instruction) have offered such summaries.9-17 However, each had important methodological limitations, including incomplete accounting of existing studies, limited


assessment of study quality, and no quantitative pooling to derive best estimates of these interventions' effect on educational outcomes.

We sought to identify and quantitatively summarize all studies of Internet-based instruction involving health professions learners. We conducted 2 systematic reviews with meta-analyses addressing this topic, the first exploring Internet-based instruction compared with no intervention and the second summarizing studies comparing Internet-based and non-Internet instructional methods (media-comparative studies).

METHODS
These reviews were planned, conducted, and reported in adherence to standards of quality for reporting meta-analyses (Quality of Reporting of Meta-analyses and Meta-analysis of Observational Studies in Epidemiology standards).18,19

Questions
We sought to answer (1) to what extent is Internet-based instruction associated with improved outcomes in health professions learners compared with no intervention, and (2) how does Internet-based instruction compare with non-Internet instructional methods? We also sought to determine factors that could explain differences in effect across participants, settings, interventions, outcomes, and study designs for each of these questions. Based on existing theories and evidence,20-24 we hypothesized that cognitive interactivity, peer discussion, ongoing access to instructional materials, and practice exercises would improve learning outcomes. We also anticipated, based on evidence25 and argument,26 that Internet-based instruction in comparison to no intervention would have the greatest effect on knowledge, a smaller but significant effect on skills, and a yet smaller effect on behaviors in practice and patient-related outcomes. Finally, based on previous reviews and discussions,27-29 we expected no overall difference between Internet and non-Internet instructional modalities, provided instructional methods were similar between interventions.

Study Eligibility
We developed intentionally broad inclusion criteria in order to present a comprehensive overview of Internet-based learning in health professions education. We included studies in any language if they reported evaluation of the Internet to teach health professions learners at any stage in training or practice compared with no intervention (ie, a control group or preintervention assessment) or a non-Internet intervention, using any of the following outcomes30: reaction or satisfaction (learner satisfaction with the course), learning (knowledge, attitudes, or skills in a test setting), behaviors (in practice), or effects on patients (BOX). We included single-group pretest-posttest, 2-group randomized and nonrandomized, parallel-group and crossover designs, and studies of adjuvant instruction, in which an Internet-based intervention is added to other instruction common to all learners.

Studies were excluded if they reported no outcomes of interest, did not compare Internet-based instruction with no intervention or a non-Internet intervention, used a single-group posttest-only design, or evaluated a computer intervention that resided only on the client computer or CD-ROM or in which the use of the Internet was limited to administrative or secretarial purposes. Meeting abstracts were also excluded.

Study Identification
A senior reference librarian with expertise in systematic reviews (P.J.E.) designed a strategy to search MEDLINE, Scopus, CINAHL, EMBASE, ERIC, TimeLit, Web of Science, Dissertation Abstracts, and the University of Toronto Research and Development Resource Base for relevant articles. Search terms included delivery concepts (such as Internet, Web, computer-assisted instruction, e-learning, online, virtual, and distance), study design concepts (such as comparative study, evaluative study, pretest, or program evaluation), and participant characteristics (such as education, professional; students, health occupations; internship and residency; and specialties, medical). eTable 1 (http://www.jama.com) describes the complete search strategy. We restricted our search to articles published in or after 1990 because the World Wide Web was first described in 1991. The last date of the search was January 17, 2008. Additional articles were identified by hand-searching reference lists of all included articles, previous reviews, and authors' files.

Study Selection
Working independently and in duplicate, reviewers (D.A.C., A.J.L., S.G., and D.M.D.) screened all titles and abstracts, retrieving in full text all potentially eligible abstracts, abstracts in which reviewers disagreed, or abstracts with insufficient information. Again independently and in duplicate, reviewers considered the eligibility of studies in full text, with adequate chance-adjusted interrater agreement (0.71 by intraclass correlation coefficient31 [ICC], estimated using SAS 9.1 [SAS Institute Inc, Cary, North Carolina]). Reviewers resolved conflicts by consensus.

Data Extraction
Reviewers abstracted data from each eligible study using a standardized data abstraction form that we developed, iteratively refined, and implemented electronically. Data for all variables where reviewer judgment was required (including quality criteria and all characteristics used in meta-analytic subgroup analyses) were abstracted independently and in duplicate, and interrater reliability was determined using ICC. Conflicts were resolved by consensus. When more than 1 comparison intervention was reported (eg, both lecture and paper interventions), we evaluated the comparison most closely resembling the Internet-based course (ICC, 0.77).

Box. Definitions of Study Variables

Participants

Health professions learners
Students, postgraduate trainees, or practitioners in a profession directly related to human or animal health; for example physicians, nurses, pharmacists, dentists, veterinarians, and physical and occupational therapists.

Interventions

Internet-based instruction
Computer-assisted instruction (instruction in which computers play a central role as the means of information delivery and direct interaction with the learner [in contrast to the use of computer applications such as PowerPoint], and to some extent replace the human instructor6) using the Internet or a local intranet as the means of delivery. This included Web-based tutorials, virtual patients, discussion boards, e-mail, and Internet-mediated videoconferencing. Applications linked to a specific computer (including CD-ROM) were excluded unless they also used the Internet.

Learning environment (classroom vs practice setting)
Classroom-type settings were those in which most learners would have attended had the course not used the Internet (ie, the Internet-based course replaced a classroom course or supplemented a classroom course, or other concurrent courses were in a classroom). Practice-type settings were those in which learners were seeing patients or had a primary patient care responsibility (ie, students in clinical years, postgraduate trainees, or on-the-job training).

Practice exercises
Practice exercises included cases, self-assessment questions, and other activities requiring learners to apply information they had learned.

Cognitive interactivity
Cognitive interactivity rated the level of cognitive engagement required for course participation. Multiple practice exercises typically justified moderate or high interactivity, although exercises for which questions and answers were provided together (ie, on the same page) were rated low. Essays and group collaborative projects also supported higher levels of cognitive interactivity.

Discussion
Face-to-face discussion required dedicated time for instructor-student or peer-peer interaction, above and beyond the questions that might arise in a typical lecture. Online discussion required provision for such interactions using synchronous or asynchronous online communication such as discussion board, e-mail, chat, or Internet conferencing.

Tutorial
Tutorials were the online equivalent of a lecture and typically involved learners studying and completing assignments alone. These often comprised stand-alone Internet-based applications with varying degrees of interactivity and multimedia.

Synchronous or asynchronous communication
Synchronous communication involved simultaneous interaction between 2 or more course participants over the Internet, using methods such as online chat, instant messaging, or 2-way videoconferencing.

Internet conferencing
Internet conferencing involved the simultaneous transmission of both audio and video information. Video information could comprise an image of the instructor, other video media, or shared projection of the computer screen (ie, whiteboard).

Repetition (single instance vs ongoing access)
Repetition evaluated the availability of interventions over time; coded as single instance (learning materials available only once during the course) and ongoing access (learning materials accessible throughout the duration of the course).

Duration
The time over which learners participated in the intervention.

Outcomes

Satisfaction (reaction)
Learners' reported satisfaction with the course.

Knowledge
Subjective (eg, learner self-report) or objective (eg, multiple-choice question knowledge test) assessments of factual or conceptual understanding.

Skills
Subjective (eg, learner self-report) or objective (eg, faculty ratings, or objective tests of clinical skills such as interpretation of electrocardiograms or radiographs) assessments of learners' ability to demonstrate a procedure or technique.

Behaviors and patient effects
Subjective (eg, learner self-report) or objective (eg, chart audit) assessments of behaviors in practice (such as test ordering) or effects on patients (such as medical errors).

We abstracted information on the number and training level of learners, learning setting (classroom vs practice setting; ICC, 0.81), study design (pretest-posttest vs posttest-only, number of groups, and method of group assignment; ICC range, 0.88-0.95), topic, instructional modalities used, length of course (ICC, 0.85), online tutorial (ICC, 0.68) or videoconference (ICC, 0.96) format, level of cognitive interactivity (ICC, 0.70), quantity of practice exercises (ICC, 0.70), repetition (ICC, 0.65), presence of online discussion (ICC, 0.85) and face-to-face discussion (ICC, 0.58), synchronous learning (ICC, 0.95), and each outcome (subjective or objective [ICC range, 0.63-1.0] and descriptive statistics). When outcomes data were missing, we requested this information from authors by e-mail and paper letter.

Recognizing that many nonrandomized and observational studies would be included, we abstracted information on methodological quality using an adaptation of the Newcastle-Ottawa scale for grading the quality of cohort studies.32 We rated each study in terms of representativeness of the intervention group (ICC, 0.63), selection of the control group (ICC, 0.75), comparability of cohorts (statistical adjustment for baseline characteristics in nonrandomized studies [ICC, 0.49], or randomization [ICC, 0.93] and allocation concealment [ICC, 0.48] for randomized studies), blinding of outcome assessment (ICC, 0.74), and completeness of follow-up (ICC, 0.37 to 0.79 depending on outcome).

Data Synthesis
We analyzed studies separately for outcomes of satisfaction, knowledge, skills, and behaviors or patient effects. For each outcome class we converted means and standard deviations to standardized mean differences (Hedges g effect sizes).33-35 When insufficient data were available, we used reported tests of significance (eg, P values) to estimate the effect size. For crossover studies we used means or exact statistical test results adjusted for repeated measures or, if these were not reported, we used means pooled across each intervention.36,37 For 2-group pretest-posttest studies we used posttest means or exact statistical test results adjusted for pretest or, if these were not reported, we used differences in change scores standardized using pretest variance. If neither P values nor any measure of variance was reported, we used the average standard deviation from all other included studies.

To quantify inconsistency (heterogeneity) across studies we used the I² statistic,38 which estimates the percentage of variability across studies not due to chance. I² values greater than 50% indicate large inconsistency. Because we found large inconsistency (I² ≥ 79% in all analyses), we used random-effects models to pool weighted effect sizes across studies using StatsDirect 2.6.6 (StatsDirect Ltd, Altrincham, England, http://www.statsdirect.com).

We performed subgroup analyses to explore heterogeneity and to investigate the questions noted above regarding differences in participants, interventions, design, and quality. We used a 2-sided α level of .05. We grouped studies with active comparison interventions according to relative between-intervention differences in instructional methods; namely, did the comparison intervention have more, less, or the same amount of interactivity, practice exercises, discussion (face-to-face and Internet-based discussion combined), and repetition.

We conducted sensitivity analyses to explore the robustness of findings to synthesis assumptions, with analyses excluding low-quality studies, studies with effect size estimated from inexact tests of significance or imputed standard deviations, 1 study39 that contributed up to 14 distinct Internet-based interventions, studies of blended (Internet and non-Internet) interventions, and studies with major design flaws (described below).
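The synthesis steps just described (Hedges g, random-effects pooling, and the I² statistic) can be sketched as follows. This is an illustration under stated assumptions: the review itself used StatsDirect 2.6.6; the study data below are invented; and the DerSimonian-Laird estimator shown is one common random-effects method, not necessarily the exact one StatsDirect implements.

```python
# Sketch: means/SDs -> Hedges g, then DerSimonian-Laird random-effects
# pooling with I^2 heterogeneity. All numbers are hypothetical.
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Bias-corrected standardized mean difference (Hedges g)."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    j = 1 - 3 / (4 * (n1 + n2) - 9)   # small-sample correction factor
    return j * d

def variance_g(g, n1, n2):
    """Approximate sampling variance of Hedges g."""
    return (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))

def random_effects_pool(effects, variances):
    """DerSimonian-Laird pooled effect, plus I^2 (percent)."""
    w = [1 / v for v in variances]
    fixed = sum(wi * g for wi, g in zip(w, effects)) / sum(w)
    q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    tau2 = max(0.0, (q - df) /
               (sum(w) - sum(wi**2 for wi in w) / sum(w)))
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * g for wi, g in zip(w_star, effects)) / sum(w_star)
    return pooled, i2

# Hypothetical studies: (mean, SD, n) for intervention vs control
studies = [((75, 10, 40), (65, 12, 40)),
           ((82, 8, 25), (80, 9, 25)),
           ((60, 15, 60), (45, 14, 60))]
gs, vs = [], []
for (m1, s1, n1), (m2, s2, n2) in studies:
    g = hedges_g(m1, s1, n1, m2, s2, n2)
    gs.append(g)
    vs.append(variance_g(g, n1, n2))
pooled, i2 = random_effects_pool(gs, vs)
print(f"pooled g = {pooled:.2f}, I2 = {i2:.0f}%")
```

When I² is large, as in this review, the between-study variance tau² inflates each study's variance, so the random-effects weights are more nearly equal and the pooled estimate is more conservative than a fixed-effect average.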
RESULTS

Trial Flow
The search strategy identified 2045 citations, and an additional 148 potentially relevant articles were identified from author files and review of reference lists. From these we identified 288 potentially eligible articles (FIGURE 1). Following a single qualitative study reported in 1994, the number of comparative or qualitative studies of Internet-based learning increased from 2 articles published in 1996, to 16 publications in 2001, to 56 publications in 2006. We contacted authors of 113 articles for additional outcomes information and received information from 45. Thirteen otherwise eligible articles contained insufficient data to calculate an effect size (ie, sample size or both means and statistical tests absent) and were excluded from the meta-analyses. Ultimately we analyzed 201 articles, 5 of which contributed to both analyses, representing 214 interventions. TABLE 1 summarizes key study features and eTable 2 (http://www.jama.com) provides detailed information.

Study Characteristics
Internet-based instruction addressed a wide range of medical topics. In addition to numerous diagnostic and therapeutic content areas, courses addressed topics such as ethics, histology, anatomy, evidence-based medicine, conduct of research, biostatistics, communication skills, interpretation of electrocardiograms and pulmonary function tests, and systems-based practice. Most interventions involved tutorials for self-study or virtual patients, while over a quarter required online discussion with peers, instructors, or both. These modalities were often mixed in the same course. Twenty-nine studies (14.4%) blended Internet-based and face-to-face instruction. Non-Internet comparison interventions most often involved face-to-face courses or paper modules but also included satellite-mediated videoconferences, standardized patients, and slide-tape self-study modules.

The vast majority of knowledge outcomes consisted of multiple-choice tests, a much smaller number comprised other objectively scored methods, and 18 of 177 studies assessing knowledge (10.2%) used self-report measures of knowledge, confidence, or attitudes. Skills outcomes included communication with patients, critical appraisal, medication dosing, cardiopulmonary resuscitation, and lumbar puncture. These were most often assessed using objective instructor or standardized patient observations. Skills outcomes were self-reported or the method could not be determined for 7 of 26 studies (26.9%). Behavior and patient effects included osteoporosis screening rates, cognitive behavioral therapy implementation, workplace violence events, incidence of postpartum depression, and various perceived changes in practice. Ten of 23 articles (43.5%; representing nearly two-thirds of the interventions) used self-reported behavior or patient effects outcomes. Most objective assessments used chart review, although 1 study used incognito standardized patients.

Study Quality
TABLE 2 summarizes the methodological quality of included studies, and eTable 3 (http://www.jama.com) contains details on the quality scale and individual study quality. Nine of 61 (14.8%) no-intervention 2-group comparison studies determined groups by completion or noncompletion of elective or required Internet-based instruction. Although such groupings are susceptible to bias, sensitivity analyses showed similar results when these studies were excluded. Eight of 43 studies (18.6%) assessing satisfaction, 42 of 177 (23.7%) assessing knowledge, 4 of 26 (15.4%) assessing skills, and 5 of 23 (21.7%) assessing behaviors and patient effects lost more than 25% of participants from time of enrollment or failed to report follow-up. The mean (SD) quality score (6 points indicating highest quality) was 2.5 (1.3) for no-intervention controlled studies, and 3.5 (1.4) for non-Internet comparison studies.

Figure 1. Trial Flow

2193 Studies identified and screened for retrieval
    2045 Database search
    148 Article reference lists
  1256 Excluded
    408 Not original research
    736 Instruction not offered predominantly via the Internet
    52 No quantitative comparison or qualitative data
    60 No health professions learners
937 Retrieved for more detailed evaluation
  649 Excluded
    45 Not original research
    403 Instruction not offered predominantly via the Internet
    175 No quantitative comparison or qualitative data
    21 No health professions learners
    5 Meeting abstract
288 Identified as potentially appropriate for inclusion
  74 Excluded
    34 No comparison with no intervention or non-Internet intervention
    24 Qualitative outcomes only
    7 No relevant quantitative outcomes
    9 Duplicate publication
214 Considered appropriate for inclusion
  138 Making comparison to no intervention
    8 Withdrawn for insufficient data for coding outcomes
    130 Included in no-intervention control meta-analysis
  81 Making comparison to non-Internet intervention
    5 Withdrawn for insufficient data for coding outcomes
    76 Included in non-Internet comparison meta-analysis

Five studies compared the Internet-based intervention with both no intervention and a non-Internet comparison intervention.

Quantitative Data Synthesis: Comparisons With No Intervention
FIGURES 2, 3, 4, and eTable 4 (http://www.jama.com) summarize the results of the meta-analyses comparing Internet-based instruction with no intervention. Satisfaction outcomes are difficult to define in comparison to no intervention, and no studies reported meaningful outcomes of this type. We used inexact P values to estimate 17 of 174 effect sizes (9.8%), and we imputed standard deviations to estimate 10 effect sizes (5.7%). eTable 4 (http://www.jama.com) contains detailed results of the main analysis and sensitivity analyses for each outcome. Sensitivity analyses did not affect conclusions.

Knowledge. One hundred seventeen studies reported on 126 interventions using knowledge as the outcome. The pooled effect size for these interventions was 1.00 (95% confidence interval [CI], 0.90-1.10; P < .001). Because effect sizes larger than 0.8 are considered large,40 this suggests that Internet-based instruction typically has a substantial benefit on learners' knowledge compared with no intervention. However, we also found large inconsistency across studies (I² = 93.6%), and individual effect sizes ranged from -0.30 to 6.69.
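As a quick arithmetic check, the stage counts in Figure 1 are internally consistent: each stage minus its exclusions yields the next stage, and the two analysis arms overlap in the 5 studies counted in both. The variable names below are just for this sketch.

```python
# Consistency check of the Figure 1 trial-flow counts.
screened = 2045 + 148            # database search + reference lists
retrieved = screened - 1256      # after first round of exclusions
potentially_eligible = retrieved - 649
included = potentially_eligible - 74
assert screened == 2193
assert retrieved == 937
assert potentially_eligible == 288
assert included == 214
assert 138 - 8 == 130 and 81 - 5 == 76   # analyzed per comparison arm
assert 138 + 81 - 5 == 214               # 5 studies appear in both arms
print("Figure 1 counts are consistent")
```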

One of the 2 interventions yielding a negative effect size41 was an adjunct to an existing intensive and well-planned course on lung cancer. The other42 compared Internet-based educational order sets for medical students on a surgery clerkship to students at a different hospital without access to these order sets, which could arguably be construed as an active comparison intervention.

In subgroup analyses exploring this inconsistency, we failed to confirm our hypotheses that high interactivity, ongoing access to course materials, online discussion, or the presence of practice exercises would yield larger effect sizes (P for interaction ≥ .15) (Figure 2). However, we found a significant interaction with study quality, with studies scoring low on the modified Newcastle-Ottawa scale showing a greater effect than high-quality studies (mean score, 1.07; 95% CI, 0.96-1.18 vs mean score, 0.71; 95% CI, 0.51-0.92; P for interaction = .003).

Skills. Sixteen interventions used skills as an outcome. The pooled effect size of 0.85 (95% CI, 0.49-1.20; P < .001) reflects a large effect. There was large inconsistency across trials (I² = 92.7%), and effect sizes ranged from -0.02 to 2.50.

The pooled effect size for interventions with practice exercises was significantly higher than those without (pooled effect size, 1.01; 95% CI, 0.60-1.43 vs pooled effect size, 0.21; 95% CI, 0.04-0.38; P for interaction < .001), but once again interactivity, repetition, and discussion did not affect outcomes (P for interaction ≥ .30) (Figure 3).

Behaviors and Effects on Patient Care. Nineteen studies reported 32 interventions evaluating learner behaviors and effects on patient care. These studies demonstrated a large pooled effect size of 0.82 (95% CI, 0.63-1.02; P < .001) and large inconsistency (I² = 79.1%). Effect sizes ranged from -0.06 to 7.26.

In contrast to skills outcomes, practice exercises were negatively associated with behavior outcomes (0.44; 95% CI, 0.33-0.55 if present; 2.09; 95% CI, 1.38-2.79 if absent; P for interaction < .001) (Figure 4). We also found statistically significant differences favoring tutorials, longer-duration courses, and online peer discussion.

Quantitative Data Synthesis: Comparisons With Non-Internet Interventions
FIGURES 5, 6, 7, and 8 and eTable 4 (http://www.jama.com) summarize the results of the meta-analyses comparing Internet-based instruction with

Table 1. Description of Included Studies a

                                        No Intervention Comparison      Non-Internet Comparison
                                        No. (%) of    No. of            No. (%) of    No. of
Study Characteristic                    Studies       Participants b    Studies       Participants b
All studies                             130           19 234            76            7218
Study design
  Posttest-only 2-group                 33 (25.4)     5565              47 (61.8)     4516
  Pretest-posttest 2-group              28 (21.5)     4107              29 (38.2)     2702
  Pretest-posttest 1-group              69 (53.1)     9562              0 (0)         0
Setting
  Classroom                             38 (29.2)     5702              46 (60.5)     4166
  Practice                              90 (69.2)     13 414            29 (38.2)     3014
  Undefined                             2 (1.6)       118               1 (1.3)       38
Participants c
  Medical students                      40 (30.8)     5851              20 (26.3)     2491
  Physicians in postgraduate training   31 (23.9)     4376              5 (6.6)       413
  Physicians in practice                27 (20.8)     4824              5 (6.6)       443
  Nursing students                      8 (6.2)       673               15 (19.7)     1312
  Nurses in practice                    20 (15.4)     967               8 (10.5)      612
  Dental students                       2 (1.6)       148               3 (4.0)       126
  Dentists in practice                  1 (0.8)       17                0 (0)         0
  Pharmacy students                     11 (8.5)      646               5 (6.6)       324
  Pharmacists in practice               6 (4.6)       142               1 (1.3)       4
  Other                                 21 (16.2)     2820              21 (27.6)     1736
Interventions d
  Interactivity high                    79 (60.8)     12 541            50 (65.8)     4584
  Practice exercises present            78 (60.0)     11 576            46 (60.5)     4313
  Repetition ongoing access             72 (55.4)     8208              35 (46.0)     3364
  Duration >1 wk                        56 (43.1)     10 057            37 (52.9)     2909
  Tutorial                              112 (86.2)    15 957            63 (82.9)     6035
  Discussion                            28 (21.5)     3906              33 (43.4)     3314
  Synchronous                           5 (3.9)       84                14 (18.4)     1169
  Comparison vs face-to-face            NA            NA                57 (75)       5723
  Comparison vs paper                   NA            NA                14 (18.4)     1303
Outcomes c
  Satisfaction                          0             0                 43 (56.6)     4370
  Knowledge                             117 (90.0)    18 053            63 (82.9)     5781
  Skills                                16 (12.3)     1708              12 (15.8)     1029
  Behaviors and patient effects         19 (14.6)     2159              6 (7.9)       822
Quality e
  Newcastle-Ottawa ≥4 points            22 (16.9)     3343              38 (50.0)     3362

Abbreviation: NA, not applicable.
a A total of 201 studies representing 214 interventions were included in the meta-analysis. This table presents data with studies as the unit of analysis. Five studies compared the Internet-based intervention with both no intervention and a non-Internet comparison intervention and are counted separately. See eTable 2 online (http://www.jama.com) for details on individual studies.
b Numbers reflect the number of students enrolled. The number of participants for subgroups may total more than the number for all studies when characteristics are not mutually exclusive.
c Percentages total more than 100% because several studies included more than 1 learner group or reported multiple outcomes.
d Interventions refer to the Internet-based intervention except when noted otherwise. Numbers total more than 100% because these characteristics are not mutually exclusive.
e See text and eTable 3 (http://www.jama.com) for details on the modified Newcastle-Ottawa scale.
1186 JAMA, September 10, 2008Vol 300, No. 10 (Reprinted) 2008 American Medical Association. All rights reserved.

Downloaded from www.jama.com at University of Florida on October 21, 2009


INTERNET-BASED LEARNING

Table 2. Quality of Included Studies a


Study Characteristics | No. of Studies | Representativeness | Selection | Comparability b: 1 Point | 2 Points | Blinded Outcome c | Follow-up c
No-Intervention Controlled Studies
All studies 130 45 45 20 11 98 97
Study design
Posttest-only 2-group 33 19 21 9 3 24 29
Pretest-posttest 2-group 28 11 24 11 8 22 18
Pretest-posttest 1-group 69 15 0 0 0 52 50
Setting
Classroom 38 19 12 4 2 25 30
Practice 90 25 31 15 9 71 65
Undefined 2 1 2 1 0 2 2
Participants d
Medical students 40 22 14 6 2 32 33
Physicians 52 12 17 11 7 40 39
Nurses 27 5 8 2 2 19 15
Other 36 9 11 5 4 27 28
Interventions
Interactivity high 79 32 35 18 5 56 58
Practice exercises present 78 32 34 14 8 59 61
Repetition ongoing access 72 29 32 15 8 49 50
Duration >1 wk 56 18 16 10 3 47 46
Tutorial 112 38 37 19 9 85 85
Discussion 28 6 12 4 2 17 22
Synchronous 5 1 1 1 0 2 5
Outcomes
Knowledge 117 39 37 17 9 89 86
Skills 16 6 10 6 2 7 13
Behaviors and patient effects 19 6 10 6 2 7 15
Non-Internet Comparison Studies
All studies 76 45 57 27 10 51 61
Study design
Posttest-only 2-group 47 26 32 12 1 24 39
Pretest-posttest 2-group 29 19 25 15 9 27 22
Setting
Classroom 46 29 35 19 1 27 37
Practice 29 16 22 8 9 24 24
Undefined 1 0 0 0 0 0 0
Participants d
Medical students 20 12 15 8 2 16 17
Physicians 10 8 8 3 5 10 8
Nurses 22 11 14 6 0 13 18
Other 30 16 23 11 3 16 24
Interventions
Interactivity high 50 25 37 21 5 37 41
Practice exercises present 46 23 33 20 6 35 38
Repetition ongoing access 35 21 25 10 4 22 27
Duration >1 wk 37 23 32 18 7 30 30
Tutorial 63 37 47 25 10 45 53
Discussion 33 19 22 7 5 18 26
Synchronous 14 6 9 2 3 9 11
Comparison vs face-to-face 57 36 40 16 6 36 45
Outcomes
Satisfaction 43 26 33 12 8 0 35
Knowledge 63 39 46 24 9 48 51
Skills 12 6 9 6 1 9 11
Behaviors and patient effects 6 2 4 2 2 3 5
a Data presented as number of studies. Quality was assessed using a modification of the Newcastle-Ottawa scale32 that rated each study in terms of representativeness of the
intervention group (1 point), selection of the control group (1 point), comparability of cohorts (2 points), blinding of assessment (1 point), and completeness of follow-up (1 point).
See text and eTable 3 (http://www.jama.com) for details regarding this scale.
b The columns for comparability of cohorts are additive, eg, 31 no-intervention controlled studies had at least 1 point.
c Except for the Outcomes categories, blinding and completeness of follow-up were counted as present if this was done for any reported outcome. Responses for Outcomes
categories are specific to that outcome.
d Number of studies do not appear to match those in Table 1 because several studies included learners at more than 1 level (eg, both medical students and physicians, or nurses
in training and in practice).


non-Internet instruction. We used inexact P values to estimate 1 of 124 effect sizes (0.8%), and we imputed standard deviations to estimate 5 effect sizes (4.0%). Sensitivity analyses did not alter conclusions except as noted.

Satisfaction. Forty-three studies reported satisfaction outcomes comparing Internet-based instruction to non-Internet formats. The pooled effect size (positive numbers favoring Internet) was 0.10 (95% CI, -0.12 to 0.32), with I² = 92.2%. This effect is considered small40 and was not significantly different from 0 (P = .37). Individual effect sizes ranged from -1.90 to 1.77. We had no a priori hypotheses regarding subgroup comparisons for satisfaction outcomes, but we found statistically significant treatment-subgroup interactions favoring short courses, high-quality studies, and single-instance rather than ongoing-access Internet-based interventions (Figure 5).

Knowledge. Sixty-three non-Internet-controlled studies reported knowledge outcomes. Effect sizes ranged from -0.98 to 1.74. The pooled effect size of 0.12 (95% CI, 0.003 to 0.24) was statistically significantly different from 0 (P = .045) but small and inconsistent (I² = 88.1%). A sensitivity analysis

Figure 2. Random-Effects Meta-analysis of Internet-Based Learning vs No Intervention: Knowledge Outcomes

All interventions (n = 126): 1.00 (0.90-1.10)
Design (P < .001): posttest-only (n = 25), 0.66 (0.53-0.78); pretest-posttest 2-group (n = 25), 0.88 (0.67-1.08); pretest-posttest 1-group (n = 76), 1.18 (1.04-1.32)
Design (P = .53): randomized (n = 17), 0.83 (0.62-1.04); nonrandomized 2-group (n = 33), 0.75 (0.59-0.90)
Setting (P = .29): classroom (n = 35), 0.91 (0.74-1.08); practice (n = 90), 1.02 (0.91-1.13)
Participants: medical students (n = 36), 0.97 (0.78-1.16); physicians (n = 58), 1.02 (0.88-1.15); nurses (n = 21), 0.88 (0.69-1.06); other (n = 32), 1.20 (0.98-1.43)
Interactivity (P = .65): high (n = 79), 0.98 (0.85-1.12); low (n = 44), 1.03 (0.89-1.17)
Practice exercises (P = .34): present (n = 69), 0.95 (0.82-1.09); absent (n = 54), 1.05 (0.91-1.18)
Repetition (P = .19): ongoing access (n = 77), 0.95 (0.82-1.07); single instance (n = 49), 1.08 (0.93-1.23)
Duration (P = .65): ≤1 wk (n = 48), 1.00 (0.85-1.16); >1 wk (n = 61), 1.05 (0.90-1.21)
Tutorials (P < .001): yes (n = 108), 1.06 (0.95-1.17); no (n = 18), 0.65 (0.51-0.80)
Online discussion (P = .15): present (n = 34), 1.12 (0.94-1.30); absent (n = 91), 0.96 (0.85-1.08)
Synchronous (P = .38): yes (n = 5), 1.32 (0.61-2.02); no (n = 121), 0.99 (0.89-1.09)
Outcome assessment (P = .45): objective (n = 110), 1.01 (0.90-1.11); subjective (n = 16), 0.91 (0.69-1.14)
Quality, Newcastle-Ottawa scale rating (P = .003): high (≥4) (n = 20), 0.71 (0.51-0.92); low (≤3) (n = 106), 1.07 (0.96-1.18)

Values are pooled effect sizes (Hedges g) with 95% confidence intervals. P values reflect paired or 3-way comparisons among subgroups. Participant groups are not mutually exclusive; thus, no statistical comparison is made. There are 126 interventions because the report by Curran et al39 contributed 10 separate interventions to this analysis. I² for pooling all interventions is 93.6%.
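The pooled Hedges g values, 95% CIs, and I² statistics reported here follow standard random-effects conventions. As a reference, assuming the usual DerSimonian-Laird formulation (the exact estimator is not spelled out in this section), the key quantities are:

```latex
% Hedges g for study i: small-sample-corrected standardized mean difference
g_i = \left(1 - \frac{3}{4\,df_i - 1}\right)\frac{\bar{x}_{1i} - \bar{x}_{2i}}{s_{\mathrm{pooled},i}}

% Cochran's Q and the inconsistency statistic, with w_i = 1/v_i over k studies
Q = \sum_{i=1}^{k} w_i\,(g_i - \hat{g}_{\mathrm{FE}})^2, \qquad
I^2 = \max\!\left(0,\; \frac{Q - (k-1)}{Q}\right) \times 100\%

% Between-study variance and the random-effects pooled estimate
\hat{\tau}^2 = \max\!\left(0,\; \frac{Q - (k-1)}{\sum_i w_i - \sum_i w_i^2 / \sum_i w_i}\right), \qquad
\hat{g}_{\mathrm{RE}} = \frac{\sum_i g_i/(v_i + \hat{\tau}^2)}{\sum_i 1/(v_i + \hat{\tau}^2)}

% Normal-theory 95% confidence interval for the pooled estimate
\hat{g}_{\mathrm{RE}} \pm 1.96\sqrt{1\Big/\textstyle\sum_i 1/(v_i + \hat{\tau}^2)}
```

An I² near 90%, as in several of the analyses above, means that nearly all of the observed dispersion in study effects reflects between-study differences rather than sampling error.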


excluding blended interventions yielded a pooled effect size of 0.065 (95% CI, -0.062 to 0.19; P = .31).

In accord with our hypothesis, effect sizes were significantly higher for Internet-based courses using discussion vs no discussion (P for interaction = .002) (Figure 6). A statistically significant interaction favoring longer courses was also found (P for interaction = .03). However, our hypotheses regarding treatment-subgroup interactions across levels of interactivity, practice exercises, and repetition did not find support.

Skills. The twelve studies reporting skills outcomes demonstrated a small pooled effect size of 0.09 (95% CI, -0.26 to 0.44; P = .61). As with other outcomes, heterogeneity was large (I² = 89.3%). Effect sizes ranged from -1.47 to 0.93. We found statistically significant treatment-subgroup interactions (P for interaction ≤ .04) favoring higher levels of interactivity, practice exercises, and peer discussion (Figure 7). However, these analyses were limited by very small samples (in some cases only 1 study in a group). Contrary to our expectation, single-instance interventions yielded higher effect sizes than those with ongoing access (P for interaction = .02).

Behaviors and Effects on Patient Care. Six studies reported outcomes of behaviors and effects on patient care. The pooled effect size of 0.51 (95% CI, -0.24 to 1.25) was moderate in size, but

Figure 3. Random-Effects Meta-analysis of Internet-Based Learning vs No Intervention: Skills Outcomes

All interventions (n = 16): 0.85 (0.49 to 1.20)
Design (P = .45): posttest-only (n = 8), 0.84 (0.26 to 1.42); pretest-posttest 2-group (n = 5), 1.11 (0.46 to 1.76); pretest-posttest 1-group (n = 3), 0.40 (0.07 to 0.73)
Design (P = .65): randomized (n = 6), 0.84 (0.35 to 1.34); nonrandomized 2-group (n = 7), 1.04 (0.34 to 1.75)
Setting (P = .004): classroom (n = 4), 0.28 (-0.01 to 0.57); practice (n = 10), 1.11 (0.63 to 1.59)
Participants: medical students (n = 8), 0.94 (0.32 to 1.56); physicians (n = 3), 1.19 (0.19 to 2.19); nurses (n = 3), 0.72 (-0.03 to 1.47); other (n = 6), 0.87 (0.43 to 1.30)
Interactivity (P = .30): high (n = 13), 0.72 (0.37 to 1.08); low (n = 3), 1.34 (0.23 to 2.46)
Practice exercises (P < .001): present (n = 13), 1.01 (0.60 to 1.43); absent (n = 3), 0.21 (0.04 to 0.38)
Repetition (P = .47): ongoing access (n = 9), 0.95 (0.42 to 1.49); single instance (n = 7), 0.70 (0.24 to 1.15)
Duration (P = .88): ≤1 wk (n = 8), 0.92 (0.39 to 1.45); >1 wk (n = 5), 0.85 (0.08 to 1.61)
Online discussion (P = .94): present (n = 4), 0.88 (-0.22 to 1.97); absent (n = 12), 0.84 (0.47 to 1.20)
Synchronous (P = .98): yes (n = 1), 0.85 (0.36 to 1.35); no (n = 15), 0.85 (0.48 to 1.22)
Outcome assessment (P = .23): objective (n = 10), 0.60 (0.28 to 0.93); subjective (n = 4), 1.15 (0.32 to 1.97)
Quality, Newcastle-Ottawa scale rating (P = .29): high (≥4) (n = 6), 0.61 (0.14 to 1.08); low (≤3) (n = 10), 0.99 (0.48 to 1.50)

For a definition of figure elements, see the legend to Figure 2. All interventions were tutorials; hence, no contrast is reported for this characteristic. I² for pooling all interventions is 92.7%.
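The pooled estimates and I² values in these analyses can be reproduced mechanically once per-study effect sizes and their variances are in hand. A minimal sketch of DerSimonian-Laird random-effects pooling (an assumed, conventional implementation; the function name and the normal-theory 95% CI are illustrative and not taken from the article):

```python
import math

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling.

    effects: per-study standardized mean differences (e.g., Hedges g)
    variances: their within-study variances
    Returns (pooled_effect, ci_low, ci_high, i_squared_percent).
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * g for wi, g in zip(w, effects)) / sum(w)
    q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))  # Cochran's Q
    df = k - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * g for wi, g in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0  # inconsistency, %
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se, i2
```

With two identical studies the routine returns the common effect and I² = 0; with two widely separated studies it returns their weighted mean with a large I², mirroring the high inconsistency reported above.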


not statistically significant (P = .18). Inconsistency was large (I² = 94.6%) and individual effect sizes ranged from -0.84 to 1.66.

We again found a statistically significant treatment-subgroup interaction favoring discussion (P for interaction = .02) (Figure 8), but as with skills outcomes, the results are tempered by very small samples. Once again, single-instance interventions yielded higher effect sizes than those with ongoing access (P for interaction = .006).

COMMENT

We found that Internet-based learning compared with no intervention has a consistent positive effect. The pooled estimate of effect size was large across all educational outcomes.40 Furthermore, we found a moderate or large effect for nearly all subgroup analyses exploring variations in learning setting, instructional design, study design, and study quality. However, studies yielded inconsistent (heterogeneous) results, and subgroup comparisons only partially explained these differences.

The effect of Internet-based instruction in comparison to non-Internet formats was likewise inconsistent across studies. In contrast, the pooled effect sizes were generally small (≤0.12 for all but behavior or patient effects) and nonsignificant (CIs encompassing 0 [no effect] for all outcomes except knowledge).

Heterogeneity may arise from variation in learners, instructional methods, outcome measures, and other

Figure 4. Random-Effects Meta-analysis of Internet-Based Learning vs No Intervention: Behaviors in Practice and Effects on Patients

All interventions (n = 32): 0.82 (0.63 to 1.02)
Design (P = .04): posttest-only (n = 4), 0.51 (0.14 to 0.88); pretest-posttest 2-group (n = 9), 0.65 (0.38 to 0.92); pretest-posttest 1-group (n = 19), 1.45 (0.98 to 1.91)
Design (P = .04): randomized (n = 4), 0.41 (0.33 to 0.49); nonrandomized 2-group (n = 9), 0.83 (0.45 to 1.22)
Participants: medical students (n = 3), 0.41 (0.25 to 0.56); physicians (n = 24), 1.16 (0.87 to 1.45); nurses (n = 5), 0.56 (0.30 to 0.82); other (n = 6), 0.41 (0.30 to 0.52)
Interactivity (P = .78): high (n = 24), 0.92 (0.66 to 1.18); low (n = 7), 0.84 (0.38 to 1.30)
Practice exercises (P < .001): present (n = 13), 0.44 (0.33 to 0.55); absent (n = 18), 2.09 (1.38 to 2.79)
Repetition (P = .24): ongoing access (n = 26), 0.90 (0.66 to 1.14); single instance (n = 6), 0.66 (0.35 to 0.98)
Duration (P = .004): ≤1 wk (n = 6), 0.47 (0.24 to 0.70); >1 wk (n = 25), 0.97 (0.72 to 1.22)
Tutorials (P < .001): yes (n = 29), 1.00 (0.73 to 1.26); no (n = 3), 0.47 (0.32 to 0.62)
Online discussion (P = .04): present (n = 20), 1.14 (0.82 to 1.47); absent (n = 11), 0.66 (0.34 to 0.98)
Synchronous (P < .001): yes (n = 1), 0.06 (-0.36 to 0.48); no (n = 31), 0.87 (0.66 to 1.07)
Outcome assessment (P = .79): objective (n = 9), 0.86 (0.41 to 1.31); subjective (n = 23), 0.79 (0.57 to 1.02)
Quality, Newcastle-Ottawa scale rating (P = .66): high (≥4) (n = 6), 0.78 (0.37 to 1.18); low (≤3) (n = 26), 0.88 (0.64 to 1.13)

For a definition of figure elements, see the legend to Figure 2. All interventions occurred in a practice setting; hence, no contrast is reported for this characteristic. There are 32 interventions because the report by Curran et al39 contributed 14 separate interventions to this analysis. I² for pooling all interventions is 79.1%.


aspects of the educational context. For example, only 2 no-intervention controlled studies41,42 had negative effect sizes, and in both instances the lack of benefit could be ascribed to an educationally rich baseline or comparison.

Our hypotheses regarding changes in the magnitude of benefit for variations in instructional design were generally not supported by subgroup analyses, and in some cases significant differences were found in the direction opposite to our hypotheses. These findings were not consistent across outcomes or study types. Unexplained inconsistencies would allow us to draw only weak inferences if not for the preponderance of positive effects on all

Figure 5. Random-Effects Meta-analysis of Internet-Based Learning vs Alternate Instructional Media: Satisfaction Outcomes

All interventions (n = 43): 0.10 (-0.12 to 0.32)
Design (P = .049): crossover (n = 8), 0.53 (0.05 to 1.00); noncrossover (n = 35), 0.01 (-0.23 to 0.23)
Randomization (P = .03): yes (n = 16), 0.41 (0.04 to 0.78); no (n = 27), -0.08 (-0.35 to 0.19)
Setting (P = .74): classroom (n = 28), 0.12 (-0.17 to 0.42); practice (n = 15), 0.05 (-0.26 to 0.36)
Participants: medical students (n = 14), 0.22 (-0.09 to 0.53); physicians (n = 5), 0.16 (-0.51 to 0.83); nurses (n = 11), -0.35 (-0.71 to 0.01); other (n = 15), 0.15 (-0.32 to 0.62)
Interactivity (P = .06): comparison > Internet (n = 2), 0.21 (-0.38 to 0.80); equal (n = 24), -0.12 (-0.41 to 0.17); comparison < Internet (n = 11), 0.52 (0.10 to 0.95)
Practice exercises (P = .39): equal (n = 30), 0.10 (-0.15 to 0.35); comparison < Internet (n = 7), 0.39 (-0.22 to 0.99)
Discussion (P = .38): comparison > Internet (n = 4), 0.60 (-0.08 to 1.28); equal (n = 29), 0.06 (-0.23 to 0.34); comparison < Internet (n = 6), 0.24 (-0.14 to 0.62)
Repetition (P < .001): equal (n = 28), 0.40 (0.17 to 0.63); comparison < Internet (n = 14), -0.48 (-0.92 to -0.04)
Duration (P = .03): ≤1 wk (n = 22), 0.36 (0.08 to 0.63); >1 wk (n = 18), -0.15 (-0.52 to 0.22)
Internet: tutorial (P = .77): yes (n = 36), 0.09 (-0.16 to 0.33); no (n = 7), 0.17 (-0.32 to 0.65)
Internet: online discussion (P = .20): present (n = 18), -0.07 (-0.42 to 0.27); absent (n = 24), 0.22 (-0.06 to 0.50)
Internet: synchronous (P = .43): yes (n = 9), -0.08 (-0.59 to 0.43); no (n = 34), 0.15 (-0.09 to 0.39)
Comparison intervention (P = .02): face to face (n = 28), -0.12 (-0.37 to 0.14); paper (n = 11), 0.60 (0.27 to 0.94); other (n = 4), 0.23 (-0.56 to 1.03)
Quality, Newcastle-Ottawa scale rating (P = .02): high (≥4) (n = 13), 0.48 (0.07 to 0.89); low (≤3) (n = 30), -0.07 (-0.30 to 0.17)

Studies are classified according to relative between-intervention differences in key instructional methods; namely, did the comparison intervention have more (comparison > Internet), less (comparison < Internet), or the same (equal) amount of interactivity, practice exercises, discussion (face-to-face and Internet-based discussion combined), and repetition. Values are pooled effect sizes (Hedges g) with 95% confidence intervals, positive numbers favoring the Internet-based intervention. P values reflect paired or 3-way comparisons among subgroups. Participant groups are not mutually exclusive; thus, no statistical comparison is made. All outcomes were subjectively determined; hence, no contrast is reported for this characteristic. Crossover studies assessed participant preference after exposure to Internet-based and non-Internet-based interventions. I² for pooling all interventions is 92.2%.


outcomes in the no-intervention comparison studies. For comparisons with non-Internet formats these inconsistencies make inferences tenuous. Additional research is needed to explore the inconsistencies identified in this review.

Limitations and Strengths

Our study has several limitations. First, many reports failed to describe key elements of the context, instructional design, or outcomes. Although the review process was conducted in duplicate, coding was subjective and based on published descriptions rather than direct evaluation of instructional events. Poor reporting might have contributed to modest interrater agreement for some variables. Although we obtained

Figure 6. Random-Effects Meta-analysis of Internet-Based Learning vs Alternate Instructional Media: Knowledge Outcomes

All interventions (n = 63): 0.12 (0.003 to 0.24)
Design (P = .14): posttest-only (n = 34), 0.21 (0.04 to 0.39); pretest-posttest (n = 29), 0.04 (-0.12 to 0.20)
Randomization (P = .25): yes (n = 24), 0.04 (-0.12 to 0.21); no (n = 39), 0.18 (0.01 to 0.35)
Setting (P = .71): classroom (n = 38), 0.12 (-0.04 to 0.29); practice (n = 24), 0.08 (-0.10 to 0.26)
Participants: medical students (n = 16), 0.21 (-0.08 to 0.49); physicians (n = 9), 0.12 (-0.10 to 0.33); nurses (n = 17), 0.09 (-0.13 to 0.32); other (n = 26), 0.01 (-0.17 to 0.18)
Interactivity (P = .29): comparison > Internet (n = 4), 0.01 (-0.47 to 0.49); equal (n = 30), 0.03 (-0.11 to 0.18); comparison < Internet (n = 23), 0.23 (0.01 to 0.44)
Practice exercises (P = .32): equal (n = 43), 0.09 (-0.03 to 0.21); comparison < Internet (n = 14), 0.29 (-0.09 to 0.67)
Discussion (P = .03): comparison > Internet (n = 9), -0.08 (-0.33 to 0.17); equal (n = 39), 0.07 (-0.08 to 0.22); comparison < Internet (n = 10), 0.50 (0.12 to 0.88)
Repetition (P = .57): equal (n = 40), 0.10 (-0.05 to 0.24); comparison < Internet (n = 23), 0.17 (-0.04 to 0.38)
Duration (P = .03): ≤1 wk (n = 31), -0.04 (-0.20 to 0.12); >1 wk (n = 28), 0.23 (0.05 to 0.40)
Internet: tutorial (P = .29): yes (n = 52), 0.10 (-0.04 to 0.23); no (n = 11), 0.23 (0.02 to 0.43)
Internet: online discussion (P = .002): present (n = 27), 0.34 (0.15 to 0.52); absent (n = 34), -0.04 (-0.19 to 0.11)
Internet: synchronous (P = .17): yes (n = 12), 0.26 (0.05 to 0.48); no (n = 51), 0.09 (-0.05 to 0.23)
Comparison intervention (P = .07): face to face (n = 48), 0.21 (0.07 to 0.34); paper (n = 12), -0.14 (-0.45 to 0.16); other (n = 3), -0.07 (-0.23 to 0.10)
Outcome assessment (P = .23): objective (n = 61), 0.10 (-0.02 to 0.22); subjective (n = 2), 0.83 (-0.35 to 2.01)
Quality, Newcastle-Ottawa scale rating (P = .09): high (≥4) (n = 36), 0.04 (-0.11 to 0.19); low (≤3) (n = 27), 0.25 (0.05 to 0.46)

For a definition of figure elements and study parameters, see the legend to Figure 5. I² for pooling all interventions is 88.1%.
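The "P for interaction" entries compare pooled estimates between subgroups. For the common two-subgroup case, a between-group Cochran Q test on fixed-effect subgroup estimates can be sketched as follows (a hypothetical helper; the article does not state which interaction test it used):

```python
import math

def interaction_p_two_groups(effects_a, variances_a, effects_b, variances_b):
    """Between-subgroup heterogeneity test (Cochran's Q_between, df = 1)
    on fixed-effect pooled estimates of two subgroups."""
    def pooled(effects, variances):
        # Inverse-variance fixed-effect pooling; returns estimate and total weight.
        w = [1.0 / v for v in variances]
        return sum(wi * g for wi, g in zip(w, effects)) / sum(w), sum(w)

    g_a, w_a = pooled(effects_a, variances_a)
    g_b, w_b = pooled(effects_b, variances_b)
    overall = (w_a * g_a + w_b * g_b) / (w_a + w_b)
    q_between = w_a * (g_a - overall) ** 2 + w_b * (g_b - overall) ** 2
    # Chi-square survival function with 1 df: P = erfc(sqrt(Q/2))
    return math.erfc(math.sqrt(q_between / 2.0))
```

Subgroups with identical pooled estimates yield Q_between = 0 and P = 1; clearly separated subgroups yield a small P, as in the discussion contrast above.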


additional outcome data from several authors, we still imputed effect sizes for many studies, with concomitant potential for error. Sparse reporting of validity and reliability evidence for assessment scores precluded inclusion of such evidence. Furthermore, methodological quality was generally low. However, subgroup and sensitivity analyses did not reveal consistently larger or smaller effects for different study designs or quality or after excluding imputed effect sizes.

Second, interventions varied widely from study to study. Because nearly all no-intervention comparison studies found a benefit, this heterogeneity suggests that a wide variety of Internet-based interventions can be used effectively in medical education. Alternatively, this finding may indicate publication bias, with negative studies remaining unpublished. We did not use funnel plots to assess for publication bias because these are misleading in the presence of marked heterogeneity.43

Third, we report our results using subgroups as an efficient means of synthesizing the large number of studies identified and simultaneously to explore heterogeneity. However, subgroup results should be interpreted with caution due to the number of

Figure 7. Random-Effects Meta-analysis of Internet-Based Learning vs Alternate Instructional Media: Skills Outcomes

All interventions (n = 12): 0.09 (-0.26 to 0.44)
Design (P = .20): posttest-only (n = 8), -0.08 (-0.46 to 0.29); pretest-posttest (n = 4), 0.43 (-0.26 to 1.12)
Randomization (P = .71): yes (n = 4), -0.02 (-0.73 to 0.69); no (n = 8), 0.14 (-0.30 to 0.57)
Setting (P = .67): classroom (n = 7), 0.16 (-0.21 to 0.54); practice (n = 5), -0.01 (-0.74 to 0.71)
Participants: medical students (n = 2), 0.10 (-0.51 to 0.71); physicians (n = 2), 0.44 (-0.01 to 0.89); nurses (n = 6), 0.03 (-0.59 to 0.64); other (n = 4), 0.02 (-0.47 to 0.51)
Interactivity (P = .04): comparison > Internet (n = 1), -1.47 (-2.20 to -0.73); equal (n = 8), 0.21 (-0.16 to 0.58); comparison < Internet (n = 1), 0.93 (0.50 to 1.36)
Practice exercises (P = .03): comparison > Internet (n = 1), -1.47 (-2.20 to -0.73); equal (n = 9), 0.26 (-0.08 to 0.61); comparison < Internet (n = 1), 0.67 (0.25 to 1.08)
Discussion (P = .005): comparison > Internet (n = 4), -0.47 (-0.92 to -0.02); equal (n = 4), 0.55 (0.26 to 0.85); comparison < Internet (n = 3), 0.59 (0.38 to 0.81)
Repetition (P = .02): equal (n = 6), 0.46 (0.26 to 0.66); comparison < Internet (n = 6), -0.28 (-0.88 to 0.32)
Duration (P = .15): ≤1 wk (n = 7), -0.08 (-0.50 to 0.33); >1 wk (n = 5), 0.37 (-0.09 to 0.83)
Internet: online discussion (P = .38): present (n = 4), 0.29 (-0.21 to 0.80); absent (n = 8), 0.00 (-0.42 to 0.43)
Internet: synchronous (P = .03): yes (n = 1), 0.67 (0.25 to 1.08); no (n = 11), 0.04 (-0.34 to 0.41)
Comparison intervention (P = .18): face to face (n = 11), 0.06 (-0.33 to 0.44); other (n = 1), 0.42 (0.05 to 0.80)
Quality, Newcastle-Ottawa scale rating (P = .69): high (≥4) (n = 8), 0.04 (-0.41 to 0.49); low (≤3) (n = 4), 0.19 (-0.41 to 0.80)

For a definition of figure elements and study parameters, see the legend to Figure 5. All interventions were tutorials, and all outcomes were objectively determined except for 1 study in which the method of assessment could not be determined; hence, no contrasts are reported for these characteristics. I² for pooling all interventions is 89.3%.
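The limitation noted above, that some effect sizes had to be estimated from reported P values or imputed, can be illustrated with a normal approximation: a two-sided P value and the group sizes determine a standardized mean difference up to sign. A hypothetical helper (not the authors' actual procedure, which is not fully specified here):

```python
import math
from statistics import NormalDist

def effect_size_from_p(p_two_sided, n1, n2):
    """Approximate a standardized mean difference from a reported
    two-sided P value and group sizes (normal approximation to t)."""
    z = NormalDist().inv_cdf(1.0 - p_two_sided / 2.0)  # critical z for this P
    return z * math.sqrt(1.0 / n1 + 1.0 / n2)          # back out the effect size
```

Smaller P values or smaller groups imply larger recovered effects, which is why estimates derived this way carry the "concomitant potential for error" the authors acknowledge.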


comparisons made, the absence of a priori hypotheses for many analyses, the limitations associated with between-study (rather than within-study) comparisons, and inconsistent findings across outcomes and study types.44 For example, we found contrary to expectation that interventions with greater repetition (Internet-based course permitting ongoing access vs non-Internet intervention available only once) had lower pooled effect sizes than interventions with equal repetition. These results could be due to chance, confounding, bias, or true effect. Another example is the finding that practice exercises were associated with higher effect sizes for skills outcomes and lower effect sizes for behavior or patient effects; this could be explained by true differential effect of the interventions on these outcomes, variation in responsiveness across outcomes, unrecognized confounders, or chance.

Finally, by focusing our review on Internet-based learning, we of necessity

Figure 8. Random-Effects Meta-analysis of Internet-Based Learning vs Alternate Instructional Media: Behaviors in Practice and Effects on Patients

All interventions (n = 6): 0.51 (-0.24 to 1.25)
Design (P = .36): posttest-only (n = 2), -0.03 (-1.65 to 1.60); pretest-posttest (n = 4), 0.79 (0.12 to 1.45)
Randomization (P = .33): yes (n = 2), 1.03 (-0.21 to 2.26); no (n = 4), 0.24 (-0.72 to 1.21)
Setting (P < .001): classroom (n = 1), -0.84 (-1.22 to -0.46); practice (n = 5), 0.79 (0.25 to 1.34)
Participants: medical students (n = 2), 1.33 (0.70 to 1.97); physicians (n = 2), 0.54 (0.15 to 0.93); nurses or other (n = 2), -0.45 (-1.27 to 0.38)
Interactivity (P = .62): equal (n = 3), 0.70 (-0.26 to 1.66); comparison < Internet (n = 2), 0.96 (0.64 to 1.27)
Practice exercises (P = .50): equal (n = 4), 0.73 (0.00 to 1.46); comparison < Internet (n = 1), 1.01 (0.64 to 1.38)
Discussion (P = .02): comparison > Internet (n = 1), 0.00 (-0.57 to 0.58); equal (n = 4), 0.97 (0.42 to 1.53)
Repetition (P = .006): equal (n = 3), 1.19 (0.69 to 1.69); comparison < Internet (n = 3), -0.15 (-0.97 to 0.67)
Duration (P = .11): ≤1 wk (n = 4), 0.90 (0.26 to 1.54); >1 wk (n = 2), -0.22 (-1.43 to 1.00)
Internet: tutorial (P < .001): yes (n = 5), 0.79 (0.25 to 1.34); no (n = 1), -0.84 (-1.22 to -0.46)
Internet: online discussion (P = .18): present (n = 1), 0.40 (0.05 to 0.75); absent (n = 4), 0.90 (0.26 to 1.54)
Internet: synchronous (P = .80): yes (n = 1), 0.40 (0.05 to 0.75); no (n = 5), 0.53 (-0.42 to 1.49)
Comparison intervention (P = .006): face to face (n = 3), -0.15 (-0.97 to 0.67); paper (n = 3), 1.19 (0.69 to 1.69)
Outcome assessment (P = .25): objective (n = 5), 0.61 (-0.25 to 1.46); subjective (n = 1), 0.00 (-0.57 to 0.58)
Quality, Newcastle-Ottawa scale rating (P = .23): high (≥4) (n = 3), 0.96 (0.13 to 1.79); low (≤3) (n = 3), 0.06 (-1.15 to 1.27)

For a definition of figure elements and study parameters, see the legend to Figure 5. I² for pooling all interventions is 94.6%.


ignored a great body of literature on non-Internet-based computer-assisted instruction.

Our review also has several strengths. The 2 study questions are timely and of major importance to medical educators. We intentionally kept our scope broad in terms of subjects, interventions, and outcomes. Our search for relevant studies encompassed multiple literature databases supplemented by hand searches. We had few exclusion criteria, and included several studies published in languages other than English. All aspects of the review process were conducted in duplicate with acceptable reproducibility. Despite the large volume of data, we kept our analyses focused, conducting relatively few planned subgroup analyses to explain inconsistency and sensitivity analyses to evaluate the robustness of our findings to the assumptions of our meta-analyses.

Comparison With Previous Reviews

The last meta-analyses of computer-assisted instruction in health professions education45,46 were published in or before 1994, and computer-assisted instruction has changed dramatically in the interim. To the 16 no-intervention controlled and 9 non-Internet comparative studies reported in the last comprehensive review of Web-based learning,9 we add 176 additional articles as well as a meta-analytic summary of results. This and other reviews11-14,16,17,47 concur with the present study in concluding that Internet-based learning is educationally beneficial and can achieve results similar to those of traditional instructional methods.

Implications

This review has implications for both education and research. Although conclusions must be tempered by inconsistency among studies and the possibility of publication bias, the synthesized evidence demonstrates that Internet-based instruction is associated with favorable outcomes across a wide variety of learners, learning contexts, clinical topics, and learning outcomes. Internet-based instruction appears to have a large effect compared with no intervention and appears to have an effectiveness similar to traditional methods.

The studies making comparison with no intervention essentially asked whether a Web-based course in a particular topic could be effective. The answer was almost invariably yes. Given this consistency of effect, and assuming no major publication bias, there appears to be limited value in further research comparing Internet-based interventions against no-intervention comparison groups. Although no-intervention controlled studies may be useful in proof-of-concept evaluations of new applications of Internet-based methods (such as a study looking at rater training on the Web48), truly novel innovations requiring such study are likely to be increasingly rare and will infrequently merit publication.

Studies making comparison to alternate instructional media asked whether Internet-based learning is superior to (or inferior to) traditional methods. In contrast to no-intervention controlled studies, the answers to this question varied widely. Some studies favored the Internet, some favored traditional methods, and on average there was little difference between the 2 formats. Although the pooled estimates favored Internet-based instruction, for all but behavior or patient effects the magnitude of benefit was small and could be explained by sources of variation noted above or by novelty effects.27 These findings support arguments that computer-assisted instruction is neither inherently superior nor inferior to traditional methods.10,27-29 Few non-Internet comparison studies reported skills and behavior or patient effects outcomes, and the CIs for these pooled estimates do not exclude educationally significant effects. Additional research, using outcome measures responsive to the intervention and sensitive to change, would be required to improve the precision of these estimates. However, inconsistencies in the current evidence together with conceptual concerns27,28 suggest limited value in further research seeking to demonstrate a global effect of Internet-based formats across learners, content domains, and outcomes.

The inconsistency in effect across both study types suggests that some methods of implementing an Internet-based course may be more effective than others. Thus, we propose that greater attention be given to the question, How can Internet-based learning be effectively implemented? This question will be answered most efficiently through research directly comparing different Internet-based interventions.7,10,27-29,49 Inconsistency may also be due to different learning contexts and objectives, and thus the question, When should Internet-based learning be used? should be considered as well.10

Finally, although our findings regarding the quality of this body of research are not unique to research in Internet-based instruction,50-52 the relatively low scores for methodological quality and the observed reporting deficiencies suggest room for improvement.

Author Contributions: Dr Cook had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Cook, Levinson, Dupras, Garside, Erwin, Montori.
Acquisition of data: Cook, Levinson, Dupras, Garside, Erwin.
Analysis and interpretation of data: Cook, Montori.
Drafting of the manuscript: Cook.
Critical revision of the manuscript for important intellectual content: Cook, Levinson, Dupras, Garside, Erwin, Montori.
Statistical analysis: Cook, Montori.
Obtained funding: Cook.
Administrative, technical or material support: Cook, Montori.
Study supervision: Cook.
Financial Disclosures: None reported.
Funding/Support: This work was supported by intramural funds and a Mayo Foundation Education Innovation award.
Role of Sponsor: The funding source for this study played no role in the design and conduct of the study; in the collection, management, analysis, and interpretation of the data; or in the preparation of the manuscript. The funding source did not review the manuscript.
Additional Information: Details on included studies and their quality, and on the meta-analyses, are available at http://www.jama.com.

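For readers unfamiliar with the pooled estimates, 95% CIs, and heterogeneity statistics cited in the Comment, the following is a minimal sketch of DerSimonian-Laird random-effects pooling, the standard approach for the kind of synthesis described here. The effect sizes and variances are invented purely for illustration; they are not data from this review.

```python
import math

# Illustrative only: invented standardized mean differences (SMDs) and
# their variances -- NOT data from this review.
effects = [0.15, 0.30, -0.05, 0.20, 0.10]
variances = [0.04, 0.09, 0.05, 0.02, 0.06]

# Inverse-variance (fixed-effect) weights and Cochran's Q statistic
w = [1.0 / v for v in variances]
fixed = sum(wi * g for wi, g in zip(w, effects)) / sum(w)
q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))
df = len(effects) - 1

# I-squared: percentage of variability attributable to heterogeneity
i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0

# DerSimonian-Laird estimate of between-study variance (tau-squared)
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooled estimate and 95% confidence interval
w_re = [1.0 / (v + tau2) for v in variances]
pooled = sum(wi * g for wi, g in zip(w_re, effects)) / sum(w_re)
se = math.sqrt(1.0 / sum(w_re))
ci_low, ci_high = pooled - 1.96 * se, pooled + 1.96 * se

print(f"Pooled SMD {pooled:.2f} (95% CI, {ci_low:.2f} to {ci_high:.2f}); I2 = {i2:.0f}%")
```

Note how a CI that crosses zero (as in this toy example) does not rule out an educationally meaningful effect in either direction, which is the point made above about the imprecise skills and behavior or patient-effects estimates.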
2008 American Medical Association. All rights reserved. (Reprinted) JAMA, September 10, 2008Vol 300, No. 10 1195


Additional Contributions: We thank Melanie Lane, BA, Mohamed Elamin, MBBS, and M. Hassan Murad, MD, from the Knowledge and Encounter Research Unit, Mayo Clinic, for assistance with data extraction and meta-analysis planning and execution, and Kathryn Trana, from the Division of General Internal Medicine, Mayo Clinic, for assistance in article acquisition and processing. These individuals received compensation as part of their regular employment.

REFERENCES

1. Berners-Lee T, Cailliau R, Luotonen A, Nielsen HF, Secret A. The World-Wide Web. Commun ACM. 1994;37(8):76-82.
2. Friedman RB. Top ten reasons the World Wide Web may fail to change medical education. Acad Med. 1996;71(9):979-981.
3. MacKenzie JD, Greenes RA. The World Wide Web: redefining medical education. JAMA. 1997;278(21):1785-1786.
4. Ruiz JG, Mintzer MJ, Leipzig RM. The impact of e-learning in medical education. Acad Med. 2006;81(3):207-212.
5. Cook DA. Web-based learning: pros, cons, and controversies. Clin Med. 2007;7(1):37-42.
6. Effective Use of Educational Technology in Medical Education: Summary Report of the 2006 AAMC Colloquium on Educational Technology. Washington, DC: Association of American Medical Colleges; 2007.
7. Tegtmeyer K, Ibsen L, Goldstein B. Computer-assisted learning in critical care: from ENIAC to HAL. Crit Care Med. 2001;29(8)(suppl):N177-N182.
8. Davis MH, Harden RM. E is for everything e-learning? Med Teach. 2001;23(5):441-444.
9. Chumley-Jones HS, Dobbie A, Alford CL. Web-based learning: sound educational method or hype? A review of the evaluation literature. Acad Med. 2002;77(10)(suppl):S86-S93.
10. Cook DA. Where are we with Web-based learning in medical education? Med Teach. 2006;28(7):594-598.
11. Greenhalgh T. Computer assisted learning in undergraduate medical education. BMJ. 2001;322(7277):40-44.
12. Lewis MJ, Davies R, Jenkins D, Tait MI. A review of evaluative studies of computer-based learning in nursing education. Nurse Educ Today. 2001;21(1):26-37.
13. Wutoh R, Boren SA, Balas EA. eLearning: a review of Internet-based continuing medical education. J Contin Educ Health Prof. 2004;24(1):20-30.
14. Chaffin AJ, Maddux CD. Internet teaching methods for use in baccalaureate nursing education. Comput Inform Nurs. 2004;22(3):132-142.
15. Curran VR, Fleet L. A review of evaluation outcomes of Web-based continuing medical education. Med Educ. 2005;39(6):561-567.
16. Hammoud M, Gruppen L, Erickson SS, et al. To the point: reviews in medical education online computer assisted instruction materials. Am J Obstet Gynecol. 2006;194(4):1064-1069.
17. Potomkova J, Mihal V, Cihalik C. Web-based instruction and its impact on the learning activity of medical students: a review. Biomed Pap Med Fac Univ Palacky Olomouc Czech Repub. 2006;150(2):357-361.
18. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses. Lancet. 1999;354(9193):1896-1900.
19. Stroup DF, Berlin JA, Morton SC, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. JAMA. 2000;283(15):2008-2012.
20. Bransford JD, Brown AL, Cocking RR, et al. How People Learn: Brain, Mind, Experience, and School. Washington, DC: National Academy Press; 2000.
21. Davis D, O'Brien MA, Freemantle N, Wolf FM, Mazmanian P, Taylor-Vaisey A. Impact of formal continuing medical education: do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? JAMA. 1999;282(9):867-874.
22. Mayer RE. Cognitive theory of multimedia learning. In: Mayer RE, ed. The Cambridge Handbook of Multimedia Learning. New York, NY: Cambridge University Press; 2005:31-48.
23. Cook DA, Thompson WG, Thomas KG, Thomas MR, Pankratz VS. Impact of self-assessment questions and learning styles in Web-based learning: a randomized, controlled, crossover trial. Acad Med. 2006;81(3):231-238.
24. Cook DA, McDonald FS. E-learning: is there anything special about the E? Perspect Biol Med. 2008;51(1):5-21.
25. Marinopoulos SS, Dorman T, Ratanawongsa N, et al. Effectiveness of continuing medical education. Evid Rep Technol Assess (Full Rep). 2007;149:1-69.
26. Shea JA. Mind the gap: some reasons why medical education research is different from health services research. Med Educ. 2001;35(4):319-320.
27. Clark RE. Reconsidering research on learning from media. Rev Educ Res. 1983;53:445-459.
28. Cook DA. The research we still are not doing: an agenda for the study of computer-based learning. Acad Med. 2005;80(6):541-548.
29. Friedman CP. The research we should be doing. Acad Med. 1994;69(6):455-457.
30. Kirkpatrick D. Revisiting Kirkpatrick's four-level model. Train Dev. 1996;50(1):54-59.
31. Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull. 1979;86:420-428.
32. Wells GA, Shea B, O'Connell D, et al. The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses. Available at: http://www.ohri.ca/programs/clinical_epidemiology/oxford.htm. Accessed June 16, 2008.
33. Morris SB, DeShon RP. Combining effect size estimates in meta-analysis with repeated measures and independent-groups designs. Psychol Methods. 2002;7(1):105-125.
34. Dunlap WP, Cortina JM, Vaslow JB, Burke MJ. Meta-analysis of experiments with matched groups or repeated measures designs. Psychol Methods. 1996;1:170-177.
35. Hunter JE, Schmidt FL. Methods of Meta-Analysis: Correcting Error and Bias in Research Findings. Thousand Oaks, CA: Sage; 2004.
36. Curtin F, Altman DG, Elbourne D. Meta-analysis combining parallel and cross-over clinical trials, I: continuous outcomes. Stat Med. 2002;21(15):2131-2144.
37. Higgins JP, Green S. Cochrane Handbook for Systematic Reviews of Interventions (Version 5.0.0). Available at: http://www.cochrane.org/resources/handbook/index.htm. Updated February 2008. Accessed May 29, 2008.
38. Higgins JP, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ. 2003;327(7414):557-560.
39. Curran V, Lockyer J, Sargeant J, Fleet L. Evaluation of learning outcomes in Web-based continuing medical education. Acad Med. 2006;81(10)(suppl):S30-S34.
40. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum; 1988.
41. Mehta MP, Sinha P, Kanwar K, Inman A, Albanese M, Fahl W. Evaluation of Internet-based oncologic teaching for medical students. J Cancer Educ. 1998;13(4):197-202.
42. Patterson R, Harasym P. Educational instruction on a hospital information system for medical students during their surgical rotations. J Am Med Inform Assoc. 2001;8(2):111-116.
43. Lau J, Ioannidis JPA, Terrin N, Schmid CH, Olkin I. The case of the misleading funnel plot. BMJ. 2006;333(7568):597-600.
44. Oxman A, Guyatt G. When to believe a subgroup analysis. In: Hayward R, ed. Users' Guides Interactive. Chicago, IL: JAMA Publishing Group; 2002. http://www.usersguides.org. Accessed August 14, 2008.
45. Cohen PA, Dacanay LD. Computer-based instruction and health professions education: a meta-analysis of outcomes. Eval Health Prof. 1992;15:259-281.
46. Cohen PA, Dacanay LD. A meta-analysis of computer-based instruction in nursing education. Comput Nurs. 1994;12(2):89-97.
47. Lewis MJ. Computer-assisted learning for teaching anatomy and physiology in subjects allied to medicine. Med Teach. 2003;25(2):204-206.
48. Kobak KA, Engelhardt N, Lipsitz JD. Enriched rater training using Internet based technologies: a comparison to traditional rater training in a multi-site depression trial. J Psychiatr Res. 2006;40(3):192-199.
49. Keane DR, Norman G, Vickers J. The inadequacy of recent research on computer-assisted instruction. Acad Med. 1991;66(8):444-448.
50. Cook DA, Beckman TJ, Bordage G. Quality of reporting of experimental studies in medical education: a systematic review. Med Educ. 2007;41(8):737-745.
51. Reed DA, Cook DA, Beckman TJ, Levine RB, Kern DE, Wright SM. Association between funding and quality of published medical education research. JAMA. 2007;298(9):1002-1009.
52. Issenberg SB, McGaghie WC, Petrusa ER, Lee Gordon D, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach. 2005;27(1):10-28.

