A Thesis
Submitted to the Faculty of Mercyhurst College
In Partial Fulfillment of the Requirements for
The Degree of
MASTER OF SCIENCE
IN
APPLIED INTELLIGENCE
Submitted By:
SHANNON L. FERRUCCI
Certificate of Approval:
___________________________________
Kristan J. Wheaton
Assistant Professor
Department of Intelligence Studies
___________________________________
William J. Welch
Instructor
Department of Intelligence Studies
___________________________________
Phillip J. Belfiore
Vice President
Office of Academic Affairs
May 2009
Copyright © 2009 by Shannon L. Ferrucci
All rights reserved.
ACKNOWLEDGMENTS
I would like to thank Kristan J. Wheaton, my thesis advisor and primary reader, for his
continued guidance and encouragement throughout the course of this work. I would also
like to thank Professor Hemangini Deshmukh for her patience and assistance with the statistical analysis.
ABSTRACT OF THE THESIS
By
Shannon L. Ferrucci
Explicit conceptual modeling (ECM) within intelligence analysis is a topic on which very little specific research has thus far been done. However, when considering the complexity and depth of most intelligence requirements, it becomes evident that consideration of this topic is both crucial and long overdue. This thesis examines what role ECM might play in intelligence analysis, discussing relevant studies from other fields that help to shed light on the need for, and
value of, incorporating this technique into intelligence analysis. After examining the
relevant literature, an experiment was conducted to test the hypothesis that intelligence
analysts who engage in ECM will generate better analytic products, as evaluated by
thoroughness of process and accuracy of product, than analysts who do not. However, despite a wealth of literature strongly suggesting that ECM will improve analysis, the results of this study's experiment did not support that notion. The author ends by drawing conclusions from the experimental data, highlighting the conditions under which ECM might yet prove successful.
TABLE OF CONTENTS
Page
ACKNOWLEDGEMENTS…………………………………………………………. iv
ABSTRACT……………………………………………………………………….... v
TABLE OF CONTENTS…………………………………………………………… vi
LIST OF FIGURES…………………………………………………………………. ix
CHAPTER
1 INTRODUCTION…………………………………………………… 1
Conceptual Models………………………………………………….. 2
Explicit Modeling and Intelligence Analysis……………………….. 3
2 LITERATURE REVIEW………………………………………….... 6
Constructivist Roots…………………………………………………. 6
Mental Models and Intelligence……………………………………... 7
Memory Limitations………………………………………………… 8
Group Intellect………………………………………………………. 9
Combating Groupthink……………………………………………… 10
Related Mapping Disciplines………………………………………... 11
Mind Maps…………………………………………………………... 13
Concept Maps……………………………………………………….. 14
Technology Aids…………………………………………………….. 15
Learning Styles……………………………………………………… 17
Hypotheses…………………………………………………………... 17
3 METHODOLOGY…………………………………………………... 18
Research Design……………………………………………………... 18
Subjects……………………………………………………………… 18
Preliminaries………………………………………………………… 22
Control Group: Day 1……………………………………………….. 22
Experimental Group: Day 1…………………………………………. 24
Bubbl.us……………………………………………………………... 26
Control Group: Day 2……………………………………………….. 27
Experimental Group: Day 2…………………………………………. 28
Data Analysis Procedures…………………………………………… 29
4 RESULTS…....……………………………………………………… 30
Significance Testing…………………………………………………. 30
Pre- and Post-Questionnaire Results………………………………… 31
Process – Conceptual Model Findings………………………………. 36
Process – Logic/Quality of Supporting Evidence Findings…………. 40
Product – Forecasting Findings……………………………………... 41
Quality Of Supporting Evidence Vs. Forecasting Accuracy………... 42
Product – Source Reliability and Analytic Confidence Findings…… 46
5 CONCLUSIONS……………………………………………………. 48
BIBLIOGRAPHY…………………………………………………………………... 55
APPENDICES………………………………………………………………………. 57
Appendix 15: Structured Conceptual Modeling Exercise…………... 80
LIST OF FIGURES
Page
Figure 4.9 Forecasting Accuracy: Top Vs. Bottom Half Process Rankings 44
CHAPTER I:
INTRODUCTION
In his introduction to The Jefferson Bible: The Life and Morals of Jesus of Nazareth, Forrest Church, Minister of Public Theology at the Unitarian Church of All Souls in New York City, tells the reader of a historical offer made by Thomas Jefferson to Congress.1 Jefferson's offer consisted of selling his personal library to replace the
volumes in the Library of Congress burned by the British during the War of 1812. While
some might find the most interesting aspect of Jefferson's proposal to be the reaction it elicited from members of Congress, who were insulted by the specific makeup of the collection,2 more noteworthy still is the way Jefferson organized his books by "the process of mind employed on them."3 Therefore, books having to do
with philosophy were classified under reason, history books could be found under the
label of memory and books focused on fine art could be located under a section entitled
imagination.4 However, Jefferson did not stop there. Under each of the overarching categories mentioned above was a variety of intricate subdivisions that further organized the collection. Indeed, it appears that the true value of Jefferson's system of categorization lies in its ability to
***This research was partially funded by the Mercyhurst College Academic Enrichment Fund.***
1 Forrest Church, introduction to The Jefferson Bible: The Life and Morals of Jesus of Nazareth, by Thomas Jefferson (Boston: Beacon Press, 1989), 1.
2 Church, The Jefferson Bible, 2.
3 Ibid.
4 Ibid.
provide a glimpse into the inner thoughts and beliefs of Jefferson himself. Due to the detail with which the library was constructed, Church was able to surmise a great deal about Jefferson's personal beliefs.
Conceptual Models
What Bacon in the early 1600s, and Jefferson in the early 1800s, were essentially
doing through their systems of classification was attempting to make their individual
mental models of the world around them explicit. Surprisingly enough, not only do the
likes of Bacon and Jefferson develop such mental models, but each and every one of us
carries out this same exercise on a variety of different levels numerous times per day.
For example, we construct mental models of the route we take on the way to the grocery store. This habit is particularly relevant to intelligence analysis, when considering that we also build models when faced with questions.
Whether the issue is a simple one, such as what to do on our day off, or as complex as an
intelligence requirement set forth by a decision maker, the human mind automatically
attempts to model the question and arrive at possible preliminary answers. Oftentimes in
doing this, we are able to recognize not only what we currently know about a given
situation, but also what we think we need to know in order to arrive at a comprehensive
answer.
Obviously, the more complex the question, the more intricate the subsequent model tends to be. This is especially true of requirements posed to intelligence analysts, which often span numerous actors, organizations, industries, etc. Therefore, the odds of any analyst being able to develop a
complete model of an intelligence requirement on their first try are very slim. More often
than not, analysts are able to fill in pieces of their model with information they already
know, but are forced to fill in the rest with topics they recognize they need to understand
more about.
This recognition leads to the central purpose of this study: determining the value of making these conceptual models explicit within the field of intelligence. Given the scope of most requirements, it seems obvious that the vast majority of related conceptual models will become too complex to be held in an individual's memory, and hence would benefit from being made explicit.
This is especially true when considering that conceptual models are not static, but actually quite amorphous, constantly evolving and adapting to new information.5 For these reasons, an examination of explicit conceptual modeling's (ECM) place within the field of intelligence is both timely and overdue.
5 Kalervo Jarvelin and T.D. Wilson, “On Conceptual Models for Information Seeking and Retrieval
Research,” Information Research 9, no. 1 (2003), http://informationr.net/ir/9-1/paper163.html (accessed
January 15, 2009).
3
Among those most likely to benefit are individual intelligence analysts (both students and practitioners). For these groups the incorporation of ECM into the analytic process would likely be beneficial on a variety of fronts. First, explicit modeling may increase efficiency in the analyst's collection effort. Second, managers at the head of small analytic teams might more easily grasp what needs to be done, the best method for doing it, and the most efficient way to initially task project analysts. In addition, after initial areas of responsibility are assigned to each analyst it is likely that managers might find it easier to supervise analysts, due to the organizational clarity the models provide.
ECM may also be useful in helping analysts to assess their level of analytic
confidence in the estimate produced. In addition, analysts may share, compare and
discuss models amongst themselves and with other professionals. Finally, ECM would
likely be useful for after the fact review and in providing a solid starting point for any
related questions posed to an analyst in the future. While these are only some of the potential benefits stemming from the incorporation of ECM into the analytic process, this thesis will show that obtaining the abovementioned results is not easy. Furthermore, this study will call into question conventional wisdom regarding what techniques of this kind can reliably deliver.
Taken as a whole, this thesis will argue that despite the relative dearth of studies
focused specifically on conceptual modeling within the field of intelligence, literature and
examples from other fields will shed light on the need for, and value of, incorporating
this technique into intelligence analysis. As one study on conceptual modeling within the
field of intelligence has stated, “Conceptual models both fix the mesh of the nets that the
analyst drags through the material in order to explain a particular action or decision and
direct him to cast his net in select ponds, at certain depths, in order to catch the fish he is
after.”6
6 Graham T. Allison, "Conceptual Models and the Cuban Missile Crisis," in The Sociology of Organizations: Classic, Contemporary and Critical Readings, ed. Michael Jeremy Handel (Thousand Oaks: SAGE, 2003), 185.
CHAPTER II:
LITERATURE REVIEW
This literature review begins with a discussion of the constructivist roots of mental models and the support they provide for the need to make our mental models explicit. This is followed by a segment on the various methods for making these models explicit, in particular mind maps, concept maps and explicit conceptual models. The benefit of using technological aids in constructing such models is also mentioned, as is the notion that the utility of ECM may be affected by varying individual
learning styles. Finally, this section concludes with the author’s original hypotheses for
this study.
Constructivist Roots
In particular, the work of the famed Swiss psychologist Jean Piaget is relevant, as his viewpoint holds that individuals continually construct cognitive models to make sense of the world around them by organizing and connecting their ideas and observations.8 Piaget called the constructs for assembling these models schema, or "a concept or framework that exists in the individuals' mind to organize and interpret information."9 While a discussion of the pros and cons of constructivist theory is beyond the scope of this paper, using this theory to aid in thinking about the development of models within our minds is quite useful.
When this theory is applied to intelligence analysis it becomes clear that each analyst's cognitive, or mental, model is uniquely shaped by the context and purpose of the requirement posed to them, along with the analyst's own knowledge and experience. As "Intelligence Analysis: Once Again," by Charles A. Mangio, of Shim Enterprise, Inc., and Bonnie J. Wilkinson, observes:
Given the importance of the mental model in influencing and shaping the
analysis (i.e., from problem exploration and formulation, to purpose
refinement, through data acquisition and evaluation, and ultimately
determining meaning and making judgments), it is not surprising how it
influences the discussion of intelligence analysis.11
However, despite the importance of a well-defined and thorough mental model to an analyst's work, building and maintaining such a model runs up against the limits of human memory.
Memory Limitations
8 Ibid.
9 Ibid.
10 Charles A. Mangio and Bonnie J. Wilkinson, “Intelligence Analysis: Once Again” (paper presented at
the annual international meeting of the International Studies Association, San Francisco, California, 26
March, 2008): 8.
11 Ibid.
Within the intelligence community, the formulation of these models can significantly impact the process of analysis. As the number of concepts and relationships included in the analyst's mental model continues to grow, it becomes difficult to store all of that information accurately in working memory. In "The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information," George A. Miller, then a professor at Harvard University, argues that we only have the ability to hold seven things (plus or minus two) in our minds at a given time without making mistakes in differentiation.12 Having said that, there
are methods individuals can use to help them surpass these known limits, as well as a variety of exceptions to the rule in the first place. In Psychology of Intelligence
Analysis, Richards Heuer, a former CIA staff officer and contractor of almost 45 years, discusses one such method for aiding analysts in exceeding memory constraints.
Essentially, what Heuer describes is none other than making an individual's mental model explicit.13 Whatever the precise number of items the mind can handle, the idea that there is an upper limit is quite evident, as is the fact that most
12 George A. Miller, “The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for
Processing Information,” Psychological Review 63 (1956): 81-97.
13 Richards J. Heuer, Psychology of Intelligence Analysis (Center for the Study of Intelligence, 1999), 27.
intelligence requirements will easily exceed that maximum. Limits on working memory, although a major concern in the argument for making mental models explicit, are only one of the factors behind the need for engaging in the process of ECM. Another reason strengthening the argument for explicit modeling stems from the dynamics of working in groups.
Group Intellect
Conventional wisdom has often regarded individual conclusions as being much more sensible and sound than those of groups. In fact, groups are often viewed as bringing out the worst in individuals, resulting in illogical and foolish behavior. However, in The Wisdom of Crowds, James Surowiecki, a staff writer at The
New Yorker, actually defends group decisions, noting that when the four conditions of diversity, independence, decentralization and aggregation are met,14 a group's collective judgment can exceed the individual intelligences of the people making up that group.15 However, group work does come with challenges of its own.
Combating Groupthink
14 James Surowiecki, The Wisdom of Crowds (New York: Anchor Books, 2004), 10.
15 Surowiecki, Wisdom of Crowds, XIII.
Any group contains individuals ranging from those inclined to chime in to discussions and offer opinions ad nauseam to those who shudder at the thought of speaking up. While there can of course be many reasons why individuals hesitate to contribute to classroom discussions or workplace meetings, a common fear is that their responses will be poorly received. Here, having individuals take a couple of minutes to write out their initial answers to a question can help. If a student has already written an answer, the step to speaking is much smaller than answering spontaneously.16
Essentially, the notion is that even the most timid will contribute when simply
asked to read off what they have already written down. While McKeachie, Professor of Psychology at the University of Michigan, writes with students in mind, the same concept applies to professionals. By asking all individuals to jot down answers to a
proposed question, then focusing on each person in turn and having them voice those
ideas out loud, equal involvement is fostered. No one is allowed to passively soak up the
information being offered by others, while at the same time a select few individuals are prevented from dominating the discussion. In Groupthink: Psychological Studies of Policy Decisions and Fiascoes, Irving L. Janis defines groupthink as, "a mode of thinking that people engage in when they are deeply involved in a cohesive in-group, when the members' strivings for unanimity override their motivation to realistically appraise alternative courses of action."17
16 Wilbert J. McKeachie, McKeachie’s Teaching Tips (Boston: Houghton Mifflin Company, 2002), 42.
According to Janis, a professor at Yale University prior to his death in 1990, there are three main categories of groupthink symptoms.18 Groups afflicted by groupthink often have trouble successfully completing the requirements placed upon them, subsequently
failing to meet their goals. By having all group members write down their thoughts and
then repeat those thoughts aloud, the phenomenon of individuals keeping quiet so as not
to voice unpopular or contrasting views is limited. Furthermore, this method also limits
having only a select few outspoken individuals’ perspectives heard and considered.
Therefore, the process of ECM is likely beneficial not only in surpassing memory
limitations but also in combating groupthink, one of the most common problems plaguing
group work.
While conceptual modeling is treated as a distinct method in this study, a variety of related mapping disciplines with similar functions do exist. Two in particular that warrant a brief discussion are mind maps and concept maps, both of which are highly analogous to conceptual modeling. For the purposes of this study however, conceptual models were found to be the most functional and user-friendly method for experiment participants to learn and understand in a short time period. Based on the fact that the methods of mind and concept mapping are both more rigid in their construction, they did not meet these criteria (see below mind map and concept mapping sections for further detail).
17 Irving L. Janis, Groupthink: Psychological Studies of Policy Decisions and Fiascoes (Boston:
Houghton Mifflin Company, 1982), 9.
18 Janis, Groupthink, 174-175.
ECM, on the other hand, simply focuses on the visualization of concepts and their relationships without any added emphasis on the specific construction of the model, allowing individuals the maximum freedom possible to organize the model in whatever way is most helpful to them. As a result, the method and design for conceptual model
construction used within this thesis has been operationalized from the relevant literature
(see the methodology section for further detail regarding development of the models).
Please refer to Figure 2.1 below, taken from Bubbl.us, for an illustration of a conceptual model.
Figure 2.1
Of course, taking a somewhat abstract concept and translating that into a concrete and measurable product is not without its problems. First, this author's interpretation of
the physical creation of the conceptual models may differ from the interpretation of
others. Additionally, whereas this author treats the notion of conceptual modeling as
distinct and unique, others may disagree, believing it to be simply a subset of a related
mapping discipline.
Mind Maps
The popular exercise of mind mapping that many individuals are familiar with
today got its start in the 1960s with its originator, Tony Buzan. In “Mind Maps as
Classroom Exercises," John Budd, professor in the Industrial Relations Center at the University of Minnesota, describes the creation of a mind map.20 Therefore, the essential function of a mind map is very
similar to that of the explicit conceptual model, but the process for developing one is
more formalized. Please see Figure 2.2 below, taken from the TechKNOW Tools website, for an illustration of a mind map.
Figure 2.2
19 John W. Budd, “Mind Maps as Classroom Exercises,” Journal of Economic Education (Winter 2004):
36.
20 Budd, “Mind Maps as Classroom Exercises.”
Concept Maps
According to Alberto Canas, Associate Director of the Institute for Human and
Machine Cognition, and Joseph Novak, known for his development of concept mapping
in the 1970s, concept maps consist of concepts connected by labeled linking lines that form meaningful propositions.21 This labeling of relationships is important to the notion of concept mapping and helps to distinguish this form of mapping
from mind mapping and conceptual modeling. Another difference between concept maps
and mind maps is that the latter are organized around one central concept, whereas the
21 Joseph Novak and Alberto Canas, The Theory Underlying Concept Maps and How to Construct and
Use Them, Technical Report IHMC Cmap Tools 2006-01 Rev 01-2008, Florida Institute for Human and
Machine Cognition, 2008.
http://cmap.ihmc.us/Publications/ResearchPapers/TheoryUnderlyingConceptMaps.pdf (accessed August 8,
2008).
former tend to be organized around several. Like conceptual models, concept maps are designed to externalize an individual's understanding of a topic. However, once again, while concept mapping serves a very similar purpose to that of
conceptual modeling, its construction is much more structured in nature. Please see
Figure 2.3 below, taken from the cited work of Novak and Canas, for an illustration of a concept map.
Figure 2.3
Technology Aids
At this point, after a discussion of the abovementioned techniques for information
visualization, one may be left to wonder if making such models explicit is hard to do. In
truth, the answer to this question is no, in large part due to technological advances that do
away with much of the burden of creation for these models. In fact, the relatively recent emergence of software for building concept maps and mind maps has brought about an increased interest in these techniques. Compared to the traditional pencil and paper construction, software programs make models far easier to build and revise. This functionality encourages users to revise and expand their maps as their knowledge base grows. Moreover, by "externalizing information as they create concept maps, students are better able to detect and correct gaps and inconsistencies in their knowledge."23 However, the development
and evolution of concept maps as a response to detailed questions and requirements can
sometimes prove difficult, lengthy and disorganized when created by hand. De Simone,
Assistant Professor of Education at the University of Ottawa, further states that many
students in her classes “find electronic concept mapping …very useful, as it minimizes
the cumbersome and time-consuming activity of erasing, revising, and beginning anew.
It allows them greater freedom to adjust their conceptual thinking and mapped representations."24 Therefore, software mapping tools, of which a wide variety exist, are likely the most useful and advantageous tools for intelligence students and analysts to utilize in the construction of complex and fluid conceptual models.
Learning Styles
However, even with the increased efficiency in creating conceptual models
brought about through technological aids, it is important to note that some individuals
22 Josianne Basque and Beatrice Pudelko. “Using a Concept Mapping Software as a Knowledge
Construction Tool in a Graduate Online Course,” in Proceedings of ED-MEDIA 2003, World Conference
on Educational Multimedia, Hypermedia &Telecommunications, Honolulu, June 23-28, 2003, ed. D.
Lassner and C. McNaught (Norfolk: Association for the Advancement of Computing in Education, 2003),
2268-2274.
23 Christina De Simone, “Applications of Concept Mapping,” Journal of College Teaching 55, no. 1
(2007): 34.
24 De Simone, “Applications of Concept Mapping,” 35.
may be better suited towards this type of exercise than others. Research conducted by Josianne Basque and Beatrice Pudelko, from the LICEF Research Center in Canada, showed that some graduate students claiming to be auditory learners found little to no benefit in the exercise, while self-described visual learners felt they could better understand a topic when concepts were structured in such a visual way.26
Hypotheses
Based on a review of the above literature, the following hypotheses were formed:
First, intelligence analysts who engage in ECM will generate better analytic products, as
evaluated by thoroughness of process and accuracy of product, than analysts who do not.
Second, the individual-to-group method employed for creation of the conceptual models in this study will affect, either positively or negatively, the models' ability to aid in intelligence analysis.
CHAPTER III:
METHODOLOGY
In order to test the stated hypotheses I conducted an experiment that examined the
value of ECM as it applies to the quality of the analysis produced. The experiment was
designed to determine if intelligence analysts who engaged in ECM would generate better analytic products in comparison to analysts who did not. The following methodology section will provide the details of how the experiment was carried out.
Research Design
The experiment employed two groups of subjects: an experimental group and a control group. Efforts were made to ensure that conditions in both groups were identical,
with the exception of the addition of ECM in the experimental group. The experimental
group was instructed to use a structured ECM approach, facilitated by the use of the web-based program Bubbl.us, to aid in their analysis. The control group, on the other hand, was not instructed to use any particular method in conducting their analysis, as this group served as a baseline from which to measure the experimental group's performance.
Subjects
Rather than attempting to recruit working analysts from the Intelligence Community, I chose to draw from the undergraduate and graduate student
population at the Mercyhurst College Institute for Intelligence Studies (MCIIS). The aim
of this program is to produce graduates qualified to enter the government or private sector as analysts, through a curriculum devoted specifically to the study of intelligence analysis. The program offers coursework in the
fields of national security, law enforcement and competitive business analysis. Students
in both the undergraduate and graduate programs are subjected to a rigorous academic
curriculum during their time at Mercyhurst. They are expected to meet certain foreign
language proficiency and internship requirements and are often faced with accelerated
project deadlines for real-world decision makers in the field of intelligence. MCIIS
considers its students to be experts in the exploitation of open source information for analytic purposes. For these reasons, MCIIS turns out capable and well-trained entry-level analysts.
To recruit participants, I first contacted professors within the Intelligence Studies program at Mercyhurst College via email to ask if they would be willing to let me briefly speak to their classes in order to recruit students. I then followed up by visiting a number of undergraduate and graduate intelligence classes where I gave a broad overview of the experiment and
passed out individual signup sheets to interested students (see Appendix 1). These sheets
asked for the student’s name, email address, class year and available time slots for
experiment participation. Students were given a total of twelve time slots to choose from,
with six slots on the first day of the experiment and six slots on the second day. The only
requirement placed on students was that they choose at least two time slots, one on each
day of the experiment. Additionally, of these two time slots students were asked to select
one for the duration of ninety minutes and the other for the duration of thirty minutes.
Students could either turn these sheets back in to me on the spot or they could drop them
off at their convenience to my worksite located within the Intelligence Studies building.
After receiving all signup sheets students were divided into subgroups based on
class year. Individuals within each subgroup were then randomly divided between the
control and the experimental group in an attempt to control for the educational level of
participants. For example, after the subgroup of freshmen who signed up for the experiment was established, individuals within the group were randomly assigned to either the control or the experimental group.
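The assignment procedure just described — grouping volunteers by class year, then randomizing within each subgroup — is a form of stratified random assignment. A minimal sketch in Python may help illustrate it; the function name, data shapes, and even-split rule are illustrative assumptions, not details taken from the study:

```python
import random

def assign_groups(students, seed=None):
    """Stratified random assignment: split participants into control and
    experimental groups within each class-year subgroup."""
    rng = random.Random(seed)  # seeded RNG so an assignment can be reproduced
    by_year = {}
    for name, year in students:
        by_year.setdefault(year, []).append(name)

    assignments = {}
    for year, names in by_year.items():
        rng.shuffle(names)        # randomize order within the subgroup
        half = len(names) // 2    # first half -> control, rest -> experimental
        for n in names[:half]:
            assignments[n] = "control"
        for n in names[half:]:
            assignments[n] = "experimental"
    return assignments
```

Randomizing within each class-year stratum, rather than over the whole pool at once, is what controls for educational level: each group receives a comparable mix of freshmen through graduate students.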
Students were then notified of their designated time slots for participation via an
email, which included information on the location of the experiment. The email also
stated that participation would be rewarded with extra credit from select professors within
the Intelligence Studies Department, in addition to free refreshments and pizza on the
second day of the experiment. Finally, my contact data was included in the email so all
participants would easily be able to access me if they had any questions or concerns in
the days leading up to the experiment. Intelligence Studies students of all grade levels signed up to participate.
Testing took place approximately three-quarters of the way through the fall term, unfortunately falling at a time when students were especially busy trying to meet the demands of end-of-term coursework. Likely as a result, not all of those who signed up actually participated in, and finished, this experiment. For a breakdown of participants
by educational level and group, please see Figures 3.1 & 3.2 below.
Figure 3.1
Figure 3.2
Preliminaries
Before any testing could begin, my research proposal was submitted to the Mercyhurst College Institutional Review Board (IRB) for approval (see Appendix 2). It is Mercyhurst College policy that any student conducting research involving the use of
human subjects be granted permission by the IRB. In order to receive a green light from
the IRB, students must provide a description of the proposed research, its purpose, and an overview of its treatment of human subjects.
However, not only did I need to secure the consent of the IRB, I also needed the
consent of each individual experiment participant. Therefore, on the first day of the
experiment (for both control and experimental sessions) all participants were given a
formal consent form upon arrival (see Appendix 3 for control group consent form and
Appendix 4 for experimental group consent form). This form outlined what would be
expected of them as participants, along with the fact that there were no foreseen dangers
or risks associated with involvement in the experiment. The form also asked for basic identifying information and a signature.
Control Group: Day 1
Control group participants were asked to attend two experiment sessions. The
first session was slotted for thirty minutes and the next session, scheduled for one week later, was slotted for ninety minutes. Three control group sessions were run on both days of
the experiment, for a total of six sessions, in order to make it as convenient as possible
for subjects to schedule participation into their busy agendas. Out of those who
originally signed up for the experiment and were assigned to the control group (37
individuals), a total of 25 actually attended and completed the experiment. Please see
Figure 3.3 below for a graphic representation of this breakdown by class year.
Figure 3.3
Both groups were given the same question to analyze, regarding the October 2008 presidential elections in Zambia (see Appendix 5). The control group was simply asked
to forecast the winner of the elections and to provide a list of the main pieces of evidence
that aided them in their analysis. Both groups were provided with information regarding
source reliability and analytic confidence as they were asked to supply measures of both
in their final product (see Appendix 6). Also, both groups were given a semi-structured
answer sheet with space for their name, a pre-written forecast with built-in words of
estimative probability and presidential candidates to choose from and space for a bulleted
discussion (see Appendix 7). The bottom of the answer sheet also asked them to identify
their source reliability and analytic confidence on a scale from low to high and to provide
the names of their professors offering extra credit for participation in the experiment.
Lastly, participants were given a sheet of expectations for the second and final
session of the experiment one week later (see Appendix 8) and were asked to fill out a
short pre-experiment questionnaire (see Appendix 10). They were also once again reminded that they could contact me with any questions while working on their analysis during the course of the week (see Appendix 11).
Experimental Group: Day 1
Experimental group participants were also asked to attend two sessions. The first
session was slotted for ninety minutes and the next session, scheduled for one week later,
was slotted for thirty minutes. Three experimental group sessions were run on both days
of the experiment, for a total of six sessions, once again to make scheduling more
convenient for participants. Out of those who originally signed up for the experiment and
were assigned to the experimental group (37 individuals), a total of 22 actually attended
and completed the experiment. Please see Figure 3.4 below for a graphic representation
Figure 3.4
The experimental group was given the same tasking as the control group (see
Appendix 5). This group was also provided with the same information regarding source
reliability and analytic confidence as the control group (see Appendix 6), along with the
same answer sheet (see Appendix 7). Although this group was given the same tasking as the control group in regards to forecasting the winner of the elections, they were also required to engage in ECM. The first session began with a short lecture designed to familiarize experimental group participants with what conceptual models are, how they
can be used and the proposed value of making them explicit in the field of intelligence
(see Appendix 12). Following the lecture, I had all participants log in to a computer, and I led them through a step-by-step tutorial of the program Bubbl.us (see Appendix 13). Once everyone was comfortable with how the program worked, I began a structured conceptual modeling exercise.
Participants began the exercise by making individual lists of concepts they felt
would be important to answering the question asked of them. After individual lists were
completed, participants were asked to read their lists aloud one at a time. As concepts
were read off, they were written on a whiteboard at the front of the room, creating one
combined group list. Each time a concept was repeated, a check mark was placed next to
it in order to highlight the most commonly identified concepts. Concepts that the group
immediately recognized as useful but that were mentioned only once or twice were also
noted. Due to limitations on time, participants were not asked to evaluate the master list
before producing their own conceptual models. Instead,
participants were simply asked to use Bubbl.us to construct a conceptual model based on
their individual list and thoughts as well as that of the collaborative group list. Finally,
participants were given a chance to briefly look at the way others around them had
assembled their models and were then asked to electronically share what they had created
with me through the collaboration function within Bubbl.us (see Appendix 15).
Following this task participants were given a sheet of expectations for the second
and final session of the experiment one week later (see Appendix 9) and were asked to
fill out a pre-experiment questionnaire (see Appendix 10). Lastly, they were provided
with the same instructions as the control group regarding their analysis during the coming
week.
Bubbl.us
Bubbl.us was chosen for use in this experiment due to its extremely simple user
interface. Not only did the
program encompass all the relevant functions necessary to complete the conceptual
modeling segment of my experiment, it could easily be taught and learned within the
minimal amount of time I had during sessions. Basic functions include the creation of
bubbles and lines to illustrate concepts and their relationships, internet-based sharing of
work with other Bubbl.us users, and exporting finished products as photos or embedding
them in web pages.
On the second day of the experiment, the control group was expected to arrive at
their designated session with their completed answer sheets ready to turn in. All research
and analysis was to be done prior to arriving at this session. Since the control group had
not received any training on Bubbl.us in the first session, I began their second session by
asking them to login to a computer and follow along with me as I taught them the basic
functions of the program. However, they still received no lectures detailing conceptual
modeling background information and were not given any specifics regarding how to
construct their models.
After this group became familiar with the program they were asked to illustrate
the concepts and relationships they found to be important over the course of the week in
answering the question posed to them. This was done in order to draw a comparison
between the quality of conceptual models made after background information was
provided and those made with little to no prior instruction. Additionally, it compared the
quality of conceptual models made pre-collection and updated throughout the analytic
process with those made post-analysis. In bringing the session to a close, participants were
asked to fill out a post-experiment questionnaire (see Appendix 16) and were given a
debriefing sheet thanking them for their time and further explaining the purpose of the
study.
Once again, the experimental group was expected to arrive on the second day of
the experiment with their research and analysis completed and a finished answer sheet
ready to be handed in. Additionally, they were expected to have electronically updated
the conceptual models made during the first session of the experiment throughout the
course of the week to reflect their expanding knowledge base in regards to the question at
hand. Therefore, after handing in their answer sheets this group was asked to fill out a
follow-up questionnaire (see Appendix 17) and was then provided with the same
debriefing sheet given to the control group.
Since the intent of this experiment was to test the value of ECM as it applies to
analysis, control and experimental group results were compared in terms of quality and
accuracy of process and product. To evaluate and compare the processes of the two
groups, three MCIIS second year graduate students independently ranked the discussion
section of each participant’s answer sheet from best to worst. Students were used in lieu
of professors, who tend to have unique grading styles, as the students all received the
same training regarding what makes a sound analysis and were thus thought to be on
more equal footing. All identifying information, including the student's name, class year
and group, was removed prior to evaluation. Additionally, the students doing the
evaluating did not know the outcome of the elections at the time of ranking in order to
keep measures of process and product independent of one another. The product measure
was derived through a simple tally of whether or not the participant predicted the
outcome of the question correctly. Finally, the actual conceptual models created were compared in terms
of complexity, based on how many concepts and connections between concepts each
encompassed.
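The complexity measure described above is a simple count of distinct concepts (nodes) and connections (edges). As a rough sketch of that tally, with invented concept names standing in for a participant's model:

```python
# A conceptual model reduced to an edge list: each pair links two
# concepts. The concept names here are invented for illustration only.
edges = [
    ("economy", "voter turnout"),
    ("economy", "incumbent approval"),
    ("media coverage", "voter turnout"),
    ("incumbent approval", "election outcome"),
    ("voter turnout", "election outcome"),
]

# Distinct concepts are the unique node labels across all edges;
# connections are simply the edges themselves.
concepts = {node for edge in edges for node in edge}
print(len(concepts), len(edges))  # 5 concepts, 5 connections
```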
All of this data was compiled in a spreadsheet along with subject education level
breakdowns and information from the pre- and post-experiment questionnaires.
Significance tests were then conducted to
determine whether or not the experiment results were statistically significant. The
control group was generally expected to fall lower in the graduate student process
rankings than the experimental group and was also expected to be less accurate overall in
forecasting the outcome of the elections.
CHAPTER IV:
RESULTS
The experiment conducted for this thesis produced surprising results. The next section will first provide a brief explanation of the statistical
significance testing conducted throughout this thesis and will then detail the results
derived from the analysis of pre- and post-experiment questionnaires. Next, findings
from the experiment itself will be discussed as a function of process and product, with
relevant significance testing results included throughout.
Significance Testing
All significance tests related to this thesis were conducted at the 0.10 significance
level (see Appendix 20). The reason behind setting what some may consider to be a
rather lax level of significance is the fact that the research conducted in this thesis is
exploratory in nature. Since researchers are advised to set
proper significance levels based on situation, this author felt that a 0.10 level was most
appropriate when dealing with this particular set of research and data.
27 G. David Garson, Guide to Writing Empirical Papers, Theses, and Dissertations (New York: CRC
Press, 2002), 199.
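As an illustration of what testing at the 0.10 level involves, the sketch below runs a distribution-free permutation test on two sets of 1-to-5 ratings and compares the resulting p-value against the 0.10 threshold. The ratings are invented placeholders, not the study's data:

```python
import random

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Two-sided permutation test for a difference in group means.

    Returns an approximate p-value: the share of random relabelings
    whose absolute mean difference is at least the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        if abs(sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b)) >= observed:
            count += 1
    return count / n_iter

# Invented 1-to-5 ratings; the two groups happen to have equal means,
# so the test comes back far from significance.
control = [4, 3, 4, 3, 4, 3, 4, 4, 3, 3]
experimental = [3, 4, 3, 4, 3, 4, 3, 4, 3, 4]
p = permutation_test(control, experimental)
print(f"p = {p:.3f}, significant at 0.10: {p < 0.10}")
```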
Prior to the experiment participants were asked whether or not they thought they
would be able to dedicate a sufficient amount of time to completing the experiment over
the course of the next week. In response, 64.3% of control group participants thought
that they would have ample time, 32.1% were not sure and 3.6% did not expect to be able
to dedicate a sufficient amount of time to the experiment. On the other hand, 58.3% of
experimental group participants expected to have enough time and 41.7% were unsure.
When asked post-experiment whether or not they had actually been able to devote a
sufficient amount of time to the experiment over the course of the past week, 68% of
control group participants claimed they had, 16% were unsure and 16% said that they had
not. In the experimental group, 60% claimed to have had enough time to dedicate to the
experiment.
While the percentage of participants in both the control and experimental group
claiming to have been able to dedicate a sufficient amount of time to completion of the
experiment increased slightly from pre- to post-experiment, the percentage of those who
claimed that they did not have a sufficient amount of time to dedicate to the experiment
increased as well. Post-experiment, a portion of the experimental group claimed that they
had not had enough time (up from 0% pre-experiment) and 16% of control group
participants claimed the same (up from 3.6% pre-experiment). As previously stated in
the methodology section of this thesis, the timing of
the experiment fell during an extremely busy time in the participants’ trimester, serving
as a limitation to this study as students had to balance the experiment with their class
work and other responsibilities. Please see Figure 4.1 below for a graphic display of this
data.
Figure 4.1
Participants were also asked pre-experiment to rate, on a scale of 1 to 5, how
interested they were in the study, with 1 being not interested and 5 being extremely
interested. The average response for both groups was a 3.5, illustrating that both control
and experimental groups were on average equally interested in the experiment at its
onset. The difference between control and experimental group responses to this question
was not found to be statistically significant at the 0.10 level (p-value = 0.851).
When asked the same question post-experiment, the average response of the
control group increased to 3.8, while the average response of the experimental group
remained the same at 3.5. This shows a slight average increase in interest on the part of
the control group from pre- to post-experiment. However, once again the difference in
control and experimental group responses was not found to be statistically significant at
the 0.10 level.
Another question asked of participants both pre- and post-experiment was how
useful they feel structured approaches to the analytic process are,
with 1 being not useful and 5 being extremely useful. On average, control group
participants responded with a 3.8 and experimental group participants responded with a
4.2. Although a significant difference at the 0.10 level was not found between control
and experimental group responses to this question, the results did approach significance
(p-value = 0.127).
Faced with the same question post-experiment, the average control group response
increased to a 4.1, while the experimental group response held steady at 4.2. Once again,
the difference between control and experimental group responses
was not found to be significant at the 0.10 level (p-value = 0.617). Results for this
question show that the experimental group found structured approaches to the analytic
process to be more useful than did the control group, both pre- and post-experiment. The
control group’s feelings regarding the utility of structured approaches to the analytic
process grew throughout the course of the experiment, whereas the experimental group’s
did not.
When experiment participants were asked to identify the learning style they most
closely associated with, over half of both control and experimental group participants
identified themselves as visual learners. Since ECM is a visual learning aid, it is likely
that the exercise was generally more beneficial to those claiming to be visual learners
than to those who chose an alternative learning style. Please see Figure 4.2 below for the
full range of control and experimental group responses to the question regarding learning
styles.
Figure 4.2
Also, post-experiment both control and experimental groups were asked to gauge
their understanding of conceptual modeling prior to the experiment on a scale of 1 to 5,
with 1 being extremely low and 5 being extremely high. The average control group
response to this question was a 3.6, whereas the average experimental group response
was a 3.2. While the difference between control and experimental responses was not
found to be significant at the 0.10 level, the results did approach significance (p-value =
0.187).
Participants were then asked to gauge their understanding of
conceptual modeling following the experiment, using the same scale. Post-experiment
the average response for both the control and experimental group was a 4.0; the
difference was therefore not statistically significant at the 0.10 level (p-value = 1.0). Results from this
question show that post-experiment both control and experimental group participants
claimed to have the same understanding of conceptual modeling, an increase for both
groups from their pre-experiment knowledge on the topic. However, the experimental
group claimed to have less of an understanding of conceptual modeling than the control
group at the onset of the experiment, signaling that on average the experiment raised their
understanding of the topic more than it did the control group's.
In relation to the above question, post-experiment all participants were asked how
often ECM had been a part of their personal analytic process prior to the experiment on a
scale of 1 to 5, with 1 being never and 5 being every time they produced an intelligence
estimate. On average the control group responded with a 2.8 and the experimental group
responded with a 2.75. The difference between control and experimental group responses
was not found to be statistically significant at the 0.10 level (p-value = 0.872).
However, post-experiment both groups were also asked how often they plan to
incorporate ECM into their personal analytic process in the future, using the same scale.
In response to this question, the control group average was a 3.5 and the experimental
group average was a 3.6. Once again, the difference between experimental and control
group responses was not found to be statistically significant at the 0.10 level (p-value =
0.617). Results from the above question highlight that although the control group
claimed to employ ECM in their analytic process prior to the experiment on average
slightly more than the experimental group, the experimental group claimed that they will
employ ECM in their analytic process on average more than the control group in the
future.
Post-experiment, experimental group participants were also asked several questions
regarding their specific responsibilities within the experiment. First, the experimental
group was asked to rate whether or not they found that ECM aided them in developing a
more thorough and nuanced intelligence analysis in this experiment. In response, 33% of
experimental group participants claimed that ECM definitely aided them in their analysis,
and 66.7% claimed that it helped them somewhat, with no participants responding that it
did not help at all.
The experimental group was also asked post-experiment to rate how useful they
found the conceptual modeling training provided at the beginning of the experiment to
be, with 1 being not at all helpful and 5 being extremely helpful. On average,
experimental group participants responded to this question with a 3.9. Additionally, the
experimental group was asked how effective they found the conceptual modeling method
used in this experiment, inclusive of both individual work and group collaboration, to be.
The average response to this question was a 3.7. Finally, the experimental group was
asked how useful they found the technology aid, Bubbl.us, to be in creating and updating
their conceptual models, with 1 being not useful and 5 being extremely useful. Overall,
participants found the program quite useful, giving it an average
response of 4.1.
confirms that, “To the extent possible, analysis should incorporate insights from the
As such, the conceptual models resulting from the experiment were analyzed in terms of
complexity by simply tallying the number of concepts and connections between concepts
found in each model (please see Figure 4.3 below to see the distinction between concepts
and connections). Control group conceptual models averaged 12.6 concepts and 12.9
connections, per model.
Figure 4.3
Post-experiment, experimental group conceptual models
averaged 30.9 concepts and 31 connections, per model. Results of significance testing
for the number of concepts in pre- and post-experiment experimental group conceptual models were
found to be significant at the 0.10 level (p-value = 0.056); however, the number of
connections was not. As illustrated below in
Figure 4.4, both the average number of concepts and the average number of connections
increased from pre- to post-experiment.
Figure 4.4
When all experimental group models were considered together (without
distinguishing between pre- and post-analysis) the models averaged 27.9 concepts and
29.9 connections, per model. As illustrated below in Figure 4.5, when comparing these
experimental group averages with control group averages, the difference in complexity
is readily apparent.
Figure 4.5
The control group's models were much simpler, consisting on average of less than half the
number of concepts and connections found in experimental group
models. Results of significance testing for the number of concepts in control versus
experimental group conceptual models were found to be significant at the 0.10 level (p-value = 0.000);
furthermore, the number of connections was also found to be significant
(p-value = 0.000). Please see Figure 4.6 below for an illustration of a typical conceptual
model made by a control group participant and Figure 4.7 for an illustration of a typical
model made by an experimental group participant.
Figure 4.6
Figure 4.7
Intelligence Community Directive Number 203 highlights the need for logical
argumentation in analytic products. To evaluate this standard, the three graduate students
ranked the analytic products of all 47 participants from best to worst, based on the quality
and logic of evidence supporting the analyst’s estimate. Based on the rankings assigned
by each graduate student, the overall average ranking of control group participants was a
22.4 and the overall average ranking of experimental group participants was a 25.8.
Therefore, control group participants scored approximately 3 points better than did
experimental group participants in the ranking of their
estimates, implying that participants not using ECM to aid in their analysis were able to
29 Ibid.
formulate slightly better supporting arguments than participants who did in fact use
ECM.
Correlation scores amongst the three graduate student rankers underlying this
finding were relatively high across the board (0.76, 0.72 and 0.63),30 illustrating
consistency in the rankings. According to Jacob Cohen, a
statistician and Professor Emeritus at New York University before his death in 1998, by
convention, correlations above a 0.5 are traditionally considered to be large within the
social sciences.31 This lends support to the Mercyhurst method for the evaluation of
analytic products, as the ranking consistency of the three graduate student raters was
high.
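The consistency figures quoted here are pairwise correlations between raters' rankings. A correlation on rank data can be computed with the ordinary Pearson formula (equivalent to Spearman's rho when the inputs are untied ranks); the five-item rankings below are invented for illustration:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient; applied to rank data this is
    equivalent to Spearman's rho (assuming no tied ranks)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two raters ranking the same five products (1 = best); invented data.
rater1 = [1, 2, 3, 4, 5]
rater2 = [2, 1, 3, 5, 4]
print(round(pearson(rater1, rater2), 2))  # 0.8
```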
Intelligence Community Directive Number 203 also states that analysts
should “make accurate judgments and assessments.”32 Therefore, not only must the
quality of the analyst’s process be accounted for, but the correctness of estimates must be
measured as well. In terms of forecasting the correct outcome of the October 2008
elections, 68% of control group participants forecasted correctly,
whereas only 40.9% of experimental group participants did. Therefore, individuals who
did not use ECM to aid in their analysis were able to identify the actual outcome of the
research question posed to them much more often than individuals who did incorporate
30 The common measures of inter-rater reliability, known as Cohen’s and Fleiss’ Kappa, were not used
when conducting tests of correlation in this study. This is due to the fact that both measures are designed
for use in situations where the data is categorical (e.g., yes vs. no) and were therefore felt by the author to be
inappropriate measures for the type of data present (ordinal numbers).
31 Jacob Cohen, Statistical Power Analysis for the Behavioral Sciences (Philadelphia: Lawrence Erlbaum
Associates, 1988).
32 United States Government, Intelligence Community Directive Number 203.
ECM into their analytic process. Forecasting result differences were found to be
statistically significant at the 0.10 level (p-value = 0.065). Please see Figure 4.8 below
for a graphic representation of this data.
Figure 4.8
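The forecasting comparison can be reproduced approximately with a two-proportion z-test. The counts below (17 of 25 control, 9 of 22 experimental) are inferred from the reported 68% and 40.9% figures rather than taken from the raw data, and the thesis's own test may have differed slightly from this normal approximation:

```python
from math import erf, sqrt

def two_proportion_z(success1, n1, success2, n2):
    """Two-sided two-proportion z-test using the pooled standard error.

    Returns (z, p_value), with the p-value taken from the standard
    normal distribution via the error function.
    """
    p1, p2 = success1 / n1, success2 / n2
    pooled = (success1 + success2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(17, 25, 9, 22)
print(f"z = {z:.2f}, p = {p:.3f}")  # p lands just under the 0.10 threshold
```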
After looking at both the quality of the evidence supporting the participants’
analysis and the participants’ forecasting accuracy separately, the opportunity to compare
the two findings presented itself. Therefore, the following supplementary conclusion
regarding graduate student process rankings and forecasting accuracy, although not
directly related to the hypotheses of this experiment, was thought to be interesting enough
to include. The graduate student process rankings, representing the
qualitative strength of the individual’s assessment, were compared against whether or not
the individual correctly forecasted the outcome of the election. To make this comparison,
the process rankings were simply split in half and a tally of the number of individuals in
the top half and bottom half who forecasted the elections correctly was conducted. This
measurement was carried out three times: first for the group as a whole, second for just
the control group and lastly for just the experimental group. The expectation following
this comparison was that individuals in the top half of the graduate student process
rankings would forecast the winner of the elections correctly considerably more often
than individuals falling in the bottom half of the rankings. However, this was not found
to be the case. In fact, results of the tally showed little difference in forecasting accuracy
between those who were ranked better qualitatively than those who were not.
When looking at the group as a whole, 14 individuals in the top half of the
graduate student process rankings forecasted the outcome of the elections correctly and 9
individuals forecasted incorrectly, compared with an even split in the bottom half of 12
who forecasted correctly and 12 who did not. When looking at just
the control group, 9 individuals in the top half of the graduate student process rankings
forecasted the outcome of the elections correctly and 4 individuals did not, compared to 8
individuals who forecasted correctly in the bottom half and 4 who did not. Finally, when
looking at the experimental group on its own, there was an even split of 5 individuals in
the top half of the graduate student process rankings who forecasted correctly and 5 who
did not, compared to 4 individuals in the bottom half who forecasted correctly with 8 who
did not. Please see Figures 4.9, 4.10 and 4.11 below for graphic representations of this
data.
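The top-half versus bottom-half tally described above can be sketched in a few lines; the (ranking, correct) pairs below are invented, not the study's records:

```python
# Invented (process_rank, forecast_correct) pairs; rank 1 = best-ranked
# discussion section.
participants = [(rank, rank % 3 != 0) for rank in range(1, 11)]

# Sort by process ranking, split into top and bottom halves, then
# tally correct forecasts in each half.
ordered = sorted(participants)
half = len(ordered) // 2
top, bottom = ordered[:half], ordered[half:]
top_correct = sum(correct for _, correct in top)
bottom_correct = sum(correct for _, correct in bottom)
print(top_correct, bottom_correct)  # 4 3
```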
Figure 4.9
Figure 4.10
Figure 4.11
While a number of factors may have influenced this
conclusion, making it difficult to ascertain the extent to which any extraneous variable or
variables impacted it, the point in its most basic form remains the same.
Therefore, this conclusion suggests that individuals who are better writers, or who
are able to craft more convincing arguments, are not necessarily any more likely to
forecast correctly than individuals who are lacking in those skills. This notion is an
offshoot of the general argument made in Philip Tetlock’s Expert Political Judgment,
which basically states that the way in which we reason or think about things is more
important than our backgrounds and accomplishments or even our belief systems.33 How
we think, then, appears to be more important than what we think when it comes to being
proficient forecasters.
33 Philip E. Tetlock, Expert Political Judgment (Princeton: Princeton University Press, 2005).
Intelligence Community Directive Number 203 further notes that analysts
should “properly describe quality and reliability of underlying sources” and properly
express uncertainties or confidence in their judgments.34 In accordance with these standards,
experiment participants were asked to assess both their source reliability and analytic
confidence on a scale from low to high. Control group findings regarding source
reliability illustrate that 4% of participants claimed low source reliability, 80% claimed
medium and 16% claimed high. Findings for the experimental group show 4.5% of
participants claimed low reliability, 68.2% claimed medium and 27.3% claimed high.
Although these percentages reveal that approximately 11% more experimental group
participants claimed to have high source reliability than did control group participants, it
is necessary to note that this difference is a function of just two participants. As a result
source reliability findings were not found to be significant at the 0.10 level (p-value =
0.451). Please see Figure 4.12 below for a graphic representation of this data.
Figure 4.12
In terms of analytic confidence, 12% of control group participants claimed low
analytic confidence, 80% claimed medium and 8% claimed high. On the other hand,
most experimental group participants claimed low or medium analytic confidence, and
13.6% claimed high.
34 Ibid.
Although percentages reveal that approximately 6%
more experimental group participants claimed high analytic confidence than did control
group participants, this difference is a function of just
one participant. As a result, findings for analytic confidence were not found to be
significant at the 0.10 level (p-value = 0.745). Please see Figure 4.13 below for a graphic
representation of this data.
CHAPTER V:
CONCLUSIONS
As previously stated, the purpose of this study was to determine the value of ECM
within the analytic process. This was accomplished by requiring the experimental group
to incorporate ECM, through an individual to
group approach, into their analytic process. The control group on the other hand was
simply asked to analyze the question posed to them, using no particular method.
While the control group correctly predicted the outcome of the elections 68% of
the time, the experimental group forecasted the outcome correctly only 40.9% of the
time. This result was found to be statistically significant at the 0.10 level (p-value =
0.065). Additionally, a greater percentage of
experimental group participants claimed to have high source reliability and analytic
confidence than did control group participants. Therefore, the experimental group
members did considerably poorer in terms of correctly forecasting the result of the
elections, but felt they had more reliable sources and were more confident in their
assessment. However, neither source reliability (p-value = 0.451) nor analytic confidence
(p-value = 0.745) results were found to be statistically significant at the 0.10 level. Even
so, in terms of product measures only, these results paint a bleak picture of the role of
ECM within the analytic process.
In terms of process measures, the graduate student rankers placed
control group participants, on average, roughly 3 points higher than experimental group
participants in terms of the quality and logic of the evidence used to support their
analysis. Correlation scores amongst the three graduate student rankers were relatively
high across the board (0.76, 0.72 and 0.63), illustrating consistency in experiment
evaluations. Additionally, the experimental group’s conceptual models were
appreciably more complex than the control group’s in terms of the number of concepts and
relationships between concepts. This result was found to be statistically significant at the
0.10 level (p-value = 0.000). Therefore, although the experimental group’s conceptual
models appear to be more complex and thorough than the control group’s models, they
scored lower on average in regards to the reasoning used to substantiate their estimates.
Once again, these results appear largely to invalidate any suggested value of ECM within
intelligence analysis. Since the literature strongly suggests that ECM will improve
analysis, these surprising results warrant further explanation.
Sheena S. Iyengar and Mark R. Lepper, in “When Choice is Demotivating: Can
One Desire Too Much of a Good Thing?,” state that, “It is a common supposition in
modern society that the more choices the better—that the human ability to manage, and
the human desire for, choice is infinite.”35 Traditionally, research has tended to support
the concept that having some choice produces better outcomes than having no choice.
However, a growing body of literature concludes that when the amount of choices
available becomes too large, people have a very hard time managing that complexity.
To illustrate this point, Iyengar and Lepper conducted
a field experiment at an upscale grocery store whereby they observed the outcome of
consumers visiting one of two tasting booths. One tasting booth displayed only 6 jams,
while the other displayed a variety of 24 different flavored jams. Iyengar and Lepper’s
35 Sheena S. Iyengar and Mark R. Lepper, “When Choice is Demotivating: Can One Desire Too Much of
a Good Thing?,” Journal of Personality and Social Psychology 79, no. 6 (2000): 995.
findings showed that initially, shoppers who encountered the booth with 24 flavors were
more attracted to the display (stopping 60% of the time) than shoppers who encountered
the booth with only 6 (stopping only 40% of the time).36 Additionally, even though one
booth displayed only 6 flavors, whereas the other displayed 24, there were no significant
differences in the amount of jams sampled by visitors to each of the different booths.37
Finally, almost 30% of consumers who stopped at the 6 flavor booth bought a jar of jam,
while only 3% of consumers who stopped at the booth with 24 flavors did.38 This
suggests that although individuals originally found the booth with the plethora of flavors
to be more attractive, it hampered their ability and motivation to make a choice when it
came time to buy. Philip Tetlock observed a similar phenomenon among the forecasters he
refers to as hedgehogs (those who know one big thing) and foxes (those who know many
things). In one exercise, forecasters were presented with a number of
possible future scenarios in regards to a particular country and were asked to forecast the
scenario that was most likely. This presentation of scenarios did not substantially affect
the predictions of hedgehogs who were quite easily able to reject scenarios that they
believed would not actually happen.40 However, the foxes, being more open-minded,
found it very difficult not to consider even the strange or implausible scenarios.41
Therefore, for this group in particular, the danger of attributing limited resources to the
contemplation of a plethora of possibilities did little more than send them on a wild goose
chase. This illustrates that “foxes become more susceptible than hedgehogs to a serious
bias: the tendency to assign so much likelihood to so many possibilities that they become
entangled in self-contradictions.”42
Importance of Convergence
Much of the intelligence literature has emphasized divergent
thought, focusing on the need for thinking outside the box, maintaining an open mind and
generating as many ideas as possible. Convergent thinking, by contrast,
has perpetually received a bad rap. However, research has finally begun to unearth the
benefits of a combined approach including both divergent and convergent thinking. In
contrast to divergent thinking, convergent thinking is
“oriented toward deriving the single best (or correct) answer to a clearly defined
question.”43 Arthur Cropley, who has taught at the
University of Latvia for the past eleven years, argues that divergent thinking is essential
to the creation of novel ideas, but that convergent thinking is then vital to the exploration
of those ideas.44 Truly utilitarian creative thought, says Cropley, can only be achieved
through the generation of ideas through divergence, followed by the criticism and
refinement of those ideas through convergence.45 As Michael Handel notes, “while the lack
of competition and variety in intelligence is a recipe for failure, its institution does not
guarantee success.”46 Handel, joint founding editor of the journal Intelligence and
42 Ibid.
43 Arthur Cropley, “In Praise of Convergent Thinking,” Creativity Research Journal 18, no.3 (2006), 391.
44 Cropley, “In Praise,” 398.
45 Ibid.
46 Michael I. Handel, “Intelligence and the Problem of Strategic Surprise,” The Journal of Strategic
Studies (September 1984), 268. In Stephen Marrin, “Preventing Intelligence Failures by Learning from the
National Security, further notes that while divergent thinking exercises lead to an
increased number of opinions for consideration, they are not able to aid in ascertaining
the best alternative.47 Richard Betts, director of the Institute of War and Peace Studies,
and the director of the International Security Policy Program at Columbia University,
makes a similar point in “Analysis, War, and Decision: Why Intelligence Failures Are
Inevitable.” Betts states that, “To the extent that multiple advocacy works, and succeeds
resources of all contending analysts, it may simply highlight the ambiguity rather than
resolve it.”48 Essentially, both Handel and Betts acknowledge that while divergent
thinking methods may indeed be useful and necessary within intelligence analysis, they
are not without their limitations. Specifically, the generation of numerous ideas alone does
not automatically result in better answers, highlighting the need for a combination of both
divergent and convergent thinking.
Final Thoughts
The jam experiment and scenario exercises discussed above tie directly into the
findings of this experiment, showing that although experimental group conceptual models
were significantly larger than control group conceptual models, the control group did
significantly better than the experimental group in forecasting the correct outcome of the
elections. Due to the divergent nature of the conceptual modeling
method employed in this study, the generation of a multitude of ideas seemed to do little
more than confuse and overwhelm experimental group participants. Faced with such a
wide array of concepts, participants appear to have struggled to make
sense of the relevant relationships and to identify adequately the information most
pertinent to their analysis. It is here that the importance of convergence, discussed in
detail above, becomes relevant. Experimental group participants, having been involved
in a structured individual to group divergent thinking exercise, were left with many more
options to consider than control group participants who simply set off to research the
question on their own. In this sense,
the divergent thinking exercise appeared to work, leaving the experimental group with a
much wider array of concepts to consider. However, this experiment found that while
this is true, it is not enough. Divergent thinking on its own appears to be a handicap
rather than an aid.
Therefore, it is likely that experimental group participants, once faced with the
plethora of ideas generated by the group, would have greatly benefitted from a structured
convergent thinking exercise before constructing their conceptual
models. The goal of such an exercise would be to critically evaluate the ideas proposed, possibly
eliminating those concepts that were clearly off base and prioritizing what was left into a
more manageable framework. One convergent approach commonly taken by
groups is based on the number of individual members within that group. For example,
oftentimes a group with four team members will find a way to organize and break down
information into four separate sections, thereby allowing them to assign one section to
each team member. While this is certainly not the ideal way to engage in convergent
thinking, it is likely better than not incorporating it at all. However, at this point the best
method for engaging in convergent thinking within the process of ECM has yet to be
determined.
Future Research
Future research should focus on incorporating both divergent
and convergent thinking into the method for the construction of conceptual models.
While divergence was sufficiently accounted for in this study, due to time limitations
convergence was not. Therefore, it would be interesting to see the effect of taking the
individual to group conceptual modeling method employed in this study one step further.
As Surowiecki, McKeachie and Janis tell us, starting the process at the individual level
and then moving into group collaboration is very valuable. However, at this juncture,
once all possibilities the group can think of have been accounted for, it is necessary for
the group to narrow the scope of the conceptual model into only the most relevant
concepts and relationships and then organize them accordingly. Therefore, additional
research is needed to establish the best method for carrying out this task.
BIBLIOGRAPHY
Allison, Graham T. “Conceptual Models and the Cuban Missile Crisis.” In The Sociology
of Organizations: Classic, Contemporary and Critical Readings, ed. Michael
Jeremy Handel. Thousand Oaks: SAGE, 2003.
Budd, John W. “Mind Maps as Classroom Exercises.” Journal of Economic Education (Winter
2004).
Church, Forrest. Introduction to The Jefferson Bible: The Life and Morals of Jesus of
Nazareth, by Thomas Jefferson, 1-31. Boston: Beacon Press, 1989.
Cohen, Jacob. Statistical Power Analysis for the Behavioral Sciences. Philadelphia:
Lawrence Erlbaum Associates, 1988.
Cropley, Arthur. “In Praise of Convergent Thinking.” Creativity Research Journal 18,
no. 3 (2006).
Garson, G. David. Guide to Writing Empirical Papers, Theses, and Dissertations. New
York: CRC Press, 2002.
Heuer, Richards J. Psychology of Intelligence Analysis. Center for the Study of
Intelligence, 1999.
Iyengar, Sheena S., and Mark R. Lepper. “When Choice is Demotivating: Can One Desire
Too Much of a Good Thing?” Journal of Personality and Social Psychology 79,
no. 6 (2000).
Jarvelin, Kalervo and T.D. Wilson. “On Conceptual Models for Information Seeking and
Retrieval Research.” Information Research 9, no. 1 (October 2003),
http://informationr.net/ir/9-1/paper163.html (accessed January 15, 2009).
Mangio, Charles A. and Bonnie J. Wilkinson. “Intelligence Analysis: Once Again.” Paper
presented at the annual international meeting of the International Studies
Association, San Francisco, California 26 March, 2008.
Miller, George A. “The Magical Number Seven, Plus or Minus Two: Some Limits on our
Capacity for Processing Information.” Psychology Review 63 (1956).
Novak, Joseph and Alberto Canas. The Theory Underlying Concept Maps and How to
Construct and Use Them, Technical Report IHMC Cmap Tools 2006-01 Rev 01-
2008, Florida Institute for Human and Machine Cognition, 2008.
http://cmap.ihmc.us/Publications/ResearchPapers/TheoryUnderlyingConceptMap
s.pdf (accessed August 8, 2008).
Surowiecki, James. The Wisdom of Crowds. New York: Anchor Books, 2004.
Tetlock, Philip E. Expert Political Judgment. Princeton: Princeton University Press, 2005.
United States Government. Intelligence Community Directive Number 203 (June 21,
2007),
http://www.fas.org/irp/dni/icd/icd-203.pdf (accessed January 26, 2009).
APPENDICES
Please sign up for at least 1 time slot in column A & 1 time slot in Column B
Even though you are signing up for multiple spots, you will only be asked to come in
once on the 20th & once on the 27th
Contact Information:
Shannon Ferrucci
sferru13@mercyhurst.edu
(315) 525-3967
Erie, PA 16509
Please describe the proposed research and its purpose, in narrative form:
The purpose of this study is to assess whether or not explicit conceptual modeling
improves collection and the subsequent analysis. The more complicated a question is,
and the more concepts that play a part in answering that question, the harder it is to
recall all of those concepts simply from memory. Therefore, putting these concepts and
their relationships to each other down on paper can be extremely useful. Explicit
conceptual modeling prior to collection should help to improve the efficiency of the
collection process as the model provides you with a basis for what types of information
to look for. Also, since the conceptual model is not static, but a fluid diagram that
evolves as you learn more about a certain topic, the model should help to highlight and
minimize gaps in knowledge. These improvements in collection should then improve the
subsequent analysis.
Consent Form
Debriefing Form
Research Question
Writing Utensils
Post-Test Questionnaire
Procedure:
One week prior to the start of the experiment, I will make an appearance in various
undergraduate and graduate intelligence studies classes in order to promote my
experiment and have students sign up. The students will be asked to provide their
availability on two separate dates. I will then email them a designated time slot for both
dates. Date and time assignments, as well as group assignments, will be random. On the
first day of the experiment, the control group will show up for half an hour. They will be
provided with a research question regarding upcoming 2008 Zambian presidential
elections and a semi-structured format for a written intelligence product. I will also give
them explanations of both source reliability and analytic confidence. At the end of the
half hour they will be sent home and given one week to research and analyze the question
posed to them. The experimental group on the other hand will be asked to come in for an
hour and a half on the first day of the experiment. They will be provided with the same
research question as the control group, will be given the same format for a written
intelligence product and will also receive the same information regarding source
reliability and analytic confidence. However, this group will undergo a small training
session on what conceptual modeling is and how to create a model using a piece of free
conceptual modeling software, such as Mindomo. After the training session the
participants will each be asked to make a list of concepts off the top of their heads that
they feel are relevant to the research question I provided them with. Next, we will go
through and make a master list, consolidating all of the participants’ individual lists,
highlighting concepts that were commonly found, significant differences in opinion and
uncommon but highly useful concepts. Finally, each individual will then be asked to use
this master list to create a conceptual model using software that shows the perceived
relationships between concepts. Participants will then discuss the models they have
created with the individual sitting next to them in order to share their ideas and see how
another person visualized the same information. They will then print out a copy for
themselves and a copy for me.
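The master-list consolidation step described above can be sketched programmatically. This is only an illustration of the bookkeeping involved; the concept names below are invented for the example, and the actual exercise was conducted by hand with the group:

```python
from collections import Counter

# Hypothetical individual concept lists for the election question (invented data).
individual_lists = [
    ["ruling party", "economy", "tribal affiliation", "voter turnout"],
    ["economy", "copper prices", "ruling party", "campaign funding"],
    ["ruling party", "economy", "succession sympathy vote"],
]

# Count how many participants listed each concept.
counts = Counter(concept for lst in individual_lists for concept in lst)
n = len(individual_lists)

# Master list ordered by how commonly each concept was mentioned.
master_list = sorted(counts, key=counts.get, reverse=True)
common = [c for c in master_list if counts[c] == n]    # everyone listed it
uncommon = [c for c in master_list if counts[c] == 1]  # candidates for "AHA" moments

print("Master list:", master_list)
print("Common concepts:", common)
print("Uncommon (review for 'AHA' value):", uncommon)
```

The sorting simply surfaces commonalities and rare-but-useful concepts for discussion; it does not replace the group's judgment about which concepts matter.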
On the second day of the experiment, a week from the first day, both the experimental
and control groups will come back in. The experimental group will come in for half an
hour. They will hand in their written analysis and an updated conceptual model that they
have modified to reflect what they learned through their research. After doing this they
will answer a short post-experiment questionnaire and will then be debriefed. The
control group on the other hand will come in for an hour and a half on this day. They
will hand in their written analysis and will then simply be told to visualize the concepts
that were important in answering the research question using bubbles and lines. After
completion they will hand in their visualization, complete a post-experiment
questionnaire and then be debriefed.
Participants who successfully complete all experiment responsibilities will receive extra
credit from intelligence professors and pizza and soda will be offered to all those who
participate in the study. Three second-year intelligence studies graduate students, who
were not experiment participants, will then evaluate the written analyses using the same
criteria that students are graded against in the Intelligence Communications class that
these graduate students have already successfully completed.
1. Do you have external funding for this research (money coming from outside the
College)? Yes[ ] No[X]
2. Will the participants in your study come from a population requiring special
protection; in other words, are your subjects someone other than Mercyhurst College
students (i.e., children 17-years-old or younger, elderly, criminals, welfare recipients,
persons with disabilities, NCAA athletes)? Yes[ ] No[X]
If your participants include a population requiring special protection, describe how you
will obtain consent from their legal guardians and/or from them directly to ensure their
full and free consent to participate.
N/A
Indicate the approximate number of participants, the source of the participant pool, and
recruitment procedures for your research:
Will participants receive any payment or compensation for their participation in your
research (this includes money, gifts, extra credit, etc.)? Yes[X] No[ ]
If yes, please explain: Students will obtain extra credit from the intelligence professors
willing to grant it for participation in an experiment and all participants will be offered
pizza and refreshments at the end of the experiment.
3. Will the participants in your study be at any physical or psychological risk (risk is
defined as any procedure that is invasive to the body, such as injections or drawing blood;
any procedure that may cause undue fatigue; any procedure that may be of a sensitive
nature, such as asking questions about sexual behaviors or practices) such that
participants could be emotionally or mentally upset? Yes[ ] No[X]
Describe any harmful effects and/or risks to the participants' health, safety, and emotional
or social well being, incurred as a result of participating in this research, and how you
will ensure that these risks will be mitigated:
None.
4. Will the participants in your study be deceived in any way while participating in this
research? Yes[ ] No[X]
If your research makes use of any deception of the respondents, state what other
alternative (e.g., non-deceptive) procedures were considered and why they weren't
chosen:
N/A
5. Will you have a written informed consent form for participants to sign, and will you
have appropriate debriefing arrangements in place? Yes[X] No[ ]
Describe how participants will be clearly and completely informed of the true nature and
purpose of the research, whether deception is involved or not (submit informed consent
form and debriefing statement):
Prior to the start of the experiments, participants will be provided with a general
overview of what will occur during the session as well as the consent form, which will
also describe what is expected of them. Following the experiment participants will be
asked to fill out an administrative questionnaire and will then be provided with a
debriefing statement that will explain how the results from the session will be used
(please see forms at the end of this proposal).
Please include the following statement at the bottom of your informed consent form:
“Research at Mercyhurst College which involves human participants is overseen by
the Institutional Review Board. Questions or problems regarding your rights as a
participant should be addressed to Mr. Tim Harvey Institutional Review Board
Chair; Mercyhurst College; 501 East 38th Street; Erie, Pennsylvania 16546-0001;
Telephone (814) 824-3372.”
6. Describe the nature of the data you will collect and your procedures for ensuring that
confidentiality is maintained, both in the record keeping and presentation of this data:
Names are not required for my research and thus no names will be used in the recording
of the results or the presentation of my data. Names will only be used to notify
professors of participation in order for them to correctly assign extra credit.
7. Identify the potential benefits of this research on research participants and humankind
in general.
For participants:
An opportunity to practice the intelligence analysis skills they have learned in the
classroom in an experiment aimed at testing the value of explicit conceptual modeling as
it applies to the quality of the analysis produced. Students are often asked to complete
short written intelligence assignments with quick turnaround times in Intelligence Studies
courses. This experiment hopes to validate a particular method for the creation of
conceptual models, which if used by intelligence students should increase efficiency in
collection and accuracy in analysis.
The purpose of this research is to test the value of variations in analytic approaches as they apply
to the quality of the analysis produced.
Your participation involves the development of a short analytic product, the completion of a data
visualization exercise and the filling out of a post-experiment questionnaire. This process will
require your onsite attendance today and one week from today during the pre-determined timeslot
that was designated to you. In total, time spent onsite should not exceed two hours; however,
some participation on your own time is required throughout the week. Your name WILL NOT
appear in any information disseminated by the researcher. Your name will only be used to notify
professors of your participation in order for them to assign extra credit.
There are no foreseeable risks or discomforts associated with your participation in this study.
Participation is voluntary and you have the right to opt out of the study at any time for any reason
without penalty.
_________________________________ __________________
Signature Date
_________________________________ __________________
If you have any further questions about this research you can contact me at
sferru13@mercyhurst.edu
The purpose of this research is to test the value of explicit conceptual modeling as it applies to the
quality of the analysis produced. Furthermore, the following experiment will test the
effectiveness of a structured individual to group conceptual modeling method.
Your participation involves an instruction period and training exercise, the development of a
short analytic product with accompanying model and the filling out of a post-experiment
questionnaire. This process will require your onsite attendance today and one week from today
during the pre-determined timeslots that have been designated to you. In total, time spent onsite
should not exceed two hours; however, some participation on your own time is required
throughout the week. Your name WILL NOT appear in any information disseminated by the
researcher. Your name will only be used to notify professors of your participation in order for
them to assign extra credit.
There are no foreseeable risks or discomforts associated with your participation in this study.
Participation is voluntary and you have the right to opt out of the study at any time for any reason
without penalty.
_________________________________ __________________
Signature Date
_________________________________ __________________
If you have any further questions about Conceptual Modeling or this research, you can contact me
at sferru13@mercyhurst.edu
Due to the 19 August 2008 death of Zambia’s President Mwanawasa, early presidential
elections will take place on 30 October 2008 in accordance with the Zambian constitution
that requires new elections to be held within 90 days of a president's untimely departure
from office. Who will win this upcoming Zambian presidential election (Rupiah Banda,
Michael Sata or Hakainde Hichilema) and why?
Source Reliability:
Source Reliability reflects the accuracy and reliability of a particular source over time.
Sources with high reliability have been proven to have produced accurate, consistently
reliable, information in the past. Sources with low reliability lack the accuracy and
proven track record commensurate with more reliable sources.
○ In this experiment source reliability will be measured on a low - high scale conveying the
reliability of the sources used for that piece of intelligence/report.
○ For more information regarding internet source reliability please refer to:
http://www.library.jhu.edu/researchhelp/general/evaluating/
Analytic Confidence:
Analytic Confidence reflects the level of confidence an analyst has in his or her estimates
and analyses. It is not the same as using words of estimative probability, which indicate
likelihood. It is possible for an analyst to suggest an event is virtually certain based on
the available evidence, yet have a low amount of confidence in that forecast due to a
variety of factors or vice versa.
○ In this experiment Analytic Confidence will be measured on a low - high scale.
○ For more information regarding factors contributing to the assessment of analytic confidence
see the Peterson Table of Analytic Confidence provided below.
NAME:
FORECAST:
It is (likely, highly likely, almost certain) that (Rupiah Banda, Michael Sata, Hakainde
Hichilema)
BULLETED DISCUSSION:
Thank you for agreeing to participate in this study! Please take a few moments to answer
the following questions. Your feedback is greatly appreciated.
1. Do you feel as though you will be able to dedicate a sufficient amount of time to
working on this experiment over the next week?
Yes Maybe No
3. Please rate how interested you are in this study, with 1 being not interested and 5
being extremely interested.
1 2 3 4 5
4. Please rate how useful you feel structured approaches to the analytic process are,
with 1 being not useful and 5 being extremely useful.
1 2 3 4 5
CONTACT INFORMATION
Please feel free to get a hold of me during the week if you have any questions or
problems! Thank you again for your participation.
Email: sferru13@mercyhurst.edu
• To start you will see that there is a single bubble in the center of the screen
○ If you click on the text saying start here you can replace that with
whatever words or concept you deem appropriate
○ In this case, since it is the 1st bubble it is important to start by entering the
specific requirements you need to answer based on the research question
provided to you
• Now if you simply place your cursor over the center of the bubble you will
see a choice of 6 icons. Let’s start on the top left hand corner.
○ If you click on the cross with arrows you can move the bubble anywhere
you like on the screen
○ Moving to the top right, if you click on the X your bubble will disappear
(to get it back simply click the undo button on the top left of your screen)
○ Clicking on the middle icon on the right hand side of the bubble allows
you to create a new sibling bubble (i.e. a bubble that does not spring from
the 1st bubble, but is entirely separate)
○ The blue circles icon allows you to show directional relationships through
the use of arrowed lines. By clicking on the icon and dragging your cursor
to the sibling bubble you just made in the previous step you can see an
example of this
○ Clicking on the middle bottom icon of one of the bubbles you have created
allows you to make a child balloon (i.e. a bubble that does spring from a
previous bubble, generally these bubbles have some sort of direct
relationship to each other, with the concept in the child balloon being a
sub-concept of the original parent balloon)
○ Lastly, clicking on the bottom left hand icon allows you to change the
color of the balloon
○ Also, if you would like to print your conceptual model you can click the
set print area button at the upper left. This will help you to ensure your
entire conceptual model is within the printable area of the page
○ Also, to zoom in and out you can scroll up and down on your mouse or hit
the plus and minus buttons on the upper left
• Now take about 5 minutes to familiarize yourself with the software on your
own
○ To do this begin to craft a practice conceptual model on important things
to consider when buying a new car (ex. gas mileage)
○ Practice using the different icons that we just went over and try to
incorporate each function into your conceptual model at least once
○ I will walk around to offer suggestions and take questions
• To start you will see that there is a single bubble in the center of the screen
○ If you click on the text saying start here you can replace that with
whatever words or concept you deem appropriate
• Now if you simply place your cursor over the center of the bubble you will
see a choice of 6 icons. Let’s start on the top left hand corner.
○ If you click on the cross with arrows you can move the bubble anywhere
you like on the screen
○ Moving to the top right, if you click on the X your bubble will disappear
(to get it back simply click the undo button on the top left of your screen)
○ Clicking on the middle icon on the right hand side of the bubble allows
you to create a new sibling bubble (i.e. a bubble that does not spring from
the 1st bubble, but is entirely separate)
○ The blue circles icon allows you to show directional relationships through
the use of arrowed lines. By clicking on the icon and dragging your cursor
to the sibling bubble you just made in the previous step you can see an
example of this
○ Clicking on the middle bottom icon of one of the bubbles you have created
allows you to make a child balloon (i.e. a bubble that does spring from a
previous bubble, generally these bubbles have some sort of direct
relationship to each other, with the child balloon being subordinate to the
original parent balloon)
○ Lastly, clicking on the bottom left hand icon allows you to change the
color of the balloon
• Come together as a group and consolidate individual lists into group list on
board
○ Go around the room with each student reading off their list
○ Emphasize commonalities with plus signs
○ Highlight legitimate differences in opinion as food for thought
○ Take note of “AHA” moments
Very important concept that only a select few thought of, but all
recognize as essential to the question
• Go back onto Bubbl.us and create your own conceptual model combining
your individual list and thoughts with that of the collaborative group list
○ Remember to start with requirements and build from there
○ Highlight relationships between concepts and directional flow of those
relationships where applicable
• Briefly look at the way someone sitting next to you has set up their
conceptual model
○ Take away ideas for your own
○ Offer suggestions or alternatives
Follow-Up Questionnaire B
Thanks for your participation! Please take a few moments to answer the following
questions. Your feedback is greatly appreciated.
1. Please rate your understanding of conceptual modeling prior to this study, with 1
being extremely low and 5 being extremely high.
1 2 3 4 5
2. Please rate your understanding of conceptual modeling following this study, with
1 being extremely low and 5 being extremely high.
1 2 3 4 5
3. Please rate how often explicit conceptual modeling has been a part of your
personal analytic process prior to this experiment, with 1 being never and 5 being
every time you produce an intelligence estimate.
1 2 3 4 5
4. Please rate how often you plan to incorporate explicit conceptual modeling into
your personal analytic process in the future, with 1 being never and 5 being every
time you produce an intelligence estimate.
1 2 3 4 5
5. Please rate your interest in the study after having completed the experiment, with
1 being not interested and 5 being extremely interested.
1 2 3 4 5
6. Based on your experience in this experiment, how useful do you feel structured
approaches to the analytic process are, with 1 being not useful and 5 being
extremely useful.
1 2 3 4 5
7. Please provide any additional comments you may have regarding conceptual
modeling in general or any particular part of this experiment.
Follow-Up Questionnaire A
Thanks for your participation! Please take a few moments to answer the following
questions. Your feedback is greatly appreciated.
1. Please rate your understanding of conceptual modeling prior to this study, with 1
being extremely low and 5 being extremely high.
1 2 3 4 5
2. Please rate your understanding of conceptual modeling following this study, with
1 being extremely low and 5 being extremely high.
1 2 3 4 5
3. Please rate how useful you found the conceptual modeling training provided at the
beginning of this experiment to be, with 1 being not at all helpful and 5 being
extremely helpful.
1 2 3 4 5
4. Please rate how often explicit conceptual modeling has been a part of your
personal analytic process prior to this experiment, with 1 being never and 5 being
every time you produce an intelligence estimate.
1 2 3 4 5
5. Please rate how often you plan to incorporate explicit conceptual modeling into
your personal analytic process in the future, with 1 being never and 5 being every
time you produce an intelligence estimate.
1 2 3 4 5
6. Please rate whether or not you found that explicit conceptual modeling in this
experiment aided you in developing a more thorough and nuanced intelligence
analysis.
7. Please rate how effective you think the conceptual modeling method used in this
experiment, inclusive of both individual work and group collaboration, was.
1 2 3 4 5
8. Please rate how useful you found the use of the technology aid Bubbl.us to be in
creating and updating your conceptual models, with 1 being not useful and 5
being extremely useful.
1 2 3 4 5
Yes Maybe No
10. Please rate your interest in the study after having completed the experiment, with
1 being not interested and 5 being extremely interested.
1 2 3 4 5
11. Based on your experience in this experiment, how useful do you feel structured
approaches to the analytic process are, with 1 being not useful and 5 being
extremely useful.
1 2 3 4 5
13. Please provide any additional comments you may have regarding conceptual
modeling in general or any particular part of this experiment.
Participation Debriefing B
Thank you for participating in this research. I appreciate your contribution and
willingness to support the student research process.
This experiment was designed to test the specific part of the analytic process termed
conceptual modeling. Currently there has been little research done on the topic of
conceptual modeling within the field of intelligence analysis, and this study hopes to take
the first of many steps in establishing the importance of explicit conceptual modeling
within the analytic process.
If you have any further questions about conceptual modeling or this research you can
contact me at sferru13@mercyhurst.edu.
Participation Debriefing A
Thank you for participating in this research. I appreciate your contribution and
willingness to support the student research process.
The purpose of this study was to test the value of explicit conceptual modeling as it
applies to the quality of the analysis produced. Furthermore, this experiment tested the
effectiveness of a structured individual to group conceptual modeling method.
If you have any further questions about conceptual modeling or this research you can
contact me at sferru13@mercyhurst.edu.
***The following results are based on a 0.05 level of significance. However, due to
the fact that this research is exploratory in nature, a 0.10 level of significance was
deemed most appropriate and is therefore reflected in the text of this thesis.***
Results:
Null: there is no difference between control and experimental for source reliability.
Alternative: there is a difference between control and experimental for source reliability.
[Box plot of Response (source reliability) by Group (Control, Experimental): the box
plot shows outliers for the control group.]
Group Statistics

Group          N     Mean     Std. Deviation   Std. Error Mean
Control        25    2.1200   .43970           .08794
Experimental   22    2.2273   .52841           .11266
Independent Samples Test
According to Levene’s test, (P-value = 0.142) > (α = 0.05); thus the assumption of equal
variances is satisfied.
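The decision rule applied throughout this appendix (run Levene's test first, then choose the form of the independent samples t-test accordingly) can be sketched in Python with SciPy. This is an illustration only: the original analysis was produced with a statistics package, and the scores below are invented for the example:

```python
from scipy import stats

# Invented example scores on the 1-3 rating scale (not the study data).
control = [2.0, 2.5, 2.0, 1.5, 2.5, 2.0, 3.0, 2.0, 2.5, 2.0]
experimental = [2.5, 2.0, 3.0, 2.5, 1.5, 2.0, 3.0, 2.5, 2.0, 1.5]

alpha = 0.05

# Levene's test: H0 is that the two groups have equal variances.
_, levene_p = stats.levene(control, experimental)
equal_var = levene_p > alpha  # assumption satisfied -> pooled-variance t-test

# Independent samples t-test; equal_var=False applies Welch's correction instead.
t_stat, t_p = stats.ttest_ind(control, experimental, equal_var=equal_var)

print(f"Levene p = {levene_p:.3f} (equal variances assumed: {equal_var})")
print(f"t = {t_stat:.3f}, two-tailed p = {t_p:.3f}")
```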
Independent Samples Test
Null: there is no difference between control and experimental for analytic confidence.
Alternative: there is a difference between control and experimental for analytic confidence.
[Box plot of Response for Analytic Confidence by Group. Normal Q-Q plot: most points
are close to the line, thus the assumption of normality is satisfied for the group
Experimental.]
Group Statistics (Response for Analytical Confidence)

Group          N     Mean     Std. Deviation   Std. Error Mean
Control        25    1.9600   .45461           .09092
Experimental   22    1.9091   .61016           .13009
According to Levene’s test, (P-value = 0.137) > (α = 0.05); thus the assumption of equal
variances is satisfied.
Independent Samples Test
Null: there is no difference between control and experimental for forecast results.
Alternative: there is a difference between control and experimental for forecast results.
[Box plot of Response for Forecast Results by Group (Control, Experimental).]
[Normal Q-Q plot for Control: most points are not close to the line, thus the assumption
of normality is not satisfied for the group Control.]
[Normal Q-Q plot for Experimental: most points are not close to the line, thus the
assumption of normality is not satisfied for the group Experimental.]
Because normality is not satisfied, the independent samples t-test cannot be used. The
Wilcoxon Rank Sum test, a non-parametric test, is used instead.
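The fallback described here (switching to the non-parametric Wilcoxon rank-sum / Mann-Whitney U test when normality fails) can be sketched with SciPy. The data are invented for illustration, and the Shapiro-Wilk test stands in for the Q-Q plot inspection used in this appendix:

```python
from scipy import stats

# Invented forecast scores (e.g., 1 = incorrect, 2 = correct); not the study data.
control = [2, 2, 2, 1, 2, 2, 1, 2, 2, 2, 1, 2]
experimental = [1, 2, 1, 1, 2, 1, 1, 1, 2, 1, 1, 2]

alpha = 0.05

# One numeric way to test normality; the thesis appendix used Q-Q plots instead.
_, p_ctrl = stats.shapiro(control)
_, p_exp = stats.shapiro(experimental)

if p_ctrl < alpha or p_exp < alpha:
    # Normality not satisfied: use the Mann-Whitney U (Wilcoxon rank-sum) test.
    u_stat, p_value = stats.mannwhitneyu(control, experimental, alternative="two-sided")
    print(f"Mann-Whitney U = {u_stat}, two-tailed p = {p_value:.3f}")
else:
    t_stat, p_value = stats.ttest_ind(control, experimental)
    print(f"t = {t_stat:.3f}, two-tailed p = {p_value:.3f}")
```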
Descriptive Statistics
Ranks
Test Statistics (Response for Forecast Results; grouping variable: Group)

Mann-Whitney U           200.500
Wilcoxon W               453.500
Z                        -1.844
Asymp. Sig. (2-tailed)   .065
Is there a difference between control and experimental for question, “Please rate how
interested you are in this study, with 1 being not interested and 5 being extremely
interested.”?
Null: There is no difference between control and experimental for question, “Please rate
how interested you are in this study, with 1 being not interested and 5 being extremely
interested.”
Alternative: There is a difference between control and experimental for question, “Please
rate how interested you are in this study, with 1 being not interested and 5 being
extremely interested.”
[Box plot of Response for Q. 1 in Pre-Experiment Questionnaire by Group (Control,
Experimental): the box plot shows no outliers for both groups.]
[Normal Q-Q plot: the assumption of normality is satisfied for the group Control.]
Group Statistics
According to Levene’s test, (P-value = 0.476) > (α = 0.05); thus the assumption of equal
variances is satisfied.
Independent Samples Test
Is there a difference between control and experimental for question, “Please rate your
interest in the study after having completed the experiment, with 1 being not interested
and 5 being extremely interested.”?
Null: There is no difference between control and experimental for question, “Please rate
your interest in the study after having completed the experiment, with 1 being not
interested and 5 being extremely interested.”
Alternative: There is a difference between control and experimental for question, “Please
rate your interest in the study after having completed the experiment, with 1 being not
interested and 5 being extremely interested.”
[Box plot of Response for Q. 1 in Post-Experiment Questionnaire by Group. Normal Q-Q
plot: except one, most points are close to the line, thus the assumption of normality is
satisfied for the group Experimental.]
Independent Samples Test
According to Levene’s test, (P-value = 0.548) > (α = 0.05); thus the assumption of equal
variances is satisfied.
Group Statistics
Is there a difference between control and experimental for question, “Please rate how
useful you feel structured approaches to the analytic process are, with 1 being not useful
and 5 being extremely useful.”?
Null: There is no difference between control and experimental for question, “Please rate
how useful you feel structured approaches to the analytic process are, with 1 being not
useful and 5 being extremely useful.”
Alternative: There is a difference between control and experimental for question, “Please
rate how useful you feel structured approaches to the analytic process are, with 1 being
not useful and 5 being extremely useful.”
[Box plot: Response for Q. 2 in Pre-Experiment by Group (Control, Experimental). The box plot shows outliers for the Control group. Even with the presence of outliers, normality is satisfied (see below), and these values are important for the analysis; thus the decision is not to remove the outliers.]
[Normal probability plots: Response for Q. 2 in Pre-Experiment, groups Control and Experimental.]
According to Levene’s test, (P-value = 0.541) > (α = 0.05); thus the assumption of equal variances is satisfied.
Independent Samples Test
Null: There is no difference between control and experimental for question, “Based on
your experience in this experiment, how useful do you feel structured approaches to the
analytic process are, with 1 being not useful and 5 being extremely useful.”
Alternative: There is a difference between control and experimental for question, “Based
on your experience in this experiment, how useful do you feel structured approaches to
the analytic process are, with 1 being not useful and 5 being extremely useful.”
Group Statistics

Response for Q. 2 in Post-Experiment   N    Mean     Std. Deviation   Std. Error Mean
  Control                              25   4.0800   .81240           .16248
  Experimental                         20   4.2000   .76777           .17168
Response for Q. 2 in Post-Experiment   F      Sig.   t       df       Sig. (2-tailed)
  Equal variances assumed              .123   .727   -.504   43       .617
  Equal variances not assumed                        -.508   41.758   .614
According to Levene’s test, (P-value = 0.727) > (α = 0.05); thus the assumption of equal variances is satisfied.
Is there a difference between control and experimental for question, “Please rate your
understanding of conceptual modeling prior to this study, with 1 being extremely low and
5 being extremely high.”
Null: There is no difference between control and experimental for question, “Please rate
your understanding of conceptual modeling prior to this study, with 1 being extremely
low and 5 being extremely high.”
Alternative: There is a difference between control and experimental for question, “Please
rate your understanding of conceptual modeling prior to this study, with 1 being
extremely low and 5 being extremely high.”
Response for Q. 3 in Pre-Experiment    F      Sig.   t       df       Sig. (2-tailed)
  Equal variances assumed              .137   .713   1.340   43       .187
  Equal variances not assumed                        1.335   40.189   .189
According to Levene’s test, (P-value = 0.713) > (α = 0.05); thus the assumption of equal variances is satisfied.
Group Statistics

Response for Q. 3 in Pre-Experiment    N    Mean     Std. Deviation   Std. Error Mean
  Control                              25   3.5600   1.00333          .20067
  Experimental                         20   3.1500   1.03999          .23255
Is there a difference between control and experimental for question, “Please rate your
understanding of conceptual modeling following this study, with 1 being extremely low
and 5 being extremely high.”?
Null: There is no difference between control and experimental for question, “Please rate
your understanding of conceptual modeling following this study, with 1 being extremely
low and 5 being extremely high.”
Alternative: There is a difference between control and experimental for question, “Please
rate your understanding of conceptual modeling following this study, with 1 being
extremely low and 5 being extremely high.”
Response for Q. 3 in Post-Experiment   F      Sig.   t      df       Sig. (2-tailed)
  Equal variances assumed              .944   .337   .000   43       1.000
  Equal variances not assumed                        .000   43.000   1.000
According to Levene’s test, (P-value = 0.337) > (α = 0.05); thus the assumption of equal variances is satisfied.
Is there a difference between control and experimental for question, “Please rate how
often explicit conceptual modeling has been a part of your personal analytic process prior
to this experiment, with 1 being never and 5 being every time you produce an intelligence
estimate.”?
Null: There is no difference between control and experimental for question, “Please rate how often explicit conceptual modeling has been a part of your personal analytic process prior to this experiment, with 1 being never and 5 being every time you produce an intelligence estimate.”
Alternative: There is a difference between control and experimental for question, “Please rate how often explicit conceptual modeling has been a part of your personal analytic process prior to this experiment, with 1 being never and 5 being every time you produce an intelligence estimate.”
Response for Q. 4 in Pre-Experiment    F      Sig.   t      df       Sig. (2-tailed)
  Equal variances assumed              .002   .968   .162   43       .872
  Equal variances not assumed                        .162   41.211   .872
According to Levene’s test, (P-value = 0.968) > (α = 0.05); thus the assumption of equal variances is satisfied.
Group Statistics

Response for Q. 4 in Pre-Experiment    N    Mean     Std. Deviation   Std. Error Mean
  Control                              25   2.8000   1.04083          .20817
  Experimental                         20   2.7500   1.01955          .22798
Is there a difference between control and experimental for question, “Please rate how
often you plan to incorporate explicit conceptual modeling into your personal analytic
process in the future, with 1 being never and 5 being every time you produce an
intelligence estimate.”?
Null: There is no difference between control and experimental for question, “Please rate
how often you plan to incorporate explicit conceptual modeling into your personal
analytic process in the future, with 1 being never and 5 being every time you produce an
intelligence estimate.”
Alternative: There is a difference between control and experimental for question, “Please
rate how often you plan to incorporate explicit conceptual modeling into your personal
analytic process in the future, with 1 being never and 5 being every time you produce an
intelligence estimate.”
Response for Q. 4 in Post-Experiment   F      Sig.   t       df       Sig. (2-tailed)
  Equal variances assumed              .086   .770   -.504   43       .617
  Equal variances not assumed                        -.501   39.631   .619
According to Levene’s test, (P-value = 0.770) > (α = 0.05); thus the assumption of equal variances is satisfied.
Group Statistics

Response for Q. 4 in Post-Experiment   N    Mean     Std. Deviation   Std. Error Mean
  Control                              25   3.4800   .77028           .15406
  Experimental                         20   3.6000   .82078           .18353
Null: The number of bubbles for the Pre and Post experimental CMs is not different.
Alternative: The number of bubbles for the Pre and Post experimental CMs is significantly different.
[Box plot: Bubbles by Group (Pre, Post). The box plot shows outliers for both groups. Even with the presence of outliers, normality is satisfied (see below), and these values are important for the analysis; thus the decision is not to remove the outliers.]
Tests of Normality

            Kolmogorov-Smirnov(a)         Shapiro-Wilk
            Statistic   df   Sig.         Statistic   df   Sig.
Bubbles
  Pre       .183        24   .036         .927        24   .086
  Post      .193        23   .026         .863        23   .005
a. Lilliefors Significance Correction
The Kolmogorov-Smirnov test gives p-values < (α = 0.05); thus the normality assumption is not satisfied for either sample, so take a look at the Shapiro-Wilk test. The Shapiro-Wilk p-value for group Pre is > (α = 0.05); thus the normality assumption is satisfied for group Pre. For group Post, the Shapiro-Wilk test gives a p-value < (α = 0.05); thus the normality assumption is not satisfied, and we need to look at the normal probability plot for group Post.
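The normality screening applied to each group above can be sketched in Python. SciPy does not provide the Lilliefors-corrected Kolmogorov-Smirnov test that SPSS reports, so this sketch uses only the Shapiro-Wilk test; the counts below are placeholder values, not the study's bubble counts.

```python
# Sketch of the per-group normality screening used in this appendix.
# The sample is an illustrative placeholder, NOT the study's data.
from scipy import stats

post = [18, 25, 31, 22, 40, 28, 35, 19, 27, 33, 24, 30]

# Shapiro-Wilk: the null hypothesis says the sample is drawn from a
# normal distribution, so a small p-value rejects normality.
sw_stat, sw_p = stats.shapiro(post)
if sw_p < 0.05:
    # As in the text: when Shapiro-Wilk rejects normality, fall back to
    # inspecting the normal probability (Q-Q) plot before deciding.
    print(f"Shapiro-Wilk rejects normality (p = {sw_p:.3f}); inspect Q-Q plot")
else:
    print(f"Normality assumption satisfied (p = {sw_p:.3f})")
```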
[Normal probability plot: Bubbles, group Post.]
Group Statistics

Bubbles   N    Mean      Std. Deviation   Std. Error Mean
  Pre     24   23.0000   11.90542         2.43018
  Post    23   30.9130   15.60569         3.25401
According to Levene’s test, (P-value = 0.207) > (α = 0.05); thus the assumption of equal variances is satisfied.
Conclusion: At the 5% level, the number of bubbles for the Pre and Post experimental CMs is not significantly different.
Null: The number of lines and arrows for the Pre and Post experimental CMs is not different.
Alternative: The number of lines and arrows for the Pre and Post experimental CMs is significantly different.
[Box plot: Lines and Arrows by Group (Pre, Post). The box plot shows outliers for the Post group. Even with the presence of outliers, normality is satisfied (see below), and this value is important for the analysis; thus the decision is not to remove the outliers.]
Tests of Normality

                   Kolmogorov-Smirnov(a)       Shapiro-Wilk
                   Statistic   df   Sig.       Statistic   df   Sig.
Lines and Arrows
  Pre              .128        24   .200*      .942        24   .181
  Post             .200        23   .018       .895        23   .020
*. This is a lower bound of the true significance.
a. Lilliefors Significance Correction
[Normal probability plot: Lines and Arrows, group Post.]
Group Statistics

Lines and Arrows   N    Mean      Std. Deviation   Std. Error Mean
  Pre              24   26.5000   12.39916         2.53097
  Post             23   30.9565   12.77952         2.66471
According to Levene’s test, (P-value = 0.708) > (α = 0.05); thus the assumption of equal variances is satisfied.
Conclusion: At the 5% level, the number of lines and arrows for the Pre and Post experimental CMs is not significantly different.
Null: The number of bubbles for the Control and Experimental CMs is not different.
Alternative: The number of bubbles for the Control and Experimental CMs is significantly different.
[Box plot: Bubbles by Group (Control, Experimental). The box plot shows outliers for both groups.]
Tests of Normality

                 Kolmogorov-Smirnov(a)       Shapiro-Wilk
                 Statistic   df   Sig.       Statistic   df   Sig.
Bubbles
  Control        .202        24   .012       .920        24   .057
  Experimental   .191        47   .000       .893        47   .000
a. Lilliefors Significance Correction
The Kolmogorov-Smirnov test gives p-values < (α = 0.05); thus the normality assumption is not satisfied for either sample, so take a look at the Shapiro-Wilk test. The Shapiro-Wilk p-value for group Control is > (α = 0.05); thus the normality assumption is satisfied for group Control. For group Experimental, the Shapiro-Wilk test gives a p-value < (α = 0.05); thus the normality assumption is not satisfied, and we need to look at the normal probability plot for group Experimental.
[Normal probability plot: Bubbles, group Experimental.]
Group Statistics

Bubbles          N    Mean      Std. Deviation   Std. Error Mean
  Control        24   12.5833   3.46306          .70689
  Experimental   47   26.8723   14.25942         2.07995
According to Levene’s test, (P-value = 0.000) < (α = 0.05); thus the assumption of equal variances is not satisfied.
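When Levene's test rejects equal variances, as it does here, the appropriate follow-up is the separate-variance (Welch) form of the t-test, which SPSS reports on the "Equal variances not assumed" row. A sketch in Python with SciPy, using placeholder counts rather than the study's data:

```python
# Welch's t-test for groups with unequal variances.
# The counts below are illustrative placeholders, NOT the study's data.
from scipy import stats

control_bubbles = [12, 10, 14, 13, 11, 15, 12, 13, 10, 16]
experimental_bubbles = [25, 40, 12, 33, 20, 45, 18, 30, 22, 38]

# equal_var=False requests Welch's t-test: separate variance estimates
# with Satterthwaite-approximated degrees of freedom.
t_stat, t_p = stats.ttest_ind(control_bubbles, experimental_bubbles,
                              equal_var=False)
print(f"Welch t = {t_stat:.3f}, p = {t_p:.3f}")
```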
Null: The number of lines and arrows for the Control and Experimental CMs is not different.
Alternative: The number of lines and arrows for the Control and Experimental CMs is significantly different.
[Box plot: Lines and Arrows by Group (Control, Experimental).]
Tests of Normality

                   Kolmogorov-Smirnov(a)       Shapiro-Wilk
                   Statistic   df   Sig.       Statistic   df   Sig.
Lines and Arrows
  Control          .129        24   .200*      .942        24   .182
  Experimental     .174        46   .001       .943        46   .025
*. This is a lower bound of the true significance.
a. Lilliefors Significance Correction
[Normal probability plot: Lines and Arrows, group Experimental.]
Group Statistics

Lines and Arrows   N    Mean      Std. Deviation   Std. Error Mean
  Control          24   12.9167   3.88885          .79381
  Experimental     46   29.3043   12.03859         1.77499
According to Levene’s test, (P-value = 0.000) < (α = 0.05); thus the assumption of equal variances is not satisfied.
Conclusion: At the 5% level, the number of lines and arrows for the Control and Experimental CMs is significantly different.