RESEARCH BASICS
Research can be defined as disciplined, systematic, replicable inquiry. At one time or another, most
graduate students in Communication will either conduct quantitative research or evaluate quantitative
research done by others using one of these methodologies: laboratory experiments, content analysis,
and sample surveys. The survey method will be the focus of COM5331.
It may be helpful to look at the ways we can characterize research done in Communication, and at the
steps involved in conducting a survey. Those steps include:
1. Formulating hypotheses
2. Choosing a mode of data collection: telephone, mail, or face-to-face
3. Writing questions
4. Designing the questionnaire
5. Sampling
6. Analyses
7. Presenting/disseminating results
8. Replication/extension
Important concepts
There are a number of concepts that are important to all research methodologies. They include:
Variables
Variables are concepts or characteristics that reflect variation (take on different values) among the things
being studied. They may be characteristics of people such as demographics, attitudes, behaviors, and
knowledge. We presume there is variation in the concepts of interest. There is also an expectation of
measurement: we will do something to measure that variation.
Hypothesis
A hypothesis is a statement of relationship between two or more variables. Typically, hypotheses derive
from theory. We sometimes refer to research as hypothesis testing.
Reliability
The key to reliability is the notion of consistency. If results from a questionnaire are consistent, we say
that it is a reliable measurement instrument. Do repeated observations yield similar results? A basic
principle of research (including surveys) is the use of multiple measurements to assess the reliability of a
measure.
Reliability may take several forms: consistency over time, across forms (e.g., different versions of a
questionnaire), across items, and across people.
Consistency over time refers to "test-retest reliability." We might give the same set of questions to the
same group of individuals on two separate occasions and compare the results. If the questionnaire is
reliable, there should be consistency in the responses. A "coefficient of stability" would indicate the
stability over time -- a high, positive coefficient would mean individuals would respond similarly if given the
same questions at two points in time.
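The coefficient of stability described above is simply the correlation between the two administrations. A minimal sketch in Python, using hypothetical scores for eight respondents (the data are illustrative, not from the course):

```python
# Hypothetical test-retest data: the same eight respondents answer the
# same questionnaire at two points in time (scores are illustrative).
time1 = [12, 15, 11, 18, 14, 16, 13, 17]
time2 = [13, 14, 11, 17, 15, 16, 12, 18]

def pearson_r(x, y):
    """Pearson product-moment correlation between two lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# A high, positive coefficient indicates stable responses over time.
stability = pearson_r(time1, time2)
print(round(stability, 3))  # -> 0.929
```

Here a coefficient of about .93 would be read as high stability; in practice the same correlation is available from any statistics package.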
To measure consistency across forms, sometimes termed equivalence or parallel forms reliability, we
correlate the results from two different forms of a measurement instrument given to a single group of
individuals. The more consistent the results between two parallel forms, the greater the equivalence or
reliability.
To assess reliability across people, we establish inter-rater reliability. The greater the agreement
(consistency), the greater the reliability. Inter-rater reliability is especially important in content analysis
research. It can be measured in terms of percent of agreement (the percentage of incidents for which
two observers agree). Alternatively, a correlation coefficient can be calculated to assess the reliability
between coders/observers.
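Percent of agreement can be computed directly. A short sketch, with hypothetical category codes assigned by two coders to the same ten news stories:

```python
# Hypothetical content-analysis codes: two coders classify the same ten
# news stories into categories (labels and data are illustrative).
coder_a = ["politics", "sports", "crime", "politics", "sports",
           "crime", "politics", "sports", "crime", "politics"]
coder_b = ["politics", "sports", "crime", "sports", "sports",
           "crime", "politics", "crime", "crime", "politics"]

def percent_agreement(a, b):
    """Share of units on which the two coders assigned the same category."""
    agreements = sum(1 for x, y in zip(a, b) if x == y)
    return agreements / len(a)

print(percent_agreement(coder_a, coder_b))  # 8 of 10 stories match -> 0.8
```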
Lastly, reliability can be examined in terms of the consistency of items within a measurement instrument,
referred to as internal consistency. This is an indicator of how consistently the items measure a concept:
we want to determine whether the items measure the same content and are thus consistent with each other.
One means of arriving at this is the split-half procedure, in which the items are divided into two halves
and the two halves are correlated. The most typical approach to assessing internal consistency is
Cronbach's coefficient alpha, which is the average of all possible split-half estimates. The process of
determining the reliability of a set of items involves item analysis and SPSSWIN's Reliability Analysis
procedure (which goes beyond the scope of COM5331).
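Coefficient alpha can also be computed directly from the variance of each item and the variance of the summed (total) scores, without enumerating every split-half. A sketch using a hypothetical 6-respondent, 4-item scale (population variances are used throughout; since the same denominator appears in numerator and denominator of the ratio, alpha is unaffected):

```python
# Hypothetical item responses: six respondents answer a four-item
# satisfaction scale (rows = respondents, columns = items; 1-5 ratings).
items = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
]

def variance(scores):
    """Population variance of a list of scores."""
    mean = sum(scores) / len(scores)
    return sum((s - mean) ** 2 for s in scores) / len(scores)

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])                              # number of items
    cols = list(zip(*rows))                       # scores grouped per item
    item_vars = sum(variance(c) for c in cols)
    total_var = variance([sum(r) for r in rows])  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

alpha = cronbach_alpha(items)
print(round(alpha, 3))  # -> 0.958
```

An alpha this high would indicate that the four items hang together well as a measure of one concept; a package such as SPSS reports the same statistic in its Reliability Analysis output.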
Validity
Validity can be thought of as the extent to which a measurement scale or variable represents what it is
supposed to. In other words, does the variable measure the concept it is meant to measure? If a series of
questions are written to measure respondents' satisfaction derived from using on-line newspapers, do
they in fact measure this concept?
There are several ways of assessing validity. The first, content validity, is defined as the extent to which
the content of the measurement instrument reflects what is supposed to be measured. Do the questions
written to measure the satisfactions obtained from using on-line newspapers reflect the content of what
the researcher seeks to measure? The more expertise one has in the research area, the more confident
one can be that the questions meet the test of content validity.
Face validity refers to a questionnaire (or other data collection instrument) being judged valid by those
being measured (the respondents). Put more simply, does the measure appear, on the face of it, to be
measuring what is intended?
Concurrent validity involves a comparison of the results from a measurement instrument to results from
other instruments designed to measure the same thing. If you develop a measure of job satisfaction, the
results from that instrument would be compared to other job satisfaction indices. If the two measures are
consistent, we consider the new instrument to have concurrent validity. This is sometimes referred to as
criterion-related validity. In general, in criterion-related validity, a measure is considered valid to the
extent that it enables a researcher to predict a score on some other measure or to predict a behavior of
interest.
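This predictive use of a measure can be sketched with a simple least-squares line: scores on a hypothetical new job-satisfaction measure are used to predict scores on an established index (all data and variable names here are illustrative):

```python
# Hypothetical criterion-related validity check: a new job-satisfaction
# measure is used to predict an established satisfaction index.
new_measure = [10, 14, 9, 16, 12, 15]
criterion   = [22, 30, 20, 33, 26, 31]

def fit_line(x, y):
    """Least-squares slope and intercept for predicting y from x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    slope = num / den
    return slope, my - slope * mx

slope, intercept = fit_line(new_measure, criterion)
# Predicted criterion score for a respondent scoring 13 on the new measure:
print(round(slope * 13 + intercept, 2))  # -> 27.62
```

The better the new measure predicts the criterion, the stronger the evidence for its criterion-related validity.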