
METHODOLOGY


 Research Design
 Sampling Technique

 Research Instrument

 Collecting the Data

 Statistical Tools
 Descriptive
 Inferential
 Research Design- refers to the overall plan and scheme
for conducting the study.
 Historical Design
 Descriptive Design
 Experimental Design

 Sampling- refers to the design for selecting the respondents of the study at minimum cost, such that the resulting observations will be representative of the entire population.
 Instruments- refer to the data-gathering devices that will be used in the study.
RESEARCH DESIGNS
 Descriptive Method of Research
 The purpose of this design is to describe the status of
events, people or subjects as they exist.
 Present-oriented
 Cannot establish cause-effect relationships, but can provide clues to such relationships.
 Describes and interprets what is currently prevailing.
DESCRIPTIVE METHODS OF RESEARCH
Descriptive research usually makes some type of comparison, contrast, or correlation; sometimes, in carefully planned and orchestrated descriptive studies, cause-effect relationships may be established to some extent.
 Descriptive Comparative Studies- aim to establish significant differences between two or more groups of subjects on the basis of a criterion measure (e.g., compare the managerial effectiveness of three groups of managers A, B, and C).

Limitations of descriptive research

1. The lack of control over variables makes descriptive studies less reliable for actual hypothesis testing.
2. Unless it is a normative survey in which the entire population is considered, conclusions drawn from descriptive designs are at best tentative.
EXPERIMENTAL METHOD OF RESEARCH
 Often regarded as the most rigorous and scientific of all research methods.
 When used properly, it can provide conclusions that are beyond question.
 Future-oriented
 Characterized by its strict adherence to the scientific process.
Features of Experimental Research
1. Presence of independent and dependent variables;
2. The presence of control; and
3. The measurement of the effect of the independent variable on the dependent variable, setting aside the effects of all other variables.
 Variable- a measure of a specific quality of an object, individual, or event.
 Factor- a set of variables that correlate highly with each other; it consists of several variables.

Factor: Intelligence
Variables: mental ability, mathematical ability, etc.
EXPERIMENTAL DESIGNS
1. Single-Treatment Designs
- One-group pretest-posttest design
- Two-group pretest-posttest design
- Solomon four-group design
- Posttest-only control group design

2. Double- or Multiple-Treatment Designs
- RCBD (randomized complete block design)
  e.g., block treatment design, RCBD treatment-2 block design
DESIGNS
 Pre-test/Post-test control group design
 The design requires two groups of equivalent standing in terms of
a criterion measure (e.g. achievement or mental ability).
 The first group is designated as the control group while the
second group is designated as the experimental group.
 Both groups are given the same pretest.
 The control group is not subjected to a treatment while the
experimental group is given the treatment factor.
Pre-test/Post-test control group design

 After the experimental period, both groups are again given the same posttest.
 The researcher may then compare the posttest results, or the gains in scores (posttest minus pretest), between the experimental and control groups.
 This design is threatened by certain factors: maturation, test-wiseness, and natural attrition (natural death or dropouts).
 Single-Group Pretest-Posttest Design
- Used in experimental conditions where only a limited number of subjects is available.
- The group is first given a pretest, followed by the usual treatment, and then a posttest is administered.
- This design is very delicate because the researcher must see to it that conditions are equivalent before and while the experimental factor is introduced.
- It is more open to threats to internal validity such as the Hawthorne effect, test-wiseness, maturation, and attrition.
 Solomon Four-Group Design
- Employs four equivalent groups.
- The first two groups follow the pretest-posttest control group design; the third group is given no pretest but receives the treatment and a posttest.
- The last group is given no pretest and no treatment, but takes the posttest.
- The design can eliminate the Hawthorne effect and the effects of maturation and attrition, but has the main disadvantage of requiring a large number of respondents.
 To analyze the data in a Solomon four-group design, one uses a two-factor analysis of variance.
 Factor A is the effect of the treatment, while Factor B is the effect of the pretest.
 The interaction effect A×B indicates whether the treatment works well with or without pretesting.
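
A minimal sketch of this two-factor analysis in Python, assuming a long-format dataset with hypothetical columns posttest, treatment, and pretested, and using the statsmodels formula API; the scores are illustrative only:

```python
# Two-factor ANOVA for a Solomon four-group design (illustrative data).
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

data = pd.DataFrame({
    "posttest":  [78, 82, 75, 80, 70, 68, 74, 71, 79, 81, 72, 69],
    "treatment": [1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0],   # Factor A: treatment
    "pretested": [1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0],   # Factor B: pretest
})

# Main effects of treatment and pretesting, plus their interaction (A x B)
model = ols("posttest ~ C(treatment) * C(pretested)", data=data).fit()
print(anova_lm(model, typ=2))
```

The treatment row tests Factor A, the pretested row tests Factor B, and the interaction row indicates whether the treatment works differently with and without pretesting.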
SAMPLING PLANS AND SAMPLING DESIGNS
 Sample- is the small group that you observe.
 Population- is the larger group about which your
generalization is made
 Sampling is the process of obtaining information from a
proper subset of a population
 The values calculated from this sample will not be too far
from the actual values of the population.
 If we know that the underlying population is normally distributed and we have some estimate of its variability, such as the sample variance (s²), then the formula for the sample size at a confidence coefficient of α = .05 is

n = 4s²/e²   where e = error tolerance (about .05 or .01)

For α = .01, the formula becomes

n = 9s²/e²
 In the event of complete ignorance about the behavior of the population, Slovin's formula may be applied:

n = N/(1 + Ne²)   where n = sample size, N = population size, and
e = error tolerance or desired margin of error

Example: N = 1,000 and an error of e = .05 is tolerated:
n = 1000/(1 + (1000)(.05)(.05)) = 285.7 ≈ 286
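
A minimal sketch of these sample-size computations in Python; the function names are illustrative, and the call reproduces the example above (N = 1,000, e = .05):

```python
# Sketch of the sample-size formulas above (illustrative function names).
def size_from_variance(s2, e, alpha=0.05):
    """n = 4*s^2/e^2 at alpha = .05, or 9*s^2/e^2 at alpha = .01."""
    k = 4 if alpha == 0.05 else 9
    return k * s2 / e ** 2

def slovin(N, e):
    """Slovin's formula: n = N / (1 + N*e^2)."""
    return N / (1 + N * e ** 2)

print(round(slovin(1000, 0.05)))   # 286, matching the example above
```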
 Example
 Population

              Male   Female   Total
High School    200      350     550
College        150      300     450
Total          350      650    1000

Sampling Plan

              Male   Female   Total
High School     57      100     157
College         43       86     129
Total          100      186     286
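
The sampling plan above is a proportional allocation: each cell receives its share of the population multiplied by the total sample of 286. A small sketch of that arithmetic:

```python
# Proportional allocation of n = 286 across the population cells shown above.
population = {
    ("High School", "Male"): 200, ("High School", "Female"): 350,
    ("College", "Male"): 150, ("College", "Female"): 300,
}
N, n = sum(population.values()), 286
plan = {cell: round(count / N * n) for cell, count in population.items()}
print(plan)                  # e.g. ('High School', 'Female') -> 100
print(sum(plan.values()))    # 286
```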
SAMPLING TECHNIQUES
1. Random Sampling- a method of selecting a sample from the universe such that each member of the population has an equal chance of being included in the sample, and all possible combinations of a given size have an equal chance of being selected as the sample. It is considered the best procedure.
Random Sampling Methods
a. Table of random numbers- the most systematic technique for getting sample units at random.
- Number all the members of the population.
- Determine the starting column and row through the lottery or fishbowl technique (Sevilla et al.).
b. Lottery sampling or fish bowl technique
 assign numbers to the participants of the population.
 Write the numbers of the participants on small pieces of paper, one number to a piece.
 Roll the small pieces of paper and put them in a container.
 Shake the container thoroughly.
 Pick the desired number of participants from the container.
 Sampling without replacement- the drawn pieces of paper are no longer returned to the container.
 Sampling with replacement- every piece of paper drawn is returned to the container. This is considered more scientific since each sample is chosen with the same probability.
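
A minimal sketch of the lottery technique in Python, contrasting draws without and with replacement; the participant numbers are illustrative:

```python
# Lottery / fishbowl technique: draw numbered "slips" from a container.
import random

participants = list(range(1, 101))                       # numbers assigned to 100 participants
without_replacement = random.sample(participants, k=10)  # each slip is drawn only once
with_replacement = random.choices(participants, k=10)    # every slip is returned after each draw
print(without_replacement)
print(with_replacement)
```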
SYSTEMATIC SAMPLING
 Vockell (1993) defines it as a strategy for selecting the members of a sample that allows only chance and a "system" to determine membership in the sample. A system is a planned strategy for selecting members after a starting point is selected at random, such as every 5th or 10th member.
 Decide on the number of participants needed as the sample of the study.
 Divide the population by the needed sample size to determine the sampling interval.
 Example: to sample 200 out of 2,000 participants, the interval is 10.
 Randomly select a number between 1 and 10, start with that participant, and take every 10th participant in the list after that.
 Example: if the number 3 was randomly picked between 1 and 10, add the sampling interval to it (3 + 10 = 13); thus the 13th individual is the second sample, then the 23rd, and so on.
 Continue adding the constant sampling interval until
you reach the end of the list.
 The original population list should be in random order
to ensure that systematic sampling is a good
substitute for random sampling.
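
A minimal sketch of systematic sampling for the example above (200 out of 2,000, interval k = 10, random start between 1 and 10):

```python
# Systematic sampling: random start, then every k-th member of the list.
import random

population = list(range(1, 2001))      # participant numbers 1..2000
k = len(population) // 200             # sampling interval = 10
start = random.randint(1, k)           # random start, e.g. 3
sample = population[start - 1::k]      # start, start + k, start + 2k, ...
print(start, sample[:5], len(sample))  # e.g. 3 [3, 13, 23, 33, 43] 200
```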
STRATIFIED SAMPLING
 Is defined as a strategy for selecting samples in such a way that specific subgroups (strata) will have a sufficient number of representatives within the sample to allow sub-analysis of the members of those subgroups (Vockell, 1983).
 The target population is first divided into groups, each belonging to the same stratum.
 To be effective, the participants within each of the strata should be selected at random (stratified random sampling).
 The basis of stratification may be geographical, or it may involve characteristics of the population such as income, occupation, age, sex, year in college, professional status, etc.
 This strategy enables the researcher to determine to what extent each stratum in the population is represented in the sample (stratified proportional sampling).
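
A minimal sketch of stratified proportional random sampling, using two illustrative strata sized like the earlier example; allocation is proportional and selection within each stratum is random:

```python
# Stratified proportional random sampling (illustrative strata and sizes).
import random

strata = {"High School": list(range(550)), "College": list(range(450))}
N, n = sum(len(m) for m in strata.values()), 286

sample = {name: random.sample(members, k=round(len(members) / N * n))
          for name, members in strata.items()}
print({name: len(s) for name, s in sample.items()})   # {'High School': 157, 'College': 129}
```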
CLUSTER SAMPLING
 Occurs when you select the members of your sample in clusters rather than as separate individuals.
 Sampling in which groups, not individuals, are randomly selected.
MULTI-STAGE SAMPLING
 Also known as multi-stage cluster sampling.
 is a more complex form of cluster sampling which contains two
or more stages in sample selection.
 large clusters of population are divided into smaller clusters in
several stages in order to make primary data collection more
manageable.
 It has to be acknowledged that multi-stage sampling is not as
effective as true random sampling; however, it addresses
certain disadvantages associated with true random sampling
such as being overly expensive and time-consuming.
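
A minimal sketch of a two-stage cluster sample under assumed data: clusters (e.g. schools) are selected at random first, then individuals are selected at random within each chosen cluster; all names are illustrative:

```python
# Two-stage (multi-stage) cluster sampling with illustrative cluster names.
import random

clusters = {f"school_{i}": [f"student_{i}_{j}" for j in range(100)] for i in range(20)}

stage1 = random.sample(list(clusters), k=5)                       # stage 1: pick 5 clusters
stage2 = {c: random.sample(clusters[c], k=10) for c in stage1}    # stage 2: pick 10 per cluster
print(stage1)
```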
NON-RANDOM SAMPLING
 Not all participants of the investigation are selected with equal chances; certain parts of the overall group are deliberately not included in the selection of the representative subgroup.

 Also called non-probability sampling or judgment sampling.
CLASSIFICATION OF NON-RANDOM SAMPLING
 Purposive or deliberate sampling- sampling with a
purpose.
 Example: if you are interested in finding out the reaction of students to the devaluation of the peso, instead of asking the opinions of all students in various colleges and universities, you may purposively ask only the students of a particular college or university.
 Quota sampling- identify a set of important
characteristics of a population and then select your
desired samples in a non-random way. It is assumed that
the samples will match the population with regard to the
chosen set of characteristics.

 If, for instance, you are required in a research class to determine the most favored soft drink among a population of televiewers, you would interview only televiewers who drink soft drinks, and continue this process until you reach your quota.
 Convenience Sampling- sampling strategy based on
the convenience of the researcher.

 Example: if you want to know the opinions of Filipinos about national reconciliation in the Philippines through telephone interviews, you will be able to interview only those who have telephones, creating a bias against those who have none.
DATA COLLECTION METHODS
INSTRUMENTATION
General Criteria of a good research instrument
 Validity- refers to the extent to which the instrument
measures what it intends to measure (variables in the
study)
Content validity can be established through the opinion
of experts in the area of knowledge being investigated.
 Reliability – refers to the degree of consistency and
precision or accuracy that a measuring instrument
demonstrates.
 The instrument should be able to elicit approximately the
same response when applied to respondents who are
similarly situated.
 Sensitivity – the ability of an instrument to make
the discriminations required for the research
problem.
 If the reliability and validity of a test are high,
most likely, the test is also sensitive enough to
make finer distinctions in the degrees of variations
of the characteristics being measured.
 Objectivity- the degree to which the measure is independent of the personal opinions, subjective judgments, biases, and beliefs of the individual test user.

 The procedures for determining difficulty and discrimination indices are empirical and objective; hence, there is a certain degree of objectivity in test items established through item analysis.
 Feasibility- is concerned with the aspects of skills, cost, and time.

 Readability- refers to the level of difficulty of the instrument relative to the intended users.
TEST CONSTRUCTION
Step 1. Content Validation

 Content validity is the degree to which the test represents the essence, the topics, and the areas that the test is designed to measure.

 The test items need to be a representative sample of the content of the variable being measured.

How to achieve a high degree of content validity:

1. Documentary analysis or pre-survey
2. Development of a Table of Specifications
3. Consultation with experts
4. Item writing
Step 2 Face Validation
 This crude type of validity pertains to whether the test looks valid, that is, whether on the face of the instrument it appears to measure what you intend to measure.
 The test items are ocularly inspected and judged superficially.
 Item inspection- the initial draft of the instrument is inspected by a group of evaluators, e.g., the thesis adviser, a test construction expert, a teacher, or a professional whose specialization is related to the subject matter.
 Test items are evaluated as suitable, not suitable, or needs revision.

 Inter-judge consistency- collate the data gathered for analysis. For instance, if you requested three persons to inspect the test items and an item is judged by at least two of the three evaluators as suitable, then the item can be retained.
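
A minimal sketch of this inter-judge rule (retain an item rated suitable by at least two of three evaluators); the ratings are illustrative:

```python
# Inter-judge consistency: keep items rated "suitable" by at least 2 of 3 evaluators.
ratings = {                       # item number -> ratings from three evaluators
    1: ["suitable", "suitable", "needs revision"],
    2: ["not suitable", "needs revision", "suitable"],
    3: ["suitable", "suitable", "suitable"],
}
retained = [item for item, r in ratings.items() if r.count("suitable") >= 2]
print(retained)                   # [1, 3]
```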
Step 3. First Trial Run
 Be sure to try out your test on a sample that is comparable to your target population. The try-out sample should be large enough to provide meaningful computations.

Its purposes are to:
1. Determine the language suitability of the items and the ease of following directions from the point of view of the examinees.
2. Determine the average length of time needed to finish the test and other problems relevant to taking the test.
Step 4. Item Analysis

 For judging good and poor items quantitatively.

 Determines internal consistency (item homogeneity) and the discrimination and difficulty indices of the items.

Item Analysis
 The U-L Index Method, by John Stocklein (1957)
 It is appropriate for tests whose criterion is measured along a continuous scale (scholastic ratings, job ratings, performance records, achievement test scores) and whose individual items are scored right or wrong, or positive or negative.
U-L Index Method
Steps
1. Score the test.
2. Arrange the papers from highest to lowest score.
3. Separate the top 27 percent and the bottom 27 percent of the cases.
4. Prepare a tally sheet (Item No. | Upper 27% | Lower 27%) and tally the number of cases from each group who got each item right.
5. Convert the tallies to frequencies and then to proportions.
6. Compute the difficulty index of each item using the formula

Df = (PU + PL)/2

where: Df = difficulty index
PU = proportion of the upper 27 percent group who got the item right
PL = proportion of the lower 27 percent group who got the item right

7. Compute the discrimination index of each item using the formula

Ds = PU - PL   where Ds = discrimination index

8. Decide whether to retain or discard an item on the basis of two ranges: items with difficulty indices within .20 to .80 and discrimination indices within .30 to .80 are retained. (A computational sketch follows.)
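
A minimal sketch of the U-L index computations, using upper- and lower-group proportions like those in the summary table later in this section:

```python
# U-L Index Method: Df = (PU + PL) / 2, Ds = PU - PL;
# retain items with .20 <= Df <= .80 and .30 <= Ds <= .80.
items = {1: (0.79, 0.26), 2: (0.63, 0.79), 3: (0.53, 0.16), 4: (0.53, 0.32)}

for item, (pu, pl) in items.items():
    df = (pu + pl) / 2            # difficulty index
    ds = pu - pl                  # discrimination index
    keep = 0.20 <= df <= 0.80 and 0.30 <= ds <= 0.80
    print(item, round(df, 2), round(ds, 2), "retain" if keep else "discard")
```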
 The Chung-Teh Fan item analysis table can be used to obtain the discrimination indices of the items. The indices obtained can be interpreted using Ebel's "rule of thumb" (Stanley and Hopkins, 1972):

Index of Discrimination    Evaluation
.40 and up                 Very good item
.30 to .39                 Reasonably good item, but possibly subject to improvement
.20 to .29                 Marginal item, usually needing improvement
.19 and below              Poor item, to be rejected, improved, or revised
SUMMARY OF ITEM ANALYSIS

Item No.   Upper 27%     Lower 27%     Df             Ds                Decision
           F      P      F      P      (Difficulty)   (Discrimination)
1          15     .79    5      .26    .53            .53               Very Good
2          12     .63    15     .79    .71            .16               Poor
3          10     .53    3      .16    .35            .37               Good
4          10     .53    6      .32    .43            .10               Poor
 In an achievement test of the multiple-choice type, an analysis of the plausibility and attractiveness of the distractors is also done.

Item No.   Options (correct option is enclosed)   Remarks
           a       b       c       d
1          3       (20)    5       10              All good
2          0       10      10      (18)            Revise (a)
3          5       6       (20)    7               All good
4          9       8       2       (19)            All good
5          (15)    15      4       4               Revise (b)
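
A small sketch of a distractor check consistent with the table above: flag any non-keyed option that attracts no examinees or at least as many as the keyed answer (counts are taken from the table):

```python
# Distractor analysis: flag implausible or over-attractive distractors.
items = {                          # item -> (counts for options a..d, keyed option)
    1: ([3, 20, 5, 10], "b"),
    2: ([0, 10, 10, 18], "d"),
    5: ([15, 15, 4, 4], "a"),
}
for item, (counts, key) in items.items():
    key_count = counts["abcd".index(key)]
    weak = [opt for opt, c in zip("abcd", counts)
            if opt != key and (c == 0 or c >= key_count)]
    print(item, "revise " + ", ".join(weak) if weak else "all good")
```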
OTHER ITEM ANALYSIS TECHNIQUES
 The Pearson Product-Moment Correlation Method
 Point-Biserial Correlation Method
 The Criterion of Internal Consistency Method
 Use of the t-test
 The use of two or more techniques

Step 5. Second Run or Final Test Administration

Step 6. Evaluation of the Test
Test the validity and reliability.
Evaluating the Test Reliability

1. Split-half Reliability
   Odd-even split-half technique
2. Test-Retest Reliability, or coefficient of stability. It is the consistency of the test over time.
   To calculate the coefficient, the test is administered twice to the same sample with a time interval of at least two weeks. The Pearson r is then calculated to determine the reliability of the test. (A computational sketch follows this list.)
   The delay eliminates the "exposure or practice effect" as well as the "maturity effect" on the part of the respondents.
3. Alternate-Form or Parallel-Form Reliability
   The coefficient of equivalence is computed by administering two parallel or equivalent forms of the test to the same group of individuals.
   The same correlational method is applied to compute the reliability coefficient.
   The difficulty of this technique lies in the preparation of the test forms.
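
A minimal sketch of two of these reliability estimates in Python, with illustrative scores: split-half (odd-even, with the Spearman-Brown correction r_sb = 2r/(1 + r)) and test-retest (Pearson r between two administrations):

```python
# Split-half and test-retest reliability with illustrative data.
from scipy.stats import pearsonr

# Split-half (odd-even) on a small item-response matrix (1 = right, 0 = wrong)
responses = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 0, 1, 1, 1],
    [0, 0, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 1, 0, 1],
]
odd = [sum(row[0::2]) for row in responses]    # score on odd-numbered items
even = [sum(row[1::2]) for row in responses]   # score on even-numbered items
r_half, _ = pearsonr(odd, even)
print("split-half (Spearman-Brown):", 2 * r_half / (1 + r_half))

# Test-retest: same test, same sample, at least two weeks apart
test1 = [78, 82, 75, 90, 68, 74]
test2 = [80, 85, 74, 88, 70, 73]
print("test-retest:", pearsonr(test1, test2)[0])
```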
EVALUATING THE TEST VALIDITY

1. Criterion-Related Validity
   Characterized by prediction against an outside criterion, that is, by checking the measuring instrument against an outcome, either now or in the future. Also called predictive validity or empirical validity.
2. Construct Validity
   Validation of the theory or concept behind the test. Sometimes called concept validity, for it involves discovering a positive correlation between and among the variables/constructs that define the concept.
   Factor analysis is regarded as the most powerful method of construct validation.
METHODS OF DATA COLLECTION
1. Observation Method – the researcher or observer watches
the research situation
Types of Observation
a. Unstructured observation –is flexible and open.
b. Structured observation - makes use of objective
observation guides.
 The presence of guides delimits the subjects of observation so that only activities, events, or behavior relevant to the problem at hand are recorded, giving a sharper focus on the more relevant data.
2. The Questioning Technique (Survey)

Criteria of effective questions:
1. Clarity of language- the vocabulary level, language structure, and conceptual level of the questions should suit the level of the respondents.
2. Specificity of content and time period.
3. Singleness of purpose.
4. Freedom from assumption.
5. Freedom from suggestion.
6. Linguistic completeness and grammatical consistency.
METHODS AND TOOLS FOR QUESTIONING
1. Research Interview
2. Questionnaire
 Closed form or open form
 Response formats
   Open-ended format
   Multiple-choice format
   Checklist format
3. Objective Methods
Supposed to have a greater degree of objectivity in that scoring the items does not pose problems of consistency or homogeneity.
 Multiple-choice type
 Scale type
 Likert scale
QUESTIONNAIRE CONSTRUCTION
Example (Employability)
Steps:
1. Define the concept in terms of behavioral words.
eg. Employability –the ability of the person to find employment
suited to his qualification.
2. Identify the indicators of the concept according to the definition.
   e.g., some indicators of employability include:
   time lapse before finding a job (T)
   suitability of the job found to the qualifications of the person (Q)
   salary of the person (S)
   employment status (ES)
3. Indicate how each indicator is to be measured.
   e.g., time lapse = months
   suitability of the job found (related = 1, not related = 0)
   salary of the person = in pesos
   employment status (employed = 1, unemployed = 0)

4. Evolve a single measure of the concept.
   e.g., E = f(T, Q, S, ES)
   Time lapse: less than 3 months = 1; 3 or more months = 0
   Salary: P2,500 and above = 1; below P2,500 = 0

With all scores coded as 1 or 0, we can therefore construct:
E = W1T + W2Q + W3S + W4ES, where the W's are weights
e.g., E = .2T + .3Q + .1S + .4ES
Then, a person who:
T: found a job after 3 months (0)
Q: found a job suited to his qualifications (1)
S: receives a salary of P3,000 (1)
ES: is employed (1)

would have a score of
E = .2(0) + .3(1) + .1(1) + .4(1) = .8

 The researcher may opt to categorize the employability scores as follows:

Range of Scores    Interpretation
.19 and below      Very unemployable
.20 - .40          Unemployable
.41 - .60          Fairly employable
.61 - .80          Employable
.81 and above      Highly employable
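
A minimal sketch of the employability scoring above, coding each indicator as 0 or 1, applying the weights, and mapping the score to its category; the function names are illustrative:

```python
# Employability score E = .2T + .3Q + .1S + .4ES with the coding scheme above.
def employability(months_to_job, job_related, salary, employed):
    T = 1 if months_to_job < 3 else 0        # time lapse before finding a job
    Q = 1 if job_related else 0              # suitability of the job to qualifications
    S = 1 if salary >= 2500 else 0           # salary of P2,500 and above
    ES = 1 if employed else 0                # employment status
    return 0.2 * T + 0.3 * Q + 0.1 * S + 0.4 * ES

def interpret(E):
    if E <= 0.19: return "Very unemployable"
    if E <= 0.40: return "Unemployable"
    if E <= 0.60: return "Fairly employable"
    if E <= 0.80: return "Employable"
    return "Highly employable"

E = employability(months_to_job=3, job_related=True, salary=3000, employed=True)
print(E, interpret(E))   # 0.8 Employable -- the worked example above
```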
Other ways of generating data
Structured interviews- follow a specific format with the same line of questioning; administered by skilled researchers.
 The researchers should determine the real picture by analyzing the responses of the key informants.

Unstructured interviews- no specific format is followed; respondents talk freely about the topic, and the information gathered is recorded and systematically presented, from which conclusions are drawn.
STATISTICAL TOOLS
Descriptive Statistics
 is the term given to the analysis of data that helps
describe, show or summarize data in a meaningful way
such that, for example, patterns might emerge from the
data.
 Descriptive statistics do not, however, allow us to make
conclusions beyond the data we have analyzed or reach
conclusions regarding any hypotheses we might have
made.
 They are simply a way to describe our data.
Two general types of statistics are used to describe data:
 Measures of central tendency
   mode, median, and mean
 Measures of spread
   range, quartiles, absolute deviation, variance, and standard deviation
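
A minimal sketch of these descriptive measures using Python's standard library, on an illustrative set of scores:

```python
# Descriptive statistics: central tendency and spread for a small set of scores.
import statistics as st

scores = [70, 75, 75, 80, 82, 85, 90, 95]

print("mean:", st.mean(scores), "median:", st.median(scores), "mode:", st.mode(scores))
print("range:", max(scores) - min(scores))
print("quartiles:", st.quantiles(scores, n=4))              # Q1, Q2, Q3
print("variance:", st.variance(scores), "std dev:", st.stdev(scores))
```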
Inferential Statistics

 Techniques that allow the use of samples to make generalizations about the populations from which the samples were drawn.

 It is, therefore, important that the sample accurately represents the population.

 The methods of inferential statistics are (1) the estimation of parameter(s) and (2) the testing of statistical hypotheses.
SELECTING APPROPRIATE STATISTICAL
TECHNIQUES.
 Parametric tests are usually used for data that are at the interval or ratio levels of measurement.

 The z-test of one sample mean is used to determine if an obtained sample mean (or average of scores or values) is but a random sample from a population with a given, hypothesized, or expected population mean.
SELECTING APPROPRIATE STATISTICAL
TECHNIQUES
 The t-test for independent sample means is used to determine if an observed difference between the averages of two independent groups is statistically significant.

 The t-test for dependent sample means is used to determine if there is a significant difference between two groups of correlated scores in terms of their means.
 One-way Analysis of Variance is used to determine if there are differences among the means of three or more groups.

 Two-way Analysis of Variance, also called factorial analysis of variance, is employed to determine the main and interaction effects of two independent factors.

 Pearson Product-Moment Correlation is employed when there are two sets of scores and you would like to determine if the two sets are correlated.
 The chi-square goodness-of-fit test tells whether an observed frequency distribution on a variable differs significantly from an expected or theoretical distribution of frequencies.

 The chi-square test of association is used to determine whether or not two variables are associated with each other.
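
A minimal sketch of several of the tests named above using SciPy; the groups and frequency tables are illustrative only:

```python
# Illustrative calls to the inferential tests discussed above.
from scipy import stats

group_a = [78, 82, 75, 90, 68, 74, 85, 79]
group_b = [72, 70, 80, 76, 65, 71, 78, 69]
group_c = [88, 84, 90, 79, 86, 83, 91, 85]

print(stats.ttest_ind(group_a, group_b))           # t-test for independent sample means
print(stats.ttest_rel(group_a, group_b))           # t-test for dependent (correlated) means
print(stats.f_oneway(group_a, group_b, group_c))   # one-way ANOVA for three groups
print(stats.pearsonr(group_a, group_b))            # Pearson product-moment correlation

# Chi-square goodness-of-fit and chi-square test of association
print(stats.chisquare([18, 22, 20, 40], f_exp=[25, 25, 25, 25]))
print(stats.chi2_contingency([[30, 20], [15, 35]]))
```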
END OF PRESENTATION
