
Wk 1 DQ 1

After week one's readings, research to me means the process of finding solutions to problems through investigation, study, and analysis of the factors involved. Knowledge of research allows individuals to identify critical issues, gather relevant information, analyze the data in ways that aid decision making, and implement the right course of action, all of which are facilitated by an understanding of business research (Sekaran, 2003).

Research contributes to the proper organization of a business and to the development of all its constituent elements. Moreover, research can guide the selection and organization of the elements of the business that are most effective for achieving the purpose of a specific project. In this way, research proves to be a force that defines the effective performance of modern companies.

The internet has changed the quality and quantity of research in both positive and negative ways. On the positive side, it offers speed and a vast range of choices when searching for information. On the negative side, false information can also be found online, and not all websites can be considered reliable sources when doing research.

Wk 1 DQ 2

Yes, an organization should apply research to all problems confronting the business, because research investigates the causes of those problems and generates ideas that can be used to solve them.

Exploratory research is most appropriate for an issue or problem where there are few or no earlier studies to refer to. The focus is on gaining insight and familiarity for later investigation.

The main goal of descriptive research is to describe the data and the characteristics of what is being studied. The idea behind this type of research is to examine frequencies, averages, and other statistical summaries. Although such research can be highly accurate, it does not uncover the causes behind a situation. This is why descriptive research alone cannot be used as the basis for final decisions.

Wk 1 DQ 3

Reliability is the consistency of your measurement, or the degree to which an instrument measures the same way each time it is used under the same conditions with the same subjects. In short, it is the repeatability of your measurement. A measure is considered reliable if a person's score on the same test given twice is similar. It is important to remember that reliability is not measured; it is estimated.
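
As a minimal sketch of how that estimation might look (the scores below are invented, and numpy is assumed to be available), the test-retest reliability of an instrument can be estimated as the correlation between two administrations of the same test to the same people:

import numpy as np

# Hypothetical scores for the same five people taking the same test twice.
first_administration = np.array([82, 75, 91, 68, 88])
second_administration = np.array([80, 78, 89, 70, 85])

# Reliability is estimated rather than measured directly; the Pearson
# correlation between the two administrations is one common estimate.
reliability_estimate = np.corrcoef(first_administration, second_administration)[0, 1]
print(f"Estimated test-retest reliability: {reliability_estimate:.2f}")

A value close to 1 would suggest the instrument measures the same way each time; a low value would suggest poor repeatability.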

Validity is the strength of our conclusions, inferences, or propositions. A more formal definition would be "the best available approximation to the truth or falsity of a given inference, proposition, or conclusion."

It is my belief that validity is more important than reliability because if an instrument does not
accurately measure what it is supposed to, there is no reason to use it even if it measures
consistently.

Wk. 2 DQ 1

Primary data is data that you have collected yourself for your own use. Secondary data is data collected by someone else that you are "borrowing." The main disadvantage of using secondary data is that you have no control over how the data was collected. It's human nature to believe most things that are printed, and sometimes that requires people to take a chance on the data provided. The advantage of secondary data is that it is cheap and immediately available. Primary data, on the other hand, is collected first-hand for the purpose at hand. Primary data can be expensive to acquire, but its main advantage is that it is your own data.

Some secondary data may be of questionable accuracy and reliability. Even government publications and trade magazine statistics can be misleading. For example, many trade magazines survey their members to derive estimates of market size, market growth rate, and purchasing patterns, then average out these results. Often these statistics are merely average opinions based on fewer than 10% of their members (Steppingstones, 2004).

Primary and secondary data should both be used: primary data is a direct source, while secondary data covers a broader span of research.

Wk. 2 DQ 2

Some statistics presented on the evening news are valid and reliable. The evening news generally presents reliable information, yet at times one must question the validity of the statistics presented because of possible biases and agendas. Many news outlets are politically driven, which causes them to lean to one political side or the other and skews the validity of the statistics they present. I would say the most valid and reliable statistics on the evening news come from the sports segment, because statistics in sports are fairly cut and dried.

We have to be concerned with errors in questionnaire design because an error in design can confuse participants. Some questionnaires also raise concerns about the influence they may have over participants' answers: some participants may respond in the way they feel the questionnaire wants them to.

As with any questionnaire, both the design and the resulting statistical information are very important to the validity of the data received. Questionnaires can be designed in ways that steer the answers given by participants, which affects their validity. Misinterpreting statistical information can also lead to validity problems, as in the case of Coca-Cola introducing New Coke, which consumers did not take well to, forcing the company to go back to the original formula.

Wk. 2 DQ 3

The term population in statistics represents all possible measurements or outcomes that are of interest to us in a particular study. The term sample refers to a portion of the population that is representative of the population from which it was selected. A real-world example of a population is all Dodge vehicles; a real-world sample from it would be Dodge Dakotas.

With probability sampling, all elements (e.g., persons, households) in the population have some
opportunity of being included in the sample, and the mathematical probability that any one of
them will be selected can be calculated.

With non-probability sampling, in contrast, population elements are selected on the basis of their
availability (e.g., because they volunteered) or because of the researcher's personal judgment that
they are representative. The consequence is that an unknown portion of the population is
excluded (e.g., those who did not volunteer).
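
To make the contrast concrete, here is a small sketch with an invented population of 1,000 households (Python's standard random module assumed); the household numbers and the "volunteer" rule are purely illustrative:

import random

population = list(range(1, 1001))  # 1,000 hypothetical households

# Probability sampling: a simple random sample, so every household has a
# known chance of selection (here 50/1000 = 5%).
probability_sample = random.sample(population, 50)
print("Known inclusion probability:", 50 / len(population))

# Non-probability (convenience) sampling: only households that "volunteered"
# can ever be chosen, so the selection probability of the rest is unknown
# to the researcher and effectively zero.
volunteers = [h for h in population if h % 20 == 0]  # pretend these opted in
convenience_sample = random.sample(volunteers, 25)

With the probability sample, the chance of inclusion can be calculated for every household; with the convenience sample it cannot.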

Wk. 3 DQ 1

The normal distribution is a pattern for the distribution of a set of data that follows a bell-shaped curve. This distribution is sometimes called the Gaussian distribution in honor of Carl Friedrich Gauss, a famous mathematician. The bell-shaped curve has several properties. The curve is concentrated in the center and decreases on either side, which means the data have less of a tendency to produce unusually extreme values compared to some other distributions. The bell-shaped curve is also symmetric, which tells you that deviations from the mean are equally probable in either direction.

The normal distribution is a continuous distribution defined by two parameters, the mean and the variance. Since the variance can be any positive value and the mean can be any real value, there are infinitely many different normal distributions.
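
A quick sketch of this (the mean and variance values are arbitrary, and scipy is assumed to be available) shows how the two parameters define the curve and how the symmetry property plays out:

import numpy as np
from scipy.stats import norm

mean, variance = 100.0, 225.0  # hypothetical test-score scale
std_dev = np.sqrt(variance)

# Symmetry: deviations of equal size above and below the mean are equally likely.
print(norm.pdf(mean - 15, loc=mean, scale=std_dev))
print(norm.pdf(mean + 15, loc=mean, scale=std_dev))

# Concentration in the center: about 68% of values fall within one
# standard deviation of the mean.
print(norm.cdf(mean + std_dev, mean, std_dev) - norm.cdf(mean - std_dev, mean, std_dev))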

The idea behind statistics is to make an inference about a population based on a small, representative sample of that population. It would be impossible to do this if we assumed the data represented a different population than the one of interest.

It is very helpful and makes the models and calculations much easier if we know that the data
follows a known distribution.

Wk. 3 DQ 2

When the number of possible values of a variable is infinitely large and impossible to count, the data are continuous. An example of continuous data would be the number of stars on a clear night: you will not be able to count all the stars you can see. On the other hand, if the number of possible values of a variable is not infinitely large and can be counted, the data are discrete. An example of discrete data is the number of family members who live in your home.

The measurement of money is considered discrete data. Money can be expressed in figures such as $25.61, but because money only pays out to two decimal places, the measurement of money is discrete.

Yes, you will have to approximate the discrete data and treat it as continuous. The only trouble you may have is that you may not be able to find the probability of an event equal to a single value.
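
As a rough sketch of that point (the binomial parameters below are invented; scipy assumed available), a discrete count can be approximated by a normal distribution, but the continuous curve assigns zero probability to any single value, so an interval with a continuity correction stands in for the single value:

from scipy.stats import binom, norm

n, p = 100, 0.3
mu, sigma = n * p, (n * p * (1 - p)) ** 0.5

# Exact discrete probability of a single value, e.g. P(X = 30):
print(binom.pmf(30, n, p))

# Under the continuous approximation the probability of exactly 30 is zero,
# so the interval from 29.5 to 30.5 is used instead (continuity correction).
print(norm.cdf(30.5, mu, sigma) - norm.cdf(29.5, mu, sigma))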

Wk. 3 DQ 3

The mean of a set of data is probably what most people refer to as average. You find the mean of
a set of data by adding up all the numbers in the data set and then dividing by the number of data
points you added up. The median refers to the number in the middle of a data set. When finding
the median of a data set, you have to make sure that the numbers are put in order first. If there is
an odd number of data in the list, there is only one number that is exactly in the middle of the
data. But if there is an even number of data points, then there are two numbers in the middle. In
that case, you have to add those two numbers together and then divide by two to find the median.
The mode of a data set refers to the number that occurs most often. If there is not a number that
occurs more than any other, we say there is no mode for the data. It is possible to have more than
one mode for a data set.

It depends on your set of data: if it has an outlier, the median will be the better option; if not, go with the mean.

The median is the best measure of central tendency when the data are skewed or when there are outliers.
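
A short illustration (the salaries are made up; only the Python standard library is used) of how an outlier pulls the mean but not the median, and of how the mode is found:

import statistics

salaries = [38000, 41000, 42000, 45000, 47000]
with_outlier = salaries + [400000]

print(statistics.mean(salaries), statistics.median(salaries))          # close together
print(statistics.mean(with_outlier), statistics.median(with_outlier))  # mean is pulled up

# Mode: the most frequently occurring value(s); multimode lists all of them.
print(statistics.multimode([2, 3, 3, 5, 5, 7]))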

WK. 4 DQ 1

The shape is a concern because if the shape is symmetrical, the mean will be near the center of the distribution. The mean is very sensitive to extreme values, and if the distribution is not symmetrical, the mean will be pulled away from the center and toward the extreme values. Also, since most statistics rely on normality, to use those statistics it is important that the underlying population be normally distributed.

The larger the sample size, the smaller the standard error (standard deviation / sqrt(n)) and thus the more reliable the estimate. Also, when analyzing the mean with large samples, you can rely on all the tests and statistics that assume normality, based on the central limit theorem.

Yes, there are various methods for normalizing a skewed distribution, such as the square-root transformation and the logarithmic transformation, to name two.

1. Begin with a set of data that is obviously not normal, such as the uniform distribution or any other non-normal distribution. Draw a histogram to show that it is not normal.
2. Select samples of size 5 from the data, compute the average of each, repeat several hundred times, and draw a histogram of this set of averages.
3. Repeat step 2 with a larger sample size and draw the histogram again.
4. You will see the histogram of the averages come to resemble a normal distribution as the sample size increases (see the sketch below).
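
Here is a rough sketch of those four steps (numpy and matplotlib assumed available; the uniform population and sample sizes are arbitrary choices):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
population = rng.uniform(0, 100, size=100_000)  # step 1: clearly not normal

def sample_means(sample_size, repeats=500):
    # steps 2 and 3: draw many samples of the given size and average each one
    return [rng.choice(population, size=sample_size).mean() for _ in range(repeats)]

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
axes[0].hist(population, bins=40); axes[0].set_title("population (uniform)")
axes[1].hist(sample_means(5), bins=40); axes[1].set_title("sample means, n = 5")
axes[2].hist(sample_means(50), bins=40); axes[2].set_title("sample means, n = 50")
plt.show()  # step 4: the histograms of averages look increasingly normal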

Wk 4 DQ 2

The confidence interval represents a range of values around the sample mean that is likely to contain the true population mean.

The most controllable way to increase precision, and thus narrow the confidence interval, is to increase the sample size. In other words, taking a larger sample increases the precision and narrows the confidence interval.

Quite simply, the confidence level represents how often intervals built this way would capture the true value across repeated samples. It is the percentage that accompanies the width of the confidence interval; it is often set to 95% by convention but can be adjusted.
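
As a sketch of the sample-size effect described above (the population mean and standard deviation are invented; numpy and scipy assumed available), the width of a 95% confidence interval shrinks as n grows:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
for n in (25, 100, 400):
    sample = rng.normal(loc=50, scale=10, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)         # standard error
    margin = stats.t.ppf(0.975, df=n - 1) * se   # 95% margin of error
    print(f"n={n:4d}  CI width = {2 * margin:.2f}")

Each fourfold increase in sample size roughly halves the width of the interval.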

Wk 4 DQ 3

To win the election, we know you need to win over 50% of the total votes. In this case, 1,100 x 0.50 = 550 votes, so you need more than 550 supporters in the sample to suggest a win. If 572 respondents said they were planning to vote for the current mayor, my initial hunch is that the mayor will win, since 572 is over 50% of the sample.

572 / 1100 = 0.52. Assuming under the null hypothesis that the true proportion is 0.5, the standard error is sqrt(0.5 * 0.5 / 1100) = 0.015076.

For a 95% confidence level, the upper critical value is 0.5 + 1.96 * 0.015076 = 0.5295.

Since the sample proportion (0.52) is less than this critical value, you cannot reject the hypothesis that there is a 50% chance of either candidate winning. You cannot be 95% confident that your candidate will win.
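
The same calculation can be sketched as a one-proportion z-test (scipy assumed available; the numbers come from the problem above):

from math import sqrt
from scipy.stats import norm

n, successes, p0 = 1100, 572, 0.5
p_hat = successes / n                        # 0.52
se = sqrt(p0 * (1 - p0) / n)                 # 0.015076

upper_critical = p0 + norm.ppf(0.975) * se   # about 0.5295
z = (p_hat - p0) / se
print(p_hat, upper_critical, z, 1 - norm.cdf(z))
# 0.52 is below 0.5295 (equivalently z is about 1.33, one-sided p about 0.09),
# so we cannot be 95% confident that the current mayor will win.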

Wk.5 DQ 1

Yes. When we incorrectly reject or fail to reject the null hypothesis, in this case the idea that "the machine is accurate," we have made a type 1 or type 2 error.
 
It is a type 1 error when we reject a null hypothesis that is actually true. In most cases this is
definitely the more serious type of error. We don't want to reject a null hypothesis that a
medicine does no good and start making widespread use of the medicine when it really does do
no good.
 
A type 2 error involves not rejecting the null hypothesis when the null hypothesis is really false.
When we do statistical tests we never really embrace the null hypothesis, we merely decide there
is not enough evidence to reject it. So when we do not reject a false null hypothesis it is a lesser
kind of error if indeed it should be considered an error at all. But we do call it an error--a type 2
error. This is the type of error we make when we decide there is insufficient evidence in support
of some medicine to reject the idea that it does no good--when it really does good. Basically
what we do here is send it back for further testing--not reject it for all time.
 
With your bottle-filling machine, we really want to catch any bad machines, so we will set up the test so that it is easy to reject the null hypothesis. This will result in far more type 1 errors, where we incorrectly reject a true null hypothesis.
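
A rough simulation of those two error types for a bottle-filling machine (all of the numbers, the 500 ml target, and the use of a one-sample t-test are my own assumptions for illustration; numpy and scipy assumed available):

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha, target, n = 0.05, 500.0, 30  # 500 ml target fill, 30 bottles per check

def rejection_rate(true_mean, trials=2000):
    # Fraction of tests that reject the null hypothesis "the machine is accurate".
    rejections = 0
    for _ in range(trials):
        sample = rng.normal(true_mean, 5.0, size=n)
        _, p_value = stats.ttest_1samp(sample, popmean=target)
        rejections += p_value < alpha
    return rejections / trials

print("Type 1 error rate (machine accurate):", rejection_rate(500.0))  # near alpha
power = rejection_rate(502.0)  # machine actually off by 2 ml
print("Type 2 error rate (machine off by 2 ml):", 1 - power)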

Wk. 5 DQ 2
In hypothesis testing the null hypothesis is never proven or established, but is possibly disproved,
in the course of the test. Every experiment may be said to exist only in order to give the facts a
chance of disproving the null hypothesis. In other words, hypothesis testing begins with the
statement that the “null” is true and it is up to the experimenter/researcher to prove it false. If the
researcher proves it to be false, then the “null” is rejected. But if the researcher cannot prove that
the “null” is false, it does not mean the “null” will be accepted as true because we had initially
set out with the assumption that the “null” is true. This is why we either reject the null hypothesis
or fail to reject the null hypothesis, but never accept it.

An example from real life is the cornerstone of the legal system: A person is considered innocent
until proven guilty in a court of law. We may formulate this decision-making process in the form
of a hypothesis test, as follows:
Ho: The person is innocent vs. Ha: The person is not innocent (is guilty).

Now, it is up to the prosecution to build a case to prove guilt beyond a reasonable doubt. It should be noted here that a jury can never prove a person to be innocent. The defendant can only be declared not guilty; i.e., the jury has failed to reject the null hypothesis, but it has not accepted it.
