Modeling the Distribution of
Sample Proportions
• Rather than showing real repeated
samples, imagine what would happen if
we were to actually draw many samples.
• Now imagine what would happen if we
looked at the sample proportions for these
samples. What would the histogram of all
the sample proportions look like?
Modeling the Distribution of Sample Proportions (cont.)
• We would expect the histogram of the
sample proportions to center at the true
proportion, p, in the population.
• As far as the shape of the histogram goes,
we can simulate a bunch of random
samples that we didn’t really draw.
• It turns out that the histogram is unimodal,
symmetric, and centered at p.
• More specifically, it’s an amazing and
fortunate fact that a Normal model is just
the right one for the histogram of sample
proportions.
• To use a Normal model, we need to
specify its mean and standard deviation.
The mean of this particular Normal is at p.
Modeling the Distribution of Sample Proportions (cont.)
• When working with proportions, knowing the mean automatically gives us the standard deviation as well—the standard deviation we will use is √(pq/n).
• A picture of what we just discussed is as follows: [figure of the Normal model not included in this transcription]
• So, the distribution of the sample proportions is modeled with a probability model that is
  N(p, √(pq/n))
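To make this concrete, here is a minimal Python simulation sketch (the values p = 0.3 and n = 100 are hypothetical; numpy is assumed available) comparing simulated sample proportions with the Normal model's mean and standard deviation:

import numpy as np

p, n = 0.3, 100          # hypothetical population proportion and sample size
rng = np.random.default_rng(1)

# Draw many samples and record each sample proportion p-hat
p_hats = rng.binomial(n, p, size=10_000) / n

# Compare the simulation with the Normal model N(p, sqrt(pq/n))
print("simulated mean:", p_hats.mean(), " model mean:", p)
print("simulated SD:  ", p_hats.std(), " model SD:  ", np.sqrt(p * (1 - p) / n))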
How Good Is the Normal Model?
• The Normal model becomes a better model for the distribution of sample proportions as the sample size gets bigger.
• Just how big of a sample do we need? This will soon be revealed…
Assumptions and Conditions
• Most models are useful only when specific assumptions are true.
• There are two assumptions in the case of the model for the distribution of sample proportions:
  1. The sampled values must be independent of each other.
  2. The sample size, n, must be large enough.
Assumptions and Conditions (cont.)
• Assumptions are hard—often impossible—to
check. That’s why we assume them.
• Still, we need to check whether the assumptions
are reasonable by checking conditions that
provide information about the assumptions.
• The corresponding conditions to check before
using the Normal to model the distribution of
sample proportions are the 10% Condition and
the Success/Failure Condition.
1. 10% condition: If sampling has not been made with replacement, then the sample size, n, must be no larger than 10% of the population (N > 10n).
2. Success/failure condition: The sample size has to be big enough so that both np and nq are greater than 10.
Assumptions and Conditions (cont.)
Example:
A candy company claims that 25% of the jelly beans in its
spring mix are pink. Suppose that the candies are
packaged at random in small bags containing about 300
jelly beans. A class of students opens several bags,
counts the various colors of jelly beans, and calculates the
proportion that are pink in each bag. Is it appropriate to
use a Normal model to describe the distribution of the
proportion of pink jelly beans?
A Normal model is appropriate:
Randomization condition is satisfied: the 300 jelly beans in the bag are selected at random and can be considered representative of all jelly beans.
10% condition is satisfied: the sample size, 300, is less than 10% of the population of all jelly beans.
Success/failure condition is satisfied: np = 300(0.25) = 75 and nq = 300(0.75) = 225 are both greater than 10.
So, we need a large enough sample that is not too large.
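As an illustration only, a minimal Python sketch of this kind of condition check (the helper check_proportion_conditions and its arguments are made up for this sketch, not taken from the text; the jelly bean population is treated as far larger than 10n):

import math

def check_proportion_conditions(p, n, population_size=None):
    """Check the 10% and success/failure conditions for using a Normal model for p-hat."""
    ok_10pct = population_size is None or population_size > 10 * n
    ok_success_failure = n * p >= 10 and n * (1 - p) >= 10
    return ok_10pct and ok_success_failure

p, n = 0.25, 300                       # claimed pink proportion, jelly beans per bag
if check_proportion_conditions(p, n):  # population of all jelly beans is effectively unlimited
    sd = math.sqrt(p * (1 - p) / n)
    print(f"Normal model: mean = {p}, SD = {sd:.4f}")   # SD = 0.025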
A Sampling Distribution Model
for a Proportion
• A proportion is no longer just a computation
from a set of data.
– It is now a random quantity that has a distribution.
– This distribution is called the sampling distribution
model for proportions.
• Even though we depend on sampling
distribution models, we never actually get to
see them.
– We never actually take repeated samples from the
same population and make a histogram. We only
imagine or simulate them.
A Sampling Distribution Model
for a Proportion (cont.)
• Still, sampling distribution models are important
because
– they act as a bridge from the real world of data to the
imaginary world of the statistic and
– enable us to say something about the population
when all we have is data from the real world.
The Sampling Distribution Model
for a Proportion (cont.)
• Provided that the sampled values are independent and the sample size is large enough, the sampling distribution of p̂ is modeled by a Normal model with
  – Mean: μp̂ = p
  – Standard deviation: σp̂ = √(pq/n)
The Sampling Distribution Model for a Proportion (cont.)
Example:
Assume that 25% of students at a university wear contact
lenses. We randomly pick 200 students. What is the
probability that more than 28% of this sample wear contact
lenses?
Conditions:
• Randomization is satisfied
• N > 10(200)
• np = 200(0.25) = 50 ≥ 10
• nq = 200(0.75) = 150 ≥ 10
z = (0.28 − 0.25) / √((0.25)(0.75)/200) = 0.03 / 0.03062 = 0.9798
P(p̂ > 0.28) = P(z > 0.98) ≈ 0.164
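A quick way to reproduce this calculation, assuming Python with scipy available (norm.cdf is the standard Normal CDF):

import math
from scipy.stats import norm

p, n, p_hat = 0.25, 200, 0.28
sd = math.sqrt(p * (1 - p) / n)       # standard deviation of the sampling distribution
z = (p_hat - p) / sd                  # z ≈ 0.98
prob = 1 - norm.cdf(z)                # P(p-hat > 0.28)
print(f"SD = {sd:.5f}, z = {z:.4f}, P = {prob:.4f}")   # P ≈ 0.164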
What About Quantitative Data?
• Proportions summarize categorical
variables.
• The Normal sampling distribution model
looks like it will be very useful.
• Can we do something similar with
quantitative data?
• We can indeed. Even more remarkably, not only can we use all of the same concepts, but almost the same model.
Simulating the Sampling Distribution
of a Mean
• Like any statistic computed from a random
sample, a sample mean also has a
sampling distribution.
• We can use simulation to get a sense as
to what the sampling distribution of the
sample mean might look like…
Means – The “Average” of One Die
• Let’s start with a simulation of 10,000
tosses of a die. A histogram of the results
is:
Means – Averaging More Dice
• Looking at the
average of two dice
after a simulation of
10,000 tosses:
• The average of three
dice after a simulation
of 10,000 tosses
looks like:
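The histograms themselves are not reproduced in this transcription; a short Python sketch (numpy and matplotlib assumed available) that generates the same kind of pictures for one, two, and three dice:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
tosses = 10_000

fig, axes = plt.subplots(1, 3, figsize=(10, 3))
for ax, n_dice in zip(axes, [1, 2, 3]):
    # Average of n_dice fair dice, repeated 10,000 times
    averages = rng.integers(1, 7, size=(tosses, n_dice)).mean(axis=1)
    ax.hist(averages, bins=20)
    ax.set_title(f"average of {n_dice} dice")
plt.tight_layout()
plt.show()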
Means – Averaging Still More Dice
• The average of 5 dice after a simulation of 10,000 tosses looks like:
• The average of 20 dice after a simulation of 10,000 tosses looks like:
Means – What the Simulations Show
• As the sample size (number of dice) gets larger, each sample average is more likely to be closer to the population mean.
  – So, we see the shape continuing to tighten around 3.5.
• And, it probably does not shock you that the sampling distribution of a mean becomes Normal.
The Fundamental Theorem of Statistics
• The sampling distribution of any mean becomes Normal as the sample size grows.
  – All we need is for the observations to be independent and collected with randomization.
  – We don’t even care about the shape of the population distribution!
• The Fundamental Theorem of Statistics is called the Central Limit Theorem (CLT).
The Fundamental Theorem of Statistics (cont.)
The Central Limit Theorem (CLT)
The mean of a random sample has a sampling distribution whose shape can be approximated by a Normal model. The larger the sample, the better the approximation will be.
The Fundamental Theorem of Statistics (cont.)
• The CLT is surprising and a bit weird:
  – Not only does the histogram of the sample means get closer and closer to the Normal model as the sample size grows, but this is true regardless of the shape of the population distribution.
• The CLT works better (and faster) the closer the population model is to a Normal itself. It also works better for larger samples.
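A brief Python simulation can illustrate this claim; the exponential population below is just one convenient skewed example (numpy assumed available):

import numpy as np

rng = np.random.default_rng(42)

# A strongly right-skewed population: exponential with mean 1 and SD 1
pop_mean, pop_sd = 1.0, 1.0

for n in (2, 10, 50):
    # 10,000 sample means, each computed from a sample of size n
    means = rng.exponential(scale=1.0, size=(10_000, n)).mean(axis=1)
    print(f"n = {n:2d}: mean of sample means = {means.mean():.3f} (model: {pop_mean}), "
          f"SD = {means.std():.3f} (model: {pop_sd / np.sqrt(n):.3f})")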
Assumptions and Conditions
• The CLT requires remarkably few assumptions, so there are few conditions to check:
1. Random Sampling Condition: The data values must
be sampled randomly or the concept of a sampling
distribution makes no sense.
2. Independence Assumption: The sample values
must be mutually independent. (When the sample is
drawn without replacement, check the 10%
condition…)
A Sampling Distribution Model
for a Mean
• The mean of the sampling distribution of the sample mean is equal to the population mean:
  μx̄ = μ
• The standard deviation of the sampling distribution for means is equal to the population standard deviation divided by the square root of the sample size:
  σx̄ = σ/√n
Diminishing Returns
• The standard deviation of the sampling distribution declines only with the square root of the sample size.
• While we’d always like a larger sample, the square root limits how much we can make a sample tell us about the population. (This is an example of the Law of Diminishing Returns.)
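A tiny numeric illustration of this square-root behavior, using a hypothetical population standard deviation of 10:

import math

sigma = 10          # hypothetical population standard deviation
for n in (25, 100, 400, 1600):
    print(n, sigma / math.sqrt(n))   # quadrupling n only halves the standard deviation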
A Sampling Distribution Model for a Mean
Example:
The mean annual income for adult women in one city is $28,520 and the standard deviation of the incomes is $5700. The distribution of incomes is skewed to the right. Determine the sampling distribution of the mean for samples of size 110. In particular, state whether the distribution of the sample mean is normal or approximately normal and give its mean and standard deviation.
Approximately normal (the sample size, 110, is large enough for the CLT to apply even though incomes are skewed),
mean = $28,520,
standard deviation = 5700/√110 ≈ $543
A Sampling Distribution Model for a Mean
Example:
The number of hours per week that high school seniors spend on homework is normally distributed, with a mean of 11 hours and a standard deviation of 3 hours. 70 students are chosen at random. Find the probability that the mean number of hours spent on homework for this group is between 10.2 and 11.5.
Conditions:
• Randomization is satisfied
• N > 10(70)
z = (10.2 − 11) / (3/√70) = −0.8 / 0.3586 = −2.231
z = (11.5 − 11) / (3/√70) = 0.5 / 0.3586 = 1.394
P(10.2 < x̄ < 11.5) = P(−2.231 < z < 1.394) ≈ 0.9055
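The same probability computed directly in Python (scipy assumed available):

import math
from scipy.stats import norm

mu, sigma, n = 11, 3, 70
se = sigma / math.sqrt(n)                      # ≈ 0.3586
z_low = (10.2 - mu) / se                       # ≈ -2.231
z_high = (11.5 - mu) / se                      # ≈ 1.394
prob = norm.cdf(z_high) - norm.cdf(z_low)      # ≈ 0.9055
print(f"P(10.2 < x-bar < 11.5) ≈ {prob:.4f}")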
A Sampling Distribution Model
for a Mean
Example:
At a shoe factory, the time taken to polish a finished shoe
has a mean of 3.7 minutes and a standard deviation of 0.48
minutes. If 44 shoes are polished, there is a 5% chance that
the mean time to polish the shoes is below what value?
Conditions:
• Randomization is assumed
• N > 10(44)
P(z < −1.645) = 0.05
−1.645 = (x̄ − 3.7) / (0.48/√44)  ⇒  −0.119 = x̄ − 3.7  ⇒  x̄ = 3.581 minutes
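The cutoff can also be found directly in Python (scipy assumed; norm.ppf returns the z-score for a given left-tail probability):

import math
from scipy.stats import norm

mu, sigma, n = 3.7, 0.48, 44
se = sigma / math.sqrt(n)            # ≈ 0.0724 minutes
z = norm.ppf(0.05)                   # ≈ -1.645
cutoff = mu + z * se                 # ≈ 3.581 minutes
print(f"5% of sample means fall below {cutoff:.3f} minutes")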
Standard Error
• Both of the sampling distributions we’ve looked at are Normal.
  – For proportions: SD(p̂) = √(pq/n)
  – For means: SD(x̄) = σ/√n
Standard Error (cont.)
• When we don’t know p or σ, we’re stuck, right?
• Nope. We will use sample statistics to estimate these population parameters.
• Whenever we estimate the standard deviation of a sampling distribution, we call it a standard error.
Standard Error (cont.)
• For a sample proportion, the standard error is
  SE(p̂) = √(p̂q̂/n)
• For the sample mean, the standard error is
  SE(x̄) = s/√n
Sampling Distribution Models
• Always remember that the statistic itself is a random quantity.
  – We can’t know what our statistic will be because it comes from a random sample.
Sampling Distribution Models (cont.)
• There are two basic truths about sampling distributions:
  1. Sampling distributions arise because samples vary. Each random sample will have different cases and, so, a different value of the statistic.
  2. Although we can always simulate a sampling distribution, the Central Limit Theorem saves us the trouble for means and proportions.
• Fortunately, for the mean and proportion, the CLT tells us that we can model their sampling distribution directly with a Normal model.
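A short Python sketch computing both standard errors from hypothetical sample data (the counts and values below are made up purely for illustration), following the formulas above:

import math

# Hypothetical sample data
successes, n_prop = 132, 400          # e.g., 132 "successes" out of 400 sampled
values = [4.1, 5.3, 3.8, 6.0, 4.9, 5.5, 4.4, 5.1]   # a small quantitative sample

# Standard error of a sample proportion: sqrt(p-hat * q-hat / n)
p_hat = successes / n_prop
se_prop = math.sqrt(p_hat * (1 - p_hat) / n_prop)

# Standard error of a sample mean: s / sqrt(n)
n = len(values)
mean = sum(values) / n
s = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
se_mean = s / math.sqrt(n)

print(f"SE(p-hat) = {se_prop:.4f}, SE(x-bar) = {se_mean:.4f}")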
What Can Go Wrong?
• Don’t confuse the sampling distribution
with the distribution of the sample.
– When you take a sample, you look at the
distribution of the values, usually with a
histogram, and you may calculate summary
statistics.
– The sampling distribution is an imaginary
collection of the values that a statistic might
have taken for all random samples—the one
you got and the ones you didn’t get.
What Can Go Wrong? (cont.)
• Beware of observations that are not
independent.
– The CLT depends crucially on the assumption
of independence.
– You can’t check this with your data—you have
to think about how the data were gathered.
What have we learned?
• Sample proportions and means will vary
from sample to sample—that’s sampling
error (sampling variability).
• Sampling variability may be unavoidable,
but it is also predictable!
• Watch out for small samples from skewed
populations.
– The more skewed the distribution, the larger
the sample size we need for the CLT to work.
What have we learned? (cont.)
• We’ve learned to describe the behavior of
sample proportions when our sample is random
and large enough to expect at least 10
successes and failures.
• We’ve also learned to describe the behavior of
sample means (thanks to the CLT!) when our
sample is random (and larger if our data come
from a population that’s not roughly unimodal
and symmetric).