The Effects of Class Size in Online College Courses:
Experimental Evidence
By Eric Bettinger, Christopher Doss, Susanna Loeb, and Eric Taylor
May 12, 2014
VERY PRELIMINARY
PLEASE DO NOT QUOTE OR CITE
1. Introduction
Class size is a perennial issue in the economics of education. It has implications for both the
cost and production of education. In K-12 education, the effects of class size have been
vigorously debated (e.g. Mosteller 1995, Hanushek 2002, Krueger 2002, Hoxby 2000, Krueger
and Whitmore 2000, Angrist and Lavy 1999, Leuven, Oosterbeek, and Ronning 2008, Gary-Bobo
and Mahjoub 2009, Woessmann and West 2006, Dynarski, Hyman, and Schanzenbach 2011, Chetty,
Friedman, Hilger, Saez, Schanzenbach, and Yagan 2010). Additionally, policymakers often view
class size as a policy lever to possibly improve K-12 education, and several states regulate class
size (e.g. Morgan-Hart Class Size Reduction Act in California). Class size in college has also
received some attention (Bettinger and Long 2008).
Our setting focuses on class size in virtual classrooms. Online college courses are becoming
more and more important in higher education. About one-third of US students take at least one
course online during their college career. The proportion of students who take at least one
course online has tripled over the past decade (Allen and Seaman 2013). More than 70 percent of
public colleges now have programs that occur completely online.
Class size online presents a different set of financial and educational production
challenges. For example, the cost of adding an additional student is often negligible in online
settings. No new desks and no new equipment are needed. The new costs are incurred only
through additional staff time. Additionally, whereas class size might affect students through
peers or congestion (e.g. Lazear 2002), interactions substantially change in an online setting
where discussion boards are the primary forum where peers interact. While online courses may
present an opportunity to reduce higher education costs, any adverse impact of class size could
lead to a deterioration in the overall quality of college courses.
Selection issues are also pervasive. Students may be strategic in their course choices: they
choose courses, class sizes, and professors in nonrandom ways that may be related to outcomes.
For instance, good instructors generally attract more students and thus have larger classes.
Larger class sizes could therefore be associated with instructor reputation, again making it
difficult to compare outcomes across class sizes.
To measure the effects of collegiate class size while addressing these empirical difficulties,
we track nearly 118,000 of the students who enrolled in DeVry University in 2013. While DeVry
began primarily as a technical school in the 1930s, today 80 percent of the University’s students
are seeking a bachelor’s degree, and most students major in business management, technology,
health, or some combination. Two-thirds of undergraduate courses occur online; the other third
occur at nearly 100 physical campuses throughout the United States. In 2010 DeVry enrolled
over 130,000 undergraduates, or about 5 percent of the for-profit college market, placing it
among the 8-10 largest for-profit institutions that combined represent about 50 percent of the
market.
In 2013, DeVry conducted a unique set of class size experiments across most of their online
course offerings. In each included course, they created both large and small classes. They
modified class sizes by 2 to 5 students in each section. These differences represent changes as
small as 2.9 percent or as large as 25 percent. DeVry University conducted this experiment in an
interesting way. Their registration system makes course capacity visible to students at the time
of registration while not revealing the professor. Rather than revealing the different class sizes to
students, DeVry had the system list all sections as having the same size. At a certain point in
registration, DeVry took new registrants and assigned them to previously created sections. For
example, the first two sections of a particular course typically fill up weeks before the term
started. In the few days prior to the term, the experiment began. Five new registrants after
this point would be assigned to one of the first two sections. The changed class would be the
“large” class while the unchanged class would be the “small” class. These changes would occur
throughout all of the existing sections -- every other section became a “large” class.
Modifying class sizes with new registrants hides from students whether they are in a
treatment or control section. However, it also complicates the estimation of class size
effects. Students who register later in the registration process differ systematically from those
who register early. Our prior work (Bettinger, Loeb, and Taylor 2013) suggests that weaker
students are more likely to register closer to the start of the course. If this is the case, then the
class size intervention created both a class size effect and a potentially negative peer effect. As
we discuss below, we use a few techniques to try to separate these effects; however, we note that
these effects should, at least theoretically, reinforce each other: a negative class size effect
would be made even more negative by the peer effect.
The results suggest that, after addressing issues of selection, small changes in class size
generally have no effect on student learning in online courses. Large classes do not seem to
adversely affect students. This result is consistent even across the types of courses where one
could expect a meaningful exception. For example, classes that require substantial faculty
interaction, or courses where increased class size might crowd out meaningful interactions with
faculty, could theoretically generate meaningful class size effects. We find, however, that even in
these courses no class size effect is present. Our finding can be interpreted either as class size
not having an effect in these online settings or as the effects of class size being flat over the
local range of changes that we examine. We discuss these possibilities at length.
The paper is organized as follows. Section 2 includes the background on class size and
online schooling. Section 3 presents our data and methodology. Section 4 presents our baseline
results. Section 5 presents heterogeneity results. Section 6 presents robustness checks. Section
7 concludes.
2. Background on Class Size
Economists have long been interested in the effects of class size on student
outcomes. However, much of this research suffers from selection bias by comparing the
outcomes of students in small and large classes without taking into account confounding factors
that may influence both class size and outcomes. Several studies, however, focus on sources of
exogenous variation in class size such as the Tennessee STAR experiment. This intervention
assigned students in kindergarten through third grade to class sizes ranging from 15 to
22. Exploiting the random assignment research design, Mosteller (1995) finds positive effects
on early achievement. Krueger and Whitmore (2000) provide evidence of longer-term effects:
students who had been placed in a smaller class were more likely to attend college later and
performed better on their SAT scores. Dynarski, Hyman, and Schazenbach (2011) extend this
analysis to college completion showing that college completion rates increase by 1.6 percent and
students who were assigned to small classes are more likely than students assigned to large
classes to major in STEM, business, or economics. Chetty et al. (2010) show mixed results as to
whether the small classes led to higher earnings for students later in life.
One of the criticisms of research based on STAR (e.g. Hoxby 2000, Lehrer and Ding 2011) is
that teachers knew that they were part of an experiment – an experiment that could have led to a
more desirable outcome for the teachers regardless of the impact on students. Therefore,
subsequent papers use alternative sources of variation rather than policy experiments. Angrist
and Lavy (1999) estimate the impact of class size on student outcomes in Israel. Following
Maimonides' teachings centuries earlier, Israeli schools create an additional class once class size
reaches 40. In small schools, this creates dramatic variation as a school with 40 students should
have two classes with an average of 20 students per class while a school with 39 students will
only have one class. Using this exogenous variation, Angrist and Lavy (1999) estimate that
smaller classes lead to better student test scores. Hoxby (2000) instead exploits year-to-year
changes in the size of the student population. Due to randomness in the number of children
ready for school each year, there is natural variation in class size over time. Hoxby uses an
instrumental variables strategy based on this variation and does not find a class size effect on
student achievement.
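The Maimonides-rule mechanics described above can be made concrete with a short sketch. This is our own illustration of the rule exactly as the paragraph states it (an additional class is created once class size would reach 40); the function name and enrollments are hypothetical.

```python
import math

def avg_class_size(enrollment, max_size=39):
    """Average class size when an additional class is created once
    class size would reach 40, i.e. each class holds at most 39."""
    n_classes = math.ceil(enrollment / max_size)
    return enrollment / n_classes

# A school with 39 students has one class of 39, but a school with
# 40 students must split into two classes averaging 20 students.
print(avg_class_size(39))  # 39.0
print(avg_class_size(40))  # 20.0
print(avg_class_size(41))  # 20.5
```

This discontinuity is what gives Angrist and Lavy exogenous variation: moving from 39 to 40 enrolled students nearly halves average class size.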
There are several possible reasons for the different estimated effects of class size found in
the K-12 literature. First, the respective authors look at different populations and contexts. Class
size may matter in some settings but not others. A second reason for the differing results is that
the source of variation for each of these studies is from different parts of the size distribution,
and the effects of class size could be very nonlinear; there could be positive effects over some
parts of the distribution while no effects in other parts.
While the debate continues over class size in primary and secondary settings, there are few
studies that evaluate the effects of class size on student outcomes in college. The small literature
that exists focuses only on limited environments (i.e. only one institution or one subject), often
does not address issues of bias, and has also found conflicting results. For example, Dillon,
Kokkelenberg, and Christy (2002) compare small and large classes at one institution and find
that class size affects the grade distribution of a course. However, their paper does not fully
address potential selection bias in how students sort into classes. Using a national database
(TUCE III), Kennedy and Siegfried (1997) instead examine the average student outcomes in 69
economics classes representing 53 different universities. They find class size is not related to
student achievement. Becker and Powers (2001) find that class size is negatively related to
student learning in economics courses. Bettinger and Long (2008) use differences in yields from
year to year as an instrument for class size in introductory courses. They find that student
outcomes are worse in large classes. They also demonstrate that student ability and professor
characteristics differ in large and small collegiate classes. These selection issues can compound
simple comparisons.
Theoretical Framework: Why might Class Size Matter?
There could be a number of mechanisms by which assignment to a large class affects student
outcomes. These include the direct peer effects of the number in a classroom as well as the
indirect effects that stem from the impact of size on faculty and student behavior.
The most cited model in class size papers is Lazear’s disruptive peer model. Lazear
(1999) presents a model where disruptive peers create a negative externality, which reduces other
students' learning. Large classes could potentially have more disruptive students than smaller
classes, thereby suggesting more disruptions and lower achievement in larger courses. The
likelihood of disruptions may also vary by the types of students in a specific class (e.g. honors
sections) and this may explain why not all large classes appear to have negative effects on their
students. College students may disturb classes by arriving late, using cell phones, playing
computer games, or asking repetitive questions.
Lazear's model holds more generally for any interaction between students or faculty that
crowds out productive learning. For example, larger classes may also suffer from congestion
effects. Not only can disruptions crowd out learning, but the pace of instruction can vary
dramatically depending on congestion. Students who learn slowly may cause the class to move
more slowly through topics. By contrast, students who learn quickly may inhibit slower-learning
students from asking productive questions. These differences in the paces of learning can crowd
out productive learning. In online settings, many of the discussions occur through asynchronous
posting in discussion boards. Students may struggle to get personalized responses to their
postings if congestion crowds out productive discussion. This could occur if students’ postings
are not complementary.
Class size may also affect the relationship students have with their professors. For
example, if a professor has limited time to devote to a class, as the size of the course increases,
each student will have less personal time. In this way, class size could affect student engagement
with the professor. Topp (1984) suggests that large class sizes early in a student's academic
career may alienate students leading to disassociation with the institution and consequently
student withdrawal. Class size may also affect the instructor's behavior. As class size increases,
professors may change the way in which they teach or the teaching technologies they
employ. For example, professors may rely less on classroom discussion in a large section of a
course. If classroom discussion helps students learn (or helps a student feel integrated in the
university), then changes in class size may affect student learning and dropout behavior. In the
context of online courses, many classes have large projects with multiple submission
deadlines. Students may be dependent on professor input to improve their assignments, but it
may be more difficult for professors to provide quality feedback in larger classes.
Additionally, a number of papers on class size have claimed that larger class size affects
the acquisition of higher order cognitive thought processes. In the mid-to-late 1990's, the
collection of data on introductory economic student test scores spurred a number of articles on
the effects of class size on student outcomes (e.g. Kennedy and Siegfried 1995; Becker and
Powers 2001). Some of the research during this time focused on pedagogy (e.g. McKeachie
1986) while others focused on identifying scenarios where class size might matter. While
generally these papers find that increased class size did not affect learning, the research showed
that new cognitive skills were less likely to be assimilated by students in large classes. The small
sample sizes and lack of exogenous variation make it difficult to interpret the conclusions from
these papers as causal.
3. Data and Methodology
In this paper we capitalize on an experiment conducted by DeVry University to address
the empirical question of whether class size affects student outcomes in the online, college
context. In particular, we ask whether increasing online class sizes affects student GPA, credits
received in the next term, and persistence into the next term. In addition, we investigate
heterogeneity in this effect by course type (Science/Mathematics courses, Social Science
courses, etc.) and by assignment type (courses that include projects, laboratories, both, or
neither).
The treatment-control contrast in this study combines both a contrast in class size, and a
contrast in peer quality. Class size was directly manipulated, and then differences in peer quality
arose from the normal patterns of student registration at the university. Students in 111 online
college courses were quasi-randomly assigned to either a small or large class section. Table 1
presents the descriptive statistics for the sample. Panel A indicates that classes, on average,
contained about 32 students, though enrollment ranged between 16 and 40 students. Panel B
shows that “large” sections on average contained over 33 students, in contrast to “small”
sections, which on average contained over 30 students. Each “large” section of a course
therefore enrolled 3 more students, or 10 percent more students, on average, than the “small”
sections of the same course. This increase in class size, however, ranged from 2.9 to 25 percent.
Appendix Figure 1 shows the distribution of class size changes across students.
Registration for online courses at DeVry follows a few simple rules and patterns,
ignoring for a moment the experimental manipulation. Students register themselves for courses
and sections. The enrollment window starts six months before the term begins and ends a few
days into the eight-week term; if demand exceeds the University’s projections additional sections
are added. During registration, online course sections have no differentiating characteristics:
meeting time and location are irrelevant, class sizes are identical, and professors are not
identified. These features generate a simple but strong pattern: section 1 fills up with students
first, then section 2 fills, and so on. Students who choose to deviate from the pattern are few.
Additionally, there is a correlation between observable student characteristics and registration
date. Notably, for example, students with higher prior GPAs register earlier (a correlation of
about 0.30 in any given term), thus generating important between-section variation in mean prior
GPA.
During the four experimental terms, student registration began exactly as described in the
previous paragraph. All sections of the same course had the same class size. Then, two or six
weeks before the start of the term, DeVry changed the class sizes of all odd numbered sections.
This “change day” was six weeks out for the November and January terms (two-thirds of the
sample) and two weeks out for the July and September terms. For nine out of ten courses, class
sizes were increased in the odd numbered sections. A university administrator simply increased
the enrollment cap in those sections, and new registrants began filling those slots. In one out of
ten courses, class sizes were decreased in the odd numbered sections. The administrator removed
students from those sections and enrolled them in new sections.1 In the final weeks before the
term, additional students registered, filling the now large and small sections.
The change in enrollment caps created differences between sections in class size, but also
created differences between sections in peer quality. Consider the courses where class size was
increased in the odd numbered sections. Absent the experiment, students in section 1 and section
2 would have experienced the same class size and same distribution of peer quality. During the
experiment, section 1 had more students, and the added students were likely less academically
prepared. The added students were drawn from students who registered in
the final weeks before the term began, students who registered only after the enrollment caps had
been raised. By contrast, students in the last sections created, say sections m and m − 1,
experienced similar peer quality even though section m had more students. Because of the importance of the
assignment date, we distinguish in our analysis between those students who registered prior to
the class size assignment date and those who registered subsequently. Among “incumbent”
students (i.e. students assigned prior to the class size assignment date), our only identifying
assumption is that there is randomness locally around the registration time. For late registrants,
many were quasi-randomly assigned in that the administrator reassigned them somewhat
arbitrarily.1 In cases where students registered late enough to see a choice between sections, the
late registrant’s choice of section size may be endogenous.

1 The selection of students to be moved was arbitrary, but not, strictly speaking, random. The administrator who
carried out this task had only one consideration when selecting whom to move: students with a hold on their account
for financial or academic reasons could not be moved.
Identification Strategy:
We employ a fixed effects strategy in order to estimate the effect of class size on a
variety of student outcomes. The key insight is that student characteristics covary with the time
at which they enroll in a course. Generally those students who register for a course earlier have
stronger prior academic records. However, since assignment to a “large” classroom was quasi-random, there will be variation in class size among students who registered for the
same course at similar times. Moreover, in a short enough time window, that class size variation
should be spread among students with similar observed and unobserved characteristics. The
following regression model summarizes the above:
Y_isct = β0 + T_isct·β1 + X_isct·β2 + α_sct + ε_isct
Here Y_isct represents the outcome of interest for student i, enrolled in course c, during
session s, and registered at time t. The primary outcomes of interest are the grade the student
received in the course (measured on 0-4 scale), the credits obtained in the next quarter and
enrollment in the next quarter. Tisct represents the “treatment” of interest, which is the intended
class size of the section to which the student was assigned. We model this treatment in two
ways: as a binary indicator for “small” or “large” section and as the log of class size. Our
preferred specification is the log class size since this takes into account both the increase in class
size and the enrollment in the small section. That is, the log of class size lends itself to a percent
increase interpretation, which depends on the enrollment in the small class size and the increase
in class size, whereas the binary indicator hides that heterogeneity in treatment. Xisct represents a
vector of student characteristics that includes prior cumulative GPA at DeVry, an indicator for
being a new student, an indicator for missing prior GPA if not a new student, and an indicator for
previously having failed a course.
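The percent-change reading of the log specification can be checked with the approximate mean enrollments from Table 1 (about 30 students in small sections, 33 in large); the arithmetic below is our own illustration, not the authors' code.

```python
import math

small_mean, large_mean = 30, 33  # approximate means from Table 1
delta_log = math.log(large_mean) - math.log(small_mean)

# The log difference is close to the proportional change, so the
# coefficient on log class size scales percent increases directly.
print(round(delta_log, 3))        # 0.095
print(round(math.log(1.10), 3))   # 0.095, i.e. a 10 percent increase
```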
α_sct represents a session-by-course-by-student-group fixed
effect. There are many ways to conceptualize these fixed effects. At the core, we order students
by the time they registered for a course in a particular session. Each grouping of students, in
the order they registered, for a particular course in a particular session is its own fixed-effect cell. In
practice we did this in two ways: we either created groups of 15 students or divided the course
into 20 quantiles.2 Again, the identifying assumption is that within these groupings students are
similar on all observable and unobservable characteristics and were randomly assigned to
different class sizes based on the scheme explained above.
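The grouped fixed-effects estimator can be sketched as follows. This is a minimal illustration on simulated data (all variable names and parameter values are hypothetical, not the paper's data), using within-group demeaning, which is numerically equivalent to including a dummy for every registration-order group.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: students ordered by registration time within a
# course-session, grouped into blocks of 15 (the fixed-effect cells).
n = 600
group = np.arange(n) // 15                      # registration-order groups
log_size = np.log(rng.choice([30.0, 33.0], n))  # quasi-random class size
prior_gpa = rng.normal(3.0, 0.5, n)
# True class-size effect is set to zero, echoing the paper's null result.
grade = 3.0 + 0.0 * log_size + 0.4 * (prior_gpa - 3.0) + rng.normal(0, 0.1, n)

def within_group_ols(y, X, g):
    """Demean y and X within each group g, then run OLS; equivalent
    to including a separate dummy for every group."""
    def demean(a):
        out = a.astype(float).copy()
        for k in np.unique(g):
            m = g == k
            out[m] -= out[m].mean(axis=0)
        return out
    beta, *_ = np.linalg.lstsq(demean(X), demean(y), rcond=None)
    return beta

X = np.column_stack([log_size, prior_gpa])
beta = within_group_ols(grade, X, group)
print(beta)  # estimates should land near the true values (0 and 0.4)
```

Because class size varies quasi-randomly within each registration-order cell, the demeaned regression isolates the within-group contrast the identification strategy relies on.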
As stated earlier, students who registered after the class caps were changed, and therefore
typically had weaker prior academic outcomes, were randomly assigned to previously full
sections. By enlarging these previously full sections with later registrants, class sizes were not
only increased, but previously lower performing peers were mixed with previously higher
performing peers. There could therefore be a class size effect and a peer effect that would
differentially affect a student based on their prior academic success. To test if students who
registered before and after the cap were differentially affected by this treatment, we interacted a
binary indicator for registering after the cap changed with the treatment.
Y_isct = β0 + T_isct·β1 + X_isct·β2 + PostAssignment_isct·β3
+ T_isct × PostAssignment_isct·β4 + α_sct + ε_isct
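Under this interaction specification, the class-size effect for pre-cap registrants is β1, while for post-cap registrants it is β1 + β4. A small numeric illustration follows; the coefficient values are invented for exposition and are not estimates from the paper's tables.

```python
import math

# Hypothetical coefficient estimates (illustrative only):
beta1 = -0.10   # coefficient on log class size (pre-cap registrants)
beta4 = 0.15    # interaction coefficient (additional effect, post-cap)

# Implied effect of a 10 percent class-size increase for each group:
effect_pre = beta1 * math.log(1.10)             # pre-cap registrants
effect_post = (beta1 + beta4) * math.log(1.10)  # post-cap registrants
print(effect_pre, effect_post)
```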
2 These groupings are arbitrary and our results are robust to a variety of groupings and quantile divisions.

4. Main Results
Covariate Balance
The identifying assumptions can be partially tested by looking at the covariate balance
across class size within these fixed effects. Table 2 provides those results for both groups of 15
students and 20 quantiles. Panel A looks at the entire sample and Panels B and C disaggregate
the sample by those who registered before the cap on the sections were changed, and those who
registered after the cap change, respectively. When disaggregating, all covariates are balanced
except one, which we would expect given the number of tests we are conducting. On the full
sample there is a marginally significant imbalance with regard to prior cumulative GPA (in
some models). However, there are two things to note. First, these imbalances are quantitatively
small given that prior cumulative GPA is on a 4-point scale. With the 20-quantile fixed effects,
for example, students in small classrooms have a prior GPA that is greater by just 0.010 points.
Second, this slight imbalance would indicate that previously higher-achieving
students are more likely to be in smaller classrooms. To the extent that past academic success is
a powerful predictor of future academic success, this would potentially bias our estimates in the
positive direction. Given that we find no effect of small classrooms, this bias seems to be
negligible.
Baseline Results:
Table 3 presents baseline results from four models, and two different fixed effects
strategies. Models 1 and 3 present the main effects of enrolling in a small class, as described in
Section 2. It is immediately clear that in all cases the point estimates are quantitatively small and
statistically insignificant. Small class sizes do not seem to affect student grades, the number of
credits they attempt in the next session, or the probability that they enroll in the next session.
Models 2 and 4 present results when the treatment, measured respectively as a binary
indicator for a small class and as the log of class size, is interacted with an indicator for
registering after the cap change. Again the results are insignificant across both
fixed effect models and across all student outcomes. Small classes do not seem to affect student
grades or persistence regardless of when the student registered, and therefore, by proxy,
regardless of prior academic success. This point is especially salient because it likely rules out
both pure class size effects and peer effects. For students who registered before the cap changed,
and who had stronger previous academic records, we would expect both the increased class size and
the introduction of lower-performing peers to negatively affect outcomes; they would
therefore be negatively affected by two factors. In contrast, later registrants would also
potentially be negatively affected by larger class sizes, but could be positively affected by having
classes with stronger-performing peers. Our results, then, should differ by student registration
date. Models 2 and 4 show that, while the point estimates on grades for those who registered
before the cap change are generally negative and those for students who registered after the cap
change are more positive, they again are small and insignificant. This may indicate that both the
class size and the peer effects had negligible effects on the students.
While it is evident that the point estimates in our results are insignificant, it is useful to
estimate precisely how small an effect we would be able to detect. To do so we will concentrate
on Model 4 of Table 3, which can be used to estimate the effect size both for those who registered
before the class size cap was changed and for those who registered after. Furthermore, we will
estimate the effect size for a 10 percent change in class size, since that was the average change in
our sample. The exact point estimate varies to some extent, depending on the student grouping used
in the fixed effects. The 20-quantile fixed effect model indicates that, for students who
registered before the class size cap was changed, the 95 percent confidence interval of the effect
size of a 10 percent increase in class size on class grade ranges from -0.0299 to 0.0049 (the
standard deviation of class grade for students who registered before the cap changed was 1.16).
This is the more conservative estimate; a similar calculation with the 15-student group
fixed effect model yields a range of -0.0240 to 0.0111. Looking at the students
who registered after the registration cap changed (the standard deviation of the class grade is
1.31), the 20-quantile fixed effect model yields a range of -0.0150 to 0.0218 and the 15-student
group fixed effect model yields a range of -0.0123 to 0.0249. Table 4 presents the 95 percent
confidence interval of the effect size for all outcomes for both the 15-student group and
20-quantile fixed effects models.
These are clearly small effects on outcomes. To put these effect sizes in a clearer
context, we can rule out effects typically found in the higher education literature. For example,
Bandiera, Larcinese, and Rasul found that a 1 standard deviation increase in university class size
had a -0.108 effect size on end-of-year test scores. Similarly, De Giorgi, Pellizzari, and
Woolston found that a standard deviation increase in class size (approximately 20 students in
classes that on average contained 131 students) had an effect size of -0.140 on grades. Finally, in
their 2009 paper Angrist, Lang, and Oreopoulos found that assigning college students a fellowship
and support services had an effect size on fall grades of 0.227, most of which was concentrated
among women, for whom the effect size was 0.346. Effect sizes of these magnitudes would have been
easily detected given the power in our sample.
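The effect-size intervals reported above follow mechanically from a coefficient, its standard error, the log-points of a 10 percent increase, and the outcome's standard deviation. A sketch of that calculation with invented inputs (beta_hat and se below are illustrative, not values from the paper's tables):

```python
import math

def effect_size_ci(beta_hat, se, pct_change=0.10, sd_outcome=1.16, z=1.96):
    """95% CI, in standard-deviation units of the outcome, for the
    effect of a pct_change increase in class size under the log
    specification (effect = beta * log(1 + pct_change) / sd)."""
    scale = math.log(1 + pct_change) / sd_outcome
    return (beta_hat - z * se) * scale, (beta_hat + z * se) * scale

# Illustrative coefficient and standard error (hypothetical values):
lo, hi = effect_size_ci(beta_hat=-0.15, se=0.11)
print(round(lo, 4), round(hi, 4))
```

A confidence interval that straddles zero but excludes effect sizes like -0.108 or -0.140 is what allows the comparison with the literature in the paragraph above.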
5. Heterogeneity of Results
The effect of class size on student outcomes need not be constant for all classes. To see
if there is any heterogeneity in the class size effect we divided the sample in two ways. First, we
divided the sample by academic discipline and separated courses that could be described as
Business/Marketing, Computer/Technology, General Education, Humanities,
Science/Mathematics, or Social Sciences. Second, we divided the sample by the types of
assignments each class required of the students: project only, laboratories only, both projects
and laboratories, or neither projects nor laboratories. There are many reasons to think the effect
of a class size increase could vary based on discipline type and assignment type. Firstly, the peer
effect could change. Peers could have a greater effect on classes that required more interaction
such as projects, laboratories, and perhaps computer technology. This is especially relevant in
this study, where the class size increase and the quality of one’s peers were intimately related.
Similarly, science and mathematics classes that typically have problem sets may be affected by
peer quality if students informally form study groups. In larger classes there is also more
competition for the professor’s time (if students contact the professor electronically), and
professors may change the way they organize and structure the class. The likelihood that
students contact professors may depend on the discipline or assignment structure of the class.
Similarly, the likelihood that the professor changes the structure of the class may depend on the
assignment type and/or discipline. For example in larger classes humanities professors may
change the length of written assignments due. The quantity and rigor of projects, laboratories,
and problem sets may also change with increasing class sizes.
Table 5 shows the covariate balance for each of these different samples. Though a few
covariates are significantly different from zero at the 10 percent or 5 percent level,
this is expected given the number of tests we are conducting. The most imbalanced sample
contains courses with both projects and laboratories; as we will see below, despite these
imbalances we still fail to find a significant effect of class size on student outcomes.
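To see why a handful of rejections is expected under the null of balance, note that each test rejects with probability equal to its significance level, so the expected number of chance rejections is simply the number of tests times that level. A small illustration (the test counts here are hypothetical, chosen only to make the point, not taken from Table 5):

```python
# Expected number of balance tests that reject by chance alone, under the
# null that all covariates are balanced and each test is run at level alpha.
def expected_false_rejections(n_tests, alpha):
    return n_tests * alpha

# Hypothetical example: 10 subsamples x 8 covariate tests each
print(expected_false_rejections(80, 0.10))  # roughly 8 chance rejections at the 10% level
print(expected_false_rejections(80, 0.05))  # roughly 4 at the 5% level
```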
Tables 6 and 7 show the effects of class size on courses in different disciplines and with
different assignment types, respectively. These models are analogous to Model 4 of Table 3
where student outcomes are regressed on the log of class size interacted with an indicator
variable for registering after the cap was changed. It is immediately evident that in almost all
cases there is no significant effect of class size on student outcomes. Almost all categories
follow a familiar pattern: a small, negative, insignificant effect for students who
registered before the class cap was changed, and a small, positive, insignificant effect for those
who registered after. One notable exception is the social sciences, where
we find a significant, positive effect for students who registered after the cap was changed.
In the 15 student group fixed effect model, a 10 percent increase in class size
raises student grades by 0.0821 grade points (on a scale from 0 to 4).
however, on credits obtained in the next session, or enrollment in the next session. This positive
point estimate is likely to come from the peer effect given that those who registered after the cap
was changed were quasi-randomly assigned to higher performing peers.3
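The 0.0821 figure follows directly from the Table 6 coefficients: with log class size as the treatment, the total semi-elasticity for post-cap registrants is the sum of the main and interaction coefficients, scaled here by the 0.10 proportional increase. A minimal check (coefficients as reported for the social sciences, 15 student group fixed effect model):

```python
# Implied grade effect of a 10 percent class size increase for students who
# registered after the cap change (social sciences, 15 student group FE model).
beta_log = 0.158          # coefficient on log(intended class size)
beta_log_x_after = 0.663  # coefficient on log(class size) x after-cap indicator

total_effect = (beta_log + beta_log_x_after) * 0.10
print(round(total_effect, 4))  # 0.0821 grade points on the 0-4 scale
```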
In total, there seems to be little heterogeneity of effect by discipline or assignment type.
Figures 1-3 summarize the point estimates on all three student outcomes with the 15 student
group fixed effect model. Effects are separated for those who registered before and after the
class size cap was changed. As is evident, all three outcomes are precisely estimated in the full
sample. The point estimates remain close to zero for all other subsamples, though due to the
smaller sample size, the confidence intervals can be large in some cases. Nevertheless, it is
evident that there is little to no class size effect, except for perhaps students who registered after
the cap size changed in social science courses.
3 The point estimate of the effect of class size on students who registered before the cap changed is less stable. In
the 15 student group fixed effect model the point estimate is positive, but in the 20 quantile fixed effect model it is
negative. Table A5 of the appendix shows that for the 20 student group model and the 50 quantile group model the
point estimate is very close to zero but slightly positive. Taken together, these results indicate that there is likely no
effect of class size on students who registered before the cap was changed. This serves as further evidence that the
effect seen on students who registered after the cap changed was likely a peer effect.
6. Robustness Checks
As stated before, the previous results are robust to fixed effects constructed from groups of
15 students per course and from 20 quantiles. These groupings, however, are arbitrary. To provide
further evidence of robustness, Tables A1-A6 in the appendix repeat the above analysis with
fixed effects constructed from groups of 20 students and from 50 quantiles. Again, the results remain
largely the same, with no change in interpretation. To illustrate the stability of the results,
Figures 4 and 5 show the effect of class size on grades, both for students who registered before
and after the class cap was changed, for student groups of 5 to 50 students and for 10 to 50
quantiles, respectively. The point estimates remain relatively stable, with no change in
interpretation across this wide range of fixed effects.
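Concretely, the grouping that underlies these fixed effects can be built by sorting students within each course-session cell by registration time and cutting the resulting order into bins. A minimal pandas sketch under assumed column names (`course`, `session`, and `reg_time` are illustrative, not DeVry's actual field names):

```python
import pandas as pd

def add_fe_groups(df, group_size=15, n_quantiles=20):
    """Within each course-session cell, order students by registration
    time, then assign (a) fixed-size groups and (b) quantile groups."""
    df = df.sort_values(["course", "session", "reg_time"]).copy()
    # position of each student in the within-cell registration order
    order = df.groupby(["course", "session"]).cumcount()
    # (a) consecutive groups of `group_size` students
    df["student_group"] = order // group_size
    # (b) `n_quantiles` quantiles of the registration order
    cell_n = df.groupby(["course", "session"])["reg_time"].transform("size")
    df["quantile_group"] = (order * n_quantiles // cell_n).clip(upper=n_quantiles - 1)
    return df

# toy example: one course-session cell with 45 students
toy = pd.DataFrame({"course": "ECON101", "session": 1, "reg_time": range(45)})
out = add_fe_groups(toy)
print(out["student_group"].unique())    # [0 1 2]
print(out["quantile_group"].nunique())  # 20
```

Interacting either grouping variable with course and session then yields the course-by-session-by-group fixed effects used in the tables.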
7. Conclusions and Policy Implications
In this paper we present evidence on the effect of online class size on a variety of student
outcomes. For online classes that range from 16 to 40 students, increasing the class size by as much
as 25 percent does not significantly affect student grades, credits earned in the next session, or
enrollment in the next session. At the sample-average increase of 10 percent in class size, we
can confidently rule out effect sizes found for similar class size increases in the higher education
literature on brick-and-mortar classes. Further, the results seem to rule out both pure class size
effects and peer effects.
There could be many reasons why this is the case. Firstly, peer-to-peer interactions may
be significantly reduced in the online context. Students cannot distract each other while the
professor is lecturing, and students have more flexibility in reading peer posts than in listening to
peer discussion during the class. Even in classes where peer-to-peer interactions would be
plausibly more important, such as classes that include projects and laboratories, students do not
seem to be affected by class size or a change in peer composition. Secondly, the nature of online
classes may already reduce professor-student interactions, or make them more efficient, such that
increasing class sizes may not necessarily reduce professor availability. For example, professors
may easily be able to answer questions via email, and professors may be able to respond to all
emails whether there are 30 or 33 students in the class.
These results could have important policy and financial implications. Online courses
have the advantage of making courses more widely available to the general public. Prospective
students who live too far from a university, or whose schedules do not fit with traditional school
schedules, can use online classes to access previously unavailable content. There are also no
physical plant limitations on how many people can enroll in a course at a time. As long as the
university can preserve the quality of the product, online courses have the potential to broaden
access to higher education. This study gives the first evidence that increasing class sizes in the
online context may not degrade the quality of the class. This also has financial implications for
universities.
In our sample alone, DeVry would need to provide 4,146 small sections to
accommodate all the students in the study. In sharp contrast, DeVry would only need to provide
3,763 large sections to accommodate the same number of students -- thereby saving 383 sections,
or 9.2 percent of the small sections. From a financial perspective, the cost savings are likely to
come from professor compensation, as the marginal cost of adding a section is negligible
otherwise. Many professors are adjuncts and are paid by the section. Assuming the per-section wage
rate is constant over class size increases in this range, and assuming that enrollment is not a
function of class size, an average 10 percent increase in class size will therefore reduce
expenditures by approximately 9.2 percent, a significant savings.
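The savings figure is straightforward arithmetic on the section counts quoted above; under the stated assumptions (per-section pay, other marginal costs negligible), the proportional reduction in sections is the proportional reduction in instructional expenditure. A quick check:

```python
# Sections needed to serve the study's students at the two class size caps
# (counts taken from the text above).
small_sections = 4146   # sections required at the original (small) cap
large_sections = 3763   # sections required at the raised (large) cap

sections_saved = small_sections - large_sections
savings_share = sections_saved / small_sections
print(sections_saved)                 # 383 sections
print(round(100 * savings_share, 1))  # 9.2 percent of the small sections
```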
Though the class size increases in this study are substantial, the main limitation of the
study is that the sample is composed of relatively small online classes. We cannot speak to how
similar class size increases in larger online classes would affect student outcomes. The structure
of larger online classes, the peer-to-peer dynamics, and the allocation of professor time may be
significantly different in those circumstances such that a 10 percent increase in class size could
have a very different effect. However, in this context, we can confidently say that increasing
class size seems to have little effect on student outcomes.
References
Allen, I. E., & Seaman, J. (2013). Changing Course: Ten Years of Tracking Online Education in the
United States. Sloan Consortium. Newburyport, MA. Retrieved From:
http://eric.ed.gov/?id=ED541571
Angrist, J., Lang, D., & Oreopoulos, P. (2009). Incentives and services for college achievement:
Evidence from a randomized trial. American Economic Journal: Applied Economics, 1(1), 136-163.
Angrist, J., & Lavy, V. (1999). Using Maimonides' rule to estimate the effect of class size on scholastic
achievement. Quarterly Journal of Economics, 114, 533-575.
Bandiera, O., Larcinese, V., & Rasul, I. (2010). Heterogeneous class size effects: New evidence from a
panel of university students. The Economic Journal, 120(549), 1365-1398.
Becker, W. E., & Powers, J. R. (2001). Student performance, attrition, and class size given missing
student data. Economics of Education Review,20(4), 377-388.
Bettinger, E., Loeb, S., & Taylor, E. (2013). Remote but Influential: Peer Effects in Online Higher
Education Classrooms. Retrieved from: http://cepa.stanford.edu/content/remote-influentialpeer-effects-and-reflection-online-higher-education-classrooms
De Giorgi, G., Pellizzari, M., & Woolston, W. G. (2012). Class size and class heterogeneity. Journal of
the European Economic Association, 10(4), 795-830.
Dillon, M., Kokkelenberg, E. C., & Christy, S. M. (2002). The effects of class size on student
achievement in higher education: applying an earnings function. Cornell Higher Education
Research Institute (CHERI), 15.
Hanushek, E. A. (2002). Evidence, politics, and the class size debate. The class size debate, 37-65,
Retrieved From: http://hanushek.stanford.edu/sites/default/files/publications/
Hanushek%202002%20ClassSizeDebate.pdf
Hoxby, C. M. (2000). The effects of class size on student achievement: New evidence from population
variation. Quarterly Journal of Economics, 115(4), 1239-1285.
Kennedy, P. E., & Siegfried, J. J. (1997). Class size and achievement in introductory economics:
Evidence from the TUCE III data. Economics of Education Review, 16(4), 385-394.
Krueger, A. B. (2002). Understanding the magnitude and effect of class size on student
achievement. The class size debate, 7-35.
Krueger, A. B., & Whitmore, D. M. (2001). The effect of attending a small class in the early grades on
college-test taking and middle school test results: Evidence from Project STAR. The Economic
Journal, 111(468), 1-28.
Lazear, E. (1999). Educational production (No. w7349). National Bureau of Economic Research.
Lazear, E. (2001). Educational production. The Quarterly Journal of Economics, 116(3), 777-803.
McKeachie, W. J. (1986). Teaching and learning in the college classroom: A review of the research
literature (Vol. 86). University of Michigan Press.
Mosteller, F. (1995). The Tennessee study of class size in the early school grades.
The Future of Children, 5(2), 113-127.
Siegfried, J. J., & Kennedy, P. E. (1995). Does pedagogy vary with class size in introductory
economics?. The American Economic Review, 347-351.
Topp, R. F. (1998). Education of an undergraduate: Action before reaction. College and
University, 45(2), 123-127.
Table 1: Descriptive Statistics

Panel A: Full Sample (mean, standard deviation, min, max, observations)
Small Class (Binary Indicator): 0.485 [118442]
Class Size (# of Students): 31.952 (4.676), min 16, max 40 [118442]
Log of Class Size: 3.452 (0.161), min 2.773, max 3.689 [118442]
Prior Cumulative GPA: 2.948 (0.954), min 0, max 4 [99873]
New Student (Indicator): 0.099 [118442]
Missing Prior GPA (Indicator): 0.058 [118442]
Failed Prior Class (Indicator): 0.270 [106723]

Panel B: Means By Large and Small Classrooms. Cells report mean (s.d.) [N] for large and small classrooms in the full sample, the sample registered before the cap change, and the sample registered after the cap change.
Class Size (# students): full large 33.405 (4.533) [61032]; full small 30.406 (4.316) [57410]; before large 33.627 (4.302) [28385]; before small 30.768 (4.095) [28706]; after large 33.287 (4.671) [28710]; after small 30.059 (4.466) [25202]
Log of Class Size: full large 3.498 (0.148) [61032]; full small 3.403 (0.160) [57410]; before large 3.506 (0.141) [28385]; before small 3.416 (0.152) [28706]; after large 3.494 (0.152) [28710]; after small 3.390 (0.165) [25202]
Prior Cumulative GPA: full large 2.943 (0.958) [51305]; full small 2.954 (0.950) [48568]; before large 3.128 (0.838) [26772]; before small 3.132 (0.822) [27138]; after large 2.835 (0.975) [21629]; after small 2.818 (0.994) [18778]
New Student: full large 0.101 [61032]; full small 0.097 [57410]; before large 0.038 [28385]; before small 0.035 [28706]; after large 0.154 [28710]; after small 0.159 [25202]
Missing Prior GPA: full large 0.058 [61032]; full small 0.057 [57410]; before large 0.019 [28385]; before small 0.019 [28706]; after large 0.093 [28710]; after small 0.096 [25202]
Failed Prior Class: full large 0.271 [54873]; full small 0.270 [51850]; before large 0.229 [27309]; before small 0.229 [27696]; after large 0.299 [24285]; after small 0.305 [21197]

Panel C: Proportion Of Sample By Course Category (observations, proportion)
Business/Management: 41003, 0.346
Science/Mathematics: 18350, 0.155
Computer/Technology: 11356, 0.096
Social Sciences: 14656, 0.124
Humanities: 20353, 0.172
General: 12724, 0.107
Projects Only: 30871, 0.261
Laboratories Only: 29022, 0.245
Both Projects and Laboratories: 16808, 0.142
Neither Projects Nor Labs: 41741, 0.352
Table 2: Covariate Balance on Full Sample
Columns (1)-(3): course-by-session-by-15 student fixed effects; columns (4)-(6): course-by-session-by-20 quantiles fixed effects. Treatments: (1), (4) Small Class (indicator); (2), (5) Intended Class Size (levels); (3), (6) Log of Intended Class Size. Cells report coefficient (standard error) [N].

Panel A: Full Sample
Prior Cumulative GPA: (1) 0.007 (0.006) [99873]; (2) -0.003+ (0.002) [99873]; (3) -0.087 (0.056) [99873]; (4) 0.010+ (0.006) [94317]; (5) -0.003+ (0.002) [94317]; (6) -0.094+ (0.055) [94317]
New Student: (1) -0.001 (0.002) [118442]; (2) 0.000 (0.000) [118442]; (3) 0.008 (0.015) [118442]; (4) -0.001 (0.002) [111003]; (5) 0.000 (0.000) [111003]; (6) -0.002 (0.016) [111003]
Missing GPA: (1) -0.001 (0.001) [118442]; (2) 0.001+ (0.000) [118442]; (3) 0.018 (0.013) [118442]; (4) -0.001 (0.001) [111003]; (5) 0.000 (0.000) [111003]; (6) 0.012 (0.013) [111003]
Failed Prior Class: (1) -0.002 (0.003) [106723]; (2) 0.001 (0.001) [106723]; (3) 0.026 (0.028) [106723]; (4) -0.002 (0.003) [100487]; (5) 0.001 (0.001) [100487]; (6) 0.021 (0.027) [100487]

Panel B: Sample Of Students Who Registered Before Class Cap Changed
Prior Cumulative GPA: (1) 0.006 (0.007) [53910]; (2) -0.002 (0.002) [53910]; (3) -0.063 (0.069) [53910]; (4) 0.002 (0.007) [53910]; (5) -0.001 (0.002) [53910]; (6) -0.016 (0.070) [53910]
New Student: (1) 0.001 (0.001) [57091]; (2) 0.000 (0.000) [57091]; (3) -0.010 (0.014) [57091]; (4) 0.001 (0.001) [57091]; (5) -0.000 (0.000) [57091]; (6) -0.011 (0.012) [57091]
Missing GPA: (1) 0.000 (0.001) [57091]; (2) 0.000 (0.000) [57091]; (3) -0.007 (0.010) [57091]; (4) 0.000 (0.001) [57091]; (5) -0.000 (0.000) [57091]; (6) -0.005 (0.009) [57091]
Failed Prior Class: (1) -0.003 (0.004) [55005]; (2) 0.001 (0.001) [55005]; (3) 0.030 (0.035) [55005]; (4) -0.002 (0.004) [55005]; (5) 0.001 (0.001) [55005]; (6) 0.021 (0.036) [55005]

Panel C: Sample Of Students Who Registered After Class Cap Changed
Prior Cumulative GPA: (1) 0.013 (0.010) [40407]; (2) -0.004 (0.003) [40407]; (3) -0.119 (0.090) [40407]; (4) 0.004 (0.010) [40407]; (5) -0.002 (0.003) [40407]; (6) -0.069 (0.093) [40407]
New Student: (1) -0.003 (0.003) [53912]; (2) 0.001 (0.001) [53912]; (3) 0.018 (0.026) [53912]; (4) -0.002 (0.003) [53912]; (5) 0.000 (0.001) [53912]; (6) 0.006 (0.028) [53912]
Missing GPA: (1) -0.002 (0.003) [53912]; (2) 0.001+ (0.001) [53912]; (3) 0.035 (0.024) [53912]; (4) -0.002 (0.003) [53912]; (5) 0.001 (0.001) [53912]; (6) 0.028 (0.024) [53912]
Failed Prior Class: (1) 0.001 (0.005) [45482]; (2) 0.000 (0.001) [45482]; (3) 0.008 (0.044) [45482]; (4) -0.001 (0.004) [45482]; (5) 0.000 (0.001) [45482]; (6) 0.006 (0.042) [45482]

Note: Course sections included in class size pilot as identified by DeVry records. Each cell contains the coefficient from a separate OLS regression of the covariate (identified in each row) on the treatment (identified in each column). "Prior" includes all sessions from summer 2009 to present. Columns 1-3 include course-by-session-by-15 student fixed effects: in each session-course cell, students were ordered by the time they registered and split into groups of 15. Columns 4-6 include course-by-session-by-20 quantile fixed effects: in each session-course cell, students were ordered by the time they registered and divided into 20 quantiles. Standard errors allow for clustering in sections in all cases. Standard errors in parentheses and sample sizes in brackets. + indicates p < 0.10, * p < 0.05, ** p < 0.01.
Table 3: Intent To Treat Estimates of Class Size on Student Outcomes
Columns (1)-(4): course-by-session-by-15 student fixed effects; columns (5)-(8): course-by-session-by-20 quantiles fixed effects. Outcomes: (1), (5) Grade (0-4), treatment sample; (2), (6) Grade (0-4), persistence sample; (3), (7) Persistence to next session (credits); (4), (8) Persistence to next year (enrolled). Cells report coefficient (standard error).

Model 1
Small Classroom: (1) -0.003 (0.009); (2) -0.004 (0.011); (3) -0.019 (0.023); (4) -0.003 (0.003); (5) 0.003 (0.009); (6) 0.003 (0.011); (7) -0.019 (0.022); (8) -0.002 (0.003)

Model 2
Small Classroom: (1) 0.000 (0.011); (2) 0.002 (0.014); (3) -0.013 (0.031); (4) -0.004 (0.004); (5) 0.005 (0.011); (6) 0.008 (0.014); (7) -0.004 (0.030); (8) -0.002 (0.004)
Registered After Cap Changed: (1) 0.030 (0.039); (2) -0.008 (0.050); (3) -0.108 (0.137); (4) -0.016 (0.017); (5) 0.006 (0.038); (6) -0.014 (0.046); (7) 0.142 (0.114); (8) 0.026+ (0.016)
Small*After: (1) -0.010 (0.016); (2) -0.016 (0.020); (3) -0.008 (0.047); (4) 0.002 (0.006); (5) -0.006 (0.016); (6) -0.013 (0.020); (7) -0.033 (0.045); (8) 0.002 (0.006)

Model 3
Log(Intended Class Size): (1) -0.014 (0.090); (2) 0.003 (0.117); (3) 0.118 (0.239); (4) 0.019 (0.030); (5) -0.059 (0.089); (6) -0.057 (0.115); (7) 0.154 (0.230); (8) 0.009 (0.029)

Model 4
Log(Intended Class Size): (1) -0.075 (0.104); (2) -0.100 (0.134); (3) 0.045 (0.308); (4) 0.033 (0.037); (5) -0.145 (0.103); (6) -0.135 (0.133); (7) 0.160 (0.298); (8) 0.041 (0.036)
Registered After Cap Changed: (1) -0.526 (0.488); (2) -0.834 (0.641); (3) -0.479 (1.482); (4) 0.090 (0.195); (5) -0.657 (0.488); (6) -0.603 (0.606); (7) 0.218 (1.413); (8) 0.270 (0.175)
Log(Csize)*After: (1) 0.158 (0.139); (2) 0.235 (0.183); (3) 0.105 (0.427); (4) -0.030 (0.056); (5) 0.190 (0.138); (6) 0.169 (0.173); (7) -0.026 (0.408); (8) -0.070 (0.051)

Observations: (1) 92,910; (2) 61,108; (3) 61,108; (4) 61,108; (5) 92,910; (6) 61,108; (7) 61,108; (8) 61,108

Note: Course sections included in class size pilot as identified by DeVry records. Each model reports the coefficient on the respective treatment variable from a separate OLS regression of the dependent variable on the treatment status. Dependent variables are the column headers. All models control for prior cumulative GPA, an indicator for being a new student, an indicator for missing prior GPA but not being a new student, and an indicator for having failed a course previously. "Prior" includes all sessions from summer 2009 to present. When prior GPA is missing it is set to zero. All regressions include either course-by-session-by-15 student fixed effects or course-by-session-by-20 quantiles fixed effects, as indicated in the column headers. In the former case, in each session-course cell, students were ordered by the time they registered and split into groups of 15; in the latter case, they were ordered by registration time and divided into 20 quantiles. Standard errors allow for clustering in sections. + indicates p < 0.10, * p < 0.05, ** p < 0.01.
Table 4: Effect Sizes Of a 10 Percent Increase of Class Size On Student Outcomes -- 95% Confidence Interval
Columns: (1)-(2) Min and Max, course-by-session-by-15 student fixed effects; (3)-(4) Min and Max, course-by-session-by-20 quantiles fixed effects.

Panel A: Students Who Registered Before Class Cap Changed
Grades: (1) -0.024; (2) 0.0111; (3) -0.0299; (4) 0.0049
Enrollment Next Session: (1) -0.0103; (2) 0.0276; (3) -0.0774; (4) 0.0292
Credits Completed Next Session: (1) -0.0195; (2) 0.0226; (3) -0.0147; (4) 0.0259

Panel B: Students Who Registered After Class Cap Changed
Grades: (1) -0.0123; (2) 0.0249; (3) -0.015; (4) 0.0218
Enrollment Next Session: (1) -0.0203; (2) 0.0219; (3) -0.0263; (4) 0.0123
Credits Completed Next Session: (1) -0.0175; (2) 0.028; (3) -0.0171; (4) 0.026
Table 5: Descriptive Statistics By Course Category (Treatment is Log of Intended Class Size)
Columns (1)-(4): course-by-session-by-15 student fixed effects; columns (5)-(8): course-by-session-by-20 quantiles fixed effects. Covariates: (1), (5) Prior Cumulative GPA; (2), (6) New Student; (3), (7) Missing GPA; (4), (8) Failed Prior Class. Cells report coefficient (standard error) [N].

Panel A: Subject Categories
Business/Management: (1) -0.066 (0.081) [37,475]; (2) 0.042* (0.020) [41,003]; (3) 0.001 (0.011) [41,003]; (4) -0.001 (0.047) [37,976]; (5) -0.082 (0.081) [35,875]; (6) 0.024 (0.021) [39,057]; (7) -0.004 (0.011) [39,057]; (8) 0.016 (0.046) [36,341]
Computer Science/Technology: (1) 0.115 (0.263) [9,026]; (2) -0.013 (0.065) [11,356]; (3) -0.037 (0.076) [11,356]; (4) -0.128 (0.114) [10,285]; (5) -0.141 (0.282) [8,370]; (6) -0.022 (0.071) [10,431]; (7) -0.014 (0.075) [10,431]; (8) -0.052 (0.117) [9,482]
General Classes: (1) -0.320 (0.263) [5,725]; (2) -0.038 (0.069) [12,724]; (3) 0.003 (0.045) [12,724]; (4) 0.135 (0.106) [6,847]; (5) -0.199 (0.236) [5,414]; (6) -0.018 (0.070) [11,665]; (7) -0.023 (0.045) [11,665]; (8) 0.074 (0.095) [6,389]
Humanities: (1) -0.111 (0.123) [18,646]; (2) -0.017 (0.028) [20,353]; (3) 0.005 (0.030) [20,353]; (4) 0.056 (0.062) [19,592]; (5) -0.098 (0.126) [17,547]; (6) -0.029 (0.027) [19,091]; (7) 0.015 (0.027) [19,091]; (8) -0.019 (0.064) [18,406]
Science/Mathematics: (1) -0.253+ (0.145) [16,598]; (2) -0.001 (0.025) [18,350]; (3) 0.053 (0.034) [18,350]; (4) 0.075 (0.065) [17,472]; (5) -0.190 (0.134) [15,318]; (6) -0.013 (0.024) [16,899]; (7) 0.047 (0.035) [16,899]; (8) 0.065 (0.063) [16,351]
Social Sciences: (1) 0.211 (0.183) [12,403]; (2) 0.028 (0.028) [14,656]; (3) 0.092+ (0.053) [14,656]; (4) -0.021 (0.070) [14,281]; (5) 0.110 (0.165) [11,789]; (6) 0.011 (0.027) [13,855]; (7) 0.063 (0.050) [13,855]; (8) 0.011 (0.059) [13,513]

Panel B: Assignment Type
Project Only: (1) -0.122 (0.101) [22,961]; (2) 0.009 (0.033) [30,871]; (3) 0.020 (0.022) [30,871]; (4) 0.014 (0.052) [24,306]; (5) -0.094 (0.100) [21,973]; (6) 0.002 (0.035) [29,062]; (7) 0.006 (0.023) [29,062]; (8) 0.005 (0.051) [23,148]
Laboratory Only: (1) -0.094 (0.122) [25,112]; (2) -0.006 (0.023) [29,022]; (3) 0.042 (0.028) [29,022]; (4) 0.001 (0.057) [27,397]; (5) -0.132 (0.117) [23,274]; (6) -0.017 (0.023) [26,773]; (7) 0.046 (0.030) [26,773]; (8) 0.032 (0.056) [25,322]
Both Projects and Laboratories: (1) -0.219+ (0.132) [15,740]; (2) 0.002 (0.032) [16,808]; (3) -0.026 (0.021) [16,808]; (4) 0.147* (0.069) [16,010]; (5) -0.198 (0.123) [15,016]; (6) -0.022 (0.029) [15,989]; (7) -0.023 (0.019) [15,989]; (8) 0.128* (0.063) [15,258]
Neither Projects Nor Laboratories: (1) 0.022 (0.103) [36,060]; (2) 0.022 (0.023) [41,741]; (3) 0.021 (0.027) [41,741]; (4) -0.005 (0.049) [39,010]; (5) -0.004 (0.105) [34,054]; (6) 0.016 (0.024) [39,179]; (7) 0.010 (0.027) [39,179]; (8) -0.030 (0.048) [36,759]

Note: Course sections included in class size pilot as identified by DeVry records. Each cell contains the coefficient from a separate OLS regression of the covariate (identified in each row) on the treatment (log of intended class size in all cases). "Prior" includes all sessions from summer 2009 to present. Columns 1-4 include course-by-session-by-15 student fixed effects: in each session-course cell, students were ordered by the time they registered and split into groups of 15. Columns 5-8 include course-by-session-by-20 quantile fixed effects: in each session-course cell, students were ordered by the time they registered and divided into 20 quantiles. Standard errors allow for clustering in sections in all cases. Standard errors in parentheses and sample sizes in brackets. + indicates p < 0.10, * p < 0.05, ** p < 0.01.
Table 6: Intent To Treat Estimates of Class Size on Student Outcomes, Heterogeneity By Discipline
Columns (1)-(4): course-by-session-by-15 student fixed effects; columns (5)-(8): course-by-session-by-20 quantiles fixed effects. Outcomes: (1), (5) Grade (0-4), treatment sample; (2), (6) Grade (0-4), persistence sample; (3), (7) Persistence to next session (credits); (4), (8) Persistence to next year (enrolled). Cells report coefficient (standard error).

Business/Marketing
Log(Intended Class Size): (1) -0.156 (0.158); (2) -0.044 (0.194); (3) 0.213 (0.410); (4) 0.043 (0.047); (5) -0.182 (0.160); (6) -0.147 (0.193); (7) 0.395 (0.429); (8) 0.036 (0.048)
Registered After Cap Changed: (1) -1.181 (0.739); (2) -0.797 (0.956); (3) -0.486 (2.281); (4) 0.057 (0.268); (5) -1.190 (0.734); (6) -1.249 (0.924); (7) -0.586 (2.446); (8) 0.016 (0.256)
Log(Csize)*After: (1) 0.362+ (0.213); (2) 0.239 (0.275); (3) 0.050 (0.663); (4) -0.027 (0.077); (5) 0.354+ (0.211); (6) 0.363 (0.265); (7) 0.163 (0.702); (8) -0.004 (0.074)
Observations: (1) 33,782; (2) 22,763; (3) 22,763; (4) 22,763; (5) 33,782; (6) 22,763; (7) 22,763; (8) 22,763

Computer/Technology
Log(Intended Class Size): (1) 0.021 (0.388); (2) 0.528 (0.575); (3) 0.722 (1.543); (4) -0.057 (0.195); (5) -0.195 (0.369); (6) 0.430 (0.555); (7) 0.433 (1.591); (8) -0.053 (0.194)
Registered After Cap Changed: (1) -2.257 (2.218); (2) -1.924 (3.196); (3) -0.192 (7.886); (4) -1.111 (1.015); (5) -1.960 (2.285); (6) -0.662 (3.182); (7) 1.555 (7.804); (8) -0.696 (1.028)
Log(Csize)*After: (1) 0.610 (0.623); (2) 0.496 (0.898); (3) 0.189 (2.239); (4) 0.306 (0.287); (5) 0.581 (0.649); (6) 0.199 (0.901); (7) -0.330 (2.221); (8) 0.209 (0.294)
Observations: (1) 8,707; (2) 6,389; (3) 6,389; (4) 6,389; (5) 8,707; (6) 6,389; (7) 6,389; (8) 6,389

General
Log(Intended Class Size): (1) 0.019 (0.396); (2) -0.094 (0.508); (3) 0.657 (1.381); (4) -0.016 (0.153); (5) -0.212 (0.352); (6) -0.318 (0.473); (7) 0.201 (1.169); (8) -0.015 (0.130)
Registered After Cap Changed: (1) 0.457 (1.971); (2) -1.030 (2.425); (3) -0.794 (5.496); (4) 0.021 (0.777); (5) -0.682 (1.520); (6) -0.957 (1.869); (7) -3.499 (4.406); (8) 0.096 (0.562)
Log(Csize)*After: (1) -0.209 (0.562); (2) 0.211 (0.704); (3) 0.175 (1.637); (4) 0.016 (0.221); (5) 0.094 (0.439); (6) 0.196 (0.552); (7) 1.138 (1.331); (8) 0.013 (0.167)
Observations: (1) 5,827; (2) 4,019; (3) 4,019; (4) 4,019; (5) 5,827; (6) 4,019; (7) 4,019; (8) 4,019

Humanities
Log(Intended Class Size): (1) -0.292 (0.264); (2) -0.491 (0.334); (3) -0.929 (0.749); (4) 0.040 (0.088); (5) -0.182 (0.269); (6) -0.225 (0.348); (7) -0.906 (0.757); (8) 0.058 (0.092)
Registered After Cap Changed: (1) 0.154 (1.095); (2) 0.231 (1.343); (3) -0.538 (3.054); (4) 0.053 (0.442); (5) 0.278 (1.092); (6) 1.593 (1.322); (7) 0.838 (3.054); (8) 0.267 (0.411)
Log(Csize)*After: (1) -0.058 (0.313); (2) -0.083 (0.387); (3) 0.219 (0.886); (4) -0.001 (0.126); (5) -0.098 (0.315); (6) -0.470 (0.391); (7) -0.064 (0.918); (8) -0.055 (0.123)
Observations: (1) 16,913; (2) 10,842; (3) 10,842; (4) 10,842; (5) 16,913; (6) 10,842; (7) 10,842; (8) 10,842

Science/Mathematics
Log(Intended Class Size): (1) 0.047 (0.257); (2) -0.259 (0.378); (3) 0.451 (0.975); (4) 0.094 (0.116); (5) 0.063 (0.248); (6) -0.110 (0.372); (7) 0.105 (0.868); (8) 0.105 (0.868)
Registered After Cap Changed: (1) 0.697 (1.376); (2) -1.619 (1.957); (3) 1.119 (4.663); (4) 0.752 (0.605); (5) 1.158 (1.333); (6) -1.056 (1.961); (7) -0.898 (4.300); (8) -0.898 (4.300)
Log(Csize)*After: (1) -0.216 (0.384); (2) 0.452 (0.546); (3) -0.394 (1.309); (4) -0.221 (0.170); (5) -0.322 (0.372); (6) 0.296 (0.549); (7) 0.161 (1.199); (8) 0.161 (1.199)
Observations: (1) 14,908; (2) 9,329; (3) 9,329; (4) 9,329; (5) 14,908; (6) 9,329; (7) 9,329; (8) 9,329

Social Sciences
Log(Intended Class Size): (1) 0.158 (0.314); (2) -0.168 (0.431); (3) -0.518 (0.984); (4) -0.024 (0.128); (5) -0.162 (0.276); (6) -0.369 (0.391); (7) -0.677 (0.794); (8) -0.050 (0.105)
Registered After Cap Changed: (1) -2.076 (1.367); (2) -3.611+ (1.957); (3) -3.691 (4.819); (4) -0.180 (0.584); (5) -2.535* (1.285); (6) -3.632* (1.791); (7) -4.582 (4.200); (8) -0.005 (0.517)
Log(Csize)*After: (1) 0.663+ (0.385); (2) 1.040+ (0.554); (3) 1.022 (1.367); (4) 0.049 (0.166); (5) 0.739* (0.363); (6) 1.003* (0.508); (7) 1.341 (1.197); (8) 0.014 (0.148)
Observations: (1) 12,773; (2) 7,766; (3) 7,766; (4) 7,766; (5) 12,773; (6) 7,766; (7) 7,766; (8) 7,766

Note: Course sections included in class size pilot as identified by DeVry records. Each model reports the coefficient on the respective treatment variable from a separate OLS regression of the dependent variable on the treatment status. Dependent variables are the column headers. All models control for prior cumulative GPA, an indicator for being a new student, an indicator for missing prior GPA but not being a new student, and an indicator for having failed a course previously. "Prior" includes all sessions from summer 2009 to present. When prior GPA is missing it is set to zero. All regressions include either course-by-session-by-15 student fixed effects or course-by-session-by-20 quantiles fixed effects, as indicated in the column headers. In the former case, in each session-course cell, students were ordered by the time they registered and split into groups of 15; in the latter case, they were ordered by registration time and divided into 20 quantiles. Standard errors allow for clustering in sections. + indicates p < 0.10, * p < 0.05, ** p < 0.01.
Figure 1
Figure 2
Figure 3
Figure 4
Figure 5
Note (to Table 4): Course sections included in class size pilot as identified by DeVry records. Each cell was calculated as follows: a regression was run of the outcome (the first cell of each row) on log of class size; the point estimates and standard errors were adjusted to represent a 10% increase in class size; and the minimum and maximum point estimates on a 95% confidence interval were then calculated and divided by the standard deviation of the outcome. Columns 1 and 2 include course-by-session-by-15 student fixed effects: in each session-course cell, students were ordered by the time they registered and split into groups of 15. Columns 3 and 4 include course-by-session-by-20 quantile fixed effects: in each session-course cell, students were ordered by the time they registered and divided into 20 quantiles. Standard errors allowed for clustering in sections in all cases.
Appendix
Table A1: Covariate Balance on Full Sample With Alternate Fixed Effects
Columns (1)-(3): course-by-session-by-20 student fixed effects; columns (4)-(6): course-by-session-by-50 quantiles fixed effects. Treatments: (1), (4) Small Class (indicator); (2), (5) Intended Class Size (levels); (3), (6) Log of Intended Class Size. Cells report coefficient (standard error) [N].

Panel A: Full Sample
Prior Cumulative GPA: (1) 0.008 (0.006) [99873]; (2) -0.003+ (0.002) [99873]; (3) -0.098+ (0.054) [99873]; (4) 0.004 (0.007) [94317]; (5) -0.002 (0.002) [94317]; (6) -0.061 (0.064) [94317]
New Student: (1) -0.001 (0.001) [118442]; (2) 0.000 (0.000) [118442]; (3) 0.006 (0.014) [118442]; (4) -0.001 (0.002) [111003]; (5) 0.000 (0.000) [111003]; (6) 0.001 (0.017) [111003]
Missing GPA: (1) -0.001 (0.001) [118442]; (2) 0.000 (0.000) [118442]; (3) 0.012 (0.012) [118442]; (4) -0.000 (0.002) [111003]; (5) 0.000 (0.000) [111003]; (6) 0.009 (0.014) [111003]
Failed Prior Class: (1) -0.002 (0.003) [106723]; (2) 0.001 (0.001) [106723]; (3) 0.031 (0.026) [106723]; (4) -0.000 (0.003) [100487]; (5) 0.000 (0.001) [100487]; (6) 0.009 (0.030) [100487]

Panel B: Sample Of Students Who Registered Before Class Cap Changed
Prior Cumulative GPA: (1) 0.003 (0.007) [53910]; (2) -0.001 (0.002) [53910]; (3) -0.023 (0.066) [53910]; (4) -0.001 (0.008) [53910]; (5) -0.000 (0.002) [53910]; (6) -0.005 (0.078) [53910]
New Student: (1) 0.000 (0.001) [57091]; (2) -0.000 (0.000) [57091]; (3) -0.005 (0.012) [57091]; (4) 0.001 (0.001) [57091]; (5) -0.000 (0.000) [57091]; (6) -0.007 (0.015) [57091]
Missing GPA: (1) 0.001 (0.001) [57091]; (2) -0.000 (0.000) [57091]; (3) -0.009 (0.009) [57091]; (4) 0.001 (0.001) [57091]; (5) -0.000 (0.000) [57091]; (6) -0.010 (0.012) [57091]
Failed Prior Class: (1) -0.002 (0.004) [55005]; (2) 0.001 (0.001) [55005]; (3) 0.018 (0.034) [55005]; (4) -0.001 (0.004) [55005]; (5) 0.001 (0.001) [55005]; (6) 0.019 (0.038) [55005]

Panel C: Sample Of Students Who Registered After Class Cap Changed
Prior Cumulative GPA: (1) 0.008 (0.009) [40407]; (2) -0.003 (0.003) [40407]; (3) -0.102 (0.088) [40407]; (4) 0.012 (0.011) [40407]; (5) -0.004 (0.003) [40407]; (6) -0.133 (0.105) [40407]
New Student: (1) -0.002 (0.003) [53912]; (2) 0.000 (0.001) [53912]; (3) 0.010 (0.025) [53912]; (4) -0.002 (0.003) [53912]; (5) 0.000 (0.001) [53912]; (6) 0.009 (0.031) [53912]
Missing GPA: (1) -0.001 (0.002) [53912]; (2) 0.001 (0.001) [53912]; (3) 0.022 (0.022) [53912]; (4) -0.002 (0.003) [53912]; (5) 0.001 (0.001) [53912]; (6) 0.027 (0.026) [53912]
Failed Prior Class: (1) -0.002 (0.004) [45482]; (2) 0.001 (0.001) [45482]; (3) 0.035 (0.042) [45482]; (4) 0.001 (0.005) [45482]; (5) -0.000 (0.001) [45482]; (6) -0.003 (0.047) [45482]

Note: Course sections included in class size pilot as identified by DeVry records. Each cell contains the coefficient from a separate OLS regression of the covariate (identified in each row) on the treatment (identified in each column). "Prior" includes all sessions from summer 2009 to present. Columns 1-3 include course-by-session-by-20 student fixed effects: in each session-course cell, students were ordered by the time they registered and split into groups of 20. Columns 4-6 include course-by-session-by-50 quantile fixed effects: in each session-course cell, students were ordered by the time they registered and divided into 50 quantiles. Standard errors allow for clustering in sections in all cases. Standard errors in parentheses and sample sizes in brackets. + indicates p < 0.10, * p < 0.05, ** p < 0.01.
Table A2: Descriptive Statistics By Course Category With Alternate Fixed Effects (Treatment is Log of Intended Class Size)
Columns (1)-(4): course-by-session-by-20 student fixed effects; columns (5)-(8): course-by-session-by-50 quantiles fixed effects. Covariates: (1), (5) Prior Cumulative GPA; (2), (6) New Student; (3), (7) Missing GPA; (4), (8) Failed Prior Class. Cells report coefficient (standard error) [N].

Panel A: Subject Categories
Business/Management: (1) -0.069 (0.076) [37,475]; (2) 0.038+ (0.020) [41,003]; (3) -0.001 (0.011) [41,003]; (4) 0.006 (0.043) [37,976]; (5) -0.119 (0.093) [35,875]; (6) 0.034 (0.024) [39,057]; (7) -0.004 (0.013) [39,057]; (8) 0.021 (0.053) [36,341]
Computer Science/Technology: (1) 0.190 (0.264) [9,026]; (2) 0.011 (0.064) [11,356]; (3) -0.008 (0.070) [11,356]; (4) -0.140 (0.117) [10,285]; (5) 0.046 (0.317) [8,370]; (6) -0.041 (0.081) [10,431]; (7) -0.068 (0.086) [10,431]; (8) -0.113 (0.136) [9,482]
General Classes: (1) -0.123 (0.252) [5,725]; (2) -0.037 (0.064) [12,724]; (3) -0.021 (0.042) [12,724]; (4) 0.061 (0.104) [6,847]; (5) -0.175 (0.265) [5,414]; (6) -0.021 (0.067) [11,665]; (7) -0.016 (0.044) [11,665]; (8) 0.090 (0.103) [6,389]
Humanities: (1) -0.194 (0.121) [18,646]; (2) -0.030 (0.028) [20,353]; (3) 0.020 (0.028) [20,353]; (4) 0.069 (0.061) [19,592]; (5) 0.003 (0.151) [17,547]; (6) -0.033 (0.030) [19,091]; (7) 0.007 (0.030) [19,091]; (8) -0.030 (0.072) [18,406]
Science/Mathematics: (1) -0.224+ (0.133) [16,598]; (2) -0.004 (0.022) [18,350]; (3) 0.040 (0.034) [18,350]; (4) 0.077 (0.062) [17,742]; (5) -0.113 (0.149) [15,318]; (6) -0.020 (0.025) [16,899]; (7) 0.057 (0.036) [16,899]; (8) 0.038 (0.066) [16,351]
Social Sciences: (1) 0.066 (0.172) [12,403]; (2) 0.029 (0.027) [14,656]; (3) 0.062 (0.051) [14,656]; (4) 0.031 (0.067) [14,281]; (5) 0.147 (0.196) [11,789]; (6) 0.039 (0.030) [13,855]; (7) 0.056 (0.054) [13,855]; (8) -0.035 (0.070) [13,513]

Panel B: Assignment Type
Project Only: (1) -0.088 (0.096) [22,961]; (2) 0.006 (0.032) [30,871]; (3) 0.009 (0.020) [30,871]; (4) 0.001 (0.050) [24,306]; (5) -0.079 (0.110) [21,973]; (6) 0.006 (0.036) [29,062]; (7) 0.003 (0.024) [29,062]; (8) 0.003 (0.056) [23,148]
Laboratory Only: (1) -0.071 (0.114) [25,112]; (2) -0.001 (0.021) [29,022]; (3) 0.037 (0.028) [29,022]; (4) 0.012 (0.054) [27,397]; (5) -0.049 (0.132) [23,274]; (6) -0.024 (0.025) [26,773]; (7) 0.042 (0.032) [26,773]; (8) 0.011 (0.061) [25,322]
Both Projects and Laboratories: (1) -0.202 (0.127) [15,740]; (2) 0.002 (0.030) [16,808]; (3) -0.022 (0.020) [16,808]; (4) 0.133* (0.066) [16,010]; (5) -0.234 (0.144) [15,016]; (6) -0.006 (0.035) [15,989]; (7) -0.017 (0.021) [15,989]; (8) 0.119+ (0.069) [15,258]
Neither Projects Nor Laboratories: (1) -0.073 (0.098) [36,060]; (2) 0.011 (0.023) [41,741]; (3) 0.014 (0.026) [41,741]; (4) 0.023 (0.047) [39,010]; (5) 0.047 (0.127) [34,054]; (6) 0.018 (0.027) [39,179]; (7) 0.006 (0.031) [39,179]; (8) -0.047 (0.057) [36,759]
Note: Course sections included in class size pilot as identified by DeVry records. Each cell contains the coefficient from a separate OLS regression of
the covariate (identified in each row) on the treatment (log of intended class size in all cases). "Prior" includes all sessions from summer 2009 to present.
Columns 1-4 include course-by-session-20 student fixed effects. In each session-course cell, students were ordered by the time they registered and split
into groups of 20. Columns 5-8 include course-by-session-by 50 quantile fixed effects. In each session-course cell, students were ordered by the time
they registered and divided into 50 quantile. Standard errors allow for clustering in sections in all cases. Standard errors in parentheses and sample in
brackets + indicates p < 0.10, * < 0.05, ** < 0.01
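The grouping procedure the table notes describe, ordering students by registration time within each course-session cell and then either splitting them into fixed-size blocks or dividing them into quantiles, can be sketched in code. This is an illustrative reconstruction, not the authors' code; the column names (`course`, `session`, `reg_time`) and the function name are assumptions.

```python
import pandas as pd

def assign_fe_cells(df, block_size=20, n_quantiles=50):
    """Assign the fixed-effect cells described in the table notes:
    within each course-session cell, order students by registration
    time, then (a) split them into consecutive blocks of `block_size`
    students, and (b) divide them into `n_quantiles` quantile groups.
    Column names here are illustrative, not the paper's actual schema."""
    df = df.sort_values(["course", "session", "reg_time"]).copy()
    # position of each student within their course-session cell
    order = df.groupby(["course", "session"]).cumcount()
    # (a) course-by-session-by-20-student groups
    df["block"] = order // block_size
    # (b) course-by-session-by-50-quantile groups (capped at cell size
    # so small cells still get well-defined bins)
    df["quantile"] = df.groupby(["course", "session"])["reg_time"].transform(
        lambda s: pd.qcut(s.rank(method="first"),
                          q=min(n_quantiles, len(s)), labels=False)
    )
    return df
```

Each resulting (course, session, block) or (course, session, quantile) combination would define one fixed-effect cell, so treatment comparisons are made only among students who registered at similar times for the same course and session.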
Table A3: Intent To Treat Estimates of Class Size on Student Outcomes With Alternate Fixed Effects
[Outcomes (columns): Grade (0-4); Grade (0-4), Persistence Sample; Persistence To Next Session (Credits); Persistence To Next Year (Enrolled); each estimated under course-by-session-by-20-student fixed effects (columns 1-4) and course-by-session-by-50-quantile fixed effects (columns 5-8). Model (1) uses a Small Classroom indicator; Model (2) adds a Registered After Cap Changed indicator and a Small*After interaction; Model (3) uses Log(Intended Class Size); Model (4) adds the cap-change indicator and a Log(Csize)*After interaction. Point estimates are small and, apart from a few marginally significant cap-change coefficients, statistically insignificant. Observations: 92,910 (column 1) and 61,108 (columns 2-4).]
Note: Course sections included in class size pilot as identified by DeVry records. Each model reports the coefficient on the respective treatment variable from a separate OLS regression of the dependent variable on treatment status. Dependent variables are the column headers. All models control for prior cumulative GPA, an indicator for being a new student, an indicator for missing prior GPA but not being a new student, and an indicator for having failed a course previously. "Prior" includes all sessions from summer 2009 to present. When prior GPA is missing, it is set to zero. All regressions include either course-by-session-by-20-student fixed effects or course-by-session-by-50-quantile fixed effects, as indicated in the column headers. In the former case, in each session-course cell, students were ordered by the time they registered and split into groups of 20; in the latter, they were divided into 50 quantiles. Standard errors allow for clustering by section. + indicates p < 0.10, * p < 0.05, ** p < 0.01.
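The intent-to-treat specification summarized in the note can be written out explicitly; the notation below is illustrative rather than taken from the paper. For student i in course c and session s, assigned to fixed-effect cell g(i, c, s):

```latex
y_{ics} = \beta\, T_{cs}
        + \gamma_1\, \mathrm{GPA}_i
        + \gamma_2\, \mathrm{New}_i
        + \gamma_3\, \mathrm{MissGPA}_i
        + \gamma_4\, \mathrm{FailedPrior}_i
        + \mu_{g(i,c,s)} + \varepsilon_{ics}
```

Here $T_{cs}$ is the treatment (the small-classroom indicator in Models 1-2, log intended class size in Models 3-4; Models 2 and 4 also add a cap-change indicator and its interaction with treatment), $\mu_{g(i,c,s)}$ is the course-by-session-by-group fixed effect, and standard errors are clustered by section.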
Table A4: Intent To Treat Estimates of Class Size on Student Outcomes Heterogeneity By Discipline
[Treatment: Log(Intended Class Size). Outcomes (columns): Grade (0-4); Persistence To Next Session (Credits); Persistence To Next Year (Enrolled); Grade (0-4), Persistence Sample; estimated under four fixed-effects variants: course-by-session-by-15-student, course-by-session-by-20-student, course-by-session-by-20-quantile, and course-by-session-by-50-quantile. Panels cover subject categories (Business/Marketing, Computer/Technology, General, Humanities, Science/Mathematics, Social Sciences) and assignment types (Projects Only, Labs Only, Projects and Labs, Neither Projects Nor Labs). Nearly all coefficients are statistically insignificant; the main exceptions are positive Social Sciences estimates significant at the 5% or 10% level in some specifications.]
Note: Course sections included in class size pilot as identified by DeVry records. Each model reports the coefficient on the respective treatment variable from a separate OLS regression of the dependent variable on treatment status. Dependent variables are the column headers. All models control for prior cumulative GPA, an indicator for being a new student, an indicator for missing prior GPA but not being a new student, and an indicator for having failed a course previously. "Prior" includes all sessions from summer 2009 to present. When prior GPA is missing, it is set to zero. All regressions include fixed effects as indicated in the column headers; in each session-course cell, students were ordered by the time they registered and either split into fixed-size groups or divided into quantiles accordingly. Standard errors allow for clustering by section. + indicates p < 0.10, * p < 0.05, ** p < 0.01.
Table A5: Intent To Treat Estimates of Class Size on Student Outcomes Heterogeneity By Discipline
[Treatment rows per discipline panel (Business/Marketing, Computer/Technology, General, Humanities, Science/Mathematics, Social Sciences): Log(Intended Class Size), Registered After Cap Changed, and Log(Csize)*After; outcomes as in Table A4; estimated under course-by-session-by-20-student and course-by-session-by-50-quantile fixed effects. Most estimates are statistically insignificant, though a handful of cap-change and interaction coefficients reach significance at the 5% or 1% level.]
Note: Course sections included in class size pilot as identified by DeVry records. Each model reports the coefficient on the respective treatment variable from a separate OLS regression of the dependent variable on treatment status. Dependent variables are the column headers. All models control for prior cumulative GPA, an indicator for being a new student, an indicator for missing prior GPA but not being a new student, and an indicator for having failed a course previously. "Prior" includes all sessions from summer 2009 to present. When prior GPA is missing, it is set to zero. All regressions include either course-by-session-by-20-student fixed effects or course-by-session-by-50-quantile fixed effects, as indicated in the column headers. In the former case, in each session-course cell, students were ordered by the time they registered and split into groups of 20; in the latter, they were divided into 50 quantiles. Standard errors allow for clustering by section. + indicates p < 0.10, * p < 0.05, ** p < 0.01.
Table A6: Intent To Treat Estimates of Class Size on Student Outcomes Heterogeneity By Assignment Type With Alternate FE
[Treatment rows per assignment-type panel (Projects Only, Labs Only, Both Projects and Labs, Neither Projects Nor Labs): Log(Intended Class Size), Registered After Cap Changed, and Log(Csize)*After; outcomes: Grade (0-4); Persistence To Next Session (Credits); Persistence To Next Year (Enrolled); Grade (0-4), Persistence Sample; estimated under course-by-session-by-20-student and course-by-session-by-50-quantile fixed effects. Estimates are noisy and generally statistically insignificant.]
Note: Course sections included in class size pilot as identified by DeVry records. Each model reports the coefficient on the respective treatment variable from a separate OLS regression of the dependent variable on treatment status. Dependent variables are the column headers. All models control for prior cumulative GPA, an indicator for being a new student, an indicator for missing prior GPA but not being a new student, and an indicator for having failed a course previously. "Prior" includes all sessions from summer 2009 to present. When prior GPA is missing, it is set to zero. All regressions include either course-by-session-by-20-student fixed effects or course-by-session-by-50-quantile fixed effects, as indicated in the column headers. In the former case, in each session-course cell, students were ordered by the time they registered and split into groups of 20; in the latter, they were divided into 50 quantiles. Standard errors allow for clustering by section. + indicates p < 0.10, * p < 0.05, ** p < 0.01.
Appendix Figure 1. Distribution of Class Size Changes Across Students