Explaining Subgroup Effects: The New York City Voucher Experiment

The last three-plus years of my professional life have been consumed by attempting to determine the extent to which a privately funded need-based grant program affected the outcomes of college students from low-income families in the state of Wisconsin. Although the overall impact on university students in the program’s first cohort is effectively zero, this result masks a substantial amount of heterogeneity in outcomes across different types of students. One of the greatest challenges we have faced in interpreting the results is determining whether program impacts truly differ across subgroups (defined, for example, by academic preparation or type of initial institution attended), an analysis that is sorely lacking in many studies. (See our latest findings here.)

Matthew Chingos of the Brookings Institution and Paul Peterson of Harvard faced a similar challenge in explaining subgroup effects in their research on a voucher program in New York City. They concluded that, although the overall impact of offering vouchers to disadvantaged students on college enrollment was null, the effects for black students were positive and statistically significant. The research received a great deal of publicity, including in the right-leaning Wall Street Journal. (I’m somewhere in the political middle on vouchers in K-12 education: I am strongly in favor of open enrollment across public school districts and support vouchers for qualified non-profit programs, but I am much more hesitant to support vouchers for faith-based and unproven for-profit programs.)

The study drew even more attention today with a report by Sara Goldrick-Rab of the University of Wisconsin-Madison (my dissertation chair) that sought to downplay the subgroup effects (see this summary of the debate in Inside Higher Ed). The brief, released through the left-leaning National Education Policy Center (here is the review panel), notes that the reported impacts for black students are in fact not statistically different from those for Hispanic students, and that the impacts for black students may not even be statistically significant due to data limitations (word to the wise: the National Student Clearinghouse is not a perfect data source). I share Sara’s concerns about statistical significance and subgroup effects. [UPDATE: Here is the authors’ response to Sara’s report, which is not surprising. If you like snark with your policy debates, I recommend checking out their Twitter discussion.]

I am generally extremely hesitant to make much of differences in impacts by race (as well as by other characteristics, such as parental education and family income) for several reasons. First, it is difficult to measure race consistently. (How are multiracial students classified? Why do some states classify students differently?) Second, although researchers should look at differences in outcomes by race, the question then becomes, “So what?” If black students do benefit more from a voucher program than Hispanic students, the policy lever isn’t clear, because it is extremely difficult to target a program toward one race and not another. Chingos and Peterson were right in their WSJ piece to draw the more useful comparison: if vouchers worked for black students in New York City, they might work in Washington, DC. Finally, good luck enacting a policy that makes opportunities available only to people of a certain racial background; this is much less of a problem when targeting by family income or parental education.

Although the true effects of the voucher program for black students may not have been statistically significant, the program is still likely to be cost-effective given the much lower costs of the private schools. Researchers and educators should carefully consider what these private schools are doing to generate similar educational outcomes at a lower cost, and also whether private schools spend less per student because they educate students with fewer special needs. I would like to see more discussion of cost-effectiveness in both of these pieces.

About Robert

I am an assistant professor of higher education at Seton Hall University. All opinions are my own.

One Response to Explaining Subgroup Effects: The New York City Voucher Experiment

  1. Any effort to identify a subgroup effect involving a dichotomy must recognize that it is fundamentally unsound to assume that, absent a subgroup effect, one will observe the same relative effect across different baseline rates. A factor cannot cause equal proportionate changes in different baseline rates of experiencing an outcome while at the same time causing equal proportionate changes in the rates of avoiding the outcome. Rather, one should expect that, absent a subgroup effect, a factor will cause a larger proportionate change in the rate of experiencing the outcome for the group with the lower baseline rate, while causing a larger proportionate change in the rate of avoiding the outcome for the other group. The benchmark for discovering a meaningful subgroup effect should be the assumption that, absent a subgroup effect, a factor will shift the underlying distributions of the groups being compared an equal distance.
    See the Subgroup Effects sub-page of the Scanlan’s Rule page of jpscanlan.com: http://www.jpscanlan.com/scanlansrule/subgroupeffects.html
    See also Interpreting Differential Effects in Light of Fundamental Statistical Tendencies, JSM 2009: http://www.jpscanlan.com/images/Scanlan_JSM_2009.ppt
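    A quick numerical sketch of the point above, using purely hypothetical baseline rates (20% and 40%, unrelated to the voucher study's actual figures):

    ```python
    # Hypothetical illustration: applying the same proportionate change to two
    # different baseline rates of an outcome cannot also produce equal
    # proportionate changes in the rates of AVOIDING that outcome.

    def relative_change(before, after):
        """Proportionate (relative) change from a baseline rate."""
        return (after - before) / before

    # Two groups with different hypothetical baseline rates of the outcome.
    low, high = 0.20, 0.40

    # Apply the same +50% relative effect to each group's outcome rate.
    low_after, high_after = low * 1.5, high * 1.5  # 0.30 and 0.60

    # The proportionate changes in the outcome rates are equal by construction.
    assert abs(relative_change(low, low_after)
               - relative_change(high, high_after)) < 1e-9

    # But the proportionate changes in the avoidance rates differ:
    # 0.80 -> 0.70 is a 12.5% decline, while 0.60 -> 0.40 is a 33.3% decline.
    print(relative_change(1 - low, 1 - low_after))    # about -0.125
    print(relative_change(1 - high, 1 - high_after))  # about -0.333
    ```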

Comments are closed.