Is “Overborrowing” for College an Epidemic?

As the Senate Health, Education, Labor, and Pensions Committee continues to slowly move toward Higher Education Act reauthorization, the committee held a hearing this week on the possibility of institutional risk-sharing with respect to federal student financial aid programs. This idea, which has bipartisan support at least in principle, would require at least some low-performing colleges to be responsible for a portion of loans not repaid to the federal government. (I’ve written about this idea in the past.)

Sen. Lamar Alexander (R-TN), the committee chair, began his opening statement with a discussion of “overborrowing,” which he defines as students borrowing more than they need to attend college. Along with Sen. Michael Bennet (D-CO) and other colleagues, he is sponsoring the FAST Act, which contains a provision that would prorate the amount of funds part-time students can borrow for living expenses. Financial aid administrators are also concerned about overborrowing, as evidenced by their professional association’s push (also noted in Sen. Alexander’s opening statement) to allow colleges to offer students less than the maximum loan amount.

But there is no commonly accepted definition of “overborrowing,” nor is there empirical research that clearly defines how much borrowing is too much. I can see why policymakers want to limit the amount of money that part-time students can borrow for living expenses, as part-time students may hit their lifetime loan caps before completing their degrees. But, as research that I’ve conducted with Sara Goldrick-Rab at Wisconsin and Braden Hosch at Stony Brook shows, about one-third of all colleges set living expenses at least $3,000 below what it likely costs to live. This effectively limits student borrowing, as a student’s financial aid package cannot exceed the cost of attendance.
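
To see the mechanics, here is a minimal sketch in Python (the function and all dollar figures are hypothetical, not taken from our paper) of how an understated living allowance caps borrowing dollar for dollar:

```python
def max_additional_borrowing(cost_of_attendance, aid_already_packaged):
    # Federal rules cap a student's total aid package at the cost of attendance
    return max(0, cost_of_attendance - aid_already_packaged)

# Hypothetical student: $6,000 in tuition/fees plus a living allowance,
# with $7,000 in grants already packaged
tuition_and_fees = 6_000
grants = 7_000

for living_allowance in (12_000, 9_000):  # realistic vs. understated by $3,000
    coa = tuition_and_fees + living_allowance
    print(living_allowance, max_additional_borrowing(coa, grants))
# Output: 12000 11000, then 9000 8000 -- the $3,000 understatement
# reduces allowable borrowing dollar for dollar
```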

Some people have said that high student loan default rates are a clear indicator that overborrowing is widespread. Yet students with small amounts of debt are actually at a higher risk of default, as many of them dropped out of college without a degree and were unable to find gainful employment. For these students, borrowing more money might have been the better decision, as that money might have helped them stay in college and complete degrees. However, a substantial percentage of students from low-income families are loan-averse—either completely unwilling to take on debt or only willing to take on a bare minimum as a last resort. Underborrowing is the concern in higher education funding that few people are talking about, and it deserves additional study.

Finally, it is worth remembering that the typical student graduating with a bachelor’s degree has about $30,000 in debt, although there are huge differences by race/ethnicity and family income. This is in spite of media reports that focus on borrowers with atypically high debt burdens. While I’m concerned about the substantial percentage of students borrowing large amounts of money for graduate school (and particularly the implications for taxpayers due to the presence of income-based repayment programs), it’s hard to convincingly argue that overborrowing for an undergraduate degree is truly an epidemic.


How Should State Higher Education Funding Effort Be Measured?

The question of whether states adequately fund public higher education has been a common topic of discussion over the last few decades—and the typical answer from the higher education community is a resounding “no.” This is evident in two pieces that have gotten a lot of attention in recent weeks.

The first piece, a chart from the venerable Tom Mortensen at the Pell Institute, shows that higher education funding effort (as measured by appropriations per $1,000 in state personal income) has fallen to 1966 levels; the Washington Post picked it up with the breathless headline, “How quickly will states get to zero in funding for higher education?” (The answer—based on trendlines—no later than 2050.) The second piece, from Demos, claims that state funding cuts are responsible for between 78% and 79%[1] of the increase in tuition at public universities between 2001 and 2011.

Meanwhile, state higher education appropriations are actually up over the last five fiscal years, according to the annual Grapevine survey of states. In Fiscal Year 2010 (during the recession), state funding was approximately $73.9 billion, falling slightly to $72.5 billion by FY 2013. But the last two fiscal years have been better to states, and higher education appropriations have risen to nearly $81 billion. Higher education has traditionally served as a balancing wheel for state budgets, facing big cuts in tough times and getting at least some increases in good times. However, this survey is not adjusted for inflation, making funding increases look slightly larger than they actually are.

So far, I’ve alluded to four different ways to measure state higher education funding effort (a toy calculation contrasting all four follows the list):

(1) Total funding, not adjusted for inflation (the measure state legislatures often prefer to discuss).

(2) Total funding, adjusted for inflation.

(3) Per full-time-equivalent (FTE) student funding, adjusted for inflation (the most common measure used in the research community).

(4) Funding “effort” per $1,000 in state income (a measure popular with education advocates).
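
Here is that toy calculation for a single hypothetical state (every number below is invented for illustration, and the CPI values are only approximate):

```python
# Hypothetical state, two years; all dollar figures invented
nominal_approp  = {2010: 1.00e9, 2015: 1.10e9}   # (1) nominal dollars
cpi             = {2010: 218.1, 2015: 237.0}     # CPI-U annual averages (approx.)
fte_students    = {2010: 200_000, 2015: 230_000}
personal_income = {2010: 250e9, 2015: 290e9}     # total state personal income

# (2) appropriations restated in constant 2015 dollars
real_approp = {yr: nominal_approp[yr] * cpi[2015] / cpi[yr] for yr in (2010, 2015)}

for yr in (2010, 2015):
    print(yr,
          f"nominal=${nominal_approp[yr] / 1e9:.2f}B",            # measure (1)
          f"real=${real_approp[yr] / 1e9:.2f}B",                  # measure (2)
          f"per_FTE=${real_approp[yr] / fte_students[yr]:,.0f}",  # measure (3)
          f"effort=${1000 * nominal_approp[yr] / personal_income[yr]:.2f} per $1,000 of income")  # measure (4)
```

In this example, nominal funding rises 10%, real funding is nearly flat, and the per-FTE and effort measures both fall. That is how a legislator and an advocate can look at the same budget and reach opposite conclusions.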

So which measure is the right one? State legislatures tend not to care about inflation-adjusted or per-student metrics because their revenue streams (primarily taxes) don’t necessarily increase alongside inflation or population growth. Additionally, enrollment for the next year or two is difficult to predict accurately when budgets are being made, so a perfect per-FTE funding ratio is virtually impossible. On the other hand, colleges have to stretch state funding to educate an often-growing number of students, so the call to maintain funding ratios makes perfect sense.

I raise these points because policymakers and education advocates often seem to talk past each other about what funding effort for higher education should look like. Both sides need to understand where the other’s definition comes from in order to find common ground. And I’d love to hear your preferred method of defining ‘appropriate’ funding effort, as well as why you chose it.

———-

[1] I question the exact percentage here, as it’s the result of a correlational study. To claim causality (as the author does in Table 6), one needs a research design that separates the effects of dropping per-student state support from other confounding factors (such as changing preferences toward research). This can be done with panel regression techniques that essentially compare states with big funding drops to those without, after controlling for other factors affecting higher education across states. But it’s hard to imagine a situation in which per-student state funding cuts aren’t responsible for at least some of the tuition increases over the last decade.
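
For readers curious what such a design looks like, here is a minimal sketch of a two-way fixed effects regression with state-clustered standard errors. The file and variable names are hypothetical, and this is one standard approach, not a claim about the Demos author’s actual specification:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical state-by-year panel with columns 'tuition',
# 'approp_per_fte', and a stand-in confounder 'research_share'
df = pd.read_csv("state_panel.csv")

# State and year fixed effects absorb fixed state differences and
# national trends; clustering allows errors to correlate within states
model = smf.ols(
    "tuition ~ approp_per_fte + research_share + C(state) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})

# The coefficient compares tuition changes in states with big funding
# drops to those without, net of the controls
print(model.params["approp_per_fte"])
```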


Comments on the Brookings Value-Added Rankings

Jonathan Rothwell and Siddharth Kulkarni of the Metropolitan Policy Program at Brookings made a big splash today with the release of a set of college “value-added” rankings (link to full study and Inside Higher Ed summary) focused primarily on labor market outcomes. Value-added measures, which adjust for student and institutional characteristics to get a better handle on a college’s contribution to student outcomes, are becoming increasingly common in higher education. (I’ve written about college value-added in the past, which led to me taking the reins as Washington Monthly’s rankings methodologist.) Pretty much all of the major college rankings at this point include at least one value-added component, and this set of rankings actually shares some similarities with Money’s rankings. And the Brookings report does mention correlations with the U.S. News, Money, and Forbes rankings—but not Washington Monthly. (Sigh.)

The Brookings report uses five different outcome measures, which are then adjusted for available student characteristics and institutional characteristics such as the sector of the college and where it is located:

(1) Mid-career salary of alumni: This measures the median salary of full-time workers with a degree from a particular college and at least ten years of experience. The data come from PayScale and suffer from being self-reported by a subset of students, but they likely still have value for two reasons. First, the authors do a careful job of trying to decompose any biases in the data—for example, correlating PayScale-reported earnings with data from other sources. Second, even if there is an upward bias in the data, it should be similar across institutions. As I’ve written about before, I trust the order of colleges in PayScale data more than I trust the dollar values—which are likely inflated.

But there are still a few concerns with this measure. Some, such as limiting the sample to graduates (excluding dropouts) and dropping students with an advanced degree, are fairly well-known. And the focus on salary definitely rewards colleges with large engineering programs, as evidenced by those colleges’ dominance of the value-added list (while art schools look horrible). However, given that ACT and SAT math scores are used as the academic preparation measure, the bias favoring engineering schools may actually be smaller than if verbal/reading scores were also included. I also would have estimated separate models for two-year and four-year colleges instead of putting them in the same model with a dummy variable for sector, but that’s just my preference.

(2) Student loan repayment rate: This is the complement of the average three-year student loan cohort default rate over the last three years (so a 10% default rate is framed as a 90% repayment rate). The measure is pretty straightforward, although I do have to question the value-added estimates for colleges with very high repayment rates. Value-added estimates are difficult to conceptualize for colleges with a high probability of success, as there is typically little room for improvement. But here, the highest predicted repayment rate is 96.8% for four-year colleges, while several dozen colleges have actual repayment rates in excess of 96.8%. It appears that linear regressions were used; some type of robust generalized linear model should also have been considered. (In the Washington Monthly rankings, I use simple linear regressions for graduation rate performance, but very few colleges are so close to the ceiling of 100%.)
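
To illustrate the ceiling problem, here is a small simulation (not the Brookings code) comparing OLS with a logit-link GLM, often called a fractional logit, for an outcome that clusters near 1:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=2_000)
# Simulated repayment rates, generated on the logit scale so most sit near 1
rate = 1 / (1 + np.exp(-(2.5 + 1.2 * x + rng.normal(scale=0.5, size=x.size))))

X = sm.add_constant(x)
ols = sm.OLS(rate, X).fit()
glm = sm.GLM(rate, X, family=sm.families.Binomial()).fit()  # fractional logit

x_new = sm.add_constant(np.array([3.0, 4.0]), has_constant="add")
print(ols.predict(x_new))  # the linear fit can predict rates above 1.0
print(glm.predict(x_new))  # the logit link keeps predictions inside (0, 1)
```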

(3) Occupational earnings potential: This is a pretty nifty measure that uses LinkedIn data to get a handle on which occupations a college’s graduates (including advanced degree holders) pursue during their careers. This mix of occupations is then tied to Bureau of Labor Statistics data to estimate the average salary of a college’s graduates. The value-added measure attempts to control for student and institutional characteristics, although it doesn’t control for the majors students prefer when entering college.
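
Mechanically, the measure is an occupation-share-weighted average of salaries. A toy version, with invented shares and salaries standing in for the LinkedIn and BLS figures:

```python
# Hypothetical occupation mix for one college's alumni (shares sum to 1)
occupation_shares = {"software developer": 0.30, "teacher": 0.25,
                     "nurse": 0.25, "marketing manager": 0.20}

# Hypothetical average salaries by occupation (stand-ins for BLS data)
avg_salary = {"software developer": 105_000, "teacher": 62_000,
              "nurse": 80_000, "marketing manager": 95_000}

expected_earnings = sum(share * avg_salary[occ]
                        for occ, share in occupation_shares.items())
print(f"${expected_earnings:,.0f}")  # $86,000 for this hypothetical mix
```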

I’m excited by the potential to use LinkedIn data (warts and all) to look at students’ eventual outcomes. However, it should be noted that LinkedIn is more heavily used in some fields where that might be expected (business and engineering) and in others where it might not be (communication and cultural studies). The authors adjust for these differences in representation and are very transparent about it in the appendix. The appendix is definitely on the technical side, but I welcome the transparency.

They also report five quality measures that are not included in the value-added estimate: ‘curriculum value’ (the value of the degrees offered by the college), the value of skills alumni list on LinkedIn, the percentage of graduates deemed STEM-ready, completion rates within 200% of normal time (eight years for a four-year college, or four years for a two-year college), and average institutional grant aid. These measures are not input-adjusted, but they generally reflect what people think of as quality. However, average institutional grant aid is a lousy measure to include, as it rewards colleges with a high-tuition, high-aid model over colleges with a low-tuition, low-aid model—even if students pay the exact same price.

In conclusion, the Brookings report tells readers some things we already know (engineering programs are where to go to make money), but provides a good—albeit partial—look at outcomes across an unusually broad swath of American higher education. I would advise readers to focus on comparing colleges with similar missions and goals, given the importance of occupation in determining earnings. I would also be more hesitant to use the metrics for very small colleges, where all of these measures can be influenced by a relatively small number of people. But the transparency of the methodology and use of new data sources make these value-added rankings a valuable contribution to the public discourse.


Review of “Designing the New American University”

Since Michael Crow became president of Arizona State University in 2002, he has worked to reorganize and grow the institution into his vision of a ‘New American University.’ ASU has grown to over 80,000 students during his tenure through a commitment to admit all students who meet a relatively modest set of academic qualifications. At the same time, the university has embarked upon a number of significant academic reorganizations that have gotten rid of many traditional academic departments and replaced them with larger interdisciplinary schools. Crow has also attracted his fair share of criticism over the years, including for alleged micromanaging and his willingness to venture into online education. (I’ve previously critiqued ASU Online’s program with Starbucks, although many of my concerns have since been alleviated.)

Crow partnered with William Dabars, an ASU professor, to write Designing the New American University (Johns Hopkins Press, $34.95 hardcover) to more fully explain how the ASU model works. The first several chapters of the book, although rather verbose, focus on the development of the American research university. A key concept that the authors raise is isomorphism—the tendency of organizations to resemble a leading organization in the market. Crow and Dabars contend that research universities have largely followed the lead of elite private universities such as Harvard and the big Midwestern land-grant universities that developed following the Civil War. Much has changed since then, so they argue that a new structure is needed.

Chapter 7 is the key chapter of the book, in which the authors detail the design of Arizona State as a ‘New American University’ (and make a nice sales pitch for the university in the process). Crow and Dabars celebrate the growth of Arizona State, which has been matched by only a small number of public research universities. They note that a stronger focus on access has hurt ASU in the U.S. News rankings, a key measure of prestige—while celebrating its ranking as an ‘Up and Coming School.’ (In the Washington Monthly rankings that I compile, ASU is a very respectable 28th.) The scale of ASU creates the potential for cost-effective operations, something the university is trying to measure through its Center for Measuring University Performance.

It certainly seems like some elements of the changes at ASU could be adopted at other research universities, but it is worth noting that research universities make up only about 200-300 of the more than 7,500 postsecondary institutions in the United States. I am left wondering what the ‘New American’ model would look like in other sectors of higher education, a question that is beyond the scope of this book but an important one to answer. Some other questions to consider are the following:

(1) How would a commitment to growth happen at colleges without the prestige or market power to attract significant numbers of out-of-state students?

(2) ASU seems to have done more academic reorganizations in research-intensive departments. How would this work at a more teaching-oriented institution?

(3) How will the continuing growth of ASU Online, as well as the multiple branch campuses in the Phoenix metropolitan area, affect the organizational structure? At what point, if any, does a university reach the maximum optimal size?

(4) Will ASU’s design remain the same once Michael Crow is not president? (And is that a good thing?)

Overall, this is a solid book that is getting a substantial amount of attention for good reason. The book could have been about 50 pages shorter while still conveying all of the important information, but the final chapter is highly recommended reading. I plan to assign that chapter in my organization and governance classes in the future so my students can understand how ASU is growing and succeeding through an atypical higher education model.


Analyzing the Heightened Cash Monitoring Data Release

NOTE: This post was updated April 3 to reflect the Department of Education’s latest release of data on heightened cash monitoring.

In my previous post, I wrote about the U.S. Department of Education’s release of a list of 544 colleges subject to heightened cash monitoring standards due to various academic, financial, and administrative concerns. I constructed a dataset of the 512 U.S. colleges known to be facing heightened cash monitoring (HCM) along with two other key accountability measures: the percentage of students who default on loans within three years (cohort default rates) and an additional measure of private colleges’ financial strength (financial responsibility scores). In this post, I examine the reasons why colleges face heightened cash monitoring, as well as whether HCM correlates with the other accountability metrics.
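
For those who want to reproduce the tables below from the posted dataset, the tabulations are one-liners in pandas. The file and column names here are placeholders, not necessarily those in my spreadsheet:

```python
import pandas as pd

hcm = pd.read_csv("hcm_dataset.csv")  # placeholder filename

# Table 1: counts of colleges by sector and HCM level
print(pd.crosstab(hcm["sector"], hcm["hcm_level"], margins=True))

# Table 2: counts by the reason ED cited for additional oversight
print(pd.crosstab(hcm["hcm_reason"], hcm["hcm_level"]))
```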

The table below shows the number of colleges facing HCM-1 (shorter delays in ED’s disbursement of student financial aid dollars; colleges not facing HCM have no delays) and HCM-2 (longer delays) by type of institution (public, private nonprofit, and for-profit).

Table 1: HCM status by institutional type.
Sector HCM-1 HCM-2
Public 68 6
Private nonprofit 97 18
Private for-profit 284 39
Total 449 63


While only six of 74 public colleges are facing HCM-2, more than one in ten private nonprofit (18 of 115) and for-profit colleges (39 of 323) are facing this higher standard of oversight. The next table shows the various reasons listed for why colleges are facing HCM.

Table 2: HCM status by reason for additional oversight.
Reason HCM-1 HCM-2
Low financial responsibility score 320 4
Financial statements late 66 9
Program review 1 21
Administrative capability 22 7
Accreditation concerns 1 12
Other 39 10


More than two-thirds (320) of the 449 colleges facing HCM-1 are included due to low financial responsibility scores (below 1.5 on a scale ranging from -1 to 3), but only four colleges are facing HCM-2 for that reason. The next most common reason, affecting 75 colleges, is a delayed submission of required financial statements or audits; this affected 43 public colleges in Minnesota, which make up a majority of the public colleges subject to HCM. Program review concerns were a main driver of HCM-2 status, with 21 colleges (including many of the newly released institutions) in this category. Other serious concerns included administrative capability (22 in HCM-1 and 7 in HCM-2), accreditation (1 in HCM-1 and 12 in HCM-2), and a range of other factors (39 in HCM-1 and 10 in HCM-2).

The next table takes three of the most common or serious reasons for facing HCM (low financial responsibility scores, administrative capability concerns, and accreditation issues) and examines those colleges’ median financial responsibility scores and cohort default rates.

Table 3: Median outcome values on other accountability metrics.
Reason for inclusion in HCM Financial responsibility score Cohort default rate
Low financial responsibility score 1.2 12.1%
Administrative capability 1.6 20.3%
Accreditation issues 2.0 2.8%


Not surprisingly, the typical college subject to HCM for a low financial responsibility score had a score of 1.2 in Fiscal Year 2012, which would require additional federal oversight. The median cohort default rate of 12.1% is slightly lower than the national default rate of 13.7%, but some of these colleges do not participate in the federal student loan program and are thus counted as zeroes. The median college with administrative capability concerns barely passed the financial responsibility test (with a score of 1.6), while 20.3% of its students defaulted. Colleges with accreditation issues (either academic or financial) had higher financial responsibility scores (2.0) and lower cohort default rates (2.8%).
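
Table 3 itself is just a set of group medians; with the same placeholder column names as in the earlier sketch, the computation takes a few lines:

```python
import pandas as pd

hcm = pd.read_csv("hcm_dataset.csv")  # placeholder filename and columns

reasons = ["Low financial responsibility score",
           "Administrative capability", "Accreditation issues"]
medians = (hcm[hcm["hcm_reason"].isin(reasons)]
           .groupby("hcm_reason")[["frs_score", "default_rate"]]
           .median())
print(medians)
```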

What does this release of heightened cash monitoring data tell us? Since most colleges are on the list for already-known concerns (low financial responsibility scores or accreditation issues) or rather silly errors (forgetting to submit financial statements on time), the value is fairly limited. But there is still some value, particularly in the administrative capability category. These colleges deserve additional scrutiny, and the release of this list will help provide just that.


New Data on Heightened Cash Monitoring and Accountability Policies

Earlier this week, I wrote about the U.S. Department of Education’s pending release of a list of colleges that are currently subject to heightened cash monitoring requirements. On Tuesday morning, ED released the list of 556 colleges (updated to 544 on Friday), thanks to dogged reporting by Michael Stratford at Inside Higher Ed (see his take on the release here).

My interest lies in comparing the colleges facing heightened cash monitoring (HCM) to two other key accountability measures: the percentage of students who default on loans within three years (cohort default rates) and an additional measure of private colleges’ financial strength (financial responsibility scores). I have compiled a dataset with all of the domestic colleges known to be facing HCM, their cohort default rates, and their financial responsibility scores.

That dataset is available for download on my site, and I hope it is useful for those interested in examining these new data on federal accountability policies. I will have a follow-up post with a detailed analysis, but at this point it is more important for me to get the data out in a convenient form for researchers, policymakers, and the public.

DOWNLOAD the dataset here.


Why is it So Difficult to Sanction Colleges for Poor Performance?

The U.S. Department of Education has the ability to sanction colleges for poor performance in several ways. A few weeks ago, I wrote about ED’s most recent release of financial responsibility scores, which can require colleges deemed financially unstable to post a letter of credit with the federal government before receiving financial aid dollars. ED can also strip a college’s federal financial aid eligibility if too high a percentage of students default on their federal loans, if data are not provided on key measures such as graduation rates, or if laws such as Title IX (prohibiting discrimination based on sex) are not followed.

The Department of Education can also sanction colleges by placing them on Heightened Cash Monitoring (HCM), which requires additional documentation and a hold on funds before student financial aid dollars are released. Corinthian Colleges, which partially collapsed last summer, blamed suddenly imposed HCM requirements for leaving it short on cash. Notably, ED has the authority to determine which colleges should face HCM without relying upon a fixed and transparent formula.

In spite of the power of the HCM designation, ED had previously refused to release a list of which colleges are subject to it. The outstanding Michael Stratford at Inside Higher Ed tried to get the list for nearly a year through a Freedom of Information Act request (which was mainly denied due to concerns about hurting colleges’ market positions), finally making the dispute public in an article last week. This sunlight proved to be a powerful disinfectant: ED relented late Friday and will publish the list of names this week.

The fight over releasing the HCM list is but one of many difficulties the Department of Education has had in sanctioning colleges for poor performance. Last fall, the cohort default rate measures were tweaked at the last minute, which had the effect of allowing more colleges to pass and retain access to federal aid. Financial responsibility scores have been challenged over concerns that ED’s calculations are incorrect. Gainful employment metrics are still tied up in court, and tying any federal aid dollars to college ratings appears to have no chance of passing Congress at this point. Notably, these sanctions are rarely due to direct concerns about academics, as academic matters are left to accreditors.

Why is it so difficult to sanction poorly-performing colleges, and why is the Department of Education so hesitant to release performance data? I suggest three reasons below, and I would love to hear your thoughts in the comments section.

(1) The first reason is the classic political science axiom of concentrated benefits (to colleges) and diffuse costs (to students and the general public). Since there is a college in every Congressional district (Andrew Kelly at AEI shows the median district had 11 colleges in 2011-12), colleges and their professional associations can put up a fight whenever they feel threatened.

(2) Some of these accountability measures are either all-or-nothing in nature (such as default rates) or incredibly costly for financially struggling colleges (HCM or posting a letter of credit for a low financial responsibility score). More nuanced systems with a sliding scale might make some sanctions possible, and this is a possible reform under Higher Education Act reauthorization.

(3) The complex relationship between accrediting bodies and the Department of Education leaves ED unable to directly sanction colleges for poor academic performance. A 2014 GAO report suggested accrediting bodies also focus more on finances than academics and called for a greater federal role in accreditation, something that will not sit well with colleges.

I look forward to seeing the list of colleges facing Heightened Cash Monitoring be released later this week (please, not Friday afternoon!) and will share my thoughts on the list in a future piece.


The 2015 Net Price Madness Bracket

Every year, I take the 68 teams in the NCAA Division I men’s basketball tournament and fill out a bracket based on which college has the lowest net price of attendance (defined as the total cost of attendance less all grant aid received). My 2014 and 2013 brackets are preserved for posterity, with Louisiana-Lafayette and North Carolina A&T emerging victorious for having the lowest net price without winning a single game on the court.
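
For the curious, the whole exercise boils down to one comparison applied 67 times. A minimal sketch, using an invented four-team mini-bracket rather than the real field:

```python
def play_game(team_a, team_b, net_price):
    # The "winner" of each game is simply the school with the lower net price
    return team_a if net_price[team_a] <= net_price[team_b] else team_b

def run_bracket(teams, net_price):
    # Repeatedly pair off adjacent teams until one remains
    while len(teams) > 1:
        teams = [play_game(teams[i], teams[i + 1], net_price)
                 for i in range(0, len(teams), 2)]
    return teams[0]

# Invented net prices for illustration
net_price = {"Team A": 13_500, "Team B": 9_100, "Team C": 11_200, "Team D": 15_800}
print(run_bracket(["Team A", "Team B", "Team C", "Team D"], net_price))  # Team B
```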

In 2015, the final four teams standing (based on net price) are:

MIDWEST REGION: Wichita State [WINNER] (net price of $9,039*, 46% graduation rate, 36% Pell)

WEST REGION: North Carolina (net price of $11,994, 90% graduation rate, 21% Pell)

[An earlier version of this post incorrectly had BYU beating North Carolina. My apologies for that error, which has been corrected.]

EAST REGION: Wyoming (net price of $11,484, 54% graduation rate, 24% Pell)

SOUTH REGION: San Diego State (net price of $9,856, 66% graduation rate, 40% Pell)

[Image: the full 2015 Net Price Madness bracket]

All data for the bracket can be found here.

*NOTE: Wichita State has a reported net price of $9,039, but the net prices for each household income bracket are higher than $9,039. Something isn’t right here, but what would March Madness be without any controversy?

Indiana deserves special plaudits for having a net price for the lowest-income students of just $4,632—although the 19% Pell enrollment rate is quite low.

Also, thanks to Andy Saultz for catching an error in the VCU/Ohio State game. Much appreciated!


Do Financial Responsibility Scores Reflect Colleges’ Financial Strength?

Although the vast majority of federal government operations were closed on Thursday due to snow (it’s been a rough end to winter in this part of the country), the U.S. Department of Education released financial responsibility scores for private nonprofit and for-profit colleges and universities based on 2012-13 data. These scores are based on calculations designed to measure a college’s financial strength in three key areas: the primary reserve ratio (liquidity), the equity ratio (ability to borrow additional funds), and net income (profitability or excess revenue).

A college can score between -1 and 3. Colleges scoring 1.5 or above are considered financially responsible without any qualifications and can access federal funds. Colleges scoring between 1.0 and 1.4 are considered financially responsible and can access federal funds for up to three years, but they are subject to additional Department of Education oversight of their financial aid programs; if a college does not improve its score within three years, it will no longer be considered financially responsible. Colleges scoring 0.9 or below are not considered financially responsible and must submit a letter of credit and face additional oversight to get access to funds. Such a college can submit a letter of credit equal to 50% of all federal student aid funds received in the prior year and be deemed financially responsible, or it can submit a letter equal to 10% of those funds and gain access to aid while still not being considered fully financially responsible.
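
Restated as code, the thresholds look like this (a sketch of the rules described above, not ED’s actual implementation):

```python
def financial_responsibility_status(score: float) -> str:
    # Composite scores run from -1.0 to 3.0
    if score >= 1.5:
        return "financially responsible, no conditions"
    if score >= 1.0:
        return "in the zone: up to 3 years of access with extra oversight"
    # 0.9 or below: a letter of credit is required for access to federal aid,
    # either 50% of prior-year aid (deemed responsible) or 10% (provisional)
    return "not financially responsible: letter of credit required"

for s in (2.2, 1.2, 0.4):
    print(s, "->", financial_responsibility_status(s))
```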

As Goldie Blumenstyk (who knows more about this topic than any other journalist) and Joshua Hatch of The Chronicle of Higher Education found in their snap analysis of the data, 158 private degree-granting colleges (108 nonprofit and 50 for-profit) failed the test in 2012-13, ten fewer than last year. Looking at all colleges eligible to receive federal financial aid, 192 failed outright in 2012-13 by scoring 0.9 or lower, and another 128 faced additional oversight by scoring between 1.0 and 1.4.

But, as Blumenstyk and Hatch note in their piece, private colleges have repeatedly questioned how financial responsibility scores are determined and whether they are accurate measures of a college’s financial health. I’m working on an article examining whether and how colleges and other stakeholders respond to financial responsibility scores and therefore have a bunch of data at the ready to look at this topic.

Thanks to the help of my sharp research assistant Michelle Magno, I have a dataset of 270 private nonprofit colleges with both financial responsibility scores and Moody’s credit ratings in the 2010-11 academic year. (Colleges only have Moody’s ratings if they seek additional capital, which explains the smaller sample size and why few colleges with low financial responsibility scores are included.) The scatterplot below shows the relationship between the two measures, with credit ratings observed between Caa and Aaa and financial responsibility scores observed between 1.3 and 3.0.

[Scatterplot: Moody’s credit ratings versus financial responsibility scores]

The correlation between the two measures of fiscal health was just 0.038, which is not significantly different from zero. Of the 57 colleges with the maximum financial responsibility score of 3.0, only three colleges (Northwestern, Stanford, and Swarthmore) had the highest possible credit rating of Aaa. Twenty-five colleges with financial responsibility scores of 3.0 had credit ratings of Baa, seven to nine grades lower than Aaa. On the other hand, six of the 15 colleges with Aaa credit ratings (including Harvard and Yale) had financial responsibility scores of 2.2, well below the maximum possible score.
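
For readers wondering how one correlates a letter rating with a numeric score, the usual move is to map ratings onto ordinal ranks and use a rank-based correlation. A sketch with invented observations (the real analysis used all 270 colleges and the full rating scale, including modifiers such as Aa1 and Aa2):

```python
from scipy.stats import pearsonr, spearmanr

# Moody's broad categories mapped to ranks (higher = stronger credit)
MOODYS_RANK = {"Caa": 1, "B": 2, "Ba": 3, "Baa": 4, "A": 5, "Aa": 6, "Aaa": 7}

# Invented illustrative data, not the actual sample
ratings = ["Aaa", "Aa", "Baa", "A", "Aaa", "Baa", "Aa", "A"]
frs_scores = [2.2, 3.0, 3.0, 2.5, 2.2, 1.8, 2.9, 3.0]

ranks = [MOODYS_RANK[r] for r in ratings]
print(spearmanr(ranks, frs_scores))  # rank correlation suits ordinal ratings
print(pearsonr(ranks, frs_scores))   # Pearson treats the ranks as cardinal
```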

This suggests that the federal government and private credit agencies measure colleges’ financial health in different ways—at least among colleges with the ability to access credit. Financial responsibility scores can certainly have the potential to affect how colleges structure their finances, but it is unclear whether they accurately reflect a college’s ability to operate going forward.


Why ASAP Could Harm Some Students

The City University of New York’s Accelerated Study in Associate Programs (ASAP) has gotten a great deal of positive attention in the last few years, and for good reason. The program provides much-needed additional economic, advising, and social supports to community college students from low-income families, and a new evaluation of a randomized trial from MDRC found that ASAP increased three-year associate’s degree completion rates from 22% in the control group to 40% in the treatment group. I’m glad to see that the program will be expanded to three community colleges in Ohio, as this will help address concerns about the feasibility of scaling up the program to cover more students.

But it is important to recognize that ASAP, as currently constituted, is limited to students who are able and willing to attend college full-time. Full-time students are the minority at community colleges, and they tend to be more economically and socially advantaged than their part-time peers. The program thus directs a higher percentage of resources to full-time students, even though part-time students likely need the support more. (However, it’s worth noting that although part-time students count in some states’ performance-based funding systems, they are currently not counted in federal graduation rate metrics.)

Students in ASAP also get priority registration privileges, which can certainly contribute to on-time degree completion. But it is not uncommon for classes (at least at desirable times) to have waiting lists, meaning that ASAP students get access to courses while other students do not. If a part-time student cannot get into a course that he or she needs, the student may be forced to stop out of college for a semester—a substantial risk factor for never completing a degree.

ASAP has many promising aspects, but further study is needed to see whether the degree completion gains for full-time students are coming at the expense of part-time students. Some of the ASAP services should be extended to all students, and priority registration should be reconsidered to benefit students who truly need to get into a course rather than those who are able to attend full-time.
