The Rise and Fall of Federal College Ratings

President Obama’s 2013 announcement that a set of federal college ratings would be created and then tied to federal financial aid dollars caught the higher education world by surprise. Some media coverage at the time even expected what came to be known as the Postsecondary Institution Ratings System (PIRS) to challenge U.S. News & World Report’s dominance in the higher education rankings marketplace. But most researchers and people intimately involved in policy discussions saw a substantial set of hurdles (both methodological and political) that college ratings would have to clear before being tied to financial aid. Those hurdles slowed the development of PIRS considerably, as evidenced by the delayed release last fall of even a general framework for the ratings.

The U.S. Department of Education’s March announcement that two college ratings systems would be created, one oriented toward consumers and one for accountability purposes, further complicated the effort to develop a ratings system. As someone who has written extensively on college ratings, I weighed in with my expectation that any ratings had become extremely unlikely (due to both political pressures and other pressing needs for ED to address):

This week’s announcement that the Department of Education is dropping the ratings portion of PIRS (is it PIS now?) comes as little surprise to higher education policy insiders—particularly in the face of bipartisan legislation in Congress that sought to block the development of ratings and fierce opposition from much of the higher education community. I have to chuckle at Education Undersecretary Ted Mitchell’s comments on the changes; he told The Chronicle of Higher Education that dropping ratings “is the exact opposite of a collapse” and “a sprint forward.” But politically, this is a good time for ED to focus on consumer information after its recent court victory against the for-profit sector that allows the gainful employment accountability system to go into effect next week.

It does appear that the PIRS effort will not be in vain, as ED has promised that additional data on colleges’ performance will be made available on consumer-friendly websites. I am skeptical that federal websites like the College Scorecard and College Navigator directly reach students and their families, but I am a believer in the power of information to help students make at least decent decisions. That information will be most effective when packaged by third parties such as guidance counselors and college access organizations.

On a historical note, the 2013-2015 effort to rate colleges did not get even as far as the effort a century ago, when ratings were actually created but President Taft blocked their release. As Libby Nelson at Vox noted last summer, President Wilson then created a ratings committee in 1914, which concluded that publishing ratings was not desirable at the time. 101 years later, some things still haven’t changed. College ratings are likely dead for decades at the federal level, but performance-based funding and “risk-sharing” ideas enjoy some bipartisan support and are the next big accountability policy discussion.

I’d love to be able to write more at this time about the path forward for federal higher education accountability policy, but I’ve got to get back to putting together the annual Washington Monthly college rankings (look for them in late August). Hopefully, future versions of the rankings will be able to include some of the new information that has been promised in this new consumer information system.


It’s Time to Make Accreditation Reports Public

The higher education world is abuzz about this week’s great piece in The Wall Street Journal questioning the effectiveness of higher education accrediting agencies, whose seal of approval is required for a college to receive federal student financial aid dollars. In the front-page article, Andrea Fuller and Douglas Belkin of the WSJ note that at least 11 accredited four-year colleges had federal graduation rates (which exclude part-time and transfer students, among others) below 10%, which leads one to question whether accreditors are doing their job of ensuring institutional quality. A 2014 Government Accountability Office report concluded that accreditors are more likely to yank a college’s accreditation over financial concerns than academic concerns, and it called for additional oversight from the U.S. Department of Education.

Congress has also been placing pressure on accreditors in recent weeks due to the collapse of the accredited Corinthian chain of for-profit colleges and the Department of Education’s announcement that at least some Corinthian students will qualify for loan forgiveness. The head of the main accreditation body responsible for most Corinthian campuses got grilled by Senate Democrats in a hearing this week for not pulling the campuses’ accreditation before the chain collapsed. As a part of the (hopefully) impending reauthorization of the Higher Education Act, members of Congress on both sides of the aisle are interested in a potential overhaul of the accreditation system.

Students, their families, policymakers, and the general public have a clear and compelling interest in reading the reports from accrediting agencies and knowing whether colleges are facing sanctions for some aspect of academic or fiscal performance. Yet these reports, which are produced by nonprofit accrediting agencies, are rarely available to the public. For the WSJ piece, the reporters were able to use open-records requests to get accreditation reports for 50 colleges with the lowest graduation rates. I was recently at a conference where the GAO presented its aforementioned accreditation report, and I asked whether the data it compiled on accreditor sanctions were available to the public. The GAO staff suggested I file an open-records request, something I have (unsuccessfully) done for another paper.

Basic information about a college’s accreditation status and reports, including any sanctions and key recommendations for improvement, should be readily available to the public as a requirement for federal financial aid eligibility. And this should cover all types of colleges, including private nonprofit and for-profit colleges that accept federal funds. The federal government doesn’t necessarily have to get involved in the accreditation process itself (a key concern of colleges and universities), but it can use its clout to make additional data available to the public. (Students probably won’t go to a college’s website and read the reports, but third-party groups like guidance counselors and college rankings providers would work to get the information out in more usable form.) A little sunshine in the accreditation process has the potential to be a wonderful disinfectant.


What if College Amenities Were Unbundled?

Recent articles by Jeff Selingo in the Washington Post and Matt Reed in Inside Higher Ed have addressed the idea of “unbundling” college credits. Selingo contends in his piece that two of the reasons why students pay so much for college are that they face the same price whether taking 12 or 15 credits per semester (true at many colleges) and that colleges don’t always accept transfer credits, in an effort to generate revenue (probably true, but difficult to prove). Reed notes an important distinction regarding transfer credits—although students may get credit for a community college course at a four-year institution, the credit might be granted only as an elective, meaning the student still has to retake the required course.

Both Selingo and Reed refer to the push to allow consumers to unbundle their cable packages as a potential example of what to do (or not to do) in higher education. Currently, consumers have to choose a bundle of channels in order to get the particular channel or two they are most interested in actually watching. A recent report estimated that cable companies paid an average of $6.04 per month to carry ESPN—a cost that gets passed along to consumers regardless of whether they actually want to watch the channel. Verizon has recently allowed subscribers to choose which types of channels they want to pay for, and Disney (the owner of ESPN) promptly sued to maintain the bundle. Disney’s fear is that perhaps only half of subscribers would pay $6 per month for ESPN, meaning that the price would have to double in order to match the previous revenue—at which point even more customers would likely opt out.
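To make that arithmetic concrete, here is a minimal sketch of the revenue-neutral pricing problem. The $6.04 carriage fee is from the report cited above; the subscriber base and the 50% opt-in share are hypothetical numbers chosen purely for illustration.

```python
# Illustrative sketch: what happens to a channel's price when a bundle unravels.
# The $6.04 carriage fee is from the report cited above; the subscriber count
# and opt-in share are made-up numbers for illustration.

bundled_price = 6.04        # dollars per subscriber per month under the bundle
subscribers = 1_000_000     # hypothetical subscriber base
opt_in_share = 0.50         # share who would still pay for the channel a la carte

bundled_revenue = bundled_price * subscribers
unbundled_subscribers = subscribers * opt_in_share

# Price needed to keep revenue constant after half the base opts out
revenue_neutral_price = bundled_revenue / unbundled_subscribers
print(f"Revenue-neutral a la carte price: ${revenue_neutral_price:.2f}/month")
# -> $12.08/month, roughly double. Higher prices push more subscribers to opt
#    out, which is the feedback loop described above.
```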

Higher education offers similar examples of bundling that would quite possibly come apart if students had the choice to select only their preferred options. At many colleges, amenities such as recreation centers and intercollegiate athletics programs are funded through mandatory student fees. For example, the typical Big Ten Conference university charges students about $150 per semester in fees to fund recreational activities, regardless of whether a student actually chooses to use any facilities. While students often vote to approve the initial imposition of the fee, students who enroll in later years still have to pay it even if they would never have voted for it in the first place.

Fees for supporting intercollegiate athletics can exceed $1,000 per year at some colleges, particularly at institutions without large donor bases or other revenue sources. An example is Longwood University in Virginia, which charges $239 per credit hour in tuition alongside over $63 per credit in athletics fees. This means that Longwood students taking 120 credits would pay about $7,500 to subsidize athletics during their time on campus, something many students might opt out of if they had the chance.

Higher education could be unbundled in other ways, including removing any requirements that students live on campus or purchase a meal plan, ending provisions requiring students to complete a certain number of credits in residency, or even potentially through the encouragement of open courseware that does not require an expensive subscription through the college. But any such efforts to unbundle will take away important revenue sources, so expect colleges to compensate in any way that they can. There is value in some of the bundling requirements, to be sure—for example, campus mental health services may not be offered if students had to opt into paying for the ability to access services. But it is worth having a conversation about what should be bundled and what should be provided on an a la carte basis.


Is “Overborrowing” for College an Epidemic?

As the Senate Health, Education, Labor, and Pensions Committee continues to slowly move toward Higher Education Act reauthorization, the committee held a hearing this week on the possibility of institutional risk-sharing with respect to federal student financial aid programs. This idea, which has bipartisan support at least in principle, would require at least some low-performing colleges to be responsible for a portion of loans not repaid to the federal government. (I’ve written about this idea in the past.)

Sen. Lamar Alexander (R-TN), the committee chair, began his opening statement with a discussion of “overborrowing,” which he defines as students borrowing more than they need in order to attend college. Along with Sen. Michael Bennet (D-CO) and other colleagues, he is sponsoring the FAST Act, which contains a provision that would prorate the amount part-time students can borrow for living expenses. Financial aid administrators are also concerned about overborrowing, as evidenced by their professional association’s push to allow colleges to offer students less than the maximum loan amount, something Sen. Alexander also discussed in his opening statement.

But there is no commonly accepted definition of “overborrowing,” nor is there empirical research that clearly defines how much borrowing is too much. I can see why policymakers want to limit the amount of money that part-time students can borrow for living expenses, as those students may hit their lifetime loan caps before completing their degrees. But, as research I’ve conducted with Sara Goldrick-Rab at Wisconsin and Braden Hosch at Stony Brook shows, about one-third of all colleges set living expense allowances at least $3,000 below what it likely costs to live. This effectively limits student borrowing, as students cannot receive a financial aid package exceeding the cost of attendance.
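To see how an understated living allowance mechanically caps borrowing, here is a minimal sketch of the cost-of-attendance rule. All dollar figures are hypothetical except the $3,000 understatement documented in our research.

```python
# Minimal sketch of the cost-of-attendance (COA) cap on financial aid.
# Total aid (grants + loans) cannot exceed COA, so understating living
# expenses directly shrinks how much a student may borrow.
# All figures are hypothetical except the $3,000 understatement.

tuition_and_fees = 9_000
true_living_costs = 12_000
understatement = 3_000                       # gap documented in our research
listed_living_costs = true_living_costs - understatement

grants = 6_000                               # hypothetical grant aid

coa = tuition_and_fees + listed_living_costs
max_loans = coa - grants                     # aid package cannot exceed COA

print(f"Listed COA: ${coa:,}")               # $18,000
print(f"Maximum borrowing: ${max_loans:,}")  # $12,000 -- $3,000 less than a
                                             # student facing true costs needs
```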

Some people have pointed to high student loan default rates as a clear indicator that overborrowing is a common concern. Yet students with small amounts of debt are at higher risk of default, as many of them dropped out of college without a degree and were unable to find gainful employment. It could be the case that borrowing more money would be a better decision, as that money might help students stay in college and complete degrees. However, a substantial percentage of students from low-income families are loan-averse—either completely unwilling to take on debt or willing to take on only a bare minimum as a last resort. Underborrowing is the concern in higher education funding that few people are talking about, and it deserves additional study.

Finally, it is worth a reminder that the typical student graduating with a bachelor’s degree has about $30,000 in debt, although there are huge differences by race/ethnicity and family income. That figure holds in spite of media reports that focus on borrowers with atypically high debt burdens. While I’m concerned about the substantial percentage of students borrowing large amounts of money for graduate school (and particularly the implications for taxpayers given the presence of income-based repayment programs), it’s hard to convincingly argue that overborrowing for an undergraduate degree is truly an epidemic.


How Should State Higher Education Funding Effort Be Measured?

The question of whether states adequately fund public higher education has been a common discussion over the last few decades—and the typical answer from the higher education community is a resounding “No.” This is evident in two pieces that have gotten a lot of attention in recent weeks.

The first piece is a chart put out by the venerable Tom Mortensen at the Pell Institute showing that higher education funding effort (as measured by appropriations per $1,000 in state personal income) has fallen to 1966 levels; it was then picked up by the Washington Post with the breathless headline, “How quickly will states get to zero in funding for higher education?” (The answer, based on trendlines: no later than 2050.) The second piece is from Demos and claims that state funding cuts are responsible for between 78% and 79% [1] of the increase in tuition at public universities between 2001 and 2011.

Meanwhile, state higher education appropriations are actually up over the last five fiscal years, according to the annual Grapevine survey of states. In Fiscal Year 2010 (during the recession), state funding was approximately $73.9 billion, falling slightly to $72.5 billion by FY 2013. But the last two fiscal years have been better for states, and higher education appropriations have risen to nearly $81 billion. Higher education has traditionally served as a balancing wheel for state budgets, facing big cuts in tough times and getting at least some increases in good times. Note, however, that this survey is not adjusted for inflation, making funding increases look slightly larger than they actually are.

So far, I’ve alluded to four different ways to measure state higher education funding effort (a rough sketch of how each could be computed follows the list):

(1) Total funding, not adjusted for inflation (the measure state legislatures often prefer to discuss).

(2) Total funding, adjusted for inflation.

(3) Per full-time-equivalent (FTE) student funding, adjusted for inflation (the most common measure used in the research community).

(4) Funding “effort” per $1,000 in state income (a measure popular with education advocates).
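As promised above, here is a rough sketch of how the four measures could be computed for a single state. Every number in it (appropriations, CPI values, enrollment, and personal income) is made up for illustration; the point is that the same raw data can tell four different stories.

```python
# Rough sketch of the four funding-effort measures for one hypothetical state.
# All figures, including the CPI values, are invented for illustration.

appropriations = {2010: 1_500_000_000, 2015: 1_650_000_000}    # nominal dollars
cpi = {2010: 218.1, 2015: 237.0}                               # hypothetical index
fte_students = {2010: 200_000, 2015: 230_000}
personal_income = {2010: 250e9, 2015: 290e9}                   # total state personal income

base, now = 2010, 2015

# (1) Total funding, not adjusted for inflation
nominal_change = appropriations[now] / appropriations[base] - 1

# (2) Total funding, adjusted for inflation (expressed in base-year dollars)
real_now = appropriations[now] * cpi[base] / cpi[now]
real_change = real_now / appropriations[base] - 1

# (3) Per-FTE funding, adjusted for inflation
per_fte_change = ((real_now / fte_students[now])
                  / (appropriations[base] / fte_students[base]) - 1)

# (4) Effort: appropriations per $1,000 of state personal income
effort = {y: appropriations[y] / (personal_income[y] / 1_000) for y in (base, now)}

print(f"(1) nominal: {nominal_change:+.1%}")   # +10.0%
print(f"(2) real:    {real_change:+.1%}")      # +1.2%
print(f"(3) per-FTE: {per_fte_change:+.1%}")   # -12.0%
print(f"(4) effort:  ${effort[base]:.2f} -> ${effort[now]:.2f} per $1,000 income")
```

With these made-up numbers, funding is “up” 10% in nominal terms but “down” 12% per student, which is exactly how legislators and advocates can both claim to be right.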

So which measure is the right one? State legislatures tend not to care about inflation-adjusted or per-student metrics because their revenue streams (primarily taxes) don’t necessarily grow alongside inflation or population growth. Additionally, enrollment for the next year or two is difficult to predict accurately when budgets are being made, so maintaining a precise per-FTE funding level is virtually impossible. On the other hand, colleges have to stretch state funding to educate an often-growing number of students, so the call to maintain per-student funding makes perfect sense.

I raise these points because policymakers and education advocates often talk past each other about what funding effort for higher education should look like. Both sides need to understand where the other’s definition comes from in order to find common ground. And I’d love to hear your preferred method of defining ‘appropriate’ funding effort, as well as why you chose it.

———-

[1] I question the exact percentage here, as it’s the result of a correlational study. To claim causality (as the author does in Table 6), there needs to be some way to separate the effects of dropping per-student state support from other confounding factors (such as changing preferences toward research). This can be done with panel regression techniques that essentially compare states with big funding drops to those without, after controlling for other factors affecting higher education across states. Still, it’s hard to imagine a situation in which per-student state funding cuts aren’t responsible for at least some of the tuition increases over the last decade.
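For readers curious what such an analysis might look like, here is a minimal sketch of a two-way fixed effects panel regression in Python. The file name and column names are hypothetical placeholders, and this is one standard approach, not a description of the Demos study’s actual methods.

```python
# Minimal sketch of a two-way fixed effects design on a hypothetical
# state-by-year panel with columns 'state', 'year', 'tuition', and
# 'per_fte_funding' (all names are placeholders).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("state_panel.csv")  # hypothetical panel dataset

# State fixed effects absorb stable differences across states; year fixed
# effects absorb national shocks (recessions, federal policy changes).
# The funding coefficient is then identified from within-state changes.
model = smf.ols(
    "tuition ~ per_fte_funding + C(state) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})

# Expected sign is negative: funding cuts associated with tuition increases.
print(model.params["per_fte_funding"])
```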


Comments on the Brookings Value-Added Rankings

Jonathan Rothwell and Siddharth Kulkarni of the Metropolitan Policy Program at Brookings made a big splash today with the release of a set of college “value-added” rankings (link to full study and Inside Higher Ed summary) focused primarily on labor market outcomes. Value-added measures, which adjust for student and institutional characteristics to get a better handle on a college’s contribution to student outcomes, are becoming increasingly common in higher education. (I’ve written about college value-added in the past, which led to me taking the reins as Washington Monthly’s rankings methodologist.) Pretty much all of the major college rankings at this point include at least one value-added component, and this set of rankings actually shares some similarities with Money’s rankings. And the Brookings report does mention correlations with the U.S. News, Money, and Forbes rankings—but not Washington Monthly. (Sigh.)

The Brookings report uses five different outcome measures, which are then adjusted for available student characteristics and institutional characteristics such as the sector of the college and where it is located:

(1) Mid-career salary of alumni: This measures the median salary of full-time workers with a degree from a particular college and at least ten years of experience. The data come from PayScale and suffer from being self-reported by a subset of graduates, but they likely still have value for two reasons. First, the authors do a careful job of trying to decompose any biases in the data—for example, correlating PayScale reported earnings with data from other sources. Second, even if there is an upward bias in the data, it should be similar across institutions. As I’ve written before, I trust the order of colleges in PayScale data more than I trust the dollar values, which are likely inflated.

But there are still a few concerns with this measure. Some of them, such as limiting the sample to graduates (excluding dropouts) and dropping students with an advanced degree, are fairly well-known. And the focus on salary definitely rewards colleges with large engineering programs, as evidenced by those colleges’ dominance of the value-added list (while art schools look horrible). However, given that ACT and SAT math scores are the only academic preparation measures used, the bias favoring engineering schools may actually be smaller than if verbal/reading scores were also used. I would also have estimated models separately for two-year and four-year colleges instead of putting them in the same model with a dummy variable for sector, but that’s just my preference.

(2) Student loan repayment rate: This is the complement of the average three-year student loan cohort default rate over the last three years (so a 10% default rate is framed as a 90% repayment rate). The measure is pretty straightforward, although I do have to question the value-added estimates for colleges with very high repayment rates. Value-added estimates are difficult to conceptualize for colleges with a high probability of success, as there is typically little room for improvement. Here, the highest predicted repayment rate is 96.8% for four-year colleges, while several dozen colleges have actual repayment rates in excess of 96.8%. It appears that linear regressions were used; some type of robust generalized linear model should also have been considered. (In the Washington Monthly rankings, I use simple linear regressions for graduation rate performance, but very few colleges are so close to the ceiling of 100%.)
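Here is a sketch of the ceiling problem and one alternative, a fractional logit model (a generalized linear model with a binomial family and logit link). The file and column names are hypothetical placeholders, and this is just one of several bounded-outcome approaches that could be considered.

```python
# Sketch of the ceiling problem: OLS can predict repayment rates above 100%,
# while a GLM with a logit link keeps predictions inside (0, 1).
# File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("colleges.csv")                  # hypothetical dataset
X = sm.add_constant(df[["median_sat", "pct_pell", "sector_fouryear"]])
y = df["repayment_rate"]                          # stored as a 0-1 fraction

ols = sm.OLS(y, X).fit()
print((ols.predict(X) > 1).sum(), "OLS predictions exceed 100%")

# Fractional logit (Papke-Wooldridge): a binomial-family GLM handles a 0-1
# outcome clustered near the ceiling without out-of-range predictions.
glm = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print((glm.predict(X) > 1).sum(), "GLM predictions exceed 100%")  # always 0

# Value-added is then actual minus predicted repayment, on the same scale.
value_added = y - glm.predict(X)
```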

(3) Occupational earnings potential: This is a pretty nifty measure that uses LinkedIn data to get a handle on which occupations a college’s graduates pursue during their careers. The mix of occupations is then tied to Bureau of Labor Statistics data to estimate the average salary of a college’s graduates, with advanced degree holders also included. The value-added measure attempts to control for student and institutional characteristics, although it doesn’t control for the preferences students have toward certain majors when entering college.

I’m excited by the potential to use LinkedIn data (warts and all) to look at students’ eventual outcomes. However, it should be noted that LinkedIn is more heavily used in some fields as might be expected (business and engineering) and in others that might not be expected (communication and cultural studies). The authors adjust for these differences in representation and are very transparent about it in the appendix. The appendix is definitely on the technical side, but I welcome their transparency.

They also report five different quality measures which are not included in the value-added estimate: ‘curriculum value’ (the value of the degrees offered by the college), the value of skills alumni list on LinkedIn, the percentage of graduates deemed STEM-ready, completion rates within 200% of normal time (8 years for a 4-year college, or 4 years for a 2-year college), and average institutional grant aid. These measures are not input-adjusted, but generally reflect what people think of as quality. However, average institutional grant aid is a lousy measure to include as it rewards colleges with a high-tuition, high-aid model over colleges with a low-tuition, low-aid model—even if students pay the exact same price.

In conclusion, the Brookings report tells readers some things we already know (engineering programs are where to go to make money), but provides a good—albeit partial—look at outcomes across an unusually broad swath of American higher education. I would advise readers to focus on comparing colleges with similar missions and goals, given the importance of occupation in determining earnings. I would also be more hesitant to use the metrics for very small colleges, where all of these measures can be influenced by a relatively small number of people. But the transparency of the methodology and use of new data sources make these value-added rankings a valuable contribution to the public discourse.


Review of “Designing the New American University”

Since Michael Crow became the president of Arizona State University in 2002, he has worked to reorganize and grow the institution into his vision of a ‘New American University.’ ASU has grown to over 80,000 students during his time as president through a commitment to admit all students who meet a relatively modest set of academic qualifications. At the same time, the university has embarked upon a number of significant academic reorganizations that have gotten rid of many traditional academic departments and replaced them with larger interdisciplinary schools. Crow has also attracted his fair share of criticism over the years, including for alleged micromanaging and his willingness to venture into online education. (I’ve previously critiqued ASU Online’s program with Starbucks, although many of my concerns have since been alleviated.)

Crow partnered with William Dabars, an ASU professor, to write Designing the New American University (Johns Hopkins University Press, $34.95 hardcover) to more fully explain how the ASU model works. The first several chapters of the book, although rather verbose, trace the development of the American research university. A key concept the authors raise is isomorphism—the tendency of organizations to resemble a leading organization in the market. Crow and Dabars contend that research universities have largely followed the lead of elite private universities such as Harvard and the big Midwestern land-grant universities that developed following the Civil War. Much has changed since then, so they argue that a new structure is needed.

Chapter 7 is the key chapter of the book, in which the authors detail the design of Arizona State as a ‘New American University’ (and make a nice sales pitch for the university in the process). Crow and Dabars celebrate the growth of Arizona State, which has been matched by only a small number of public research universities. They note that a stronger focus on access has hurt ASU in the U.S. News rankings, a key measure of prestige—while celebrating its ranking as an ‘Up and Coming School.’ (In the Washington Monthly rankings that I compile, ASU is a very respectable 28th.) The scale of ASU opens the possibility of cost-effective operations, something the university is trying to measure through its Center for Measuring University Performance.

It certainly seems like some elements of the changes at ASU could be adopted at other research universities, but it is worth noting that research universities make up only about 200-300 of the over 7,500 postsecondary institutions in the United States. I am left wondering what the ‘New American’ model would look like in other sectors of higher education, which is beyond the scope of this book but an important question to answer. Some other questions to consider are the following:

(1) How would a commitment to growth happen at colleges without the prestige or market power to attract significant numbers of out-of-state students?

(2) ASU seems to have done more academic reorganizations in research-intensive departments. How would this work at a more teaching-oriented institution?

(3) How will the continuing growth of ASU Online, as well as the multiple branch campuses in the Phoenix metropolitan area, affect the organizational structure? At what point, if any, does a university reach the maximum optimal size?

(4) Will ASU’s design remain the same once Michael Crow is not president? (And is that a good thing?)

Overall, this is a solid book that is getting a substantial amount of attention for good reason. While it could have been about 50 pages shorter without losing any of the important information, the final chapter is highly recommended reading. I plan to assign that chapter in my organization and governance classes in the future so students can understand how ASU is growing and succeeding through an atypical higher education model.


Analyzing the Heightened Cash Monitoring Data Release

NOTE: This post was updated April 3 to reflect the Department of Education’s latest release of data on heightened cash monitoring.

In my previous post, I wrote about the U.S. Department of Education’s release of a list of 544 colleges subject to heightened cash monitoring standards due to various academic, financial, and administrative concerns. I constructed a dataset of the 512 U.S. colleges known to be facing heightened cash monitoring (HCM) along with two other key accountability measures: the percentage of students who default on loans within three years (cohort default rates) and an additional measure of private colleges’ financial strength (financial responsibility scores). In this post, I examine the reasons why colleges face heightened cash monitoring, as well as whether HCM correlates with the other accountability metrics.

The table below shows the number of colleges facing HCM-1 (shorter delays in ED’s disbursement of student financial aid dollars; colleges not facing HCM face no such delays) and HCM-2 (longer delays) by type of institution (public, private nonprofit, and for-profit).

Table 1: HCM status by institutional type.

Sector                HCM-1   HCM-2
Public                   68       6
Private nonprofit        97      18
Private for-profit      284      39
Total                   449      63

While only six of 74 public colleges are facing HCM-2, more than one in ten private nonprofit (18 of 115) and for-profit colleges (39 of 323) are facing this higher standard of oversight. The next table shows the various reasons listed for why colleges are facing HCM.

Table 2: HCM status by reason for additional oversight.

Reason                               HCM-1   HCM-2
Low financial responsibility score     320       4
Financial statements late               66       9
Program review                           1      21
Administrative capability               22       7
Accreditation concerns                   1      12
Other                                   39      10

More than two-thirds (320) of the 449 colleges facing HCM-1 are included due to low financial responsibility scores (below 1.5 on a scale ranging from -1 to 3), but only four colleges are facing HCM-2 for that reason. The next most common reason, affecting 75 colleges, is a delayed submission of required financial statements or audits; this affected 43 public colleges in Minnesota, which make up a majority of the public colleges subject to HCM. Program review concerns were a main factor for HCM-2, with 21 colleges in this category (including many newly released institutions) facing that designation. Other serious concerns included administrative capability (22 in HCM-1 and 7 in HCM-2), accreditation (1 in HCM-1 and 12 in HCM-2), and a range of other factors (39 in HCM-1 and 10 in HCM-2).

The next table includes three of the most common or serious reasons for facing HCM (low financial responsibility scores, administrative capacity concerns, and accreditation issues) and examines their median financial responsibility scores and cohort default rates.

Table 3: Median outcome values on other accountability metrics.

Reason for inclusion in HCM           Financial responsibility score   Cohort default rate
Low financial responsibility score    1.2                              12.1%
Administrative capability             1.6                              20.3%
Accreditation issues                  2.0                               2.8%

Not surprisingly, the typical college subject to HCM for a low financial responsibility score had a score of 1.2 in Fiscal Year 2012, low enough to require additional federal oversight. The median cohort default rate for this group was 12.1%, slightly lower than the national default rate of 13.7%, but some of these colleges do not participate in the federal student loan program and are thus counted as zeroes. The median college with administrative capability concerns barely passed the financial responsibility test (with a score of 1.6), while 20.3% of its students defaulted. Colleges with accreditation issues (either academic or financial) had higher financial responsibility scores (a median of 2.0) and lower cohort default rates (2.8%).
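For those who download the dataset from my earlier post, the tables above can be reproduced with a few lines of pandas. The file and column names below are hypothetical placeholders; adjust them to match the actual file.

```python
# Sketch of how the tables above could be reproduced from the compiled
# dataset, assuming hypothetical column names ('sector', 'hcm_level',
# 'reason', 'frs', 'cdr'). Adjust names to match the actual file.
import pandas as pd

df = pd.read_csv("hcm_dataset.csv")   # the dataset from the previous post

# Tables 1 and 2: counts of colleges by sector/reason and HCM level
print(pd.crosstab(df["sector"], df["hcm_level"], margins=True))
print(pd.crosstab(df["reason"], df["hcm_level"]))

# Table 3: median financial responsibility score and cohort default rate
# for the three reasons highlighted above
reasons = ["Low financial responsibility score",
           "Administrative capability",
           "Accreditation issues"]
print(df[df["reason"].isin(reasons)]
        .groupby("reason")[["frs", "cdr"]]
        .median())
```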

What does this release of heightened cash monitoring data tell us? Since most colleges are on the list for already-known concerns (low financial responsibility scores or accreditation issues) or rather silly errors (forgetting to submit financial statements on time), the value is fairly limited. But there is still some value, particularly in the administrative capability category. Those colleges deserve additional scrutiny, and the release of this list will help provide exactly that.


New Data on Heightened Cash Monitoring and Accountability Policies

Earlier this week, I wrote about the U.S. Department of Education’s pending release of a list of colleges that are currently subject to heightened cash monitoring requirements. On Tuesday morning, ED released the list of 556 colleges (updated to 544 on Friday), thanks to dogged reporting by Michael Stratford at Inside Higher Ed (see his take on the release here).

My interest lies in comparing the colleges facing heightened cash monitoring (HCM) to two other key accountability measures: the percentage of students who default on loans within three years (cohort default rates) and an additional measure of private colleges’ financial strength (financial responsibility scores). I have compiled a dataset with all of the domestic colleges known to be facing HCM, their cohort default rates, and their financial responsibility scores.

That dataset is available for download on my site, and I hope it is useful for those interested in examining these new data on federal accountability policies. I will have a follow-up post with a detailed analysis, but at this point it is more important for me to get the data out in a convenient form to researchers, policymakers, and the public.

DOWNLOAD the dataset here.


Why is it So Difficult to Sanction Colleges for Poor Performance?

The U.S. Department of Education has the ability to sanction colleges for poor performance in several ways. A few weeks ago, I wrote about ED’s most recent release of financial responsibility scores, which require colleges deemed financially unstable to post a letter of credit with the federal government before receiving financial aid dollars. ED can also strip a college’s federal financial aid eligibility if too high a percentage of students default on their federal loans, if data are not provided on key measures such as graduation rates, or if laws such as Title IX (prohibiting discrimination based on sex) are not followed.

The Department of Education can also sanction colleges by placing them on Heightened Cash Monitoring (HCM), which requires additional documentation and a hold on funds before student financial aid dollars are released. Corinthian Colleges, which partially collapsed last summer, blamed suddenly imposed HCM requirements for its collapse, as the hold on funds left it short on cash. Notably, ED has the authority to determine which colleges should face HCM without relying upon a fixed and transparent formula.

In spite of the power of the HCM designation, ED had previously refused to release a list of which colleges are subject to it. The outstanding Michael Stratford at Inside Higher Ed tried to get the list for nearly a year through a Freedom of Information Act request (which was mainly denied due to concerns about hurting colleges’ market positions), finally making the dispute public in an article last week. That sunlight proved to be a powerful disinfectant, as ED relented late Friday and will publish a list of the names this week.

The fight over releasing the HCM list is but one of many difficulties the Department of Education has had in sanctioning colleges for poor performance across different dimensions. Last fall, the cohort default rate measures were tweaked at the last minute, which had the effect of allowing more colleges to pass and retain access to federal aid. Financial responsibility scores have been challenged over concerns that ED’s calculations are incorrect. Gainful employment metrics are still tied up in court, and tying any federal aid dollars to college ratings appears to have no chance of passing Congress at this point. Notably, these sanctions are rarely due to direct concerns about academics, as academic matters are left to accreditors.

Why is it so difficult to sanction poorly-performing colleges, and why is the Department of Education so hesitant to release performance data? I suggest three reasons below, and I would love to hear your thoughts in the comments section.

(1) The first reason is the classic political science axiom of concentrated benefits (to colleges) and diffuse costs (to students and the general public). Since there is a college in every Congressional district (Andrew Kelly at AEI shows the median district had 11 colleges in 2011-12), colleges and their professional associations can put up a fight whenever they feel threatened.

(2) Some of these accountability measures are either all-or-nothing in nature (such as default rates) or incredibly costly for financially struggling colleges (HCM or posting a letter of credit for a low financial responsibility score). More nuanced systems with a sliding scale might make some sanctions possible, and this is a possible reform under Higher Education Act reauthorization.

(3) The complex relationship between accrediting bodies and the Department of Education leaves ED unable to directly sanction colleges for poor academic performance. A 2014 GAO report suggested accrediting bodies also focus more on finances than academics and called for a greater federal role in accreditation, something that will not sit well with colleges.

I look forward to seeing the list of colleges facing Heightened Cash Monitoring be released later this week (please, not Friday afternoon!) and will share my thoughts on the list in a future piece.
