Top Ten (and Not Top Ten) Nominees Wanted!

As we get close to the end of 2014, I’m looking for suggestions regarding two year-in-review pieces in higher education policy that I’ll post the week of December 15. The first is my review of the ten most newsworthy happenings (or non-happenings) from the past year, and the second is my take on the worst events during the last year. My posts from last year are below:

Top 10 most newsworthy happenings

“Not top 10” list

Thank you in advance for your suggestions, and I’m looking forward to sharing the posts in a few weeks!


How to Calculate–and Not Calculate–Net Prices

Colleges’ net prices, which the U.S. Department of Education defines as the total cost of attendance (tuition and fees, room and board, books and supplies, and other living expenses) less all grant and scholarship aid, have received a lot of attention in the last few years. All colleges are required by the Higher Education Opportunity Act to have a net price calculator on their website, where students can get an estimate of their net price by inputting financial and academic information. Net prices are also used for accountability purposes, including in the Washington Monthly college rankings that I compile, and are likely to be included in the Obama Administration’s Postsecondary Institution Ratings System (PIRS) that could be released in the next several weeks.

Two recently released reports have looked at the net price of attendance, but only one of them is useful to either researchers or families considering colleges. A new Brookings working paper by Phillip Levine makes a good contribution to the net price discussion by making a case for using the median net price (instead of the average) for both consumer information and accountability purposes. He uses data from Wellesley College's net price calculator to show that the median low-income student faces a net price well below the listed average net price. The average is higher than the median at Wellesley because a small number of low-income students pay a high net price, while a much larger number of students pay a relatively low price; the outlying values for that small group pull the average up.
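To see why a few outliers separate the two statistics, here is a minimal sketch in Python with made-up numbers (not Wellesley's actual data): a handful of high-price students pulls the mean well above the median.

```python
import numpy as np

# Synthetic illustration only: most low-income students pay a modest net
# price, while a few pay far more.
rng = np.random.default_rng(42)
typical = rng.normal(8_000, 2_000, size=95)    # 95 students near $8,000
outliers = rng.normal(35_000, 5_000, size=5)   # 5 students near $35,000
net_prices = np.clip(np.concatenate([typical, outliers]), 0, None)

print(f"Mean net price:   ${net_prices.mean():,.0f}")
print(f"Median net price: ${np.median(net_prices):,.0f}")
# The five outliers raise the mean by roughly $1,300 while barely
# moving the median.
```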

I used data from the 2011-12 National Postsecondary Student Aid Study, a nationally-representative sample of undergraduate students, to compare the average and median net prices for dependent and independent students by family income quartile. The results are below:

Comparing average and median net prices by family income quartile.
Income Average 10th %ile 25th %ile Median 75th %ile 90th %ile
Dependent students: Parents’ income ($1,000s)
<30 10,299 2,500 4,392 8,113 13,688 20,734
30-64 13,130 3,699 6,328 11,077 17,708 24,750
65-105 16,404 4,383 8,178 14,419 21,839 30,174
106+ 20,388 4,753 9,860 18,420 27,122 39,656
Independent students: student and spouse’s income ($1,000s)
<7 10,972 3,238 5,000 8,889 14,385 22,219
7-19 11,114 3,475 5,252 9,068 14,721 22,320
20-41 10,823 3,426 4,713 8,744 14,362 21,996
42+ 10,193 3,196 4,475 7,931 13,557 20,795
SOURCE: National Postsecondary Student Aid Study 2011-12.


Across all family income quartiles for both dependent and independent students, the average net price is higher than the median net price. About 60% of students pay a net price at or below the average net price reported to IPEDS, suggesting that switching to reporting the median net price might improve the quality of available information.
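For readers working with survey microdata like NPSAS, here is a minimal sketch of how a weighted mean, median, and the share of students at or below the mean can be computed. The helper function, sample prices, and weights below are hypothetical, not actual NPSAS variables.

```python
import numpy as np

def weighted_quantile(values, weights, q):
    """Quantile of `values` under sampling `weights`, for q in [0, 1]."""
    order = np.argsort(values)
    v = np.asarray(values, float)[order]
    w = np.asarray(weights, float)[order]
    cum = np.cumsum(w) / np.sum(w)
    return np.interp(q, cum, v)

# Hypothetical microdata: student net prices with survey weights.
prices = np.array([2_500, 4_400, 8_100, 13_700, 20_700, 42_000])
weights = np.array([1.2, 0.9, 1.5, 1.1, 0.8, 0.3])

mean = np.average(prices, weights=weights)
median = weighted_quantile(prices, weights, 0.5)
share_at_or_below_mean = weights[prices <= mean].sum() / weights.sum()
print(f"mean=${mean:,.0f}, median=${median:,.0f}, "
      f"share at/below mean={share_at_or_below_mean:.0%}")
```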

The second report was the annual Trends in College Pricing report, published by the College Board. It concluded that net prices are modest and have actually decreased in several years during the last decade. However, their definition of "net price" suffers from two fatal flaws:

(1) "Net price" doesn't include all cost of attendance components. They publicize a "net tuition" measure and a "net tuition, fees, room and board" measure, but the cost of attendance also includes books and supplies as well as other living expenses such as transportation, personal care, and a small entertainment allowance. (For more on living costs, see the new working paper I have out with Braden Hosch of Stony Brook and Sara Goldrick-Rab of Wisconsin.) Excluding these components understates what students and their families should actually expect to pay for college, although living costs do vary across individuals.

(2) Tax credits are included with grant aid in their "net price" definition. Students and their families do not receive the tax credit until they file their taxes in the following year, meaning that costs incurred in August may be partially reimbursed the following spring. That does little to help families pay for college upfront, when the money is actually needed. Additionally, not all families that qualify for education tax credits actually claim them. In this New America Foundation blog post, Stephen Burd notes that about 25% of families don't claim tax credits—and this take-up rate is likely even lower among lower-income families.

Sadly, the College Board report has gotten a lot of attention in spite of its inaccurate net price definitions. I would like to see a robust discussion about the important Brookings paper and how we can work to improve net price data—with the correct definition used.


Gainful Employment and the Federal Ability to Sanction Colleges

The U.S. Department of Education's second attempt at "gainful employment" regulations, which apply to the majority of vocationally-oriented programs at for-profit colleges and certain nondegree programs at public and private nonprofit colleges, was released to the public this morning. The Department's first effort in 2010 was struck down by a federal judge after the for-profit sector challenged a loan repayment rate metric, on the grounds that it would require additional student data collection that is illegal under current federal law.

The 2014 measure was widely expected to contain two components: a debt-to-earnings ratio that required program completers' annual loan payments to be less than 8% of total income or 20% of "discretionary income" above 150% of the poverty line, and a cohort default rate measure that required fewer than 30% of program borrowers (regardless of completion status) to default on federal loans within three years. As excellent articles on the newly released measure in The Chronicle of Higher Education and Inside Higher Ed this morning detail, the cohort default rate measure was unexpectedly dropped from the final regulation. This change, Inside Higher Ed reports, reduces the number of affected programs from 1,900 to 1,400 and the number of affected students from about one million to 840,000.
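For concreteness, here is a rough sketch of the debt-to-earnings test described above. The function name and the default poverty line (the 2014 guideline for a one-person household, my own assumption) are simplifications, and the actual regulation also includes a "zone" between passing and failing.

```python
def passes_debt_to_earnings(annual_payment: float,
                            annual_earnings: float,
                            poverty_line: float = 11_670.0) -> bool:
    """Pass if annual loan payments are at most 8% of total earnings OR
    at most 20% of discretionary earnings (earnings above 150% of the
    poverty line). A simplified sketch of the thresholds in the rule."""
    if annual_payment <= 0.08 * annual_earnings:
        return True
    discretionary = annual_earnings - 1.5 * poverty_line
    return discretionary > 0 and annual_payment <= 0.20 * discretionary

# Completers earning $30,000 with $2,200/year in loan payments pass the
# 8% test ($2,200 <= $2,400); at $3,500/year they would fail both tests.
print(passes_debt_to_earnings(2_200, 30_000))  # True
print(passes_debt_to_earnings(3_500, 30_000))  # False
```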

There will be a number of analyses of the exact details of gainful employment over the coming days (I highly recommend anything written by Ben Miller at the New America Foundation), but I want to briefly discuss what the changes to the gainful employment rule mean for other federal accountability policies. Just over a month ago, the Department of Education released cohort default rate data, but it tweaked a calculation at the last minute in a way that allowed more colleges to get under the 30% default rate threshold at least once in three years and thereby avoid sanctions.

The last-minute changes to both the gainful employment and cohort default rate accountability measures highlight the political difficulty of the current sanctioning system, which operates on an all-or-nothing basis. When the only funding lever the federal government uses is so crude, colleges have a strong incentive to lobby against rules that could effectively shut them down. It is long past time for the Department of Education to consider sliding sanctions against colleges with less-than-desirable outcomes if the goal is to eventually cut off financial aid to the poorest-performing institutions.
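To make the contrast concrete, here is an illustrative sketch of a cliff-style sanction versus one possible sliding scale. The thresholds and the linear phase-in below are entirely my own invention, not a proposal from the Department.

```python
def cliff_sanction(default_rate: float) -> float:
    """All-or-nothing: lose all aid eligibility above the threshold."""
    return 1.0 if default_rate > 0.30 else 0.0

def sliding_sanction(default_rate: float) -> float:
    """Hypothetical sliding scale: aid reductions phase in linearly
    between a 20% and a 40% default rate."""
    if default_rate <= 0.20:
        return 0.0
    if default_rate >= 0.40:
        return 1.0
    return (default_rate - 0.20) / 0.20

for cdr in (0.15, 0.25, 0.31, 0.45):
    print(f"CDR {cdr:.0%}: cliff={cliff_sanction(cdr):.0%}, "
          f"sliding={sliding_sanction(cdr):.0%}")
```

Under the sliding version, a college at a 31% default rate would lose a bit over half of its aid rather than all of it, which lowers the stakes of sitting just above or below the line.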

Finally, the successful lobbying efforts of different sectors of higher education make it appear less likely that the Obama Administration’s still-forthcoming Postsecondary Institution Ratings System (PIRS) will be able to tie financial aid to college ratings. This measure still requires Congressional approval, but the Department of Education’s willingness to propose sanctions has been substantially weakened over the last month. It remains to be seen if the Department of Education under the current administration will propose how PIRS will be tied to aid before the clock runs out on the Obama presidency.


Comments on the CollegeNET-PayScale Social Mobility Index

The last two years have seen a great deal of attention paid to the social mobility function that many people expect colleges to perform. Are colleges giving students from lower-income families the tools and skills they need in order to do well (and good) in society? The Washington Monthly college rankings (which I calculate) were the first entrant in this field nearly a decade ago, and we also put out lists of the Best Bang for the Buck and Affordable Elite colleges in this year's issue. The New York Times put out a social mobility ranking in September that was essentially a more elite version of our Affordable Elite list, covering only about 100 colleges with four-year graduation rates of at least 75%.

The newest entrant in the cottage industry of social mobility rankings comes from PayScale and CollegeNET, an information technology and scholarship provider. Their Social Mobility Index (SMI) includes five components for 539 four-year colleges, with the following weights:

Tuition (lower is better): 126 points

Economic background (percent of students with family incomes below $48,000): 125 points

Graduation rate (apparently six years): 66 points

Early career salary (from PayScale data): 65 points

Endowment (lower is better): 30 points

The top five colleges in the rankings are Montana Tech, Rowan, Florida A&M, Cal Poly-Pomona, and Cal State-Northridge, while the bottom five are Oberlin, Colby, Berklee College of Music, Washington University, and the Culinary Institute of America.

Many people will critique the use of PayScale's data in rankings, and I would partially agree—although it's the best data available nationwide until the federal ban on unit record data is eliminated. My two main critiques of these rankings are the following:

Tuition isn't the best measure of college affordability. Judging by the numbers used in the rankings, it's clear that the SMI uses posted tuition and fees for affordability. This doesn't necessarily reflect what the typical lower-income student would actually pay, for two reasons: it excludes room, board, and other necessary expenses, and it ignores any grant aid. The net price of attendance (the total cost of attendance less all grant aid) is a far better measure of what students from lower-income families may pay, even though the SMI measure does capture sticker shock.

The weights are justified, but still arbitrary. The SMI methodology includes the following howler of a sentence:

“Unlike the popular periodicals, we did not arbitrarily assign a percentage weight to the five variables in the SMI formula and add those values together to obtain a score.”

Not to put my philosopher hat on too tightly, but any weights given in college rankings are arbitrarily assigned. A good set of rankings is fairly insensitive to changes in the weighting methodology, and the SMI methodology does not address whether its results are robust to different weights.
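One way to check that sensitivity, sketched below with synthetic data: recompute the rankings under randomly perturbed weights and see how strongly the orderings correlate. The 200 colleges and their component scores here are simulated; only the point weights come from the SMI methodology.

```python
import numpy as np

rng = np.random.default_rng(0)
n_colleges = 200
# Standardized component scores, oriented so higher is better on each.
scores = rng.normal(size=(n_colleges, 5))

# SMI point weights: tuition, economic background, graduation rate,
# early career salary, endowment.
base_w = np.array([126, 125, 66, 65, 30], dtype=float)

def ranks(composite):
    return np.argsort(np.argsort(-composite))  # rank 0 = best

base_ranks = ranks(scores @ base_w)

# Perturb each weight by up to +/-25% and compare the resulting order.
corrs = []
for _ in range(100):
    w = base_w * rng.uniform(0.75, 1.25, size=5)
    # Spearman correlation = Pearson correlation of the ranks.
    corrs.append(np.corrcoef(base_ranks, ranks(scores @ w))[0, 1])

print(f"mean rank correlation under perturbed weights: {np.mean(corrs):.3f}")
```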

I'm pleased to welcome another college rankings website to this increasingly fascinating mix of providers—and I remain curious about the extent to which these rankings (along with many others) will be used as either an accountability or a consumer information tool.


Do Student Loans Result in Tuition Increases? Why It’s So Hard to Tell

One of the longstanding questions in higher education finance is whether access to federal financial aid dollars is one of the factors behind tuition increases. This was famously stated by Education Secretary William Bennett in a 1987 New York Times editorial:

“If anything, increases in financial aid in recent years have enabled colleges and universities blithely to raise their tuitions, confident that Federal loan subsidies would help cushion the increase. In 1978, subsidies became available to a greatly expanded number of students. In 1980, college tuitions began rising year after year at a rate that exceeded inflation. Federal student aid policies do not cause college price inflation, but there is little doubt that they help make it possible.”

Since Secretary Bennett made his statement (now called the Bennett Hypothesis), more students are receiving federal financial aid. In 1987-1988, the average full-time equivalent student received $2,414 in federal loans, which rose to $6,374 in 2012-2013. The federal government has also increased spending on Pell Grants during this period, although the purchasing power of the grant has eroded due to large increases in tuition.

The Bennett Hypothesis continues to be popular in certain circles, as illustrated by comments by Dallas Mavericks owner and technology magnate Mark Cuban. In 2012, he wrote:

“The point of the numbers is that getting a student loan is easy. Too easy.

You know who knows that the money is easy better than anyone? The schools that are taking that student loan money in tuition. Which is exactly why they have no problems raising costs for tuition each and every year.

Why wouldn't they act in the same manner as real estate agents acted during the housing bubble? Raise prices and easy money will be there to pay your price. Good business, right? Until its not."

Recently, Cuban called for limiting student loans to $10,000 per year, as reported by Inc.:

“If Mark Cuban is running the economy, I’d go and say, ‘Sallie Mae, the maximum amount that you’re allowed to guarantee for any student in a year is $10,000, period, end of story.’  

We can talk about Republican or Democratic approaches to the economy but until you fix the student loan bubble–and that’s where the real bubble is–we don’t have a chance. All this other stuff is shuffling deck chairs on the Titanic.”

Cuban's plan wouldn't actually affect the vast majority of undergraduate students, as annual loan limits are often below $10,000: dependent students are limited to no more than $7,500 per year in subsidized and unsubsidized loans, and independent students are capped at $12,500 per year. But the cap would affect graduate students, who can borrow $20,500 per year in unsubsidized loans, as well as students and their families taking out PLUS loans, which are limited only by the cost of attendance.
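A quick sketch of which borrower types a $10,000 annual cap would actually bind, using the limits cited above (the dictionary and function names are mine):

```python
# Annual federal loan limits cited above.
ANNUAL_LIMITS = {
    "dependent undergraduate": 7_500,     # max subsidized + unsubsidized
    "independent undergraduate": 12_500,
    "graduate (unsubsidized)": 20_500,
}

def cap_binds(borrower_type: str, cap: float = 10_000) -> bool:
    """True if the existing limit exceeds the proposed cap."""
    return ANNUAL_LIMITS[borrower_type] > cap

for borrower in ANNUAL_LIMITS:
    print(f"{borrower}: cap binds = {cap_binds(borrower)}")
# PLUS loans, limited only by the cost of attendance, would be bound
# whenever that cost exceeds $10,000.
```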

Other commentators do not believe in the Bennett Hypothesis. An example of this is from David Warren, president of the National Association of Independent Colleges and Universities (the professional association for private nonprofit colleges). In 2012, he wrote that “the hypothesis is nothing more than an urban legend,” citing federal studies that did not find a relationship.

The research on the Bennett Hypothesis can best be classified as mixed, with some studies finding a modest causal relationship between federal financial aid and tuition increases and others finding no relationship. (See this Wonkblog piece for a short overview or Donald Heller’s monograph for a more technical treatment.) But for data reasons, the studies of the Bennett Hypothesis either focus on all financial aid lumped together (which is broader than the original hypothesis) or just Pell Grants.

So do student loans result in tuition increases? There is certainly a correlation between federal financial aid availability and college tuition, but the first rule of empirical research is that correlation does not imply causation. Establishing causality is extremely difficult given the near-universal nature of student loans and the lack of change in program rules over time; some variation in the loan program is essential in order to identify its effects separately from those of other types of financial aid.

In an ideal world (from a researcher’s perspective), some colleges would be randomly assigned to have lower loan limits than others and then longer-term trends in tuition could be examined. That, of course, is politically difficult to do. Another methodological possibility would be to look at the colleges that do not participate in federal student loan programs, which are concentrated among community colleges in several states. But the low tuition charges and low borrowing rates at community colleges make it difficult to even postulate that student loans could potentially drive tuition increases at community colleges.

A potential natural experiment (in which a change is introduced to a system unexpectedly) could have been the short-lived credit tightening of parent PLUS loans, which hit some historically black colleges hard. Students who could no longer borrow the full cost of attendance had to scramble to find other funding, which put pressure on colleges to find additional money for students. But the credit changes were partially reversed before colleges had to make long-term pricing decisions.
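Had the tightening persisted, one could imagine a difference-in-differences design comparing tuition at exposed and unexposed colleges before and after the credit change. Here is a toy sketch with simulated data; the effect size and all numbers are invented purely to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500  # hypothetical colleges

exposed = rng.integers(0, 2, n)      # 1 = heavy PLUS reliance
post = np.repeat([0, 1], n)          # pre/post periods, stacked
treat = np.tile(exposed, 2)
true_effect = -0.03                  # invented "true" effect on log tuition
log_tuition = (9.5 + 0.2 * treat + 0.05 * post
               + true_effect * treat * post + rng.normal(0, 0.1, 2 * n))

def cell_mean(g, p):
    return log_tuition[(treat == g) & (post == p)].mean()

# (treated post - treated pre) - (control post - control pre)
did = (cell_mean(1, 1) - cell_mean(1, 0)) - (cell_mean(0, 1) - cell_mean(0, 0))
print(f"DiD estimate: {did:.3f} (true effect: {true_effect})")
```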

I'm not too concerned about student loans driving tuition increases at the vast majority of institutions. I think the Bennett Hypothesis is likely strongest (meaning a modest relationship between loans and tuition) at the most selective undergraduate institutions and most graduate programs, as loan amounts can be substantial and access to credit is typically good. But without a way to identify variation in loan availability across similar institutions, that will remain a postulation.


Analyzing the New Cohort Default Rate Data

The U.S. Department of Education today released cohort default rates (CDRs) by college, which reflect the percentage of students who default on their loans within three years of entering repayment. This is a big deal for colleges, as any college with a CDR of more than 30% for three consecutive years could lose its federal financial aid eligibility. I analyzed what we can learn from CDRs—a limited amount—in a blog post earlier this week.

And then things got interesting in Washington. The Department of Education put out a release yesterday noting that some students with loans held by multiple servicers (known as "split servicers") were current on some loans and in default on others. In this release, ED noted that split-servicer students were being dropped from CDRs for the last three years—but only if a college was close to the eligibility threshold. This led many to question whether ED is serious about using CDRs as an accountability tool, as well as to try to glean implications for the upcoming college ratings system.

The summary data for cohort default rates by year and sector are available here, and they show a decline from a 14.7% default rate in Fiscal Year 2010 to 13.7% in FY 2011. Default rates in each major sector of higher education also fell, led by a decline from 21.8% to 19.1% in the for-profit sector. However, the FY 2009 and FY 2010 data in this release are identical to the same years in last year's release, which predates the split-servicer change. Something doesn't seem to be right there.

Twenty-one colleges are subject to sanctions under the new CDRs, all but one of which (Ventura Adult and Continuing Education) are for-profit. Most of the colleges subject to sanctions are small beauty or cosmetology institutions and reflect a very small percentage of total enrollment. We don’t know how many other colleges would have crossed over 30%, if not for the split servicer changes.

This year's data show some very fortunate colleges. Among colleges with a sufficiently high participation rate, six institutions had CDRs of between 29 and 29.9 percent after being over 30% in the previous two years. They are led by Paris Junior College, with a 29.9% CDR in FY 2011 after being over 40% in each of the two previous years. Other colleges weren't so lucky. For example, the Aviation Institute of Maintenance was at 38.9% in FY 2009 and 36.1% in FY 2010, and improved to 31.1% in FY 2011—but it is still subject to sanctions.

FY 2011 CDRs for colleges with FY 2009 and FY 2010 CDRs above 30%
Name FY 2011 FY 2010 FY 2009
SEARCY BEAUTY COLLEGE 9.3 30.7 38.2
NEW CONCEPT MASSAGE AND BEAUTY SCHOOL 9.7 30.1 35.2
UNIVERSITY OF ANTELOPE VALLEY 12 31.8 30.6
PAUL MITCHELL THE SCHOOL ESCANABA 12.1 40 68.7
SAFFORD COLLEGE OF BEAUTY CULTURE 13.1 36.8 36.3
COMMUNITY CHRISTIAN COLLEGE 13.9 33.3 38.8
UNIVERSITY OF SOUTHERNMOST FLORIDA 14.6 30.8 35.1
SOUTHWEST UNIVERSITY AT EL PASO 15.5 36.1 37.5
CENTRO DE ESTUDIOS MULTIDISCIPLINARIOS 15.6 39.2 50.9
VALLEY COLLEGE 17.2 36.9 32.7
AMERICAN BROADCASTING SCHOOL 17.5 30.8 44.6
SUMMIT COLLEGE 17.6 30.9 30.5
VALLEY COLLEGE 19.4 56.5 37.5
AMERICAN UNIVERSITY OF PUERTO RICO 21 31.2 36.6
BRYAN UNIVERSITY 21.1 30.2 30.4
SOUTH CENTRAL CAREER CENTER 22 32.6 35.1
PAUL MITCHELL THE SCHOOL ARKANSAS 22 37.5 30
D-JAY’S SCHOOL OF BEAUTY, ARTS & SCIENCES 22.2 37.5 41.9
PAUL MITCHELL THE SCHOOL GREAT LAKES 22.2 34.6 33.9
KILGORE COLLEGE 22.7 30.2 33.5
ANTONELLI COLLEGE 22.8 33 35.1
OLD TOWN BARBER COLLEGE 23 37.7 40
OZARKA COLLEGE 23.1 41.8 35
TESST COLLEGE OF TECHNOLOGY 23.4 33.7 32
CENTURA COLLEGE 23.7 32 35
RUST COLLEGE 23.7 32 31.6
CARSON CITY BEAUTY ACADEMY 23.8 31.8 43.3
BACONE COLLEGE 24.1 32 30
KAPLAN CAREER INSTITUTE 24.1 32.5 33.6
TECHNICAL CAREER INSTITUTES 24.3 38.8 34.9
VICTOR VALLEY COMMUNITY COLLEGE 24.6 32.6 31
SOUTHWESTERN CHRISTIAN COLLEGE 24.6 32.7 43.1
AMERICAN BEAUTY ACADEMY 24.8 35.7 34.6
CENTURA COLLEGE 24.8 31.5 34.7
DENMARK TECHNICAL COLLEGE 25 30.8 31.6
MILAN INSTITUTE OF COSMETOLOGY 25 32.4 41.5
TREND BARBER COLLEGE 25 43.5 60.5
JACKSONVILLE BEAUTY INSTITUTE 25.2 33.3 41.7
CONCEPT COLLEGE OF COSMETOLOGY 25.3 41.5 34.2
EASTERN OKLAHOMA STATE COLLEGE 25.4 31.8 30
OTERO JUNIOR COLLEGE 25.5 34.2 38.2
LANGSTON UNIVERSITY 25.5 32.5 32.9
COLLEGEAMERICA DENVER 25.5 34.8 38.3
AVIATION INSTITUTE OF MAINTENANCE 25.8 36.9 39.6
EMPLOYMENT SOLUTIONS 26 38.5 30
SANFORD-BROWN COLLEGE 26.2 31.6 31.5
CAMBRIDGE INSTITUTE OF ALLIED HEALTH AND TECHNOLOGY 26.6 33.3 35
ANTELOPE VALLEY COLLEGE 26.9 32.6 33.2
UNIVERSITY OF ARKANSAS COMMUNITY COLLEGE AT BATESVILLE 26.9 30.6 31.6
CC’S COSMETOLOGY COLLEGE 27.4 40.3 35.9
MILWAUKEE CAREER COLLEGE 27.6 34.1 32.7
NTMA TRAINING CENTERS OF SOUTHERN CALIFORNIA 27.8 32.1 34.2
CONCORDIA COLLEGE ALABAMA 27.9 31.4 37.5
NORTH AMERICAN TRADE SCHOOLS 28 31 31.1
AVIATION INSTITUTE OF MAINTENANCE 28.1 37.9 39.8
MEDIATECH INSTITUTE 28.4 33.3 33.3
SEBRING CAREER SCHOOLS 29 54.1 57.5
MOHAVE COMMUNITY COLLEGE 29.3 32.7 36.7
CHERYL FELL’S SCHOOL OF BUSINESS 29.4 38 31.2
AVIATION INSTITUTE OF MAINTENANCE 29.4 36.1 38.9
KLAMATH COMMUNITY COLLEGE 29.4 33 31.7
PARIS JUNIOR COLLEGE 29.9 40.7 41.5
STYLEMASTERS COLLEGE OF HAIR DESIGN 30.6 46.6 37
LASSEN COLLEGE 30.8 37.1 37.7
AVIATION INSTITUTE OF MAINTENANCE 31.1 37.5 32.2
CHARLESTON SCHOOL OF BEAUTY CULTURE 31.7 37.5 34
PALLADIUM TECHNICAL ACADEMY 33 39.4 46.2
L T INTERNATIONAL BEAUTY SCHOOL 38.1 37.7 38
TIDEWATER TECH 38.6 42.7 55
JAY’S TECHNICAL INSTITUTE 40.6 53.8 51.5
OHIO STATE COLLEGE OF BARBER STYLING 41.1 37.8 32.9
MEMPHIS INSTITUTE OF BARBERING 44.7 47.2 44.4
FLORIDA BARBER ACADEMY 46.5 41.7 32.5
SAN DIEGO COLLEGE 49.3 34 35.7

Fully 35 colleges with sufficient participation rates had CDRs between 29.0% and 29.9% in FY 2011, including a mix of small for-profit colleges, HBCUs, and community colleges. The University of Arkansas-Pine Bluff, a designated minority-serving institution, has had CDRs of 29.9%, 29.2%, and 29.8% in the last three years. Mt. San Jacinto College and Harris-Stowe State University also had CDRs just under 30% in each of the last three years. Only 19 colleges, representing a mix of institutional types, had CDRs between 30.0% and 30.9%. This includes Murray State College in Oklahoma, which was at 30.0% in FY 2011, 28.9% in FY 2010, and 31.1% in FY 2009. Forty-three colleges were between 28.0% and 28.9%.

FY 2011 CDRs between 29 and 31 percent
Name FY 2011 FY 2010 FY 2009
OHIO TECHNICAL COLLEGE 29 24.1 21.3
DAYMAR COLLEGE 29 28.9 46.2
SEBRING CAREER SCHOOLS 29 54.1 57.5
L’ESPRIT ACADEMY 29.1 0 0
BLACK RIVER TECHNICAL COLLEGE 29.1 27.9 26.6
NEW SCHOOL OF RADIO & TELEVISION 29.1 26.2 28.1
LOUISBURG COLLEGE 29.2 28.7 24.7
MOHAVE COMMUNITY COLLEGE 29.3 32.7 36.7
HARRIS SCHOOL OF BUSINESS 29.3 25.6 17.8
INTELLITEC MEDICAL INSTITUTE 29.3 27.1 24.7
GALLIPOLIS CAREER COLLEGE 29.3 33.9 29.4
CHERYL FELL’S SCHOOL OF BUSINESS 29.4 38 31.2
COLLEGE OF THE SISKIYOUS 29.4 27.7 27.1
AVIATION INSTITUTE OF MAINTENANCE 29.4 36.1 38.9
KLAMATH COMMUNITY COLLEGE 29.4 33 31.7
COLORLAB ACADEMY OF HAIR, THE 29.4 24.3 12.5
DIGRIGOLI SCHOOL OF COSMETOLOGY 29.4 21.6 23.5
VIRGINIA SCHOOL OF MASSAGE 29.4 14.8 22
WASHINGTON COUNTY COMMUNITY COLLEGE 29.5 20.5 12.7
MT. SAN JACINTO COLLEGE 29.5 29.9 26.5
WEST TENNESSEE BUSINESS COLLEGE 29.5 32.6 21.8
BRITTANY BEAUTY SCHOOL 29.5 31.9 26.4
JOHN PAOLO’S XTREME BEAUTY INSTITUTE, GOLDWELL PRODUCTS ARTISTRY 29.5 25 0
HARRIS – STOWE STATE UNIVERSITY 29.6 27.9 26.5
CARIBBEAN UNIVERSITY 29.6 29.9 29.9
GUILFORD TECHNICAL COMMUNITY COLLEGE 29.7 26 19
WARREN COUNTY CAREER CENTER 29.7 22.9 25
STARK STATE COLLEGE 29.7 24.5 17.2
STRAND COLLEGE OF HAIR DESIGN 29.7 17.9 11.1
INDEPENDENCE COLLEGE OF COSMETOLOGY 29.8 21.6 18.4
FRANK PHILLIPS COLLEGE 29.8 25.2 29.1
MEDICAL ARTS SCHOOL (THE) 29.8 21.6 13.1
NEW MEXICO JUNIOR COLLEGE 29.8 24.1 23.1
PARIS JUNIOR COLLEGE 29.9 40.7 41.5
UNIVERSITY OF ARKANSAS AT PINE BLUFF 29.9 29.2 29.8
MURRAY STATE COLLEGE 30 28.9 31.1
JARVIS CHRISTIAN COLLEGE 30 36.5 29.3
BUSINESS INDUSTRIAL RESOURCES 30.1 19.1 20.9
LONG BEACH CITY COLLEGE 30.1 24.2 19
EASTERN GATEWAY COMMUNITY COLLEGE 30.1 0 0
MARTIN UNIVERSITY 30.2 19.8 18.7
LANE COMMUNITY COLLEGE 30.2 30.6 19.5
CAREER QUEST LEARNING CENTER 30.2 24.1 16.1
NIGHTINGALE COLLEGE 30.3 25 16.6
EMPIRE BEAUTY SCHOOL 30.4 31.6 25.2
NATIONAL ACADEMY OF BEAUTY ARTS 30.4 20.6 5.6
BAR PALMA BEAUTY CAREERS ACADEMY 30.5 35.8 26.8
WEST VIRGINIA UNIVERSITY – PARKERSBURG 30.5 25.8 24.1
PENSACOLA SCHOOL OF MASSAGE THERAPY & HEALTH CAREERS 30.5 17.3 10
PROFESSIONAL MASSAGE TRAINING CENTER 30.6 14.8 13
UNIVERSAL THERAPEUTIC MASSAGE INSTITUTE 30.6 23.5 17.2
STYLEMASTERS COLLEGE OF HAIR DESIGN 30.6 46.6 37
CCI TRAINING CENTER 30.8 26.5 26.7
INSTITUTE OF AUDIO RESEARCH 30.8 29.7 17
LASSEN COLLEGE 30.8 37.1 37.7
KAPLAN CAREER INSTITUTE 30.8 34.6 29.7
TRANSFORMED BARBER AND COSMETOLOGY ACADEMY 30.9 66.6 0
MAYSVILLE COMMUNITY AND TECHNICAL COLLEGE 30.9 26.4 24.5
TRI-COUNTY TECHNICAL COLLEGE 30.9 27.2 16.1
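Replicating this kind of threshold analysis is straightforward once the CDR file is loaded. Here is a sketch with a hypothetical five-college slice; the column names are mine, not ED's.

```python
import pandas as pd

df = pd.DataFrame({
    "college": ["A", "B", "C", "D", "E"],
    "cdr_fy2011": [29.3, 30.1, 28.4, 29.9, 31.2],
    "cdr_fy2010": [32.7, 24.2, 26.0, 40.7, 37.5],
    "cdr_fy2009": [36.7, 19.0, 27.1, 41.5, 32.2],
})

just_under = df[df["cdr_fy2011"].between(29.0, 29.9)]
just_over = df[df["cdr_fy2011"].between(30.0, 30.9)]
print(len(just_under), "just under 30%;", len(just_over), "just over")

# "Fortunate" colleges: above 30% in FY 2009 and FY 2010, under 30% in FY 2011.
fortunate = df[(df.cdr_fy2009 > 30) & (df.cdr_fy2010 > 30) & (df.cdr_fy2011 < 30)]
print(fortunate["college"].tolist())  # ['A', 'D']
```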

Some of the larger for-profits fared better, potentially due to the split-servicer change. The University of Phoenix's CDR was 19.0% in FY 2011, down from 26.0% in FY 2010 and 26.4% in FY 2009. DeVry University was at 18.5% in FY 2011, down from 23.4% in FY 2010 and 24.1% in FY 2009. ITT Technical Institute also improved, going from 33.3% in FY 2009 to 28.6% and then 22.4% this year. (Everest College disaggregates its data by campus, but the results are similar.)

The CDR data are not without controversy, but they are an important accountability tool going forward. It will be interesting to see whether and how these data will be used in the draft Postsecondary Institution Ratings System later this fall.


What Are Cohort Default Rates Good For?

Today marks the start of the U.S. Department of Education's annual release of cohort default rates (CDRs), which reflect the percentage of students who default on their loans within three years of entering repayment. Colleges were informed of their rates today, with a release to the public coming sometime soon. This release, tracking students who entered repayment in Fiscal Year 2011, will be the third year that three-year CDRs have been collected, and it completes the shift from two-year to three-year CDRs for accountability purposes.

Before this year, colleges were subject to sanctions based on their two-year CDRs. Any college with a two-year CDR of more than 40% in a single year could lose its federal student loan eligibility, while any college with a two-year CDR of over 25% for three consecutive years could lose all federal financial aid eligibility. (Colleges with a very small percentage of borrowers can get an exemption.) While sanctions were a rare occurrence (fewer than ten colleges were affected last year), the switch to a three-year CDR has worried colleges even as the allowed rate rose from 25% to 30%.
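In code form, the two sanction regimes described above look roughly like this (a sketch; it ignores the exemption for colleges with very few borrowers):

```python
def sanctioned_two_year(cdrs: list) -> bool:
    """Old regime: lose loan eligibility with a single two-year CDR above
    40%, or all aid eligibility with CDRs above 25% three years running."""
    recent = cdrs[-3:]
    return max(recent) > 40 or all(r > 25 for r in recent)

def sanctioned_three_year(cdrs: list) -> bool:
    """New regime: three-year CDRs above 30% for three consecutive years."""
    return all(r > 30 for r in cdrs[-3:])

print(sanctioned_two_year([26.0, 27.5, 28.0]))    # True: over 25% three times
print(sanctioned_three_year([31.1, 29.5, 38.9]))  # False: dipped below 30% once
```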

But as the methodology changes, we need to consider what CDR data are actually good for. Colleges take cohort default rates very seriously, and the federal government is likely to use default rates as a component of the often-discussed (and frequently delayed) Postsecondary Institution Ratings System (PIRS). But should the higher education community, policymakers, or the general public take CDRs seriously? Below are some reasons why the default data are far from complete.

(1) Students are tracked over only three years, and income-based repayment makes the data less valuable. I have previously written about these two issues—and why it’s absurd that the Department of Education doesn’t track students over at least ten years. Income-based repayment means that students can be current on their payments even if their payments are zero, which is good for the student but isn’t exactly a ringing endorsement of a given college’s quality.

(2) Individual campuses are often aggregated to the system level, but this isn’t consistent. One of the biggest challenges as a researcher in higher education finance is that data on loan and grant volumes and student loan default rates come from Federal Student Aid instead of the National Center for Education Statistics. This may sound trivial, but some colleges aggregate FSA data to the system level for reporting purposes while all NCES data are at the campus level. This means that while default data on individual campuses within the University of Wisconsin System are available, data from all of the Penn State campuses are aggregated. Most for-profit systems also aggregate data, likely obscuring some individual branches that would otherwise face sanctions.

(3) Defaults are far from the only adverse outcome, but default is the only outcome with reported data. Students are not counted as being in default until no payment has been made for at least 271 days, but we have no idea of delinquency rates, hardship deferments, or forbearances related to financial problems by campus. As I recently wrote in a guest post for Access to Completion, the percentage of students having repayment difficulties ranges between 17% and 51%, depending on the assumptions made. But none of these other measures are reported by campus, even though many stakeholders would be interested in them.

Does this mean cohort default rates are good for absolutely nothing? No. They're still useful in identifying colleges (or systems) where a substantial percentage of students borrow and a large percentage of borrowers default quickly. And very low default rates can be a sign either that students are doing well in the labor market after leaving college or that they have the knowledge to enter income-based repayment programs. But for many colleges with middling default rates, far more data are needed (data the Department of Education collects but doesn't release) to get a better picture of performance.

When the CDR data come out, I’ll have part 2 of this post—focusing on the colleges that are subject to sanctions and what that means for current and future accountability systems.


Testimony to the Advisory Committee on Student Financial Assistance

Below is a copy of my September 12, 2014 testimony at the Advisory Committee on Student Financial Assistance’s hearing regarding the Postsecondary Institution Ratings System (PIRS):

Good morning, members of the Advisory Committee on Student Financial Assistance, Department of Education officials, and other guests. My name is Robert Kelchen and I am an assistant professor in the Department of Education Leadership, Management and Policy at Seton Hall University and the methodologist for Washington Monthly magazine’s annual college rankings. All opinions expressed in this testimony are my own, and I thank the Committee for the opportunity to present.

I am focusing my testimony on PIRS as an accountability mechanism, as that appears to be the Obama Administration's stated goal in developing ratings. A student-friendly rating tool can have value, but I am confident that third parties can use the Department's data to develop a better tool. The Department should not simultaneously develop a consumer-oriented ratings system, nor should it release a draft of PIRS without providing information about where colleges stand under the proposed system. I am also not taking a position on the utility of PIRS as an accountability measure, as the value of the system depends on details that have not yet been decided.

The Department has a limited number of potential choices for metrics in PIRS regarding access, affordability, and outcomes. While I will submit comments on a range of metrics for the record, I would like to discuss earnings metrics today. In order not to harm colleges that educate large numbers of teachers, social workers, and others who have important but lower-salary jobs, I encourage the Department to adopt an earnings metric indexed to the federal poverty guidelines. For example, the cutoff could be 150% of the federal poverty line for a family of two, or roughly $23,000 per year.

There are a number of methodological decisions that the Department must make in developing PIRS. I focus on five in this testimony.

The first decision is whether to classify colleges into peer groups. While supporters of the idea state that it is necessary in order to make fairer comparisons of similar colleges, I do not feel this is necessary in a well-designed accountability system. I suggest combining all four-year institutions into one group and then separating two-year institutions based on whether more associate's degrees or certificates were awarded, as this distinction affects graduation rates.

Instead of placing colleges into peer groups, some outcomes should be adjusted for inputs such as student characteristics and selectivity. This partially controls for important differences across colleges that are correlated with outcomes, providing an estimate of a college’s “value-added” to students. But colleges should also be held to minimum outcome standards (such as a 25% graduation rate) in addition to minimum value-added standards.

The scoring system and number of colleges in each rating tier are crucial to the potential feasibility and success of PIRS. A simple system with three or four carefully named tiers (no A-F grades, please!) is sufficient to identify the lowest-performing and highest-performing colleges. I would suggest three tiers with the lowest 10% in the bottom tier, the middle 80% in the next tier, and the highest 10% in the top tier. While the scores all have error due to data limitations, focusing on the bottom 10% makes it unlikely any college in the lowest tier has a true performance outside the bottom one-third of colleges. Using multiple years of data will also help reduce randomness in data; I use three years of data for the Washington Monthly rankings.

Finally, the Department must carefully consider how to weight individual metrics. While I would expect access, affordability, and outcomes to be equally weighted, the colleges in the top and bottom tier should not change much when different weights are used for each metric. If the Department finds the results are highly sensitive to model specifications, the utility of PIRS comes into question.

I conclude with three recommendations—two for the Department and one for the policy community. The Department must be willing to adjust ratings criteria as needed and accept feedback on the draft ratings from a wide variety of stakeholders. They also must start auditing IPEDS data from a random sample of colleges in order to make sure the data are accurate, as the implications of incorrectly-reported or falsely-reported data are substantial. Finally, the policy community needs to continue to push for better higher education data. The Student Achievement Measure project has the potential to improve graduation rate reporting, and overturning the federal ban on unit record data will greatly improve the Department’s ability to accurately measure colleges’ performance.

Thank you once again for the opportunity to present and I look forward to answering any questions.
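(As a footnote to the testimony: the three-tier scheme described above is simple enough to sketch in a few lines. The composite scores below are simulated; a real implementation would use the Department's data.)

```python
import numpy as np

rng = np.random.default_rng(7)
scores = rng.normal(size=1_000)  # hypothetical composite PIRS scores

# Bottom 10%, middle 80%, top 10%.
p10, p90 = np.percentile(scores, [10, 90])
tiers = np.where(scores < p10, "bottom",
                 np.where(scores > p90, "top", "middle"))
print({t: int((tiers == t).sum()) for t in ("bottom", "middle", "top")})
# {'bottom': 100, 'middle': 800, 'top': 100}
```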


Rankings, Rankings, and More Rankings!

We're finally reaching the end of the college rankings season for 2014. Money magazine started off the season with its rankings of 665 four-year colleges based on "educational quality, affordability, and alumni earnings." (I generally like these rankings, in spite of the inherent limitations of using Rate My Professors scores and PayScale data in lieu of more complete information.) I jumped into the fray late in August with my friends at Washington Monthly for our annual college guide and rankings. This was closely followed by a truly bizarre list from the Daily Caller of "The 52 Best Colleges In America PERIOD When You Consider Absolutely Everything That Matters."

But like any good infomercial, there’s more! Last night, the New York Times released its set of rankings focusing on how elite colleges are serving students from lower-income families. They examined the roughly 100 colleges with a four-year graduation rate of 75% or higher, only three of which (University of North Carolina-Chapel Hill, University of Virginia, and the College of William and Mary) are public. By examining the percentage of students receiving Pell Grants in the past three years and the net price of attendance (the total sticker price less all grant aid) for 2012-13, they created a “College Access Index” looking at how many standard deviations from the mean each college was.
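The standard-deviations approach is easy to replicate: standardize each component as a z-score, orient the signs so that higher is better, and average. Here is a sketch with invented numbers (these are not the Times' data):

```python
import numpy as np

pct_pell = np.array([0.22, 0.12, 0.17, 0.10])        # share receiving Pell
net_price = np.array([6_000, 9_500, 7_200, 12_000])  # net price, 2012-13

def z(x):
    return (x - x.mean()) / x.std()

# Higher Pell share is good; lower net price is good, so flip its sign.
access_index = (z(pct_pell) - z(net_price)) / 2
print(np.round(access_index, 2))
```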

My first reaction upon reading the list is that it seems a lot like what we introduced in Washington Monthly’s College Guide this year—a list of “Affordable Elite” colleges. We looked at the 224 most selective colleges (including many public universities) and ranked them using graduation rate, graduation rate performance (are they performing as well as we would expect given the students they enroll?), and student loan default rates in addition to percent Pell and net price. Four University of California colleges were in our top ten, with the NYT’s top college (Vassar) coming in fifth on our list.

I’m glad to see the New York Times focusing on economic diversity in their list, but it would be nice to look at a slightly broader swath of colleges that serve more than a handful of lower-income students. As The Chronicle of Higher Education notes, the Big Ten Conference enrolls more Pell recipients than all of the colleges ranked by the NYT. Focusing on the net price for families making between $30,000 and $48,000 per year is also a concern at these institutions due to small sample sizes. In 2011-12 (the most recent year of publicly available data), Vassar enrolled 669 first-year students, of whom 67 were in the $30,000-$48,000 income bracket.

The U.S. News & World Report college rankings also came out this morning, and not much changed from last year. Princeton, which is currently fighting a lawsuit challenging whether the entire university should be considered a nonprofit enterprise, is the top national university on the list, while Williams College in Massachusetts is the top liberal arts college. Nick Anderson at the Washington Post has put together a nice table showing changes in rankings over five years; most changes wouldn’t register as being statistically significant. Northeastern University, which has risen into the top 50 in recent years, is an exception. However, as this great piece in Boston Magazine explains, Northeastern’s only focus is to rise in the U.S. News rankings. (They’re near the bottom of the Washington Monthly rankings, in part because they’re really expensive.)

Going forward, the biggest set of rankings for the rest of the fall will be the new college football rankings—as the Bowl Championship Series rankings have been replaced by a 13-person committee. (And no, Bob Morse from U.S. News is not a member, although Condoleezza Rice is.) I like Gregg Easterbrook’s idea at ESPN about including academic performance as a component in college football rankings. That might be worth considering as a tiebreaker if the playoff committee gets deadlocked solely using on-field performance. They could also use the Washington Monthly rankings, but Minnesota has a better chance of winning a Rose Bowl before that happens.

[ADDENDUM: Let’s also not forget about the federal government’s effort to rate (not rank) colleges through the Postsecondary Institution Ratings System (PIRS). That is supposed to come out this fall, as well.]


Are “Affordable Elite” Colleges Growing in Size, or Just Selectivity?

A new addition to this year’s Washington Monthly college guide is a ranking of “Affordable Elite” colleges. Given that many students and families (rightly or wrongly) focus on trying to get into the most selective colleges, we decided to create a special set of rankings covering only the 224 most highly-competitive colleges in the country (as defined by Barron’s). Colleges are assigned scores based on student loan default rates, graduation rates, graduation rate performance, the percentage of students receiving Pell Grants, and the net price of attendance. UCLA, Harvard, and Williams made the top three, with four University of California campuses in the top ten.

I received an interesting piece of criticism regarding the list from Sara Goldrick-Rab, a professor at the University of Wisconsin-Madison (and my dissertation chair in graduate school). Her critique noted that the size of the school and the type of admissions standards are missing from the rankings. She wrote:

“Many schools are so tiny that they educate a teensy-weensy fraction of American undergraduates. So they accept 10 poor kids a year, and that’s 10% of their enrollment. Or maybe even 20%? So what? Why is that something we need to laud at the policy level?”

While I don't think that the size of the college should be a part of the rankings, it's certainly worth highlighting the selective colleges that have expanded over time compared to those that have remained the same size in spite of an ever-growing applicant pool.

I used undergraduate enrollment data from the fall semesters of 1980, 1990, 2000, and 2012 from IPEDS for both the 224 colleges in the Affordable Elite list and 2,193 public and private nonprofit four-year colleges not on the list. I calculated the percentage change between each year and 2012 for the selective colleges on the Affordable Elite list and the other less-selective colleges to get an idea of whether selective colleges are curtailing enrollment.

[UPDATE: The fall enrollment numbers include all undergraduates, including nondegree-seeking students. This doesn't have a big impact on most colleges, but it does at Harvard, where about 30% of total undergraduate enrollment is not seeking a degree. This means that enrollment growth may be overstated. Thanks to Ben Wildavsky for leading me to investigate this point.]

The median Affordable Elite college enrolled 3,354 students in 2012, compared to 1,794 students at the median less-selective college. The percentage change at the median college between each year and 2012 is below:

Period Affordable Elite Less selective
2000-2012 10.9% 18.3%
1990-2012 16.0% 26.3%
1980-2012 19.9% 41.7%
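A sketch of the growth calculation with hypothetical enrollment figures (the column names are mine, not IPEDS variable names):

```python
import pandas as pd

df = pd.DataFrame({
    "affordable_elite": [True, True, False, False],
    "enroll_1980": [21_800, 1_950, 1_200, 900],
    "enroll_2012": [27_941, 2_070, 1_794, 1_350],
})
df["pct_change_1980_2012"] = (df["enroll_2012"] / df["enroll_1980"] - 1) * 100
print(df.groupby("affordable_elite")["pct_change_1980_2012"].median().round(1))
```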


The distribution of growth rates is shown below:

[Figure: distribution of enrollment growth rates for Affordable Elite and less-selective colleges]

So, as a whole, less-selective colleges are growing at a more rapid pace than the ones on the Affordable Elite list. But do higher-ranked elite colleges grow faster? The scatterplot below suggests not really: the correlation between rank and growth is just -0.081, implying that higher-ranked colleges grow at slightly slower rates than lower-ranked colleges.

[Figure: scatterplot of enrollment growth versus Affordable Elite rank]

But some elite colleges have grown. The top ten colleges in the Affordable Elite list have the following growth rates:

Rank Name (* means public) 2012 enrollment % change from 2000 % change from 1990 % change from 1980
1 University of California–Los Angeles (CA)* 27941 11.7 15.5 28.0
2 Harvard University (MA) 10564 6.9 1.7 62.3
3 Williams College (MA) 2070 2.5 3.2 6.3
4 Dartmouth College (NH) 4193 3.4 11.1 16.8
5 Vassar College (NY) 2406 0.3 -1.8 1.9
6 University of California–Berkeley (CA)* 25774 13.7 20.1 21.9
7 University of California–Irvine (CA)* 22216 36.9 64.6 191.6
8 University of California–San Diego (CA)* 22676 37.5 57.9 152.5
9 Hanover College (IN) 1123 -1.7 4.5 11.0
10 Amherst College (MA) 1817 7.2 13.7 15.8


Some elite colleges have not grown since 1980, including the University of Pennsylvania, MIT, Boston College, and the University of Minnesota. Public colleges have generally grown slightly faster than private colleges (the UC colleges are a prime example), but there is substantial variation in their growth.
