Comments on Federal College Rating Metrics

The U.S. Department of Education (ED) released a document containing draft metrics for the Postsecondary Institution Ratings System (PIRS) today (link via Inside Higher Ed), with a request for comments from stakeholders and the general public by February. Although the release of the metrics was delayed several months (and we were initially expecting ratings this fall instead of just some potential metrics), the metrics and ED's accompanying explanations offer insight into what the ratings will look like if (and when) they are finalized. Below are some of the key pieces of the released metrics, along with my comments.

 

Which colleges will be rated, and how will they be grouped? ED is planning to rate degree-granting and certificate-granting two-year colleges separately from four-year colleges. They are still considering whether to have finer gradations among four-year colleges. Given the substantial differences in mission and completion rates between associate’s degree-granting and certificate-granting two-year colleges, I strongly recommend separating the two groups. Four-year colleges can all be rated together if input adjustments are used, or they can be put into much smaller peer groups (the latter seems to be what colleges prefer).

 

Leaving non-degree-granting colleges out of PIRS sounds trivial, but it leaves out a fair number of small for-profit colleges. I think many of the colleges not subject to PIRS will be subject to gainful employment, should that survive its latest legal challenge. Given that gainful employment has financial consequences while PIRS does not at this point, the colleges left out of PIRS are subject to more stringent accountability than many of those in PIRS.

 

What will the ratings categories and scoring system look like? I’m glad to see ED considering three rating categories: high-performing, in the middle, and low-performing. That’s about all the fine gradation the data can support, in my view, and it is far more politically feasible to have fewer ratings categories. No information was provided about how individual metrics will be weighted or scored, which likely indicates that ED is still in the preliminary stage on PIRS.

 

What metrics are being considered? And which ones do we already have data on? The metrics fall into three main categories: access, affordability, and student outcomes.

 

Access: Percent Pell, distribution of expected family contributions (EFC), enrollment by family income quintile, percent first-generation. Percent Pell and enrollment by family income quintile are already collected by the Department of Education, although these measures have gaps because not all students from low-income families file the Free Application for Federal Student Aid (FAFSA). The EFC distribution measure is intriguing, but it’s not currently collected. Perhaps considering the percentage of students with zero EFC (who have the least ability to pay) would make sense. The FAFSA asks students about parental education, so first-generation status could be made available in a few years. There is a question of how to define first-generation status, as it could include a student whose parents have some college but no degree or be limited to those with no college experience.

 

Affordability: Net price of attendance (overall and by income quintile). The net price reflects the total cost of attendance (tuition, fees, books/supplies, and living costs) less all grant or scholarship aid received. It’s a good measure to include, even if it can be gamed by institutions that cut their living allowances to absurdly low levels or use income from the FAFSA instead of the CSS PROFILE (where more sources are counted). I’m surprised not to see student debt burdens or borrowing included here as an affordability measure.
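As a rough illustration of the net price calculation described above, and of the living-allowance gaming concern, here is a short sketch. All dollar figures are hypothetical, not drawn from any actual institution:

```python
# Sketch of the net price calculation: total cost of attendance
# (tuition, fees, books/supplies, living costs) minus all grant aid.
# All figures below are hypothetical.

def net_price(tuition_and_fees, books_supplies, living_costs, grant_aid):
    """Net price = cost of attendance - grant/scholarship aid."""
    cost_of_attendance = tuition_and_fees + books_supplies + living_costs
    return cost_of_attendance - grant_aid

# Two colleges with identical tuition and aid, but the second reports
# an implausibly low living allowance -- and thus a lower net price.
honest = net_price(10000, 1200, 12000, 8000)
lowball = net_price(10000, 1200, 6000, 8000)
print(honest, lowball)  # 15200 9200
```

The same real cost to the student produces a $6,000 lower reported net price, which is why the living-cost component matters for comparability.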

 

Outcomes: Graduation and transfer rates, short-term employment, longer-term earnings, graduate school attendance, and “loan performance outcomes.” As of right now, the only measures available are graduation/transfer rates (for first-time, full-time students) and student loan repayment. ED is working to improve the graduation and transfer metrics by 2017, which is welcome. I’m intrigued by how loan performance was described:

 

“Relatively simple metrics like the percentage of students repaying their loans on time might be important as consumers weigh whether or not they will be able to handle their financial obligations after attending a specific school.”

 

This differs from the standard cohort default rate, which counts a borrower as defaulted only after going at least 270 days without making a payment. Measuring the percentage of students in current repayment would show fewer students having a successful outcome, but it better reflects former students’ loan performance than a cohort default rate does. Kudos to ED for making this suggestion.
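To see how the two metrics diverge, here is a small sketch with entirely made-up borrower records (days delinquent on federal loans); the 270-day default threshold is the standard one, but everything else is illustrative:

```python
# Hypothetical borrower records: days delinquent on loan payments.
# Default under the cohort default rate means 270+ days delinquent;
# an on-time repayment metric instead asks who is fully current.

borrowers_days_delinquent = [0, 0, 15, 45, 120, 300, 0, 90, 400, 0]

n = len(borrowers_days_delinquent)
defaulted = sum(1 for d in borrowers_days_delinquent if d >= 270)
current = sum(1 for d in borrowers_days_delinquent if d == 0)

print(f"cohort default rate: {defaulted / n:.0%}")    # 20%
print(f"on-time repayment rate: {current / n:.0%}")   # 40%
```

In this toy cohort, 80% of borrowers avoid default, but only 40% are repaying on time: the repayment measure makes far more former students look unsuccessful, which is exactly the trade-off noted above.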

 

I see employment, earnings, and graduate enrollment outcomes as being good things to consider, but they won’t be ready to include in PIRS for several years. The ban on student unit record data makes tracking employment and earnings difficult unless ED relies on colleges to self-report data from their former students. It’s worth emphasizing the importance of including dropouts as well as graduates in these metrics. Graduate enrollment could in theory be done with the National Student Clearinghouse, but colleges may not want to participate in the voluntary system if it is used for accountability.

 

Any other surprises? I was pleasantly surprised to see ED include a section on considering how to reward colleges for improving their outcomes over time. This might be a way to get around the question of how to adjust for student inputs and institutional resources, or it could be a piece designed to bring more colleges to the discussion table.

 

What does all of this mean? It appears that PIRS is very much in its infancy at this point, given the broadness of the suggested metrics and the difficulty in getting data on some of them in the next year or two. Putting college ratings together is methodologically quite easy to do, but politically very difficult. The delay in the timeline and the call for additional feedback by February highlight the political difficulty of PIRS. Given the GOP takeover of Congress, I think it’s safe to say that even if a full set of ratings comes out next week, the likelihood of ratings being tied to aid by 2018 (as the President has proposed) is basically nil. (For more on why I think PIRS is a difficult political sell, read my new piece in Politico Magazine.) But even getting draft ratings ready for the start of the 2015-16 academic year will be very difficult. ED has a lot of work to do before then.

 

But PIRS does have the potential to substantially improve data availability and transparency on a number of important student outcomes, even without becoming a high-stakes accountability system. I expect that college access organizations, higher education publications, guidance counselors, and even those of us in the rankings business will work to get any new data sources out to students and their families in a consumer-friendly format. That may be the lasting legacy of PIRS.

 

About Robert

I am an assistant professor of higher education at Seton Hall University. All opinions are my own.

2 Responses to Comments on Federal College Rating Metrics

  1. bjhosch says:

    There appears to be some disconnect between the first-generation criterion in the PIRS framework, which talks about neither parent “attempting” college, vs. the actual question from the FAFSA, which asks whether each parent has “completed” college or beyond. In our data there’s a 10-15 point gap between those groups.

    Actual FAFSA questions:
    Q 24/25 Highest school completed by Parent 1 / Parent 2
    -Middle school/Jr. high
    -High school
    -College or beyond
    -Other/unknown

  2. Pingback: Public Comments to the Department of Education on College Ratings | Kelchen on Education
