Testimony to the Advisory Committee on Student Financial Assistance

Below is a copy of my September 12, 2014 testimony at the Advisory Committee on Student Financial Assistance’s hearing regarding the Postsecondary Institution Ratings System (PIRS):

Good morning, members of the Advisory Committee on Student Financial Assistance, Department of Education officials, and other guests. My name is Robert Kelchen and I am an assistant professor in the Department of Education Leadership, Management and Policy at Seton Hall University and the methodologist for Washington Monthly magazine’s annual college rankings. All opinions expressed in this testimony are my own, and I thank the Committee for the opportunity to present.

I am focusing my testimony on PIRS as an accountability mechanism, as accountability appears to be the Obama Administration's stated goal in developing the ratings. A student-friendly ratings tool can have value, but I am confident that third parties can use the Department's data to develop a better one. The Department should not simultaneously develop a consumer-oriented ratings system, nor should it release a draft of PIRS without showing where colleges stand under the proposed system. I am also not taking a position on the utility of PIRS as an accountability measure, as the value of the system depends on details that have not yet been decided.

The Department has a limited number of potential choices for metrics in PIRS regarding access, affordability, and outcomes. While I will submit comments on a range of metrics for the record, I would like to discuss earnings metrics today. In order to not harm colleges that educate large numbers of teachers, social workers, and others who have important but lower-salary jobs, I encourage the Department to adopt an earnings metric indexed to the federal poverty guidelines. For example, the cutoff could be 150% of the federal poverty line for a family of two, or roughly $23,000 per year.
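The proposed cutoff is simple arithmetic over the published guidelines. Here is a minimal sketch; the 2014 HHS poverty guideline figures are my own illustration (48 contiguous states) and actual guidelines vary by year:

```python
# Sketch of a poverty-indexed earnings threshold, as proposed in the testimony.
# The 2014 HHS poverty guideline values below are assumptions for illustration
# (48 contiguous states; the guidelines change annually and by household size).
POVERTY_GUIDELINE_2014 = {1: 11_670, 2: 15_730, 3: 19_790, 4: 23_850}

def earnings_threshold(household_size: int, multiplier: float = 1.5) -> float:
    """Return the earnings cutoff as a multiple of the poverty guideline."""
    return multiplier * POVERTY_GUIDELINE_2014[household_size]

# 150% of the guideline for a family of two: roughly $23,000 per year.
print(earnings_threshold(2))  # 23595.0
```

Indexing the metric to the guidelines (rather than a fixed dollar amount) lets the cutoff rise automatically with the cost of living.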

There are a number of methodological decisions that the Department must make in developing PIRS. I focus on five in this testimony.

The first decision is whether to classify colleges into peer groups. While supporters of the idea say it is necessary for fairer comparisons of similar colleges, I do not believe it is necessary in a well-designed accountability system. I suggest combining all four-year institutions into one group and then separating two-year institutions based on whether more associate's degrees or certificates were awarded, as this distinction affects graduation rates.

Instead of placing colleges into peer groups, some outcomes should be adjusted for inputs such as student characteristics and selectivity. This partially controls for important differences across colleges that are correlated with outcomes, providing an estimate of a college’s “value-added” to students. But colleges should also be held to minimum outcome standards (such as a 25% graduation rate) in addition to minimum value-added standards.

The scoring system and number of colleges in each rating tier are crucial to the potential feasibility and success of PIRS. A simple system with three or four carefully named tiers (no A-F grades, please!) is sufficient to identify the lowest-performing and highest-performing colleges. I would suggest three tiers with the lowest 10% in the bottom tier, the middle 80% in the next tier, and the highest 10% in the top tier. While the scores all have error due to data limitations, focusing on the bottom 10% makes it unlikely any college in the lowest tier has a true performance outside the bottom one-third of colleges. Using multiple years of data will also help reduce randomness in data; I use three years of data for the Washington Monthly rankings.
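The 10/80/10 tier scheme described above amounts to cutting the score distribution at the 10th and 90th percentiles. A minimal sketch, using hypothetical composite scores (the tier names here are placeholders, not the "carefully named" labels the Department would choose):

```python
# Minimal sketch of the three-tier scheme: bottom 10%, middle 80%, top 10%
# of colleges by composite score. Scores and tier names are hypothetical.
def assign_tiers(scores, low_pct=0.10, high_pct=0.10):
    """Map each score to 'low', 'middle', or 'high' by rank percentile."""
    ranked = sorted(scores)
    n = len(ranked)
    low_cut = ranked[int(n * low_pct)]          # score at the 10th percentile
    high_cut = ranked[int(n * (1 - high_pct))]  # score at the 90th percentile
    return ['low' if s < low_cut else 'high' if s >= high_cut else 'middle'
            for s in scores]

scores = list(range(100))  # 100 hypothetical college scores
tiers = assign_tiers(scores)
print(tiers.count('low'), tiers.count('middle'), tiers.count('high'))  # 10 80 10
```

In practice the scores fed into this step would be multi-year averages, per the point above about reducing randomness in the data.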

Finally, the Department must carefully consider how to weight individual metrics. While I would expect access, affordability, and outcomes to be equally weighted, the colleges in the top and bottom tier should not change much when different weights are used for each metric. If the Department finds the results are highly sensitive to model specifications, the utility of PIRS comes into question.

I conclude with three recommendations—two for the Department and one for the policy community. The Department must be willing to adjust ratings criteria as needed and accept feedback on the draft ratings from a wide variety of stakeholders. It must also begin auditing IPEDS data from a random sample of colleges to make sure the data are accurate, as the implications of incorrectly or falsely reported data are substantial. Finally, the policy community needs to continue to push for better higher education data. The Student Achievement Measure project has the potential to improve graduation rate reporting, and overturning the federal ban on unit record data would greatly improve the Department's ability to accurately measure colleges' performance.

Thank you once again for the opportunity to present and I look forward to answering any questions.


Rankings, Rankings, and More Rankings!

We're finally reaching the end of the college rankings season for 2014. Money magazine started off the season with its rankings of 665 four-year colleges based on "educational quality, affordability, and alumni earnings." (I generally like these rankings, in spite of the inherent limitations of using Rate My Professor scores and Payscale data in lieu of more complete information.) I jumped into the fray late in August with my friends at Washington Monthly for our annual college guide and rankings. This was closely followed by a truly bizarre list from the Daily Caller of "The 52 Best Colleges In America PERIOD When You Consider Absolutely Everything That Matters."

But like any good infomercial, there's more! Last night, the New York Times released its set of rankings focusing on how elite colleges are serving students from lower-income families. They examined the roughly 100 colleges with a four-year graduation rate of 75% or higher, only three of which (University of North Carolina-Chapel Hill, University of Virginia, and the College of William and Mary) are public. By examining the percentage of students receiving Pell Grants in the past three years and the net price of attendance (the total sticker price less all grant aid) for 2012-13, they created a "College Access Index" measuring how many standard deviations from the mean each college falls.
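The standard-deviations-from-the-mean construction is a z-score index. A rough sketch of how such an index could be computed, using toy numbers (the NYT's exact weighting and data are not public in this post, so this is illustrative only):

```python
# Rough sketch of a z-score index like the "College Access Index": standardize
# each metric (standard deviations from the mean) and average them. Net price
# is negated so that lower prices score higher. All figures are toy data.
from statistics import mean, stdev

def z_scores(values):
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

pell_share = [0.22, 0.15, 0.10, 0.25]    # hypothetical % Pell, last 3 years
net_price = [12000, 22000, 28000, 9000]  # hypothetical net price, 2012-13

pell_z = z_scores(pell_share)
price_z = z_scores([-p for p in net_price])  # cheaper = better
index = [(a + b) / 2 for a, b in zip(pell_z, price_z)]
best = index.index(max(index))
print(best)  # 3 (highest Pell share and lowest net price)
```

One design consequence: because z-scores are relative to the group mean, restricting the group to ~100 elite colleges (as the NYT did) changes every college's score compared with computing the index over all institutions.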

My first reaction upon reading the list is that it seems a lot like what we introduced in Washington Monthly’s College Guide this year—a list of “Affordable Elite” colleges. We looked at the 224 most selective colleges (including many public universities) and ranked them using graduation rate, graduation rate performance (are they performing as well as we would expect given the students they enroll?), and student loan default rates in addition to percent Pell and net price. Four University of California colleges were in our top ten, with the NYT’s top college (Vassar) coming in fifth on our list.

I’m glad to see the New York Times focusing on economic diversity in their list, but it would be nice to look at a slightly broader swath of colleges that serve more than a handful of lower-income students. As The Chronicle of Higher Education notes, the Big Ten Conference enrolls more Pell recipients than all of the colleges ranked by the NYT. Focusing on the net price for families making between $30,000 and $48,000 per year is also a concern at these institutions due to small sample sizes. In 2011-12 (the most recent year of publicly available data), Vassar enrolled 669 first-year students, of whom 67 were in the $30,000-$48,000 income bracket.

The U.S. News & World Report college rankings also came out this morning, and not much changed from last year. Princeton, which is currently fighting a lawsuit challenging whether the entire university should be considered a nonprofit enterprise, is the top national university on the list, while Williams College in Massachusetts is the top liberal arts college. Nick Anderson at the Washington Post has put together a nice table showing changes in rankings over five years; most changes wouldn’t register as being statistically significant. Northeastern University, which has risen into the top 50 in recent years, is an exception. However, as this great piece in Boston Magazine explains, Northeastern’s only focus is to rise in the U.S. News rankings. (They’re near the bottom of the Washington Monthly rankings, in part because they’re really expensive.)

Going forward, the biggest set of rankings for the rest of the fall will be the new college football rankings—as the Bowl Championship Series rankings have been replaced by a 13-person committee. (And no, Bob Morse from U.S. News is not a member, although Condoleezza Rice is.) I like Gregg Easterbrook's idea at ESPN about including academic performance as a component in college football rankings. That might be worth considering as a tiebreaker if the playoff committee gets deadlocked using on-field performance alone. They could also use the Washington Monthly rankings, but Minnesota will win a Rose Bowl before that happens.

[ADDENDUM: Let's also not forget about the federal government's effort to rate (not rank) colleges through the Postsecondary Institution Ratings System (PIRS). That is supposed to come out this fall, as well.]


Are “Affordable Elite” Colleges Growing in Size, or Just Selectivity?

A new addition to this year’s Washington Monthly college guide is a ranking of “Affordable Elite” colleges. Given that many students and families (rightly or wrongly) focus on trying to get into the most selective colleges, we decided to create a special set of rankings covering only the 224 most highly-competitive colleges in the country (as defined by Barron’s). Colleges are assigned scores based on student loan default rates, graduation rates, graduation rate performance, the percentage of students receiving Pell Grants, and the net price of attendance. UCLA, Harvard, and Williams made the top three, with four University of California campuses in the top ten.

I received an interesting piece of criticism of the list from Sara Goldrick-Rab, a professor at the University of Wisconsin-Madison (and my dissertation chair in graduate school). Her critique noted that the size of the school and the type of admissions standards are missing from the rankings. She wrote:

“Many schools are so tiny that they educate a teensy-weensy fraction of American undergraduates. So they accept 10 poor kids a year, and that’s 10% of their enrollment. Or maybe even 20%? So what? Why is that something we need to laud at the policy level?”

While I don’t think that the size of the college should be a part of the rankings, it’s certainly worth highlighting the selective colleges that have expanded over time compared to those which have remained at the same size in spite of an ever-growing applicant pool.

I used undergraduate enrollment data from the fall semesters of 1980, 1990, 2000, and 2012 from IPEDS for both the 224 colleges in the Affordable Elite list and 2,193 public and private nonprofit four-year colleges not on the list. I calculated the percentage change between each year and 2012 for the selective colleges on the Affordable Elite list and the other less-selective colleges to get an idea of whether selective colleges are curtailing enrollment.

[UPDATE: The fall enrollment numbers include all undergraduates, including nondegree-seeking students. This doesn't have a big impact on most colleges, but it does at Harvard, where about 30% of total undergraduate enrollment is not seeking a degree. This means that enrollment growth may be overstated. Thanks to Ben Wildavsky for leading me to investigate this point.]

The median Affordable Elite college enrolled 3,354 students in 2012, compared to 1,794 students at the median less-selective college. The percentage change at the median college between each year and 2012 is below:

Period Affordable Elite Less selective
2000-2012 10.9% 18.3%
1990-2012 16.0% 26.3%
1980-2012 19.9% 41.7%


The distribution of growth rates is shown below:

[Figure: distribution of enrollment growth rates for Affordable Elite and less-selective colleges]

So, as a whole, less-selective colleges are growing at a more rapid pace than the ones on the Affordable Elite list. But do higher-ranked elite colleges grow faster? The scatterplot below suggests not: the correlation between rank and growth is just -0.081, with higher-ranked colleges growing at slightly slower rates than lower-ranked colleges.

[Figure: scatterplot of enrollment growth versus Affordable Elite rank]

But some elite colleges have grown. The top ten colleges in the Affordable Elite list have the following growth rates:

Rank  Name (* means public)  2012 enrollment  % change from 2000  from 1990  from 1980
1 University of California–Los Angeles (CA)* 27941 11.7 15.5 28.0
2 Harvard University (MA) 10564 6.9 1.7 62.3
3 Williams College (MA) 2070 2.5 3.2 6.3
4 Dartmouth College (NH) 4193 3.4 11.1 16.8
5 Vassar College (NY) 2406 0.3 -1.8 1.9
6 University of California–Berkeley (CA)* 25774 13.7 20.1 21.9
7 University of California–Irvine (CA)* 22216 36.9 64.6 191.6
8 University of California–San Diego (CA)* 22676 37.5 57.9 152.5
9 Hanover College (IN) 1123 -1.7 4.5 11.0
10 Amherst College (MA) 1817 7.2 13.7 15.8


Some elite colleges have not grown since 1980, including the University of Pennsylvania, MIT, Boston College, and the University of Minnesota. Public colleges have generally grown slightly faster than private colleges (the UC colleges are a prime example), but there is substantial variation in their growth.


Are Some Elite Colleges Understating Net Prices?

As a faculty member researching higher education finance, I'm used to seeing the limitations in the federal data available to students and their families as they choose colleges. For example, the net price of attendance measure (defined as tuition and fees, room and board, books, and other expenses, less any grants received) covers only first-time, full-time students—and therefore excludes a lot of students with great financial need. But a new graphic-heavy report from The Chronicle of Higher Education on net price revealed another huge limitation of the net price data.
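For readers keeping the definition straight, the calculation itself is simple; everything interesting happens in which students and which income definition feed into it. A minimal sketch with hypothetical dollar figures:

```python
# Net price as defined above: total cost of attendance less grant aid.
# All dollar figures are hypothetical, for illustration only.
def net_price(tuition_fees, room_board, books, other, grants):
    cost_of_attendance = tuition_fees + room_board + books + other
    return cost_of_attendance - grants

print(net_price(30_000, 10_000, 1_200, 2_800, 35_000))  # 9000
```

The same formula yields very different averages depending on whose incomes place them in each bracket, which is exactly the FAFSA-versus-PROFILE problem discussed below.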

The report, titled "Are Poor Families Really Paying Half Their Income at Elite Colleges?" looked at the two ways that some of the most selective public and private colleges calculate household income. About 400 colleges require students to file the CSS/Financial Aid PROFILE (or PROFILE for short) in addition to the FAFSA in order to receive institutional aid; unlike the FAFSA, the PROFILE requires all but the lowest-income students to pay an application fee. Selective colleges require the PROFILE because it includes more questions about household assets than the FAFSA, with the goal of getting a more complete picture of middle-income and upper-income families' ability to pay for college. This form isn't really necessary for families with low incomes and little wealth, and it can serve as a barrier to attending certain colleges, as noted by Rachel Fishman of the New America Foundation.

The Chronicle piece looked at income data from Notre Dame, which provided both the FAFSA and PROFILE definitions of income. The PROFILE definition of family income resulted in far fewer students in the lowest income bracket (below $30,000 per year) than the FAFSA definition. Because Notre Dame targets more aid to the neediest students, the net price using PROFILE income below $30,000 (the very lowest-income students) was just $4,472 per year, compared to $11,626 using the FAFSA definition.

Notre Dame reported net prices to the Department of Education using the FAFSA definition of family income, which is the same way that all non-PROFILE colleges report income for net price. But the kicker in the Chronicle piece is that apparently some colleges use the PROFILE definition of income to generate net price data for the federal government. These selective colleges look much less expensive than a college like Notre Dame that reports data the way most colleges do, earning them great publicity. Reporting PROFILE-based net prices can also improve these colleges' performance on Washington Monthly's list of best bang-for-the-buck colleges, as we use the average net price paid by students from families making less than $75,000 per year in that metric. (But many of the elite colleges don't make the list, since they fail to enroll at least 20% Pell recipients in their student bodies.)

The Department of Education should put forth language clarifying that net price data should be based on the FAFSA definition of income and not the PROFILE definition that puts fewer students in the lower income brackets and results in a seemingly lower net price. Colleges can report both FAFSA and PROFILE definitions on their own websites, but federal data need to be consistent across colleges.


Building a Better Student Loan Default Measure

Student loan default rates have been a hot political topic as of late given increased accountability pressures at the federal level. Currently, colleges can lose access to all federal financial aid (grants as well as loans) if more than 25% of students defaulted on their loans within two years of leaving college for three consecutive cohorts. Starting later this year, the measure used will be the default rate within three years of leaving college, and the cutoff for federal eligibility will rise to 30%. (Colleges can appeal this result if there are relatively few borrowers.)

But few students should ever have to default on their loans given the availability of various income-based repayment (IBR) plans. (PLUS loans typically aren’t eligible for income-based repayment, but their default rates oddly aren’t tracked and aren’t used for accountability purposes.) If a former student enrolled in IBR falls on tough times, his or her monthly payment will go down—potentially to zero if income is less than 150% of the federal poverty line. As a result, savvy colleges should be encouraging their students to enroll in IBR in order to reduce default rates.
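The IBR payment logic described above can be sketched in a few lines. The 15% share reflects the original IBR formula as I understand it (newer plans use 10%), and the $15,730 two-person poverty guideline is an assumption for illustration:

```python
# Sketch of the IBR payment logic: the monthly payment is a share of
# "discretionary income" (income above 150% of the poverty line), and zero
# when income falls below that threshold. The 15% share reflects the
# original IBR formula; the $15,730 guideline is an illustrative assumption.
def ibr_monthly_payment(income, poverty_line=15_730, share=0.15):
    discretionary = max(0.0, income - 1.5 * poverty_line)
    return share * discretionary / 12

print(round(ibr_monthly_payment(20_000), 2))  # 0.0 (below 150% of poverty)
print(round(ibr_monthly_payment(40_000), 2))  # 205.06
```

This is why IBR makes raw default rates a weak accountability signal: a borrower with very low income owes $0 per month and cannot default, regardless of how well the college prepared them.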

And more students are enrolling in IBR. Jason Delisle at the New America Foundation analyzed new Federal Student Aid data out this week that showed that the number of students in IBR doubled from 950,000 to 1.9 million in the last year while outstanding loan balances went from $52.2 billion to $101.0 billion. The federal government’s total Direct Loan portfolio increased from $361.3 billion to $464.3 billion in the last year, meaning that IBR was responsible for nearly half of the increase in loan dollars.

This shift to IBR means that the federal government needs to consider new options for holding colleges accountable for their outcomes. Some options include:

(1) Use a longer default window. The "standard" loan repayment plan is ten years, but defaults are only tracked for three years. A longer window still wouldn't give an accurate picture of outcomes if more students enroll in IBR, but it would provide useful information on students who expect to do well enough after college that standard payments will be a better deal than IBR. This would probably require replacing the creaky National Student Loan Data System, which may not be able to handle that many more data requests.

(2) Look at the percentage of students who don't pay anything under IBR. This would measure the percentage of students making more than 150% of the poverty line, or about $23,000 per year for a former borrower with one other family member. Even with the woeful salaries in many public service jobs (such as teaching), most former students would likely have to pay something under this measure.

(3) Look at the total amount repaid compared to the amount borrowed. If the goal is to make sure the federal government gets its money back, a measure of the percentage of funds repaid might be useful. Colleges could even be held accountable for part of the unpaid amount if desired.
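Option (3) is the easiest of the three to state precisely. A sketch with hypothetical loan records (amount borrowed, amount repaid to date) for one college's cohort:

```python
# Sketch of option (3): the share of borrowed dollars a cohort has repaid.
# Loan records are hypothetical: (amount borrowed, amount repaid to date).
loans = [(10_000, 4_000), (25_000, 25_000), (8_000, 0)]

total_borrowed = sum(borrowed for borrowed, _ in loans)
total_repaid = sum(repaid for _, repaid in loans)
repayment_rate = total_repaid / total_borrowed
print(f"{repayment_rate:.1%}")  # 67.4%
```

A dollar-weighted measure like this also sidesteps the appeal process for colleges with few borrowers, since small cohorts simply carry small dollar amounts.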

As the Department of Education continues to develop draft college ratings (to come out later this fall), they are hopefully having these types of conversations when considering outcome measures. I hope this piece sparks a conversation about potential loan default or repayment measures that can improve upon the currently inadequate measure, so please offer your suggestions as comments below.


Quick Thoughts on the Ryan Higher Education Budget Discussion Draft

Representative Paul Ryan (R-WI) released a proposal called Expanding Opportunity in America this morning, which covered topics including social benefits, the Earned Income Tax Credit, education, criminal justice, and regulatory reform. My focus is on the higher education section, starting on page 44.

First of all, I’m glad to see a discussion of targeting federal funds right at the start of the higher education section. Ryan notes concerns about subsidies going to students who don’t need them (such as education tax credits going to households making up to $180,000 per year) and the large socioeconomic gaps in college completion. This is important to note for both economic efficiency and targeting middle-income voters.

The policy points are below:

  • Simplify the FAFSA. Most policymakers like this idea at this point, but the question is how to do so. The document doesn’t specify how it should be simplified, or if it should go as far as the Alexander/Bennet proposal to knock the FAFSA back to two questions. Ryan supports getting information about aid available to students in eighth grade and using tax data from two years ago (“prior prior year”) to determine aid eligibility, both of which make great sense. I’ve written papers on both early aid commitment and prior prior year.
  • Reform and modernize the Pell program. Ryan is concerned about the fiscal health of the Pell program and is looking for ways to shore up its finances. He raises the idea of using the Supplemental Educational Opportunity Grant (SEOG)—a Pell supplement distributed by campuses—to help fund Pell. I’ve written a paper about how SEOG and work-study allocations benefit very expensive private colleges over colleges that actually serve Pell recipients. It’s a great idea to consider, but parts of One Dupont just may object. Ryan also suggests allowing students to use their Pell funds however they want (effectively restoring the summer Pell Grant), something which much of the higher education community supports.
  • Cap federal loans to graduate students and parents. This will prove to be a controversial recommendation, with the possibility of interesting political bedfellows. While many are concerned about rising debt and the fiscal implications, there are different solutions. The Obama Administration has instead proposed capping forgiveness at $57,500, while letting students borrow more. I’m conflicted as to what the better path is. Is it better to shift students to the private loan market to get any additional funds, or should they get loans with lower interest rates through the federal government that may result in a fiscal train wreck if loan forgiveness isn’t capped? The Ryan proposal has the potential to help slow the growth in college costs, but potentially at the expense of some students’ goals.
  • Consider reforms to the TRIO programs. TRIO programs serve low-income, first-generation families, but Ryan notes that there isn’t a lot of evidence supporting these programs. I admittedly don’t know as much about TRIO as I should, but I like the call for additional research before judging their effectiveness.
  • Expand funding for federal Work-Study programs. The proposal increases work-study funds through allowing colleges to keep expiring Perkins Loans funds instead of returning them to the federal government. This is the wrong way to proceed because Perkins allocations (and current work-study allocations) are also correlated with the cost of attendance. I would rather see a redistribution of work-study funds based on Pell Grant receipt instead of by cost of attendance, as I’ve noted previously.
  • Build stronger partnerships with post-secondary institutions. Most of this is empty platitudes toward colleges, but the last sentence is critical: “Colleges should also have skin in the game, to further encourage their commitment to outcome-based learning.” There seems to be some support on both sides of the aisle for holding institutions accountable for their performance through methods such as partial responsibility for loan defaults, tying financial aid to outcomes, or college ratings, but an agreement looks less likely at this point.
  • Reform the accreditation process. Ryan supports Senator Lee (R-UT)’s proposal to allow accreditors to certify particular courses instead of degree programs. This is a good idea in general, but the political landscape gets much trickier due to the existence of MOOCs, for-profit colleges (and course providers), and the power of the current higher education lobby. I’ll be interested to see how this moves forward.

Overall, the tenets of the proposal seem reasonable and some parts are likely to get bipartisan support. The biggest questions remaining are whether the Senate will be okay with the House passing Higher Education Act reauthorization components piecemeal (as they are currently doing) and what funding levels will look like for particular programs. In any case, these ideas should generate useful discussions in policy and academic circles.


Should Colleges Be Able to Determine Costs of Living?

I was reading through the newest National Center for Education Statistics report with just-released federal data on the cost of college and found some interesting numbers. (The underlying data are available under the “preliminary release” tab of the IPEDS Data Center.) Table 2 of the report shows the change in inflation-adjusted costs for tuition and fees, books and supplies, room and board, and other expenses included in the cost of attendance figure between 2011-12 and 2013-14.

Tuition and fees rose between three and five percent above inflation in public and private nonprofit two-year and four-year colleges between 2011-12 and 2013-14 while slightly dipping at for-profit colleges (perhaps a response to declining enrollment in that sector). Room and board for students living on campus at four-year colleges also went up about three percent faster than inflation, which seems reasonable given the increasing quality of amenities. But the other results struck me as a little odd.

A tweet I posted about these numbers got picked up by The Chronicle of Higher Education and led to a nice piece by Jonah Newman talking to me and a financial aid official about what could explain these results. In my view, there are three potential reasons why the other costs included in the cost of attendance measure could be falling:

(1) Students could be under such financial stress that they’re doing everything possible to cut back on costs at least partially within their control. Given the rising cost of college, this could potentially explain part of the drop.

(2) Colleges could be trying to keep the total cost of attendance—and thus the net price of attendance, which is the cost of attendance less all grant aid received—low for accountability and public shaming purposes. In my work as methodologist for the Washington Monthly college rankings, a college's net price factors into its score on the social mobility portion of the rankings and its position on our list of America's "Best Bang for the Buck" colleges. A higher net price could also hurt colleges in the Obama Administration's proposed college ratings, a draft of which is due to be released later this fall.

(3) Colleges could be trying to keep the cost of attendance low in order to limit student borrowing because students cannot borrow more than the total cost of attendance. Colleges may think that limiting student loan debt will result in lower default rates (a key accountability measure), and there is some evidence that the for-profit sector may be doing this even if it cuts off students' access to funds needed to pay for living expenses.

Looking at each of the individual components beyond tuition, fees, and room and board, book and supplies costs staying level with inflation or slightly falling in the nonprofit sector could be reasonable. Pushes to make textbook costs more transparent could be having an impact, as could the ability of students to rent books or access online course material at a lower price than conventional material.

While room and board for students living on campus increased 3-4 percentage points faster than inflation over the last two years, the cost of living off campus (not with family) was estimated to stay constant. However, as Ben Miller at the New America Foundation pointed out to me, some colleges cut their off-campus living expenses to implausibly low values.

The “other expenses” category (such as transportation, travel costs, and some entertainment) dropped between one and five percentage points. These drops could be a function of colleges not accurately capturing what it costs to live modestly because surveying students is an expensive and time-consuming proposition for understaffed financial aid offices. But it could also be a result of pressure from administrators or trustees who want to keep the total cost (on paper) lower.

A potential solution would be to take the room and board estimates for off-campus students and the “other expenses” category out of the hands of colleges and instead use a regionally-adjusted measure of living expenses. The Department of Education could survey students at a selected number of representative colleges to get an idea of their expenses and whether they are what students need in order to be successful in college. They could use this survey to develop estimates that apply to all colleges. There is some precedent for doing this, as the cost of attendance estimates for Federal Work-Study and Supplemental Educational Opportunity Grant campus funding add a $9,975 living cost allowance and a $600 books and supplies allowance for all students. This should be adjusted for regional cost of living (and what costs actually are), but it’s something to consider going forward.
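The proposal above amounts to replacing college-reported figures with a standard allowance scaled by region. A minimal sketch: the $9,975 living and $600 books allowances come from the campus-based aid formula mentioned in the post, while the regional index values are purely hypothetical.

```python
# Sketch of the proposal: replace college-reported living costs with a
# standard allowance scaled by a regional cost-of-living index. The $9,975
# living and $600 books allowances are from the campus-based aid formula
# cited in the post; the regional index values are purely hypothetical.
LIVING_ALLOWANCE = 9_975
BOOKS_ALLOWANCE = 600
REGIONAL_INDEX = {'rural_midwest': 0.85, 'national': 1.00, 'coastal_city': 1.30}

def standardized_living_cost(region: str) -> float:
    return REGIONAL_INDEX[region] * LIVING_ALLOWANCE + BOOKS_ALLOWANCE

print(standardized_living_cost('national'))      # 10575.0
print(standardized_living_cost('coastal_city'))  # 13567.5
```

Taking these line items out of colleges' hands removes the incentive to understate them, at the cost of less precision for campuses with unusual local conditions.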


State Financial Aid Application Deadlines—A Lousy Rationing Tool

Financial aid reform has become a hot political topic in Washington as of late, with legislation introduced or pending from Senate Democrats, House Republicans, and the bipartisan pair of Senators Lamar Alexander (R-TN) and Michael Bennet (D-CO). (Here is a nice summary of the pieces of legislation from the National College Access Network.) All three of the proposals support the use of “prior prior year” or PPY, which would advance the financial aid application timeline by up to one year. Given the bipartisan support, this policy change may end up happening.

PPY could affect the deadlines for state financial aid applications in ways that could help some students and hurt others. (It could also affect institutional deadlines, but that’s a topic for another post.) Some state aid deadlines listed on the FAFSA are currently well before tax day on April 15, making it difficult for students to take advantage of the IRS Data Retrieval Tool that automatically populates the FAFSA with income tax data but takes up to three weeks to process. For example, five states (Illinois, Kentucky, North Carolina, Tennessee, and Vermont) recommend filing “as soon as possible” after January 1 in order to get funds before they run out. At least 15 states currently have deadlines before March 2, nearly six months before the start of the following academic year.

The table below shows the percentage of all students who filed the FAFSA in the 2012-13 academic year (the most recent year with complete data available) by March 31 and June 30, by state and dependency status. Two notes about the table: First, it only includes states with deadlines listed on the FAFSA, as other states are either unknown or on a first-come, first-served basis. Second, application data by state are only available by quarter at this point, although the good folks at Federal Student Aid have told me they hope to release data every month in the future.

Percent of FAFSAs Filed by State, Date, and Dependency Status
State  Deadline  Dep. filed by 3/31  Indep. filed by 3/31  Dep. filed by 6/30  Indep. filed by 6/30
IL 1/1 65% 39% 83% 64%
KY 1/1 63% 40% 80% 64%
NC 1/1 54% 33% 79% 61%
TN 1/1 59% 36% 81% 63%
VT 1/1 70% 39% 87% 66%
CT 2/15 66% 35% 85% 63%
ID 3/1 61% 39% 80% 65%
MD 3/1 67% 41% 83% 64%
MI 3/1 61% 35% 80% 60%
MT 3/1 62% 40% 81% 64%
OK 3/1 47% 30% 74% 59%
OR 3/1 67% 50% 82% 71%
RI 3/1 75% 50% 87% 69%
WV 3/1 72% 40% 86% 63%
CA 3/2 68% 43% 81% 65%
IN 3/10 82% 53% 89% 69%
FL 3/15 43% 30% 72% 60%
KS 4/1 57% 33% 80% 61%
MO 4/2 56% 34% 81% 63%
DE 4/15 51% 28% 82% 60%
ND 4/15 45% 30% 78% 61%
ME 5/1 72% 45% 89% 70%
MA 5/1 65% 36% 87% 66%
PA 5/1 55% 34% 86% 65%
AZ 6/1 45% 29% 73% 59%
NJ 6/1 53% 29% 83% 62%
IA 6/6 58% 32% 83% 62%
AK 6/30 51% 32% 75% 58%
DC 6/30 60% 33% 84% 62%
LA 6/30 31% 25% 74% 58%
NY 6/30 51% 30% 79% 62%
SC 6/30 43% 28% 76% 59%
MS 9/15 37% 25% 69% 56%
MN 10/1 44% 24% 77% 57%
OH 10/1 58% 34% 81% 62%

Source: Federal Student Aid data.

Among the 17 states with stated deadlines before March 31, Indiana students were the most likely to file by March 31 (with a March 10 deadline) and Florida students were the least likely to file (with a March 15 deadline). The differences (82% vs. 43% for dependent students and 53% vs. 30% for independent students) reflect the universality of Indiana’s state financial aid program compared to the much more targeted Florida program. In all states, dependents were far more likely to file by March 31 than independents, meaning that independent students were much less likely to even be considered for state financial aid programs. Students were more likely to file by March 31 in states with earlier aid deadlines, as evidenced by a correlation coefficient of about -0.55 for both independent and dependent students.

By June 30, all but three states (Mississippi, Minnesota, and Ohio) had passed their state aid deadlines, yet only about 81% of dependent and 63% of independent students had filed their FAFSA by that point. Some of the remaining students may choose to enroll in college for the spring semester only, but many are still planning to enroll in the fall semester. These students can still receive federal financial aid, but will miss out on state aid. The correlation between state aid deadlines and the percent of applications received by June 30 is weaker, on the order of -0.4.

So what does this mean? About 20% of dependent and 35% of independent students are likely to miss all state application deadlines under current rules, and about 60% of independent students currently miss state aid deadlines before March 31. These students are likely to have more financial need than earlier applicants, but are left out—as shown by recent research. A shift to PPY is likely to move up these state deadlines as states are unlikely to provide more money to student financial aid. The deadlines serve as a de facto rationing tool.

There are better ways to allocate these funds. Instead of using a first-come, first-served model with an artificially early deadline, states could give smaller awards to more students or assign grants via lottery to all students who apply before the start of the fall semester (say, August 1). Susan Dynarski has made a similar point regarding the current system on Twitter.
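
The lottery alternative can be sketched as below. The applicant count, grant size, and budget are hypothetical numbers chosen only for illustration:

```python
import random

def award_by_lottery(applicants, grant_amount, budget, seed=0):
    """Randomly select grant winners among all on-time applicants,
    instead of rewarding whoever happened to file first."""
    n_awards = int(budget // grant_amount)
    rng = random.Random(seed)  # fixed seed only so the example is reproducible
    pool = list(applicants)
    rng.shuffle(pool)
    return set(pool[:n_awards])

# Hypothetical example: 1,000 on-time applicants, $1,000 grants, $250,000 budget
winners = award_by_lottery(range(1000), 1000, 250_000)
print(len(winners))  # 250 students receive a grant, each with equal odds
```

Under this design, a student who files in July has the same chance as one who files in January, which removes the penalty on late (and typically needier) applicants.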

States need to consider whether their current application deadlines are shutting out students with the greatest financial need, and whether a move to PPY at the federal level will affect their plans. It is abundantly clear that the current system can be improved upon, and I hope states act to do so.


Exploring Trends in Pell Grant Receipt and Expenditures

The U.S. Department of Education released its annual report on the federal Pell Grant program this week, which is a treasure trove of information about the program’s finances and who is receiving grants. The most recent report includes data from the 2012-13 academic year, and I summarize the data and trends over the last two decades in this post.

Pell Grant expenditures decreased from $33.6 billion in 2011-12 to $32.1 billion in 2012-13, following a $2.1 billion decline the year before. Even so, after adjusting for inflation, Pell spending has increased 258% since the 1993-94 academic year.

[pell_fig1: Pell Grant expenditures by year, 1993-94 to 2012-13]

Part of the increase in spending is due to increases in the maximum Pell Grant over the last 20 years. Even though the maximum Pell Grant covers a smaller percentage of the cost of college now than it did 20 years ago, its inflation-adjusted value rose from $3,640 in 1993-94 to $5,550 in 2012-13.
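
The inflation adjustment behind figures like these can be sketched as below. The nominal 1993-94 maximum ($2,300) and the CPI-U annual averages are my own approximate assumptions, not taken from the report:

```python
# Approximate CPI-U annual averages (assumed values for illustration)
CPI = {1993: 144.5, 2012: 229.6}

def to_real_dollars(nominal, from_year, to_year=2012):
    """Convert nominal dollars to to_year dollars using the CPI ratio."""
    return nominal * CPI[to_year] / CPI[from_year]

# Assumed nominal 1993-94 maximum Pell Grant of $2,300
real_1993_max = to_real_dollars(2300, 1993)
print(round(real_1993_max))  # roughly $3,650 in 2012 dollars, close to the ~$3,640 cited
```

The choice of price index (CPI-U here, versus CPI-U-RS or the PCE deflator) shifts these figures somewhat, which is why inflation-adjusted Pell numbers vary slightly across sources.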

[pell_fig2: Maximum Pell Grant by year, adjusted for inflation]

The number of Pell recipients has also increased sharply in the last 20 years, going from 3.8 million in 1993-94 to just under 9 million in 2012-13. However, note the decline in the number of independent students in 2012-13, going from 5.59 million to 5.17 million.

[pell_fig3: Number of Pell recipients by year and dependency status]

Recent changes to the federal calculation formula have affected the number of students receiving an automatic zero EFC (and with it the maximum Pell Grant), which is given to dependent students, or independent students with dependents of their own, who meet income and federal program participation criteria. Between 2011-12 and 2012-13, the maximum income to qualify for an automatic zero EFC dropped from $31,000 to $23,000 due to Congressional action, resulting in a 25% decline in automatic zero EFCs. Most of these students still qualified for the maximum Pell Grant, but had to answer more questions on the FAFSA to do so.
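
As a minimal sketch of the threshold change, using the income cutoffs above and collapsing the other eligibility requirements into a single flag:

```python
def qualifies_auto_zero_efc(income, meets_other_criteria, year):
    """Simplified auto-zero EFC check: income must fall at or below the
    year's cutoff, and the student must satisfy the dependency and federal
    program participation criteria (collapsed into one flag here)."""
    cutoff = 31_000 if year == 2011 else 23_000  # cutoff dropped for 2012-13
    return meets_other_criteria and income <= cutoff

# A family earning $27,000 qualified in 2011-12 but not in 2012-13
print(qualifies_auto_zero_efc(27_000, True, 2011))  # True
print(qualifies_auto_zero_efc(27_000, True, 2012))  # False
```

Families in the $23,000-$31,000 band are exactly the group that lost the automatic designation; many still reached a zero EFC through the full formula, just with more paperwork.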

[pell_fig4: Number of automatic zero EFC recipients by year]

The number of students receiving a zero EFC (automatic or calculated) dropped by about 7% from 2011-12, or about 400,000 students, after more than doubling in the last six years. Part of this drop is likely due to students choosing a slowly recovering labor market over attending college.

[pell_fig5: Number of zero EFC recipients (automatic or calculated) by year]

UPDATE: Eric Best, co-author of “The Student Loan Mess,” asked me to put together a chart of the average Pell award by year after adjusting for inflation. Below is the chart, showing a drop of nearly $500 in the average inflation-adjusted Pell Grant in the last two years after a long increase.

[pell_fig6: Average inflation-adjusted Pell Grant by year]

I hope these charts are useful to show trends in Pell receipt and spending over time, and please let me know in the comments section if you would like to see any additional analyses.


The Starbucks-ASU Online Program: More Short than Venti?

I’ve got a piece in today’s Inside Higher Ed explaining why I don’t think Starbucks’s partnership with Arizona State University Online will result in a large number of degree completions. Starbucks is getting a lot of great PR for this program, some of which is deserved for making an opportunity available and for working with Inside Track to provide additional counseling to students. However, the conditions set forth in the announcement (extremely delayed reimbursement, the last-dollar nature of the program, and only one participating online institution) make it unlikely that the take-up rate will be very high.

Read the piece and let me know what you think!
