The union representing Nassau Community College’s full-time faculty

How Do You Measure a Community College’s Success?


Posted by The NCCFT Executive Committee

Some of us have been around long enough to remember, maybe twenty or so years ago, when a SUNY-generated “community college report card” was making the rounds. Nassau Community College fell somewhere in the bottom half of the rankings on a couple of significant measures, one of them being our graduation rate, which was determined based on the assumption that the only graduate worth counting was a student who’d completed her or his coursework in two years. That standard, of course, failed to account for how the socioeconomic realities of our students’ lives—not to mention the personal obstacles so many of them face—make a two-year graduation timeline nearly impossible to meet. More to the point, the fact that extending that timeline beyond two years increased our graduation rate significantly didn’t seem to matter either to the people who gathered the data that went into the report card or to the people who proposed using it to make certain kinds of policy decisions.

If you were around back then, you’ll remember that performance-based funding, the idea that SUNY would determine how much money NCC received based in part on our ranking on that report card, was one of the things we were most worried about. That never happened, but, since then, the logic of performance-based funding has increasingly informed how higher education gets talked about by just about everybody except educators. You see it, for example, in much of the criticism that conservative politicians direct at us, like Marco Rubio did during the last presidential campaign, when—as quoted in Inside Higher Ed—he lamented that too many students “have thousands of dollars in student loans—for a degree that doesn’t lead to a job.” Rubio may not have called specifically for tying a college’s funding to whether or not the degrees it offers lead to jobs. That thinking, however, did inform “The Student Right to Know Before You Go Act of 2013,” which he cosponsored in the Senate and which called for the government to provide precisely the kind of information it now does on the College Scorecard, including graduation and retention rates, post-college earnings, and how much debt the students at a given institution take on.

The stated purpose of the Scorecard has nothing to do with government funding. Rather, it is supposed to assist people who want to go to college in figuring out which institutions are worth their investment. If you read through Nassau Community College’s scorecard, you’ll see that we score “about average” on most measures and significantly lower than average on two, one of which is still graduation rate. Ours is 20%, slightly less than half the national average of 42%. That number, however, includes only first-time, full-time students who returned to college after their freshman year and who finished their degree within three years—once again leaving out a significant percentage of the students we serve and failing entirely to consider that we successfully serve many students who did not come to Nassau Community College to get a degree.

If all these numbers did was misrepresent who we are to prospective students, that would be bad enough, but, as Eric Kelderman wrote recently in The Chronicle of Higher Education, “the stakes for [colleges who are assessed using] poor data are growing.” (If you’re not a subscriber, the full text of the article—“Colleges Face More Pressure on Student Outcomes, but Success Isn’t Always Easy to Measure”—is available through the Chronicle link under Publications on the NCC portal.) According to the article, accreditors are coming under increasing pressure from experts “to revoke their approval of colleges that perform poorly” on precisely the kinds of measures by which institutions like Nassau Community College are misrepresented: graduation rates, the rate of earnings after graduation, and the rate at which students default on federal student loans. Kelderman quotes Michael Itzkowitz, a former official with the Department of Education and one of the architects of the College Scorecard: “Ultimately, I think if a student is not leaving college with more opportunities, then we should question whether [the college] should continue.” Itzkowitz, who is now a senior policy advisor for Third Way, which bills itself as a centrist think tank, recently wrote a report for the organization, the opening sentences of which make this performance-based logic explicit:

How much money would you bet on a casino game where you only win 25% of the time? How about one where you only win 10% of the time? When it comes to higher education, the federal government makes this bet with taxpayer dollars every year, and it’s in the billions.

It’s worth reading Kelderman’s article in full because he does a really good job, using South Carolina’s Williamsburg Technical College as an example, of demonstrating the kinds of success that the federal data miss. For example, while the college falls well below federal standards for graduation rates and post-graduation earnings, state figures show that about 96% of those students “who complete their certificate, diploma, or degree programs are in school or working in their fields within a year.” Moreover, while the $22,000 average annual salary earned a decade out—the time frame used by the federal government—by Williamsburg Tech students who received Pell Grants is less than the national earnings of a high school graduate, that number is still $6,000 more than the per-capita income in Williamsburg County.

The point is not that anyone should be satisfied with these numbers. That 96% figure, for example, does not say anything about what happened to students who did not complete their programs, crucial information in assessing any college’s performance. Rather, as Kelderman suggests, the point is to consider the consequences of ignoring those numbers in determining whether or not a community college is successful. Imagine, for a moment, that Williamsburg were to lose its accreditation for failing to meet federal standards on graduation rates and post-graduation earnings. Williamsburg County, one of South Carolina’s poorest and least educated, would lose its only college, leaving students there with even fewer opportunities than they have now.

Something else we should not ignore when measuring the success or failure of an institution of higher education is whether or not the institution we’re dealing with is for-profit. Not because there is anything inherently wrong with making a profit, but because we know that the for-profit education industry has been rife with predatory and fraudulent practices, bilking both students and the federal government out of money and failing to deliver on the degrees and careers those students were promised. Tellingly, two of the three examples cited in the Third Way report mentioned above, Bryan University and Unitech Training Academy, are for-profit colleges. Yet the report fails to mention this fact, as it fails to mention the measures the Obama administration put in place to curb the abuses such colleges are known for and which the Trump administration is now in the process of undoing.

Conflating for-profit and public institutions might be convenient in terms of data analysis, allowing both kinds of colleges to be measured according to precisely the same bottom-line criteria. That frame of mind, however, forces whoever is doing the evaluating—the federal government, an accrediting agency, a non-profit think tank—to approach the problem like a for-profit business, by focusing almost exclusively on the bottom line. Ironically, in other words, the logic of performance-based funding has a lot in common with the logic of the for-profit college industry. It’s just that the two logics have very different, almost oppositional goals: one focuses on a business’ profitability, while the other, at least in its intent, is focused on protecting students’ investment in higher education.

To put it another way, if precisely the same bottom-line standards are used to evaluate performance for both for-profit and community colleges, those standards are very likely to miss, dismiss, or otherwise trivialize the “alternative” kinds of success that places like Williamsburg Tech can demonstrate and which for-profit colleges most likely can’t—precisely because those alternative kinds of success are not directed at increasing or protecting anyone’s short-term bottom line. Thankfully, Nassau Community College does not face these issues in the same way as Williamsburg Tech, but the question of how we define our success is no less important—because if we do not frame the narrative of that success for ourselves and then demonstrate its validity, that narrative will be framed for us, and we are unlikely to be happy with the result.

2 thoughts on “How Do You Measure a Community College’s Success?”

  1. Thanks very much for an informative statement of the issues. Definitions of organizational success are always tricky, and education is not the only field in which the question is being asked. When I taught at NCC some years ago I often thought students were taking too many courses at the same time, as well as holding down a job and sometimes being a parent. In order to graduate within the two-year time frame, they pretty much had to do it. This pressure meant their work was poorer, and in turn the faculty had to choose between giving a poor grade and using grade inflation to support people with so many things on their plate. One topic you don’t discuss here is the possible role of free tuition for community college students in NY. What effect would that have on cost-benefit analysis?

    1. Hi Dominick,

      Thanks for commenting. We wrote a little bit about the Excelsior Scholarship in this post, though not quite in the same context as you’re talking about here.
