Poets & Writers offers the Top 50 MFA Programs in the nation, as compiled by Seth Abramson. Most of the article explains what criteria were excluded from the rankings. I don’t think it’s a bad idea that he completely avoids subjective criteria such as professor status; after all, as he notes, excellent writers are sometimes poor teachers.

It’s also probably wise that he avoids quantifying long-term alumni success, and that he excludes MA programs, low-residency programs, and Ph.D. programs. I also applaud his attempt to make MFA programs more transparent about funding. Some programs are notoriously tight-lipped about dispensing that information.

But Stacey Harwood, at the Best American Poetry blog, complains that Abramson’s methods are faulty:

“Unlike a valid poll, which would survey a randomly selected representative sample of the total pool of current and potential MFA applicants, Abramson’s poll reflects only the responses of self-selected readers of his blog, and there is nothing to prevent individuals from responding more than once from multiple locations.”

It’s true that the pool was self-selected. But I have to disagree with Stacey: I think this makes the stats more accurate. All the students visiting the MFA blog are likely to know much, much more than a random sampling of MFA applicants. After all, we don’t want to find out the majority’s opinion; we want the opinion of people who are in the know and have a good idea of programs’ strengths and weaknesses. Plus, the poll is only one piece of information: Abramson offers 12 other categories of data.

But the bigger problem isn’t whether the pool was a representative sample or self-selected; it’s that the pool was limited to prospective students. Really? Ask any MFA graduate: a couple of years out of their program, they’ll be much wiser about the state of the industry and its programs. The poll should not have repeated the error of the 1997 Newsweek rankings (which, as Abramson rightly points out, were flawed because Newsweek polled only directors and professors) by again polling just one group, in this case prospective students. Wouldn’t the best poll take into account all the various constituencies: directors, professors, prospective students, post-MFAers, literary journal editors, and publishers?

It’s also necessary to point out that a poll like this can’t fulfill what Stacey wants it to do. We don’t want to know the popular perception of programs; we want an authoritative ranking of them. In that respect, even if the polling were conducted to a statistician’s satisfaction, all we would be left with is perception, which isn’t enough.

Which is why Abramson’s hard data is the most valuable part: annual funding ranks, acceptance percentages, postgraduate placement records. These things are genuinely helpful, and anyone who wants to focus on funding or selectivity can prioritize accordingly. But what potential MFA students think about programs? I’ll take that with a grain of salt.

It’s truly unfortunate that the Best American Poetry blog has silenced further debate by deleting Abramson’s comments; I would really have liked to hear more from both of them.