Ranking Urban Planning Programs
For the sake of argument, let’s set aside the question of how to evaluate PhD programs and faculty quality as such, and focus on ranking for the purpose of recruiting professional planning students. (An earlier post on ranking cities and whatnot is here.)
My first point is that applicants rank programs when they decide where to apply and then where to attend. Didn’t you? So the question is less whether ranking is advisable than how applicants do it, whether that can be improved, and then whether our individual schools, our organization (ACSP), or other organizations (e.g., Planetizen) can contribute to or interfere with those efforts.
I like to think students do not pick schools the way I pick stocks — by how cool they sound. (My investment history is pathetic, if not tragic. Thankfully, I am secretive about it and my wife refuses to read this blog.) Rather, I bet they research, collate and compile, then rank.
Whether they use that particular word, and how analytical the process is, will vary. I once had lunch at MIT where I overheard two undergraduate women discussing boys. Then one pulled out a napkin and drew a graph and a couple of curved lines to illustrate her point. (It was a two-dimensional graph, but I wasn’t so curious as to ask what those dimensions were. Ok, I was, but I didn’t. My sense was that some tradeoffs were being considered.) My point is that, badly or well, rightly or wrongly, ranking happens.
So it seems our options number roughly four:
A. educate applicants to be better analysts,
B. provide more raw information,
C. do some preprocessing of that information, and/or
D. make their decision for them.
The 13th ACSP Guide to Graduate and Undergraduate Education in Urban & Regional Planning and most of the Planetizen 2007 Guide to Graduate Urban Planning Programs are solid efforts at B.
Posters to the academic planning (Planet) listserv are currently debating the merits of strategy C, though not all such efforts draw the same scrutiny. Individual program web sites and marketing materials do this too, whenever they include a bit of (perhaps reasonable) spin. I haven’t heard anyone question the wisdom or prerogative of those, even though some schools said they were “better” than others in the pre-Planetizen days (you know who you are, Cal), and those who did well in the Planetizen ranking now parade that too.*
Which brings me to my main point. Rankings, such as Planetizen’s, are examples of C, not D, as this debate has too often suggested. That isn’t to say that C can’t be biased, corrupted or otherwise problematic, but it is hard for me to believe it presumes to do anyone’s thinking for them.** Since my school did fine, I suppose I’m not in a position to claim I see no harm in more processed information rather than less.
But I will anyway: I don’t see the harm, unless there are systematic, misleading distortions that can’t be easily corrected. Rather, I trust prospective students to understand there is no definitive, unidimensional way to rank either (a) a Master’s program’s comparative strengths and weaknesses or, specifically, (b) what that program has to offer as value added in producing better practitioners. These rankings are just more data to post-process as they construct their own ranking, per Columbia’s Lance Freeman’s advice, by balancing what MIT’s Xavier de Souza Briggs labeled (on the Planet listserv) human capital (skills), social capital (networks) and additional signaling effects (e.g., branding). Again, isn’t this what you did?
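To make that balancing act concrete, here is a minimal sketch of how an applicant might fold a published ranking in with their own priorities. Everything in it is hypothetical, including the program names, the 0–10 scores, and the weights; it illustrates the general idea of weighting skills, networks, and signal, not anything Freeman or Briggs actually prescribed.

```python
# Hypothetical illustration only: one way an applicant might weigh programs.
# Program names, scores (0-10), and weights are invented for this sketch.

programs = {
    "Program A": {"skills": 8, "networks": 6, "signal": 9},
    "Program B": {"skills": 7, "networks": 9, "signal": 6},
    "Program C": {"skills": 9, "networks": 5, "signal": 7},
}

# How much each factor matters to this particular applicant.
weights = {"skills": 0.5, "networks": 0.3, "signal": 0.2}

def personal_score(scores):
    """Weighted sum of one program's factor scores."""
    return sum(weights[factor] * value for factor, value in scores.items())

# Rank programs by the applicant's own composite score, highest first.
ranked = sorted(programs, key=lambda name: personal_score(programs[name]), reverse=True)
for rank, name in enumerate(ranked, start=1):
    print(f"{rank}. {name}: {personal_score(programs[name]):.2f}")
```

Change the weights and the ordering can change, which is the point: a published ranking feeds the “signal” column, but the final ordering is the applicant’s own.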
Take, for example, Briggs’ point that prestige rankings — a big chunk of the signal — are self-perpetuating. Either everyone already knows these, in which case Planetizen reinforcing them should have little real effect, or they are not widely known, which is either unfair or a good thing depending on their underlying merits. (But why should a select few know what the reputations are?) Even in that instance, we have options.***
In particular, we could follow Maryland’s Howell Baum’s excellent advice on the Planet list to engage in “more informative enterprises” by doing more with A, B, and especially our own C. I gather that is precisely the intent of both ACSP’s talks with Planetizen and its own evaluation initiative. In any case, why object to simply trying harder to help students make better-informed decisions? Planetizen can hardly be blamed for good-faith attempts to do just that.
I vote for Baum’s route, in its various directions, rather than being distracted by what we don’t like about others’ efforts to do the same.
____________________
*The argument that we shouldn’t compete with each other is, uh, sweet but rather beside the point. We could randomly distribute applicants across our programs, or use alphabetical order, or lotteries, or other queues, but we don’t. At least the large, general-purpose programs compete quite openly for applicants, even if we do this in a more collegial manner than, say, America’s Next Top Model.
**A second important motivation for rankings, for some, though it may have been central in getting ACSP off the mark, is their internal use within each campus. Anyone who has served on so-called unit reviews by the central campus administration, external or internal, knows how hungry these reviewers are for “neutral” national evaluations of our programs. They ask, “How does your planning program compare with others?” Not having such information is doubly problematic: it makes the field seem immaterial, or maybe just inconsequential (most major fields have such evaluations, the most favored being the NRC assessment of doctoral programs), and it dampens the credibility of our blowing our own horns. These are huge problems for major research universities, where each program is expected to have a national ranking to justify its existence; perhaps less so for others.
***I left out a fifth possible strategy: E. Boycott an independent firm doing C, especially if we can’t figure out how they are doing C. This can only have an effect if students don’t already know the existing reputational ranking, or if there isn’t one, and it would harm any boycotting program that chooses to withhold information that, presumably, would strengthen its position in those rankings.
- Published: Saturday, December 8th, 2007
- Author: Randall Crane
- Topics: academic life, metrics