It now seems de rigueur for academic institutions to commission economic-impact reports.
A quick Google search of the .edu domain shows what I mean (click on the image for a snapshot of a search I did recently or do your own in real time at Google). I don’t know exactly how many have been done, but I think it’s a certainty that a solid majority of the top research universities both public and private have commissioned one in the last 10 years. In fact, I believe I will build a collection of them and post it [UPDATE – I just have. Check out this page on the sidebar.]
(Disclosure: I don’t do impact analyses and neither does the practice I belong to at Battelle, at least in isolation. These are typically projects that demand a lot of attention to detail but sit apart from any broader strategic analysis, and the client doesn’t want to pay very much for what seems like a simple exercise. A few firms like Appleseed do a lot of them, particularly in my own region, and there are some excellent examples from all sorts of organizations, including a real-estate giant’s consulting subsidiary that did UC Berkeley’s, which shows up high in that Google search. In general, universities like an independent analyst who cannot be accused of a direct conflict.)
Sometimes I wonder what goes through administrators’ minds when they commission these.
It’s usually research universities that feel the strongest compulsion to talk about economic impact — even those that in their daily operations and decisions don’t act as if economic development is a core function or even something that anyone might expect of them.
The structure of these economic-impact reports is quite stylized. They’re all built around the “multiplier” analysis that allows the analyst to compare the impacts on spending, jobs, personal income, and taxes with the inputs to the university in the form of direct appropriations, grants, or other forms of public subsidy. More on this in a moment.
Then, although in some sense this is all implicit in the multiplier analysis, the typical economic-impact report will touch on how much federal funding is brought into the region by the university, and the degree to which the region depends on the institution for educational services and student voluntarism. Most reports also tell the story of how regional businesses depend on the institution for human resources and technological stimulation, and what kind of spin-off activity has been seen from the research enterprise. This latter information is actually quite important to assessing an institution’s true commitment to regional economic development, but the absolute numbers pale next to the “multiplier” impact, and so it’s the multiplier that gets the headlines. As if anyone really cared!
Now let’s review why no one should care about the central analysis around which these reports are built. Multiplier analyses start with an assumed change in economic output (usually the projected gross revenue of an inward recruit that is being offered a public subsidy package whose cost must be balanced against its potential benefits). Using multipliers drawn from national and regional input-output tables maintained by vendors like Minnesota IMPLAN Group, the analyst calculates what additional economic activity (jobs, personal income, taxes) this incremental output will cause among the firm’s suppliers, their employees, and the vendors who will in turn serve those new direct and indirect jobs. It’s not rocket science; it’s all in the multipliers.
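To make the mechanics concrete, here is a minimal sketch of the arithmetic behind such an analysis. All of the numbers — the firm’s revenue, the multiplier values, the employment and income ratios — are invented for illustration; real analyses would pull them from a vendor’s input-output tables.

```python
# Illustrative multiplier analysis for a recruited firm.
# Every figure below is hypothetical, not drawn from any real table.

direct_output = 100_000_000  # projected gross revenue of the recruit, in dollars

# A "Type II" output multiplier bundles indirect effects (suppliers)
# and induced effects (household re-spending) per dollar of direct output.
output_multiplier = 1.8
jobs_per_million_output = 12.5   # hypothetical employment ratio
income_share = 0.45              # hypothetical personal-income share of output

total_output = direct_output * output_multiplier
total_jobs = (total_output / 1_000_000) * jobs_per_million_output
total_income = total_output * income_share

print(f"Total output impact: ${total_output:,.0f}")   # $180,000,000
print(f"Total jobs impact:   {total_jobs:,.0f}")      # 2,250
print(f"Total income impact: ${total_income:,.0f}")   # $81,000,000
```

That really is the whole trick: a subsidy’s “benefit” is just the assumed incremental output scaled by whichever multipliers the analyst plugs in.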
For a university, one already knows or can easily find out the distribution of its payroll and purchasing budgets geographically and by industrial sector. Then the analyst takes the entire university expenditure budget — as if it were an incremental wave of growth — and plugs the correct multipliers in, sector by sector. In some senses, the economic impacts calculated this way are more accurate than for a random employer whose distribution of expenditure must be estimated by its sector, but in other ways these stats are totally meaningless. The university is already in the economy. It’s not going anywhere. It is hardly fair to take its entire budget as if it were an increment in output, as the theory of multipliers demands, and then attribute the sum total of direct, indirect, and induced impacts to the public subsidy.
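The sector-by-sector version of the exercise looks something like this sketch. Again, the budget lines and multiplier values are made up for illustration; the point is the structure of the calculation, and the flaw noted in the final comment.

```python
# Hypothetical sketch of a university economic-impact calculation:
# apply a sector-specific output multiplier to each expenditure line.
# All figures are invented for illustration.

budget_by_sector = {              # annual expenditures, in dollars
    "payroll": 600_000_000,
    "construction": 150_000_000,
    "supplies_and_services": 250_000_000,
}
output_multipliers = {            # hypothetical Type II multipliers
    "payroll": 1.9,
    "construction": 1.7,
    "supplies_and_services": 1.5,
}

total_impact = sum(
    spend * output_multipliers[sector]
    for sector, spend in budget_by_sector.items()
)
print(f"Claimed total economic impact: ${total_impact:,.0f}")

# The logical flaw: the ENTIRE budget is treated as incremental output,
# even though the university is already embedded in the regional economy
# and none of this spending is actually new.
```

The arithmetic is impeccable; it is the premise — treating an institution’s whole existing budget as a marginal change — that fails.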
Yet the wave of these reports continues unabated. Do politicians find them meaningful or persuasive, compared, say, with anecdotal evidence about just how helpful said university has been to an employer in their district? I wonder. Comments welcome.

UPDATE (September 12, 2007): I just found a very nice academic paper entitled “The economic impact of colleges and universities” (link updated August 2016) by Siegfried, Sanderson, and McHenry that addresses in much more comprehensive fashion than I ever could the core analysis of such economic-impact reports. My own critique is a small and simplified subset of the ruthless demolition job the authors do on the logic and integrity of most such exercises.