Learning from cliometrics

2010 October 25
by Shane Landrum

[Image: Computer Punch Card, by Chris Campbell, on Flickr]

As part of my graduate coursework, I learned about cliometrics: a rigorously numerical and quantitative approach to answering historical questions, common in the 1970s but now largely fallen out of favor.

Everyone I talk to tells me a different version of why cliometrics became unfashionable. Some people say that the technology of the time, which required rooms of punchcards to answer relatively basic questions, required too much labor. (Every single person reading this probably has more computing power than all the 1970s cliometricians put together.)

Most historians I’ve talked to point to Fogel and Engerman’s Time on the Cross (1974) as a kind of shark-jumping moment for cliometrics. Attempting to analyze slavery by whether it was economically rational seemed ethically tone-deaf to many, and the book was roundly criticized for flaws in method. Heavily quantitative approaches have persisted in the subfield of economic history, but relatively few historians in recent years have used (or trained in) these methods.

[Image: Computer History Museum]

Lately, I’ve been looking at some recent digital-methods approaches in history and wondering: what makes this work different from cliometrics? Is this a style of work or an analytical approach that runs the same risks? (I’m thinking in particular of heavily GIS-based work, which requires specialist training and large data sets, and of some text-mining strategies.) In order to answer that question usefully, I need to understand more about what caused the shift away from cliometrics, and it seems that most of that story isn’t in the journal literature.

Was cliometrics, being closely allied with economic history (largely the province of white men), never that fashionable among Americanists to begin with? Did it fail to gain traction because it was antithetical to the ways most other historians were being trained to read sources? Did its requirements for large-scale computing resources limit the number of historians who could practice its methods successfully? I sense that the answer to all three of these questions is yes, but if you have any further ideas, I’d love to hear them (especially if they include citations).

There’s another twist here. The fields I learned my methods from (women’s history, African-American history, queer history) are fundamentally built on questioning quantification, categorization, and other positivist impulses. As interested as I am in the possibilities for, say, mapping rural women’s lives in history, I need to prepare good answers for why what I’m doing isn’t simply cliometrics 2.0. I know that part of the answer is networks and collaboration (fewer data silos), but I’m still trying to figure out the rest.

6 Responses
  1. October 25, 2010

    As a librarian/infopro, I find this sort of discussion refreshing. We’re currently in discussions at our library about what kinds of digital services we can offer to scholars (data management, data visualization, access to high-performance computing, etc.) but you remind me that we should also be involved in the very traditional library conversation of which resources are most appropriate to the scholarly task at hand (based on disciplinary values and goals). How involved are your librarians in the digital work that you do?

    • October 25, 2010

      My school’s librarians have been very minimally involved in my research methods. When I started my research, very few people at my institution were thinking about the kinds of digital methods I use, and I’ve pretty much had to make it up as I go along. (It’s also entirely possible that staffing changes at my university libraries in the past three years have brought in new people who could answer my questions more capably; I don’t know, because I’ve been busy working.) In terms of finding practical answers to my technical questions, DHAnswers has been much more useful to me, because I’ve been able to ask questions at a reasonably technical level and get reasonably detailed answers quickly.

      What I can’t easily get over the Internet is face-to-face conversations about discipline-specific methodological issues involved with digital tools. Moreover, my technology/methods questions aren’t easily addressed in a classic reference interview. In an ideal world, I’d be able to have a sustained relationship with an infopro/librarian who could advise me on, for example, database design for social history projects, or software that’ll quickly pull 500 PDFs out of that subscription newspaper database’s search results while keeping the metadata intact. (I really could have used a crash course on metadata systems about three years ago, too.) As far as I can tell, those questions fall in the gray area of expertise between disciplinary faculty and IT, but I haven’t seen my local librarians promoting their expertise about such topics either.

      In short, I think that the kinds of librarians I’d benefit most from exposure to are probably just now starting to be trained, and where they do exist, they can probably name their salaries at research institutions with deeper pockets than mine.

  2. October 25, 2010

    “But in the long run, even in the more esoteric branches of history, it must surely be the case that there will always come a moment when the historian, having worked out a solid conceptual basis, will need to start counting: to record frequencies, significant repetitions, or percentages.” — Emmanuel Le Roy Ladurie, The Territory of the Historian (1979), cited in Tosh, ed., Historians on History, p. 238. (Tosh has a nice little section on cliometrics pro/con, by the way, if you’re looking for citations.)

    It’s always been a minor regret that I didn’t both learn statistics as an undergraduate and keep up my computer programming skills after undergrad. I’m a pretty decent hand with Excel, but that’s no replacement for GIS and the other data-set skills which I now feel like I need.

  3. October 26, 2010

    Cliometrics (as the application of specifically econometric methods in history) was an especially dramatic generator of controversy, but the use of quantitative methods in history was a lot broader, more influential, and complex. Many historians who used quantitative methods to do political or social history were not economistic cliometricians. The 1960s and 1970s quantitative methodological movement over-promised and under-delivered, but quantitative methods themselves have never disappeared altogether, and you could look to the Social Science History Association for continuity. If you are looking for useful historiography in journals, beyond cliometrics you might seek out review articles about social history and the “cultural turn,” or early quantitative political and social history under retrospective review as the new institutional historians advocated bringing the state back in.

    Some scattered leads:
    Ed Ayers: “The Pasts and Futures of Digital History” (1999)

    Will Thomas, “Computing and the Historical Imagination”, Chapter 5 in the 2004 Companion to Digital Humanities

    Konrad Jarausch and Kenneth Hardy, Quantitative Methods for Historians (1991). The first chapter acknowledges the historiographic moment.

    William Sewell’s Logics of History (2005) includes autobiographical reflections on his own trajectory from quantitative social history to cultural history and interdisciplinary theory.

  4. October 26, 2010

    I don’t really have a substantive comment, since it’s not my discipline, but speaking as a lit scholar and a writing teacher: I love the *sound* of the word “cliometrics.”
