You are looking at content from Sapping Attention, which was my primary blog from 2010 to 2015; I am republishing all items from there on this page, but for the foreseeable future you should be able to read them in their original form at sappingattention.blogspot.com. For current posts, see here.

Moscow and NYTimes

Nov 17 2010

I’m in Moscow now. I still have a few things to post from my layover, but there will be considerably lower volume through Thanksgiving.

I don’t want to comment too much on yesterday’s (today’s? I can’t tell anymore) article about digital humanities in the New York Times, but a couple of people e-mailed me about it. So, a few random points:

1. Tony Grafton is, as always, magnanimous, but he makes an unfortunate distinction between “data” and “interpretation” that gives others cover to view digital humanities less charitably than he does. I shouldn’t need to say this, but the whole point of data is that it gives us new objects of interpretation. And the Grafton school of close reading, which these days seems to involve writing a full dissertation on a single book, is also not a substitute for the full range of interpretive techniques that draw on humanistic knowledge.

2. The article hints at, but doesn’t fully explain, the conscious retrenchment of history into innumeracy in recent decades. I could post a little jeremiad about this at some point. But it is somewhat sad that literature departments seem to be leading much of this charge.

3. Timothy Burke has a good riposte to the more Luddite commenters around the second page of comments on the article.

4. Both the article and the commenters seem to occasionally blur the distinctions among digitization, digital curation, and quantitative research (meant broadly: research for publication, for visualization, for teaching). This blog, to be clear, draws on the first, which has little to do with the humanities per se, to do the third. I generally have little interest in the second, but I’m not the sort of person who spends much time at local history society exhibits, either. Maybe there’s another category in there.

5. The role of Google in all this is extremely fraught. I’m using Internet Archive books that were scanned by Google but are now wholly in the public domain; the OCR, whatever its problems, is (I *think*) by the IA and should be in the public domain. It would be extremely bad for the field, I think, if the use of Google metadata and analysis tools became de rigueur for research on digital texts, something which could easily happen. On the other hand, no one else has a database of scanned texts from after 1922, except for the scattered results of Google scanning at various libraries. I was a big fan of someone’s — was it Dan Cohen again? — request to include Library of Congress digitization in the stimulus package, but that was not to be.

Comments:

Jamie - Nov 4, 2010

I would have liked to see more discussion of this issue of “innumeracy” in Townsend’s article in this month’s Perspectives, “How Is New Media Reshaping the Work of Historians?”

To recap the main point: in a survey of 4,182 faculty at four-year colleges and universities, Townsend found that only 2.4% avoid all but the most basic digital technologies (word processing, library catalogs online); 24.4% are “passive users” (some Google); 68.9% are “active users” who use at least five different digital tools and cautiously look for new ways to incorporate them into their research and teaching; and 4.3% are “power users,” people who are quick to make use of new technology and use it substantively in their work.

But as you can already tell, what counts as a digital tool is extremely basic: word processing, library catalogs, Google, primary sources online, scanners. Over 90% of power users and 75% of active users use each of these. But get a little more sophisticated, and the percentage of active users who use them drops quickly: spreadsheets (just over 40%), citation software (just over 30%), databases (under 30%), statistical analysis software (10%), GIS (8%), social media (5%), and text mining software (1%).

A large portion of the article is devoted to online publishing, which suggests to me that the debate is still lodged way back in the basics. Townsend, for example, finds that one of the main obstacles to online book publishing is that it’s hard to get journals to review such books, mostly because of “the lack of procedures at the journals for taking an e-mail link or letter and passing it along to a reviewer.”

The fact that the article concludes on a positive note (hey, we’re using technology more than we thought!) is frustrating. What I learned is that historians are doing mostly the same kind of thinking with more efficient tools. This was a missed opportunity to discuss the value of thinking more proactively and creatively with technology.

There’s a lot of room for you in this field.