While most people think of “data” as a numeric value, data can be pretty much anything. In the last decade, there’s been an uptick in exciting computational approaches to humanities research. Researchers use quantitative methods on vast humanities datasets to generate insights often difficult to detect by human reading alone. The Stanford Literary Lab, for example, has published an open access series of pamphlets on its data-driven literary analysis projects, such as mapping the emotions of London across 200 years and 5,000 texts. And English scholars Richard Jean So and Andrew Piper used computational methods to analyze the difference, or lack thereof, between novels written by authors with MFA degrees and novels written by authors without them.
Interested in exploring these methods in your own research and teaching? The Library has you covered. We are proud subscribers to the Gale Digital Scholars Lab, which enables users with no computational background to apply data-analysis methods to primary sources from our Gale holdings.
If you’re looking for a little more guidance, check out Matthew Jockers’ excellent Macroanalysis: Digital Methods and Literary History, a book that explains the uses of computational literary methods and clearly articulates strategies and results from some of his own projects. Or take a gander at Geoffrey Rockwell and Stéfan Sinclair’s book Hermeneutica: Computer-Assisted Interpretation in the Humanities, which includes text-mining activities that can be easily replicated using the authors’ freely available, user-friendly, browser-based text-mining tool Voyant. And if you’re interested in taking an even deeper dive, the HathiTrust allows users to analyze millions of books using the HathiTrust Research Center Analytics Tools.
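None of these methods are magic; many start from something as humble as counting words. As a purely hypothetical illustration (not drawn from any of the projects or tools above), here is a minimal Python sketch of the kind of word-frequency tally that underlies features like Voyant’s word clouds:

```python
from collections import Counter
import re

def word_frequencies(text, top_n=5):
    """Return the top_n most frequent words in a passage of text."""
    # Lowercase the text and split it into simple alphabetic tokens.
    words = re.findall(r"[a-z']+", text.lower())
    # Tally each word and return the most common ones.
    return Counter(words).most_common(top_n)

# A short illustrative passage, chosen only as an example.
sample = (
    "It was the best of times, it was the worst of times, "
    "it was the age of wisdom, it was the age of foolishness."
)
print(word_frequencies(sample, top_n=3))
```

Real text-mining tools layer far more on top of this (stopword removal, stemming, topic models), but at heart they extend the same idea of turning text into countable data.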
Still don’t know where to start? Set up a consultation with Digital Scholarship Librarian Erin Glass.