I want to propose two approaches to thinking about questions of data and research visualization in the humanities. One is concerned with the historical study of database technologies and the construction of the incessantly contested categories of data and information. The second involves putting science studies and digital humanities approaches in conversation with one another.
From the early seventies, experts increasingly emphasized that data was too enormous and unwieldy to be handled by common users, and advised them to concentrate instead on working with information, understood as data processed by database management systems. This divide between data and information produced further categorical divisions: programmers (who worked with data) vs. non-programmers (who were advised to work with information). As academics in the humanities engage with “Big Data,” histories of databases could help clarify a pertinent question: how have users been encouraged to think about, and work with, their data? In 1981, an advertisement for the Sequitur database system in BYTE magazine read “A Database System that thinks like you do” and showed a sketch of a man with a thought bubble reading “$1/rong/wrong/p” – signifying that he was thinking in a computer programming language – while the computer’s thought bubble said, “Couldn’t we be a little friendlier.” In a conversation published in the September 1980 issue of Softalk, John Couch mentioned that his father had encountered problems managing health spa data on a microcomputer. Couch proposed that machines should have “specification languages” that would allow people to solve problems “by specifying what they wanted in terms of inputs, outputs, relationships etc.” (Apple Orchard). The term “datagrammer” was proposed for those who would work only with specification languages, without having to program in algorithmic languages. A diagram (see figure 1) suggested that before 1980 only programmers could work on data, but that after 1980 data would have to evolve into “data defined via specification language,” so that datagrammers could work on it.
This visualization from the 1980s, recommending how data visualizations (or practices of data usage) should look from then onwards, could itself be part of humanities research involving the “digital.” Several such anecdotes gathered from old computer magazines can be found in this article. Zooming into and digging through the layers of a database can help us comprehend their acts/effects of interfacing (intra-facing), facilitating data usage, and abstraction. How would a history of databases contribute to media archaeology studies, and in what ways can it challenge notions of “raw data”? (On how data are variously cooked, see Gitelman and Bowker’s essays; on “data are capta, taken not given,” see Drucker’s paper.)
Figure 1: Datagramming, John Couch, Apple Orchard, 1981/1982
From histories of data visualizations, I now turn to some of the modes used for visualizing science controversies and the sociality of texts. Bruno Latour and Michel Callon’s work within science studies (and Actor-Network Theory (ANT)) has called for extensive tracing of the concatenations of the various actors involved in networks. Inspired by ANT, Tommaso Venturini and the médialab at Sciences Po have been part of a number of projects related to mapping controversies, including EMAPS and MACOSPOL. Science controversies are never just “science” controversies: they are “socio-technical” debates involving issues, and it is these issues that gather actors (those affected by them) around them. The animated cartography of the London 2012 Olympic stadium controversy, for instance, features dynamically reconfiguring actors and relations. For somebody in the humanities studying media’s role in science controversies, the circulation (and sociality) of media texts is critical to how publics around an ensuing controversy are sustained, expanded, bent, and contracted. In recent years, Rita Felski has found in Latour’s work a way to methodologically conceptualize a text’s sociability – a way to understand how a particular text (written by a specific author) comes to our attention (and solicits our attachment) through the work of numerous co-actors such as publishers, textbooks, syllabi, prize committees, and book clubs, among others. In a similar vein, Alan Liu has called for coupling social science (social modeling) and digital humanities (study of online discourse) methods so as to study the “integral field of social expression.” (One also has to think here of sociality within a text, the interrelationships between characters of a story, and interactive narratives.) These share resonances with the work of media theorists adopting approaches such as “Media Ecology” and “Mediation” to understand the material-discursive characteristics of the circulatory media environment.
Venturini provides a set of toolkits for designing the timelines, graphs, and web connections required for representing controversies, and Alan Liu has curated a comprehensive list of DH tools, including those related to network analysis and visualization. Gephi is a popular graph visualization platform among both science studies and digital humanities scholars. Alongside the technicalities of building visualizations, there is broad agreement that aesthetic sensibilities and theoretical approaches are crucial (with the caveat that aesthetic sensibilities are often already embedded within visualization tools). Besides ANT, there are other relational philosophies, such as Karen Barad’s “agential realism” and ecological perspectives inspired by Deleuze. How would visualizing science controversies shift if we think of the reconfigurations of an unfolding controversy as “iteratively intra-active” components of experimental set-ups in which both matter and identities come to matter? Given the certainty that data often conveys, how can data visualizations of intra-actions, or of emerging interactions as visualized through data, retain uncertainties that spark playful interpretations and a sense of wonder (re-enchantment)?