Creator's Statement
Project Background
This adventure in audiography started as a presentation for the Great Lakes Association for Sound Studies [GLASS] conference held in Madison in the spring of 2018. Functionally, the presentation was meant to introduce the audience to a database project we are working on at the University of Wisconsin-Madison: PodcastRE (short for Podcast Research, http://podcastre.org). The site – which currently indexes and stores close to 1 million audio files from nearly 7700 RSS feeds and takes up just over 30 terabytes of data – aims to provide a searchable, researchable database of podcasts and to give researchers tools for studying and analyzing podcasts that are as familiar as the tools libraries currently offer for analyzing textual resources. Rather than give a standard conference presentation providing an overview of the database and its various features, Tom and I wanted to see if we could bring the project to life by layering audio into the presentation in a way that felt more embedded and integral than simply using audio clips as examples. Given the themes and issues Archive 81 – a fictional found-footage horror/sci-fi podcast – presents, we thought it might be a suitable framework for our attempt at audiography.
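To make concrete what that kind of indexing involves, the sketch below is a minimal, hypothetical illustration of pulling episode-level metadata from a single podcast RSS feed. It is not the PodcastRE ingest pipeline itself, and the feed URL is a placeholder; it simply shows the sort of title, date and audio-file information an RSS feed exposes for harvesting.

```python
# Minimal sketch (not the PodcastRE codebase): reading episode metadata
# from one podcast RSS feed using only the Python standard library.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/podcast/feed.xml"  # placeholder feed URL

# Fetch and parse the RSS document.
with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# RSS 2.0 nests everything under <rss><channel>.
channel = tree.getroot().find("channel")
print("Podcast:", channel.findtext("title"))

# Each <item> is one episode; the <enclosure> element points at the audio file.
for item in channel.findall("item"):
    enclosure = item.find("enclosure")
    print({
        "episode": item.findtext("title"),
        "published": item.findtext("pubDate"),
        "audio_url": enclosure.get("url") if enclosure is not None else None,
        "audio_type": enclosure.get("type") if enclosure is not None else None,
    })
```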
Doing Audiography: Audio First
Although we took away many lessons from this process, we’ll focus here on putting audio first. Rather than starting with a script, we wanted this presentation to be audio-led. While we had a general overview of the ideas we wanted to share about the database, we began producing this piece in our digital audio workstation, Adobe Audition. Knowing our presentation was going to be about audio archiving and audio collections, we listened to the entire first season of the show, paying special attention to moments where characters were discussing issues of preservation, recording, decay, technology and culture. We marked these moments in Audition and exported the sound clips to a new project file. We also marked moments where there were interesting ambient sounds or thematic sounds that fit the presentation (sounds of tape starting up, recording machines getting jammed, sounds of the various “archives” and collections in the show, etc.).
With the relevant clips selected, we began organizing the audio into themes that would guide the presentation. Knowing some of the challenges we faced with our database (e.g., what to save, inconsistent metadata, etc.), we looked for audio clips from the show that could serve as a sort of introduction to a particular theme or idea. For example, a clip of the main character talking about metadata could act as a “section header” for a portion of the presentation where we wanted to discuss the metadata issues facing podcasts.
After establishing a loose order with the clips, we began scripting the presentation around them. Rather than writing the presentation and then finding suitable clips, we worked outward from the clips. We first wrote specific sections of the script, much like scripting a standard conference presentation. Once a draft was ready, we would read the script back in conjunction with the audio we had laid out, trimming and editing as necessary. Because we knew this would be a live conference presentation, we spent time thinking through how to perform in conjunction with the audio track and how to ensure the timing of the script matched the timing of the tracks in Audition. This often meant cutting longer sentences or more traditionally academic language so that the narration fit the timing of the presentation.
The version submitted for [in]Transition is clearly not a “live” performance in which in-person narration and recorded audio intertwine. However, we have tried to recreate that aspect of the presentation to some extent. We have also adjusted the audio file and the narration script to alter sections that didn’t make sense outside of an in-person presentation with its accompanying visuals.
The audio-led production process offered a unique opportunity to reverse the normal inclination that structures our academic presentations (i.e., writing out ideas and then finding relevant media clips to illustrate those ideas). Moreover, employing an audio workstation such as Audition as the central hub of our project (and using Word and the script as secondary and reactive to what we were doing in Audition) put sound and text on far more equal footing, allowing us to engage in different kinds of citational practices and tune into different registers through which to organize and make arguments.
Credits: Our piece contains audio clips from Archive 81, Serial, the Daily Source Code and the 1949 Suspense episode “Ghost Hunt”. Music for the piece comes from Archive 81 and Music Bed.
Acknowledgements
PodcastRE is made possible by a UW2020 Discovery Initiative grant from the University of Wisconsin-Madison Office of the Vice Chancellor for Research and Graduate Education, with funding from the Wisconsin Alumni Research Foundation. The project is also supported through a Digital Humanities Advancement grant from the National Endowment for the Humanities: Exploring the human endeavor. Special thanks to lead analytics advisor Eric Hoyt, database builder and info architect Sam Hansen, analytics app developer Susan Noh, and lead computer specialist Peter Sengstock, along with the many research assistants who’ve contributed their time and ideas to this project, including Andrew Bottomley, Luke Salamone, Zheng Zheng, Avichal Rakesh, Ying Li, Keyi Cui, Tom Welch, Jessie Nixon, Sean Owczarek, Jacob Mertens, Nick Laureano and Dewitt King.
Bios:
Jeremy Morris is an associate professor of Media and Cultural Studies in the Department of Communication Arts at the University of Wisconsin-Madison. His research focuses on the digitization of the cultural industries (music, software, radio, etc.). He is the author of Selling Digital Music, Formatting Culture (University of California Press, 2015) and the co-editor of Appified: Culture in the Age of Apps (University of Michigan Press, 2018), and he has published widely on new media, music technologies and podcasting. He is the founder of http://podcastre.org, a research database of podcasts that preserves nearly 1 million audio files and offers new ways to study and analyze sonic culture. For five years, he hosted and produced a music podcast for Midnight Poutine, a Montreal-based arts and culture website.
Tom Welch is a graduate student in Media and Cultural Studies in the Department of Communication Arts at the University of Wisconsin–Madison. His work focuses on the intersection of gender, sexuality, and labor in the digital media industries. In addition to being a research assistant for PodcastRE, he is a host and producer for the PodcastRE Project podcast. He is also a host for the popular culture podcasts TopCast, Sixteen Stars, and Meme Theory: The Theory of Memes.