Curator’s Reflection in a Tag Cloud

I’m drawn to tag clouds as tools for visualizing patterns in texts. Selfe and Selfe (2013) offer a useful heuristic for employing tag clouds to map a field, and I’ve applied it here to reflect on the contributions to this MediaCommons Field Guide Survey. As Selfe and Selfe recommend, I began by identifying the question I hope the tag cloud will answer: What do scholars (as represented in this collection) have to say about algorithms?

It’s fitting, both practically and theoretically, to use an algorithm-driven tool to analyze the contributions to this Field Guide on algorithms. In their heuristic, Selfe and Selfe remind us that algorithm-driven digital tools like tag cloud generators remain mediating tools whose use requires thoughtful, careful consideration as what DePew (2015) calls “applied rhetoric” (p. 440).

Thinking of text clouds as wholly determined by computers, however, can mask a number of important issues involved in generating a text cloud and much of the work that must be done to make text clouds useful to a particular audience. To make good use of computerized text-cloud generators, you need to make certain decisions about the rules that structure the terms within the cloud. (Selfe & Selfe, 2013, p. 29)

I used Tag Crowd (www.tagcrowd.com) to create the tag cloud. It’s a basic tag cloud generator, but its ability to group English words intelligently and to list word frequencies (rather than relying on text size alone) makes it a useful tool for illustrating trends and patterns.

I set the corpus for analysis as the entire text of all 11 contributions plus my own introduction, and I copied that text into the Tag Crowd interface to generate a tag cloud of the responses. I set the minimum frequency to 15, meaning a word had to appear at least 15 times in the corpus to show up in the cloud. I arrived at this threshold through trial and error, testing minimum frequencies from 5 to 20. Raising the minimum frequency to 20 reduced the number of terms too much, while lowering it to 10 or fewer crowded the tag cloud too densely. At a minimum frequency of 15, the maximum-word setting became irrelevant, but I set it to 100 words as a failsafe against an overcrowded cloud.
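To make these settings concrete, here is a minimal Python sketch of the frequency filtering described above. It is not Tag Crowd’s actual code; the function name, tokenizing pattern, and constants are illustrative choices that simply mirror the settings I used.

```python
import re
from collections import Counter

MIN_FREQUENCY = 15  # a word must appear at least this often to enter the cloud
MAX_WORDS = 100     # failsafe cap on the number of terms displayed

def cloud_terms(corpus: str) -> list[tuple[str, int]]:
    """Return (word, frequency) pairs that clear the minimum-frequency threshold."""
    words = re.findall(r"[a-z']+", corpus.lower())  # naive tokenizer; keeps apostrophes
    counts = Counter(words)
    frequent = [(w, n) for w, n in counts.most_common() if n >= MIN_FREQUENCY]
    return frequent[:MAX_WORDS]  # most_common() already sorts by descending frequency
```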

Tag Crowd offers the option to group similar words so that the combined frequency of forms like learn, learned, and learning is reported under a single term, learn. Tag Crowd also enables users to omit terms in order to focus attention on specific aspects of the cloud. For this cloud, I omitted the terms algorithms, comments, help, login, picture, post, and register. All but one of these terms appears on the page as part of the MediaCommons user, post, and comment management interface (comments, help, login, picture, post, and register). I omitted the term algorithms to direct attention beyond the stated focus of the Field Guide Survey question.
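Tag Crowd’s grouping rules are a black box, so the sketch below approximates them with NLTK’s Porter stemmer (an assumption on my part, not Tag Crowd’s documented method); the omit list reproduces the terms named above.

```python
from collections import Counter

from nltk.stem import PorterStemmer  # stand-in for Tag Crowd's grouping logic

OMITTED = {"algorithms", "comments", "help", "login", "picture", "post", "register"}
stemmer = PorterStemmer()

def grouped_counts(words: list[str]) -> Counter:
    """Collapse similar word forms and drop omitted terms before counting."""
    counts = Counter()
    for word in words:
        if word in OMITTED:
            continue  # skip interface terms and the survey's stated focus
        counts[stemmer.stem(word)] += 1  # "learn", "learned", "learning" all tally under "learn"
    return counts
```

In a full pipeline, this grouping step would run before the frequency filter sketched earlier, so that the combined counts of similar forms are what get measured against the threshold.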

The resulting tag cloud, showing 35 of 1,825 possible words, appears below.

This tag cloud shows points of intersection between human experience and algorithmic existence. Many who responded to this survey question are teachers, and the cloud seems to point toward the intersection of pedagogy and algorithms in the ways students use, characterize, recognize, analyze, and are mediated by algorithms and their functions. Algorithms seem to push us toward ways of engaging with one another: through social experiences, games, writing, play, rhetoric, reading, and courses. Algorithms work in the realm of data, networks, technology, and software. Maybe we look to algorithms for ways to score or predict the unpredictable, like the recent terrorist attacks in Paris or human activity in general. And perhaps we place algorithms on a continuum between human and machine, then seek to question whether algorithms can and should be expected to act ethically, to be rhetorical, to be social, to learn, to play games, to be like humans. And maybe we even question whether humans ought to engage with more data, to use networks and scoring, to be more like algorithms.

Thanks to all who have participated, and all who will continue to participate, in this survey. The collection is made richer by the growing number of voices joining the conversation, and I hope you’ll take a few minutes to read some of the selections and offer your own reaction.

References

DePew, K. (2015). Preparing for the rhetoricity of OWI. In B. L. Hewett & K. E. DePew (Eds.), Foundational practices of online writing instruction (pp. 439–468). Anderson, SC: Parlor Press.

Selfe, R. J., & Selfe, C. L. (2013). What are the boundaries, artifacts, and identities of technical communication? In J. Johnson-Eilola & S. A. Selber (Eds.), Solving problems in technical communication (pp. 19–48). Chicago, IL: University of Chicago Press.
