Toward Ambient Algorithms

In spite of recent work attempting to complicate the concept, the metaphor of the “network” as a mechanistic descriptor for how data connects us to people, places and things online persists. A common critique claims that thinking of networks this way implies an information ecology where explicit and obvious connections between “links” are most valuable because they can be tracked, marketed to, and mined for greater means of connection with “users” later. This is inadequate, the thinking goes, because living organisms and the ecologies they inhabit are simply not machines (or “not simply machines”). Even the notion of the network as an “information ecology” typically conceives of its world as too closed and too human to foster viable, holistic care for all of the people, places and things involved in it.

In common industry parlance, “network” still has these mechanistic connotations in spite of landmark work like Mark C. Taylor’s The Moment of Complexity (2001) that attempted to recover the term from this history of use. In A Counter-History of Composition (2007), Byron Hawk claims that Taylor’s take on network complexity is an outgrowth of “complex vitalism,” an attempt to articulate the intricate relationships in digital technology, for example, as a living system. Thomas Rickert counters in Ambient Rhetoric (2013) that Taylor’s theories (and even Hawk’s reconsideration of them) have a difficult time developing a theoretical language adequate for ecological concerns, largely because they emphasize the explicit and overt and thereby fail to articulate ways to attune to the more implicit and covert ambient background that shapes the context from which language and conscious thought arise (99-107). Rickert claims that if we attune to ambience we will start to “push against the metaphors of node, connection, and web,” arriving first at “metaphors of environment, place, and surroundings and second to metaphors of meshing, osmosis, and blending” (105). Taking these metaphors as a way of thinking through our immersive connection to the environments we inhabit, Rickert claims that language gets woven into the environment and becomes inextricable at the level of ambient attunement. This means that “language and environment presuppose each other or become mutually entangled and constitutive,” and this “opens us to forms of ‘connection’ that are not driven solely by links.” The implications are profound, given that this guiding metaphor of the “network” plays such a significant role in how we write online, both in algorithmic code and at the more obvious interface level that most “users” only see.

As key participants in the construction and maintenance of digital environments, algorithm writers are at high risk of perpetuating this particularly destructive metaphorical tendency. Tarleton Gillespie claims in “The Relevance of Algorithms” (2014) that algorithms produce and certify knowledge, and that this has political implications. Through Rickert we might extend this to say that when the language of an algorithm becomes presupposed, its driving metaphors do as well. Gillespie’s model is a good starting point, but his concern is largely for the human public; his ideas do not do enough to address the larger, nonhuman ecological matters with which algorithms interact. Without great care, such algorithms risk describing people, places and things as mere quantities, as purified, aloof or otherwise violently abstracted nouns whose role in the lives of “users” (itself a violently abstracted noun) is simplified to, say, “marketability,” or “function,” or any other violative descriptor. They risk reifying this violence into the lives of the people, places and things that come into contact with their code.

If these risks are legitimate, then a few key questions come to mind:

  • How do we avoid treating the quantities at play in algorithms (be they people, places, or things) as mere, violently abstracted objects, so that rather than being connotatively denigrated they are invited to participate responsibly in whole ecologies of people, places and things?
  • Rather than sweeping out these holistic connections for the sake of simplicity, marketing or other “uses,” what moral imperatives could replace this normalized thinking? How would this thinking integrate a deeper ecological sense, equally concerned with person-to-person ethics and with the ethics of nonhuman interaction?
  • There is also the interesting chicken-and-egg metaphysical question (offered by Daniel Hocutt) of whether it is only algorithm writers who must attune to their respective environments in order to write morally sound algorithms, or whether the algorithms themselves are not, in some ways, seeking attunement.

Comments

Sean, I’d like to elaborate on the final bullet point you graciously attributed to me, in more elegant and concise words than I originally used. The question of whether algorithms need to attune themselves to the way “language gets woven into the environment and becomes inextricable at the level of ambient attunement” is one I’m taking seriously. As I dig deeper into one particular algorithm, Google DeepMind’s algorithm programmed to learn to play and win Atari 2600 video games without pre-programmed knowledge of specific game mechanics other than an awareness of pixel positions in gameplay frames and the “greedy” desire to earn points, I recognize that algorithms are being programmed as perpetually self-teaching learning machines. DeepMind’s algorithms are exercises in artificial intelligence, and that intelligence, while artificial, is making decisions about how to succeed. It’s important to remember that “algorithm writers are at high risk of perpetuating this particularly destructive metaphorical tendency,” but it’s also important to consider whether the algorithms themselves need to be cautioned to avoid network metaphors when constructing their (admittedly limited) realities.
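To make that “greedy” mechanic concrete, here is a minimal illustrative sketch in Python of how such an agent might choose actions. This is not DeepMind’s code: the stand-in network, the action count, and the frame shape are my own assumptions for the sake of the example.

```python
# A minimal sketch (NOT DeepMind's code) of the "greedy" core described
# above: the agent sees only raw pixel frames and usually picks whichever
# action it currently predicts will earn the most points, exploring at
# random with some small probability (epsilon-greedy selection).
import random
import numpy as np

NUM_ACTIONS = 6            # assumed: joystick inputs for one Atari game
FRAME_SHAPE = (84, 84, 4)  # assumed: a stack of downsampled grayscale frames

def q_values(frames: np.ndarray) -> np.ndarray:
    """Stand-in for a deep Q-network: maps pixels to one score per action."""
    # A real agent would run a convolutional network over the frames here;
    # this placeholder just returns fixed pseudo-random scores.
    return np.random.default_rng(0).standard_normal(NUM_ACTIONS)

def choose_action(frames: np.ndarray, epsilon: float = 0.1) -> int:
    """Epsilon-greedy: mostly exploit the highest predicted score,
    occasionally explore a random action."""
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    return int(np.argmax(q_values(frames)))

observation = np.zeros(FRAME_SHAPE, dtype=np.uint8)  # a blank "screen"
print("chosen action:", choose_action(observation))
```

All the agent “knows” of the game is the pixel array it is handed and the point values it learns to predict; everything else is iteration.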

As DeepMind’s algorithm teaches itself to play and win Pong using the memory and processing of deep neural networks and the reward system of deep reinforcement learning, what are its creation and learning metaphors? Will it seek to learn within the framing confines of network metaphors? Should we ask it to push against those metaphors and become attuned to the intersection of language and environment? In the case of the DeepMind algorithm learning to play an Atari game, I sense attunement would result in breakthrough knowledge: an understanding of game outcomes that transcends the iterative procedural activity and learning encoded in the language of the algorithm.
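For a sense of how tightly that iterative procedure confines the learning, here is a toy sketch of the reinforcement loop: act, observe a reward, nudge the value estimate toward the reward plus discounted future value, repeat. This is tabular Q-learning over an invented state space, only an assumption-laden stand-in for DeepMind’s far larger deep Q-network with replay memory.

```python
# A toy sketch of the reinforcement loop (tabular Q-learning), standing in
# for DeepMind's deep Q-network. The states, actions, and rewards below
# are invented purely for illustration.
import random
from collections import defaultdict

GAMMA = 0.99      # how much the agent discounts future points
ALPHA = 0.1       # learning rate for each incremental update
ACTIONS = [0, 1]  # assumed: e.g., paddle up / paddle down in Pong

q_table = defaultdict(float)  # (state, action) -> estimated future points

def update(state, action, reward, next_state):
    """One Bellman update: move Q(s, a) toward r + gamma * max Q(s', a')."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    target = reward + GAMMA * best_next
    q_table[(state, action)] += ALPHA * (target - q_table[(state, action)])

# The loop itself: act, observe, update, repeat. A real agent would step
# an Atari emulator here; this stand-in environment is purely illustrative.
state = 0
for _ in range(1000):
    action = random.choice(ACTIONS)
    reward = random.choice([0.0, 1.0])  # did the agent score a point?
    next_state = (state + 1) % 10
    update(state, action, reward, next_state)
    state = next_state

print("best learned value:", round(max(q_table.values()), 3))
```

Everything the loop learns is what that single update rule writes into the table; attunement, if it means anything here, would have to exceed exactly this procedure.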

The DeepMind algorithm is able only to follow its iterative coding within the confines of its prescribed memory, processing, and frame limits. Were the algorithm to attune itself through self-taught iterative processes to achieve shortcuts in learning, to find pathways that break out of iterative procedures and achieve exponentially advanced outcomes, our first inclination might be to believe that Skynet had become sentient. And maybe that’s true. But it might also suggest a breakthrough in neural networking that could attune itself to its ambient rhetorical environment. What would that mean for our understanding of the role algorithms can and/or should play in learning? In teaching?

These are questions I’m barely able to formulate at this point, much less begin answering. And missing from all this is the ethics of algorithmic development, production, and function. If algorithms can achieve some level of attunement, what are the ethical ramifications of such attunement? Can algorithms achieve some type of conscience, wherein via attunement an algorithm might recognize the nuances of falsehood and deceit?
