Of Chatbots and Crackpots: ChatGPT as Crisis of the Image

Curator's Note

‘The clone’. ‘The drone’. And ‘the data double’. In W.J.T. Mitchell’s 2018 lecture, ‘Iconology 3.0’, these are some of his candidates for the cultural icons—paradigmatic ‘images of images’—that have come to define the status of the image in our time.[1] Today, however, Mitchell might have to add another to the list: the chatbot, epitomized by the currently rampant ChatGPT and its large-language-model progeny.

Why approach what’s a largely text-based phenomenon as a question of the image, though? Certainly, many iterations of ChatGPT’s technology deal with actual pictures. But iconology as the study of images across media has a far more pervasive purview.[2] It includes the verbal icon (images in text, such as figures and metaphors) and, furthermore, images of text. An iconology such as Mitchell’s is therefore not only interested in how ChatGPT (as a kind of ‘world picture’ or what Ted Chiang calls a ‘blurry JPEG of the web’[3]) holds up a global image of all the shit we say online; or how ChatGPT, as the now-emblematic avatar or mascot of generative AI, has itself become an image. A critical iconology is also interested in the images that animate our experience of ChatGPT.

The story of ChatGPT to date has thrived on such vivifying images—‘stochastic parrots’, ‘hallucinations’ and ‘jailbreaks’, or the ‘arms race’ amid an impending ‘textpocalypse’—fueled by a cultural imaginary long stocked with science fictions of the future. Yet the most persistent of these images is the elusive ‘Author’ that continues to haunt our experience of the chatbot. The creepiest cases, like when Bing Chat manifested the capricious crackpot ‘Sydney’, or when ChatGPT was lured into uttering deadpan taboos in the name of ‘Dan’, only serve to magnify an already innate urge: somehow, we want the chatbot to be a sentient ghostly other. Mitchell would remind us that people have an incorrigible tendency to animate images, to treat them as if alive. In this instance, we animate a technology with an image of autonomy and authorship.

Our apparently irrational inclination to anthropomorphize AI has, however, been attracting resistance. There are rising calls from activists to demystify AI, to exorcise undue notions of ‘intelligence’, ‘learning’, or ‘consciousness’ from our thinking, and to uphold better images of the technology.[4] Such corrective efforts can be seen as a kind of iconoclasm, an attack on images aimed at the idols of Silicon Valley tech bros and ‘AI hype’. But these efforts may well be in vain. Images that animate are fleeting things; phantoms can’t be put to death. What we have in this deadlock is in fact the latest chapter in a long history of images posing a crisis to humankind—especially ‘when a new medium makes possible new kinds of images, often more lifelike and persuasive than ever before, and seemingly more volatile and virulent.’[5]

So what’s at stake in the seemingly intelligent conversation extended by the text bot ChatGPT? For one thing, our age-old ambivalence towards the image appears to have found its newest, faceless face.

 

[1] The lecture was later published as W.J.T. Mitchell, ‘Iconology 3.0: Image Theory in Our Time,’ Nove Teorije 1 (2019): 8–27.

[2] See W.J.T. Mitchell, Image Science: Iconology, Visual Culture, and Media Aesthetics (Chicago: University of Chicago Press, 2015); and What Do Pictures Want? (Chicago: University of Chicago Press, 2005).

[3] Ted Chiang, ‘ChatGPT Is a Blurry JPEG of the Web,’ The New Yorker, February 9, 2023.

[4] For but one of many examples, see Francis Hunger, ‘Unhype Artificial “Intelligence”! A Proposal to Replace the Deceiving Terminology of AI,’ Zenodo (2023). https://doi.org/10.5281/zenodo.7524493

[5] W.J.T. Mitchell, ‘Image,’ in Critical Terms for Media Studies, eds. W.J.T. Mitchell and Mark B.N. Hansen (Chicago: University of Chicago Press, 2010), 35–48, at 38.
