Differential Interventions: Images as Operative Tools

Advanced imaging technologies are currently transforming operating theaters into sophisticated augmented reality studios. State-of-the-art operating rooms are run like modern media laboratories, exploiting recent developments in communication technology, including computer visualization, positioning and navigation applications, and expanding network systems. Surgeons interact with various displays that permit indirect observation of the surgical field through high-definition videoscopes (microscopes, endoscopes), supplemented with scans obtained before the operation and with image and tracking data acquired during the procedure. Surgical imaging and navigation challenge established frameworks for understanding images, just as they belie the idea, widespread in early theories of digital images, that digitization corrupts the detecting capacity of images. While highly interventional and artificial, and sometimes entirely computer-generated and synthetic, medical images and visualizations undoubtedly reveal pertinent aspects of reality. How are we to make sense of them?

Images used for guidance during surgical procedures exemplify a category of images that in recent literature has been characterized as “operative” (Farocki 2004; Kogge 2004; Krämer 2009). Operative images are images that, in the words of Harun Farocki, “do not represent an object, but rather are part of an operation” (Farocki 2004, 17). The images in question typically serve practical purposes tied to specialized tasks, such as, in the case of navigated brain tumor surgery, localizing a tumor and controlling the removal of pathological tissue. The active and performative work of digital images is also emphasized by new media scholars investigating digital image applications such as Photosynth, Augmented Reality, and Google Street View (Uricchio 2011; Verhoeff 2012; Hoelzl and Marie forthcoming). As William Uricchio points out, algorithmic intermediation reconfigures the relation between the viewing subject and the object viewed in a way that “ultimately determines what we see, and even how we see it” (Uricchio 2011, 33). Algorithmically enabled image applications do not simply reproduce pre-given realities but exercise transformative powers on both ends of the subject-object relationship (Carusi 2012). This is why established representational approaches fall short of accounting for the active roles of digital image applications, and why new theorizations of images are needed.

Operational approaches provide promising possibilities for rethinking images for at least three reasons: First, they offer dynamic approaches that analyze phenomena into doings and happenings rather than into things and static entities; second, they offer relational approaches that conceive identity in terms of open-ended processes of becoming; and third, by so doing, they allow us to ascribe agency to images and, crucially, to conceive agency as distributed across interconnected assemblages of people, practices, and mediating artifacts. In the following, we contribute to the ongoing efforts to rethink images in dynamic terms by probing more closely into neurosurgical imaging and navigation practices where images are literally used as operative tools. By zooming in on critical moments of the image-guided neurosurgical process, we draw out key features of an operational understanding of images, with the aim of developing it further as a “differential” theory of images (Hoel 2011a and 2011b).

Figure 1. The surgeon's view in the operating room.

Contemporary neurosurgery relies heavily on computer-assisted navigation technologies. Neuronavigation, a further development of stereotactic surgery (Enchev 2009), is a set of methods that uses three-dimensional coordinate systems for frameless guidance, orientation, and localization of structures during brain surgery. Neuronavigation systems transfer multimodal image data into the surgical field, track surgical tools, and overlay the position of important instruments on medical image maps of the patient. A major challenge for neuronavigation is the shift in position of the brain anatomy as the operation progresses, commonly referred to as “brain shift.” To compensate for this shift, updated maps are acquired during the operation at the surgeon’s request (Lindseth et al. 2013). In a tumor removal procedure that we observed, the navigation system included preoperative magnetic resonance images, live video images from a surgical microscope, intraoperative ultrasound images, and optical tracking of surgical instruments. The image and tracking information was shown on a multimodal display unit facing the surgeon (fig. 1), either as corresponding views in separate display windows, or as integrated navigation scenes mixing features from different imaging modalities (fig. 2). In addition, the surgeon could change the magnification levels of the images and flip between different modes within each imaging modality.[fn]T1, T2, FLAIR, MR angiography, fMRI, and DTI tracts for the magnetic resonance images, and B-mode and Doppler for the ultrasound images.[/fn] The operating room was also populated with additional screens displaying the microscope images as well as the patient’s vital signs during anesthesia. At critical points in the operation (before, during, and after the removal of tumor tissue), ultrasound volumes were obtained to show the extent of brain shift.

Figure 2. Navigation display. Corresponding MR (left) and ultrasound (middle) slices, as well as an overview showing the position of the ultrasound probe relative to the head.
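At the heart of such a navigation display is a chain of coordinate transforms: the tracking camera reports the pose of each tracked instrument, and the patient registration (described below) maps tracker coordinates into image coordinates, so that the instrument tip can be drawn on the MR and ultrasound slices. The following is a minimal sketch of this bookkeeping, not the software of the observed system; all matrix names and numeric values are illustrative placeholders.

```python
import numpy as np

# Hypothetical 4x4 homogeneous transforms (illustrative values only).
# T_tracker_from_tool: pose of the tracked instrument as reported by
# the optical tracking camera.
T_tracker_from_tool = np.eye(4)
T_tracker_from_tool[:3, 3] = [120.0, -35.0, 210.0]  # tool tip in tracker space (mm)

# T_image_from_tracker: result of registering the patient to the
# preoperative images (see the registration sketch further below).
T_image_from_tracker = np.eye(4)
T_image_from_tracker[:3, 3] = [-100.0, 50.0, -180.0]

# Chaining the transforms expresses the tool tip in image coordinates,
# where it can be overlaid on the MR/ultrasound slices.
tip_in_tool = np.array([0.0, 0.0, 0.0, 1.0])  # tip at the tool's own origin
tip_in_image = T_image_from_tracker @ T_tracker_from_tool @ tip_in_tool
print("Tool tip in image coordinates (mm):", tip_in_image[:3])
```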

Far from being passive reflections of pre-given realities, medical images rely on active interventions. Magnetic resonance imaging produces images using the magnetic properties of hydrogen atoms, which abound in the human body, especially in tissues such as fat and water. When the patient is placed in an MRI scanner, the system generates images by producing a strong uniform magnetic field that aligns the axes of the protons parallel or antiparallel to the field, emitting a radiofrequency pulse of the right frequency and duration, and altering the magnetic field on a local level using gradient magnets to determine the location of the image “slices.” When the radio-wave transmitter is turned off, the protons start to “relax,” producing radio wave signals as they release their energy and return to their equilibrium state. The signals are picked up by the system’s receiver coils and transformed into gray-level intensities for each pixel in a cross-sectional image. Since protons in different tissues have different relaxation times, various scanning sequences can be used to distinguish between different types of tissue, say, fat and water, or pathological and normal tissue. Conventional 2D ultrasound also builds cross-sectional gray-scale images of the anatomy at hand, this time by means of a transducer probe emitting a high-frequency sound pulse into the patient. As the sound waves travel into the body, they hit boundaries and interfaces between various tissue types, and some of the sound waves are reflected back to the probe. In ultrasonic imaging, each sound pulse is followed by “listening” for these sound echoes. By measuring the time taken for the echoes to return, the system calculates the distance to anatomical structures and displays the strength of the echoes as a gray-scale image.
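To make the pulse-echo principle concrete, consider the distance calculation just described. The sketch below is a minimal illustration, assuming the standard textbook value of roughly 1540 m/s for the speed of sound in soft tissue; the function name and example values are our own, and real scanners add beamforming, gain compensation, and envelope detection on top of this.

```python
# Minimal sketch of the pulse-echo distance calculation, assuming a
# constant speed of sound in soft tissue (~1540 m/s, a standard
# textbook value). Names and values are illustrative.
SPEED_OF_SOUND = 1540.0  # m/s, nominal value for soft tissue

def echo_depth_mm(round_trip_time_s: float) -> float:
    """Depth of a reflecting interface given the round-trip echo time.

    The pulse travels to the interface and back, so the one-way
    distance is half of speed times time.
    """
    return SPEED_OF_SOUND * round_trip_time_s / 2.0 * 1000.0

# An echo returning after 65 microseconds originates roughly 50 mm deep:
print(f"{echo_depth_mm(65e-6):.1f} mm")  # -> 50.1 mm
```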

We have zeroed in on technical details of medical image generation in order to show that MR and ultrasound imaging are based on contrasts. The patterns shown are not simply found; they stand out only to the extent that the areas of interest are subjected to targeted excitations, tissues and organs being provoked to answer back along the lines specified by the parametric setup. Each imaging method has a highly selective range, disregarding anatomical or functional features that fall outside its scope. The point we want to make, however, goes beyond the familiar one concerning the selective nature of imaging methods. Rather, the point is that these methods stand in a generative relation to the imaged features: Each imaging method differentially intervenes into the phenomena under examination, delineating and sustaining a characteristic pattern or structure not detectable in the same way by other methods.

In order for images to support navigation, the image data have to be registered to the patient coordinate system. The registration process brings the coordinates of anatomical features in physical space and in the various image spaces into alignment. During surgery, 3D ultrasound can also be used to update the preoperative image maps, shifting the positions of imaged features by means of advanced algorithms (Lindseth et al. 2013). In the observed operation, fiducial markers were placed on the patient’s skull before the patient entered the MRI scanner. When the patient was immobilized on the operating table, the markers, visible in the MR images, were used by the surgeon to identify the corresponding points on the patient’s head by means of a tracked pointer. As the procedure progressed, ultrasound was used for direct guidance (fig. 3).

Figure 3. Ultrasound-guided surgery. Preoperative planning using a tracked pointer (left), acquisition of a new ultrasound volume using a tracked ultrasound probe (middle), and removal of tumor tissue using a tracked resection instrument guided by updated ultrasound images (right).
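The fiducial-based step just described is a paired-point registration problem: find the rigid transform that best maps the marker positions identified on the patient's head onto the corresponding positions in the MR images. The text does not specify which algorithm the system uses; a standard least-squares solution (the Kabsch/SVD method) is sketched below under that assumption, with hypothetical input names.

```python
import numpy as np

def rigid_registration(fiducials_image: np.ndarray,
                       fiducials_patient: np.ndarray):
    """Least-squares rigid transform (R, t) mapping patient-space
    fiducials onto image-space fiducials, via the Kabsch/SVD method.
    Both inputs are (N, 3) arrays of corresponding points."""
    # Center both point sets on their centroids.
    c_img = fiducials_image.mean(axis=0)
    c_pat = fiducials_patient.mean(axis=0)
    A = fiducials_patient - c_pat
    B = fiducials_image - c_img
    # The SVD of the cross-covariance matrix yields the optimal rotation.
    U, _, Vt = np.linalg.svd(A.T @ B)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against an improper (reflected) solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_img - R @ c_pat
    return R, t
```

The residual distance between the transformed patient fiducials and their image counterparts, the fiducial registration error, is what navigation systems typically report as a sanity check on the registration.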

Images used for neuronavigation are clearly “part of an operation”; they are active, transformative, and most definitely reconfigure the subject-object relationship. For all that, even if they are considered here as “agents,” they are not understood to operate on their own. Further, even if they are made to be processed by computers, the images are ultimately aimed at human eyes. First, when it comes to objects, it is important to note that the imaged features are relational and dynamic entities. The objects revealed by one method are not, strictly speaking, the same as the objects revealed by another method. Each method establishes, so to speak, a “working object” (Daston and Galison 2007), natural objects being too plentiful and unrefined to usefully cooperate in systematic comparisons. Second, when it comes to subjects, imaging methods enact a productive displacement of the human sensorium, bringing information that is naturally beyond us into the purview of the human senses. By enhancing human sensitivity, they expand the human action range. In neuronavigation, agency is distributed as humans, apparatuses, and tissues form an integrated system.

Thus, if we replace the framework of representation with a dynamic and relational framework, the “operational” understanding of images can be further specified as “differential”: Images serve to discern differences; differential intervention is their mode of operation. However, if we truly endorse a dynamic and relational framework, we soon come to realize that in fact all images, even the pre-digital ones, are operative and differential tools.

References

Carusi, Annamaria (2012), “Making the Visual Visible in the Philosophy of Science,” Spontaneous Generations, 6/1: 106-114, DOI: 10.4245/sponge.v6i1.16141.

Daston, Lorraine and Peter Galison (2007), Objectivity (New York: Zone Books).

Enchev, Yavor (2009), “Neuronavigation: Genealogy, Reality, and Prospects,” Neurosurgical Focus, 27/3: 1-18.

Farocki, Harun (2004), “Phantom Images,” Public, 29: 12-24.

Hoel, Aud Sissel (2011a), “Differential Images,” in: What is an Image? eds. James Elkins and Maja Naef, 152-4 (University Park: Penn State University Press).

Hoel, Aud Sissel (2011b), “Thinking ‘Difference’ Differently: Cassirer versus Derrida on Symbolic Mediation,” Synthese, 179/1: 75-91.

Hoelzl, Ingrid and Remi Marie (forthcoming), “Google Street View: Navigating the Operative Image,” Visual Studies.

Kogge, Werner (2004), “Lev Manovich: Society of the Screen,” in: Medientheorien: Eine philosophische Einführung, eds. David Lauer and Alice Lagaay, 297-315 (Frankfurt: Campus Verlag).

Krämer, Sybille (2009), “Operative Bildlichkeit: Von der ‘Grammatologie’ zu einer ‘Diagrammatologie’? Reflexionen über erkennendes ‘Sehen,’” in: Logik des Bildlichen: Zur Kritik der ikonischen Vernunft, eds. Martina Hessler and Dieter Mersch, 94-123 (Bielefeld: Transcript).  

Lindseth, Frank, Thomas Langø, Tormod Selbekk, Rune Hansen, Ingerid Reinertsen, Christian Askeland, Ole Solheim, Geirmund Unsgård, Ronald Mårvik, and Toril A. Nagelhus Hernes (2013), “Ultrasound-Based Guidance and Therapy,” in: Advancements and Breakthroughs in Ultrasound Imaging, ed. Gunti Gunarathne (InTech), DOI: 10.5772/55884.

Uricchio, William (2011), “The Algorithmic Turn: Photosynth, Augmented Reality and the Changing Implications of the Image,” Visual Studies, 26/1: 25-34.

Verhoeff, Nanna (2012), Mobile Screens: The Visual Regime of Navigation (Amsterdam: Amsterdam University Press).
