As an (odd) art historian, I am fond of computers, algorithms, computational methodologies, and visualization (which I perceive as a very useful tool). I do not think I belong to the majority when I say “instead of looking at each painting of an artist, or of a plethora of artists, and comparing their work, I’d rather look at feature sets derived from their works, visualized as a set of ‘fingerprints’, and try to get an overview of their relations via these fingerprints”.
Let me briefly explain what I mean by a ‘feature set’ and ‘fingerprints’. Feature extraction is a commonly used method in image processing, in which an image is represented by a set of features derived from it. These features can be very basic, such as the hue, brightness, or saturation of an image, or more complex, like the number of edges and corners in the image, or the presence of high-frequency or low-frequency information. They can also be based on the human visual system, such as the distribution of saliency over an image, obtained by a computational attention system.
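To make this concrete, here is a minimal sketch of basic feature extraction on a grayscale image array. The three features (mean brightness, contrast, and a crude edge measure) are illustrative choices of my own, not the feature set of any particular study:

```python
import numpy as np

def extract_features(img):
    """Return a small, illustrative feature vector for a grayscale image:
    mean brightness, contrast (standard deviation), and a crude edge
    measure (average absolute difference between neighboring pixels)."""
    img = np.asarray(img, dtype=float)
    edges = (np.abs(np.diff(img, axis=0)).mean()
             + np.abs(np.diff(img, axis=1)).mean()) / 2.0
    return np.array([img.mean(), img.std(), edges])

# A flat grey canvas has zero contrast and zero edges;
# a checkerboard is all contrast and all edges.
flat = np.full((64, 64), 128.0)
checker = (np.indices((64, 64)).sum(axis=0) % 2) * 255.0
f_flat = extract_features(flat)        # mean 128.0, contrast 0.0, edges 0.0
f_checker = extract_features(checker)  # mean 127.5, contrast 127.5, edges 255.0
```

Stacking such vectors for every artwork in a gallery yields the raw material for the fingerprints discussed below.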
Feature extraction already belongs to the jargon of computer science, but ‘fingerprint’ is a term I choose to use in this context to denote a specific abstraction. Whenever I apply image processing to a given artist’s oeuvre, I look at the resulting visualizations (and there are usually many different ones, depending on which feature sets you use), and think of each visualization as a distinct fingerprint of that artist.
Here, for example, is a figure that compares sampled artworks from three different artists (all members of the social networking site deviantArt, which is dedicated to sharing user-generated art). In this figure, each line represents the feature values of one artwork, and each color (green, blue, and red) represents an artist; the set of same-colored lines, taken together, is the fingerprint of that artist.
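The figure overlays the individual per-artwork lines; one simple way to summarize such a gallery of feature vectors into a single numeric fingerprint (a sketch of the idea, assuming the per-feature mean as the summary statistic) is:

```python
import numpy as np

def artist_fingerprint(gallery_features):
    """Collapse per-artwork feature vectors (one row per artwork) into
    one summary 'fingerprint' for the artist: here, the per-feature
    mean across the gallery. Illustrative only; the figure above keeps
    all individual lines visible rather than averaging them."""
    return np.asarray(gallery_features, dtype=float).mean(axis=0)

# Three artworks by one (hypothetical) artist, each described by
# four feature values.
gallery = [[0.2, 0.8, 0.5, 0.1],
           [0.3, 0.7, 0.4, 0.2],
           [0.1, 0.9, 0.6, 0.0]]
fp = artist_fingerprint(gallery)  # ≈ [0.2, 0.8, 0.5, 0.1]
```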
Instead of trying to explain to an art historian what feature extraction does and what each line in this figure means, you can simply show them a visualization where these features are explained by the visualization itself. Trained to spot the visual language of artworks (composition, color, balance, etc.), an art historian will understand such a visualization much better than a scatterplot. Compare, for example, these two figures (1) of another deviantArt member:
On the left-hand side, we have all artworks of a deviantArt member in a scatterplot visualization. Each dot represents a painting; the points are arranged according to the mean and standard deviation of the pixel values of each painting. On the right-hand side, we have the same scatterplot, but this time each painting is represented by a thumbnail of the painting itself. The first plot is readable only by the expert, whereas the latter translates what the mean and standard deviation of pixels mean for a layperson. This approach belongs to what Lev Manovich calls ‘cultural analytics’. He proposes a quick solution to the challenge of “how to read a graph”. The answer is simple: replace the points in a graph with the ‘real thing,’ i.e. the thumbnails of the pictures.
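The coordinates behind such a plot are simple to compute; for grayscale images stored as NumPy arrays, the mapping from painting to scatterplot position is just:

```python
import numpy as np

def scatter_coordinates(paintings):
    """Map each painting (a 2-D grayscale array) to the (mean, std)
    point that positions it in the scatterplot described above."""
    return np.array([[np.asarray(p, dtype=float).mean(),
                      np.asarray(p, dtype=float).std()]
                     for p in paintings])

# Two toy 'paintings': a uniform mid-grey one and a high-contrast one.
grey = np.full((32, 32), 100.0)
striped = np.tile([0.0, 200.0], (32, 16))  # alternating dark/light stripes
coords = scatter_coordinates([grey, striped])
# grey → (100, 0); striped → (100, 100): same mean, very different spread
```

Replacing each dot with the painting’s thumbnail is then purely a rendering step (in matplotlib, for instance, via `OffsetImage` and `AnnotationBbox`).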
We have developed a similar tool to study a collection of artworks from deviantArt (2). The main difference between this tool and the toolset Lev Manovich uses is its built-in recommendation component. Given two artists to compare, the tool uses machine learning methods to select the features that maximally separate the galleries of these artists. In a sense, this tool gives the art historian a direction by indicating what kinds of similarities two paintings or artists share in computational terms. When combined with social network information that exposes links between artists, it becomes possible to test (under some assumptions) whether an algorithm works well in spotting stylistic similarities and differences between artworks.
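A minimal version of such feature selection can be sketched with a per-feature Fisher score: rank each feature by how far apart the two artists’ galleries sit, relative to their spread. This is a generic stand-in, not necessarily the method the tool itself uses:

```python
import numpy as np

def fisher_scores(gallery_a, gallery_b):
    """Score each feature by how well it separates two galleries
    (rows = artworks, columns = features). Fisher criterion:
    (mean_a - mean_b)^2 / (var_a + var_b); higher = more discriminative."""
    A = np.asarray(gallery_a, dtype=float)
    B = np.asarray(gallery_b, dtype=float)
    num = (A.mean(axis=0) - B.mean(axis=0)) ** 2
    den = A.var(axis=0) + B.var(axis=0) + 1e-12  # avoid division by zero
    return num / den

rng = np.random.default_rng(0)
# Synthetic galleries: feature 0 genuinely differs between the two
# artists, feature 1 is pure noise.
a = np.column_stack([rng.normal(0.0, 1.0, 200), rng.normal(0.0, 1.0, 200)])
b = np.column_stack([rng.normal(5.0, 1.0, 200), rng.normal(0.0, 1.0, 200)])
scores = fisher_scores(a, b)
best = int(np.argmax(scores))  # → 0, the genuinely discriminative feature
```

Visualizing the galleries along only the top-ranked features is what gives the art historian the promised “direction”.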
In a recent study with Lev Manovich, we looked at the ways image processing can be used directly to discover relations between artworks, and found that this is an incredibly challenging problem if tackled in a generic way (3). At the moment it is not possible to come up with an algorithm that can tell us whether two images share some common component, unless this component is made explicit. It makes sense to fill the gap by using social network information.
If we can develop algorithms that successfully compare two artists or two artworks, then we can apply them to a huge dataset of (historical) artworks for which there is little historical data. For many artworks, important details are unknown; often we do not even know the artist’s name, or the place and time of production. Such artworks could be analyzed by these algorithms and compared to classical examples of their time, and relations that were not thought of before might come to light.
At the moment, we are far from that point: not only computationally, but also mentally. And I think the latter is more problematic than the former:
One question I never get when presenting the deviantArt project to a humanities audience is “what do the x/y axes stand for, what are your parameters?” For scientists, on the other hand, this is almost always the first question, and for them it is crucial to interpreting the graph. This clearly shows that humanities scholars lack the skills to properly interpret such visualizations: they do not understand what it means to project data onto different subspaces spanned by different features, and they do not think critically when it comes to visualizations.
So, my two cents in the debate on how humanities scholars can benefit from data/research visualization is very humble: either the scholar has to be visually literate, or the tools developed for (digital) humanities scholars should compensate for this lack of visual literacy by making the representation incredibly intuitive.
1 These visualizations were prepared with VisualCulture, a program developed by Lev Manovich's Software Lab. This program is no longer supported or distributed. Please check this page to see the latest software used by the group.
2 Buter, B., N. Dijkshoorn, D. Modolo, Q. Nguyen, S. van Noort, B. van de Poel, A.A. Akdağ Salah, A.A. Salah, "Explorative visualization and analysis of a social network for arts: The case of deviantArt," Journal of Convergence, vol. 2, no. 2, pp. 87–94, 2011.
3 Akdag Salah, A.A., L. Manovich, A.A. Salah, and J. Chow, "Combining Cultural Analytics and Networks Analysis: Studying a Social Network Site with User-Generated Content," Journal of Broadcasting and Electronic Media, vol. 57, no. 3, pp. 409–426, 2013. The high-resolution images can be downloaded here: 1, 2, 3.