When you press the shutter of Camera Reversa, you effectively take two images: one that you saw, and one “seen” through content-based image retrieval (CBIR), also known as reverse image search, which uses various algorithms to match the input image with visually or semantically similar output images drawn from Google’s image database.
Content-based image retrieval algorithms currently operate across two levels: low-level visual analysis and high-level semantic inference. When an image is used as a search query, an image signature is extracted and then compared against the signatures of a database of pre-analyzed images. The image signature can range from a histogram, which maps out primitive visual characteristics such as color, texture, and shape, to concept probability models used to infer complex semantic relationships.
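The simplest signature mentioned above, a color histogram, can be sketched in a few lines. This is an illustrative toy, not Camera Reversa's actual retrieval code: the function names, bin count, and synthetic "database" images below are my own assumptions.

```python
import numpy as np

def color_histogram_signature(image, bins=8):
    """Low-level CBIR signature: a normalized per-channel color histogram.

    image: H x W x 3 array of uint8 RGB values.
    """
    counts = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
              for c in range(3)]
    hist = np.concatenate(counts).astype(float)
    return hist / hist.sum()  # normalize so images of any size are comparable

def histogram_distance(sig_a, sig_b):
    """L1 distance between two signatures; smaller means more visually similar."""
    return np.abs(sig_a - sig_b).sum()

# Toy retrieval: rank two "database" images against a query image.
rng = np.random.default_rng(0)
query = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)   # colorful noise
noisy = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)   # similar noise
reddish = np.zeros((64, 64, 3), dtype=np.uint8)
reddish[..., 0] = 200                                       # flat red image

q = color_histogram_signature(query)
d_noisy = histogram_distance(q, color_histogram_signature(noisy))
d_red = histogram_distance(q, color_histogram_signature(reddish))
print(d_noisy < d_red)  # the noise image is retrieved as the closer match
```

A signature this crude matches only on color distribution, which is exactly how the "semantic gap" discussed below arises: two images can share a palette while depicting entirely unrelated subjects.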
Depending on the sophistication of the CBIR algorithms in use, a “semantic gap” may emerge: a mismatch in similarity judgments between user and machine. The semantic gap can be a source of frustration, as the human operator may be unsatisfied or unimpressed with the retrieved result, or it may be the source of a poetic juxtaposition that neither human nor algorithm could have generated alone.
This generative photographic process questions traditional notions of intention, authorship, and subject. What are the implications of engaging in a photographic relationship with the world, not to render your embodied environment, but rather to operationalize the visual and semantic qualities of that environment as search queries to explore a given image database? Aesthetic characteristics such as composition, lighting, focal length, and depth of field all have stakes within the photographic act – not in a singular direction, but within overlapping fields of weighted probabilities, as visual attributes compete, intersect, and intermingle with semantic objects.
Furthermore, is it productive to trace authorship to the images generated from such a process? How would creative agency be distributed between the original image maker, the image uploader, the algorithms implicated in the web crawling, indexing, and CBIR processes, the numerous engineers of those algorithms, and the individual photographing his/her embodied environment to provide the search query?
Lastly, who/what is the subject of this photographic discourse? Is it the content of the retrieved images, as reflections of the social and cultural processes that constitute the underlying image database? Are we the subject as image producers and disseminators – a type of collective autoethnography? Is it the various computer algorithms, as we interrogate their evolving retrieval capacities? Or is it our consciousness, as we employ algorithmic counterpoints with which to probe the structures and limitations of our perceptual vantage points?
The interactive installation consists of three parts: (1) a custom-built Wi-Fi camera constructed with a Raspberry Pi computer, camera module, Wi-Fi adaptor, HDMI screen, and plastic project box; (2) a Mac mini computer running a Python script that facilitates content-based image retrieval via Google's image search and database; and (3) a pair of monitors which display the optical input image from the camera (left monitor) alongside the algorithmically retrieved output image (right monitor).
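The control flow of those three parts can be sketched as follows. The function bodies are hypothetical stand-ins, not the installation's real code: the actual camera capture runs on the Raspberry Pi over Wi-Fi, and the actual retrieval happens through Google's image search.

```python
def capture_input_image():
    # Stand-in for the Wi-Fi camera: the Raspberry Pi's camera module
    # captures a frame and sends it to the Mac mini over the network.
    return "input.jpg"

def reverse_image_search(image_path):
    # Stand-in for the Python script's content-based retrieval against
    # Google's image database; returns the best-matching image.
    return "https://example.com/retrieved.jpg"

def shutter_pressed():
    input_image = capture_input_image()               # left monitor
    output_image = reverse_image_search(input_image)  # right monitor
    return input_image, output_image

print(shutter_pressed())
```

The design keeps the camera thin: the Pi only captures and transmits, while the Mac mini handles retrieval and drives both displays.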
Below are examples of image pairings from Camera Reversa: input image (left), output image (right).