Seen As Machine
Seen as Machine is an interactive physical sculpture made from a modified Linhof 4x5 camera with a digital back. Unlike a typical digital back, which converts the ground glass of an analog camera into a digital viewing and capture surface, this device flips the apparatus: the camera sensor points outward and the screen faces inward. This simple flip turns the camera into a theater and requires the viewer to peer through the front lens element to see inside the sculpture. The digital back is built around a Raspberry Pi, an ARM-based computer running real-time computations with open-source computer-vision libraries and machine-learning object detection (in this case, Google’s MediaPipe). Peering inside the camera reveals a theater of complex digital overlays that track faces, an aesthetic far removed from the traditionally atomic, analog functionality of the Linhof.
The flip of the digital back represents a reversal of subject: to use the camera at all, the photographer must stand in front of it and become its subject. This interactive positioning enacts, on a symbolic scale, what Benjamin Bratton has coined and defined as a Copernican Trauma.
A Copernican Trauma is a de-centering of an otherwise human-centered activity. The term comes from the scientific shift that revealed that the Sun does not orbit the Earth. Photography is an example of a historical practice that was uniquely human in the sense that the photographer’s subjectivity and bias were fundamentally imprinted into the process of image-making. The traditional camera projected one’s ethical values more than it objectively captured a moment in time. One could say that photography, in its historical practice, was a deeply human-centered activity.
Cameras today are connected to each other in a massive digital network. Cameras in this network communicate with one another via metadata, proxies, and thumbnails, and much of this traffic is generated by automated systems such as surveillance networks, satellite imagery, and machine-learning databases. This communication occurs automatically, independent of constant human oversight, save for the manual maintenance required at increasingly sparse intervals. These systems, though built through human labour, are so large that they take on distinct, emergent machinic qualities that are unlike anything human. This ‘intelligence’ (whatever prefix moniker is attached) is constantly different, unexpected, and new relative to the intelligence of a human or a human group.
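The machine-to-machine exchange of metadata can be glimpsed in miniature: one program stamps an image with machine-readable tags, and another reads them back with no human in the loop. A small sketch, assuming the Pillow imaging library; the tag values are invented for illustration.

```python
# One "camera" writes EXIF metadata into an image; another "machine" reads it
# back without ever displaying the picture. Values are illustrative only.
import io
from PIL import Image

# Camera side: create an image and stamp it with metadata.
img = Image.new("RGB", (64, 48), color=(30, 30, 30))
exif = img.getexif()
exif[271] = "ExampleCam"      # EXIF tag 271: Make
exif[272] = "Demo Model"      # EXIF tag 272: Model

buf = io.BytesIO()
img.save(buf, format="JPEG", exif=exif.tobytes())

# Network side: another process reads only the metadata.
buf.seek(0)
received = Image.open(buf).getexif()
print(received[271], received[272])  # ExampleCam Demo Model
```

At network scale, it is exchanges like this, multiplied across billions of images, that let cameras “speak” to each other without us.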
More and more of this machine-to-machine intelligence occurs invisibly, in between and all around us. Images made on a smartphone are processed locally to sharpen detail, smooth the skin on faces, detect those faces, and attach semantic labels to pictures of your pets, then uploaded to remote server “clouds” to be processed and analyzed further. This is the phenomenon of automated meaning-making. Looking inside this sculpture, I hope to make a small moment where the idea of the camera is questioned and the notion of a digital ‘Copernican Trauma’ can be symbolically and delicately experienced.