Computer Vision Art Gallery

ICCV 2019, Seoul, Korea

Marta Revuelta

AI Facial Profiling, Levels of Paranoia (2018)


In this digital age, biometric surveillance systems that incorporate artificial intelligence are becoming more common. AI companies claim that facial recognition technology combined with machine learning can analyze the physical characteristics of a person’s face and thus discern subtle patterns that mark “suspect” personality types.

Taking the world of firearms as a starting point, this project explores the disturbing uses of automated binary classification of humans driven by AI systems. It proposes a “physiognomic profiling machine”: a computer vision and pattern recognition system that detects an individual’s ability to handle firearms and predicts their potential dangerousness from a biometric analysis of their face.

Between fiction and reality, the installation offers an experience inspired by the protocols of security infrastructures and takes the individual as the starting point for a critical reflection on algorithmic biases; a narrative that questions the trust placed in intelligent artefacts and the legitimacy of empowering them to make decisions.


Artwork technical description

An interactive mechanical device composed of a 3D-printed resin gun camera; a profiling machine made up of several elements (a large block, a conveyor belt, two ink pads, and a blower; metal and wood; 175 × 67.5 × 160 cm); and two trolleys with two storage boxes (metal, plastic, and plexiglass; 30 × 20 × 100 cm each). Total volume: 210 × 100 × 200 cm.

The software includes the scraping tool ScrapeBox and several custom programs, all written in Python: face detection built on the VGGFace convolutional neural network, the Face Recognition library, and OpenCV; image-stream analysis; PDF generation and printing; and a DMX controller for the electro-pneumatic hardware. It also includes a custom image-classification convolutional neural network (CNN) and a video-annotation tool that performs label detection with the Cloud Video Intelligence API, running on Compute Engine virtual machines.
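As a rough illustration of this kind of pipeline (a minimal sketch, not the artist’s actual code), the following Python fragment detects faces in a video stream with the Face Recognition library and OpenCV and passes each crop to a binary-classification CNN. The model file profiler_cnn.h5, the 224 × 224 input size, and the ABLE/UNABLE labels are illustrative assumptions.

```python
import cv2
import face_recognition
import numpy as np
from tensorflow.keras.models import load_model

# Hypothetical pre-trained binary classifier standing in for the
# installation's custom CNN; file name and input size are assumptions.
model = load_model("profiler_cnn.h5")

cap = cv2.VideoCapture(0)  # a webcam stands in for the gun camera's stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # HOG-based face detector from the face_recognition library
    for top, right, bottom, left in face_recognition.face_locations(rgb):
        face = cv2.resize(rgb[top:bottom, left:right], (224, 224))
        batch = np.expand_dims(face.astype("float32") / 255.0, axis=0)
        score = float(model.predict(batch, verbose=0)[0][0])
        verdict = "ABLE" if score > 0.5 else "UNABLE"  # the binary verdict
        print(f"face at ({left},{top}): {verdict} ({score:.2f})")
cap.release()
```

The video-annotation component named above can likewise be sketched with the Cloud Video Intelligence API’s documented label-detection pattern; the bucket URI below is a placeholder.

```python
from google.cloud import videointelligence

# Label detection on a stored video, following Google's standard usage.
client = videointelligence.VideoIntelligenceServiceClient()
operation = client.annotate_video(
    request={
        "input_uri": "gs://example-bucket/profiling-performance.mp4",
        "features": [videointelligence.Feature.LABEL_DETECTION],
    }
)
result = operation.result(timeout=300)
for label in result.annotation_results[0].segment_label_annotations:
    print(label.entity.description)
```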

The historical and ethical background

The practice of using people’s physical appearance to deduce their personality is called physiognomy. Considered a pseudoscience, its ambition to acquire scientific credibility peaked during the nineteenth century. From then on, the popular belief that typologies of people can be identified by facial features and body measurements repeatedly provided a basis for discriminating against entire populations, justifying slavery and enabling genocide well into the mid-twentieth century. Put into practice, physiognomy becomes a “scientific” form of racism.

We find the same dangerous assumptions today in modern versions of physiognomy. In 2016, two researchers from Shanghai Jiao Tong University claimed in a paper to have trained an AI to detect a person’s criminal potential based only on a photo of their face. The year after, researchers at Stanford University published another paper in which they said they had trained an AI to detect people’s sexual orientation based solely on a photo of their face.

Though the method differs, their axioms are similar to the presumptions of the physiognomists of old: profiling people and revealing their personality and behavior based on their faces. The crucial difference from the old techniques of physiognomy is the introduction of machine learning algorithms. Computerization tends to legitimize facial profiling through the well-known phenomenon of automation bias: the tendency of humans to accept the decisions of automated systems, assuming they are more objective and neutral even when they prove to be erroneous.

In the age of ubiquitous cameras and Big Data, and given society’s increasing reliance on machine learning to automate cognitive tasks, machine-learned physiognomy can be applied on an unprecedented scale, and the impact these systems can have on our lives should not be underestimated.

The purpose of this installation is to warn that rapid developments in artificial intelligence and machine learning have allowed “scientific” racism to enter a new era, one in which the models used and internalized by machines incorporate the biases present in human behavior. Whether intentional or not, this legitimization of human prejudices by computer algorithms can unduly endorse them.

The art installation proposes a staging inspired by security infrastructures and takes the individual as the starting point for a critical reflection on algorithmic biases and the binary labelling of humans; a narrative that explores the ethical limits of these AI-driven artefacts, centred on their practical applications and impacts.

Exhibitions and profiling performances

2019 / Mirage Festival #7 — Turbulences | Mirage Festival ; Art, Innovation et Cultures Numériques, Lyon, FR

2018 / Symposium Digital Risks in Situations of Armed Conflict | International Committee of the Red Cross | CodeNode, London, UK

Awards

The project received the Prix Art Humanité 2018 from the Geneva Red Cross, the International Committee of the Red Cross, and HEAD — Genève, as well as the Bourse Déliées grant from the Fonds Cantonal d’Art Contemporain (FCAC), Geneva.