What would the ‘Voight-Kampff’ Machine of Today be like?

Ridley Scott’s Blade Runner is set in a dystopian alternative 2019. What would reality’s version of its technology actually look like in the real 2019?

In the movie’s narrative, the ‘Voight-Kampff’ Machine assisted in identifying ‘Replicants’, the synthetic androids of that world. In the words of its designer Syd Mead, the machine was made to look like it was breathing “because it would ‘inhale’ localised air between the interviewer and interviewee and process that and pick up acidic traces and so forth to try and detect the Replicant – mimicking that of animals as they can ‘smell if you’re afraid’”.

The machine, paired with a set of questions, formed the ‘Voight-Kampff Test’. It measured bodily functions such as respiration, heart rate, blushing, and eye movement in response to emotionally provocative questions.

Blade Runner Wiki

Description from the original 1982 Blade Runner presskit:

A very advanced form of lie detector that measures contractions of the iris muscle and the presence of invisible airborne particles emitted from the body. The bellows were designed for the latter function and give the machine the menacing air of a sinister insect. The VK is used primarily by Blade Runners to determine if a suspect is truly human by measuring the degree of his empathic response through carefully worded questions and statements.

The Voight-Kampff machine is perhaps analogous to (and may have been partly inspired by) Alan Turing’s work which propounded an artificial intelligence test — to see if a computer could convince a human (by answering set questions, etc.) that it was another human. The phrase Turing test was popularised by science fiction but was not used until years after Turing’s death.

If the machine’s job was ‘determining if something is truly human’, what would today’s equivalent tool be, what would it be used to identify, and what would the ‘subject’ (in place of a synthetic being/AI) be?

What is today’s ‘subject’?

Deepfake Technology

The things we have to distinguish as human or not range from primitive physical disguises to digital fakes, the latter more commonly known as ‘Deepfakes’. Using ‘artificial intelligence’, a person can combine and superimpose existing images and videos onto source material using a machine learning technique called a ‘generative adversarial network’, or GAN for short. The result is video that can depict a person or persons saying or doing things that never actually occurred.

An example is the Deepfake public service announcement that Jordan Peele and Jonah Peretti created in April 2018, using Barack Obama’s likeness to warn about the danger of Deepfakes, shown below.

Artificial Intelligence or AI

Artificial Intelligence has been broadly defined as the theory and development of computer systems able to perform tasks normally requiring human intelligence. Examples include visual perception, speech recognition, decision-making, and translation between languages.

But arguably that level of intelligence is still the realm of ‘Sci-Fi’ and fiction. It is also known as Artificial General Intelligence (AGI): the intelligence of a machine that could successfully perform any intellectual task a human being can. Reality’s version could be referred to as ‘weak AI’ or ‘narrow AI’: the use of software to study or accomplish specific problem-solving or reasoning tasks. Weak AI, in contrast to strong AI, does not attempt to perform the full range of human cognitive abilities.

GANs and Deepfakes

Deepfakes are built with machine learning, which is itself a subset of AI. It should be noted, however, that while all machine learning counts as AI, not all AI counts as machine learning.

Deepfakes are created using machine learning, in particular GANs.

The basic components of every GAN are two neural networks – a generator that synthesises new samples from scratch, and a discriminator that takes samples from both the training data and the generator’s output and predicts if they are “real” or “fake”.
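To make those two roles concrete, here is a minimal sketch of a generator and discriminator in PyTorch. This is my own illustrative example, not code from any actual Deepfake system: the framework choice, layer sizes, and the 64-dimensional noise vector are all arbitrary assumptions.

```python
import torch
import torch.nn as nn

NOISE_DIM = 64        # size of the random input vector (arbitrary choice)
IMG_PIXELS = 28 * 28  # a small greyscale image, flattened (illustrative only)

# The generator maps random noise to a synthetic "image".
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, IMG_PIXELS),
    nn.Tanh(),  # outputs scaled to [-1, 1], matching normalised real images
)

# The discriminator maps an image to a single "real vs fake" score.
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # raw logit; >0 leans "real", <0 leans "fake"
)
```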

GANs

The generator input is a random vector (noise) and therefore its initial output is also noise. Over time, as it receives feedback from the discriminator, it learns to synthesise more “realistic” images. The discriminator also improves over time by comparing generated samples with real samples, making it harder for the generator to deceive it.
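As a rough sketch of that feedback loop, again assuming PyTorch and reusing the toy generator and discriminator defined above, one training step might look like this:

```python
import torch
import torch.nn.functional as F

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """One adversarial update; real_images is a (batch, IMG_PIXELS) tensor."""
    batch = real_images.size(0)
    noise = torch.randn(batch, NOISE_DIM)
    fake_images = generator(noise)

    # 1. Discriminator: label real samples 1 and generated samples 0,
    #    improving its ability to tell the two apart.
    d_loss = (
        F.binary_cross_entropy_with_logits(
            discriminator(real_images), torch.ones(batch, 1))
        + F.binary_cross_entropy_with_logits(
            discriminator(fake_images.detach()), torch.zeros(batch, 1))
    )
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Generator: push the discriminator towards answering "real" (1)
    #    for its fakes -- this is the feedback that sharpens its images.
    g_loss = F.binary_cross_entropy_with_logits(
        discriminator(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Real Deepfake pipelines use convolutional networks and many more training tricks, but the push-and-pull between the two losses is the same: each network’s improvement forces the other to improve.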

[Animation: ProGAN.gif]

What Now?

Although we don’t have synthetic beings able to walk, talk, and mimic our actions, digital technology is getting close to creating digital versions of ourselves, mimicking our voices and even generating fairly convincing images of people who don’t exist.

https://thispersondoesnotexist.com/
