I sketched out a rough wireframe before mocking it up in experience design. The format I had envisioned could be either a combination of physical and digital or a purely digital experience, creating a ‘Blade Runner-esque’ detective scenario.
The physical version could be a set of printed-out deepfakes mixed in with real photos, made to look like ‘Polaroids’. These could then be inserted into a machine which ‘reads and analyses’ the images and tells the operator which ones are fake and which are real.
I discussed the technical side of this concept with my tutor, focusing on the physical version. A section of the table could be cut out and replaced with a transparent acrylic surface, closed off except for a slit for the ‘Polaroids’ to be inserted. A camera placed underneath would read a QR code or barcode on the back of each ‘Polaroid’, which would correspond to its ‘digital’ counterpart. This would create the illusion that the images have been ‘digitally scanned’ and analysed, blending the physical object into the digital interaction.
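As a rough sketch of how that lookup could work behind the scenes: once the hidden camera decodes the code on the back of a card, the software only needs to map that code to a stored record for its digital counterpart. The card codes, record fields, and `analyse_card` function below are all hypothetical placeholders, not part of the actual build.

```python
# Hypothetical sketch: mapping a code scanned from the back of a
# 'Polaroid' to its digital counterpart. The codes and records
# below are invented placeholders, not real project data.

CARD_DATABASE = {
    "CARD-001": {"label": "deepfake", "detail": "GAN-generated portrait"},
    "CARD-002": {"label": "real", "detail": "archival photograph"},
}

def analyse_card(scanned_code: str) -> str:
    """Return the 'analysis' message shown to the operator for a scanned card."""
    record = CARD_DATABASE.get(scanned_code)
    if record is None:
        # An unreadable or unknown code keeps the illusion intact
        # by asking the operator to reinsert the card.
        return "Card not recognised - please reinsert."
    return f"Analysis complete: this image is {record['label']} ({record['detail']})."
```

So a scan of `"CARD-001"` would return an ‘analysis’ naming it a deepfake, while an unknown code prompts a reinsert; the actual QR/barcode decoding would be handled by whatever camera library the final build used.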
I then sketched out the dual process showing how both the physical and digital versions would work, mapping out the functionalities of each and how the digital and physical interactions would combine.
The first iteration of the sketch was just a rough wireframe of how the interaction would be displayed and what I wanted each page to involve. However, while creating it I noticed that I had accidentally informed the user upfront that these were deepfakes, when I actually wanted to leave them in the dark: only after the images were ‘analysed’ would they be told which were deepfakes and which were real people, and then the task would begin.