It sounds like a scenario straight out of a Ridley Scott film: technology that not only sounds more “real” than actual humans, but looks more convincing, too. Yet it seems that moment has already arrived.
A new study has found people are more likely to think pictures of white faces generated by AI are human than photographs of real individuals.
“Remarkably, white AI faces can convincingly pass as more real than human faces – and people do not realise they are being fooled,” the researchers report.
The team, which includes researchers from Australia, the UK and the Netherlands, said their findings had important real-world implications, including for identity theft, with the possibility that people could end up being duped by digital impostors.
However, the team said the results did not hold for images of people of colour, possibly because the algorithm used to generate AI faces was largely trained on images of white people.
Dr Zak Witkower, a co-author of the research from the University of Amsterdam, said that could have ramifications for areas ranging from online therapy to robots.
“It’s going to produce more realistic situations for white faces than other race faces,” he said.
The team caution such a situation could also mean perceptions of race end up being confounded with perceptions of being "human". They add that it could perpetuate social biases, including in the search for missing children, given that such efforts can depend on AI-generated faces.
Writing in the journal Psychological Science, the team describe how they carried out two experiments. In one, white adults were each shown half of a selection of 100 AI white faces and 100 human white faces. The team chose this approach to avoid potential biases in how own-race faces are recognised compared with other-race faces.
The participants were asked to select whether each face was AI-generated or real, and how confident they were on a 100-point scale.
The results from 124 participants reveal that 66% of AI images were rated as human compared with 51% of real images.
The team said re-analysis of data from a previous study had found people were more likely to rate white AI faces as human than real white faces. However, this was not the case for people of colour, where about 51% of both AI and real faces were judged as human. The team added that they did not find the results were affected by the participants’ race.
In a second experiment, participants were asked to rate AI and human faces on 14 attributes, such as age and symmetry, without being told some images were AI-generated.
The team’s analysis of results from 610 participants suggested the main factors that led people to erroneously believe AI faces were human included greater proportionality in the face, greater familiarity and less memorability.
Somewhat ironically, while humans seem unable to tell apart real faces from those generated by AI, the team developed a machine-learning system that can do so with 94% accuracy.
Dr Clare Sutherland, co-author of the study from the University of Aberdeen, said the study highlighted the importance of tackling biases in AI.
“As the world changes extremely rapidly with the introduction of AI, it’s critical that we make sure that no one is left behind or disadvantaged in any situation – whether due to ethnicity, gender, age, or any other protected characteristic,” she said.