Unexplained Mysteries
Science & Technology

Deepfake AI photos can now seem more real than genuine images

January 27, 2023 · 11 comments

Which of these two images is fake? The answer: both of them are. Image Credit: NVIDIA
Psychologist Manos Tsakiris takes a look at the current state of deepfake photos and the problems they may cause online.
Even if you think you are good at analysing faces, research shows many people cannot reliably distinguish between photos of real faces and images that have been computer-generated. This is particularly problematic now that computer systems can create realistic-looking photos of people who don't exist.

In one recent example, a fake LinkedIn profile with a computer-generated profile picture made the news after it successfully connected with US officials and other influential individuals on the networking platform. Counter-intelligence experts even say that spies routinely create phantom profiles with such pictures to home in on foreign targets over social media.

These deepfakes are becoming widespread in everyday culture, which means people should be more aware of how they're being used in marketing, advertising and social media. The images are also being used for malicious purposes, such as political propaganda, espionage and information warfare.

Making them involves something called a deep neural network, a computer system that mimics the way the brain learns. This is "trained" by exposing it to increasingly large data sets of real faces.

In fact, two deep neural networks are set against each other, competing to produce the most realistic images. As a result, the end products are dubbed GAN images, where GAN stands for generative adversarial network. The process generates novel images that are statistically indistinguishable from the training images.
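The adversarial setup described above can be sketched in miniature. The following toy example is a minimal illustration of the idea, not the actual models used to generate face images (real GANs use deep convolutional networks trained on image data): a one-parameter logistic discriminator is pushed to score real samples high and fakes low, while an affine generator is simultaneously pushed to raise the discriminator's score of its fakes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: a 1-D Gaussian standing in for real face images.
def real_batch(n):
    return rng.normal(loc=4.0, scale=0.5, size=n)

# Generator: maps random noise z to a sample, parameterised by (w, b).
def generate(z, w, b):
    return w * z + b

# Discriminator: logistic score = probability the input is "real",
# parameterised by (a, c). Clipping keeps exp() numerically safe.
def discriminate(x, a, c):
    s = np.clip(a * x + c, -60.0, 60.0)
    return 1.0 / (1.0 + np.exp(-s))

# One adversarial round: each network takes a gradient step on its own
# objective -- the two objectives compete, as in a GAN.
def adversarial_step(w, b, a, c, n=64, lr=0.05):
    z = rng.normal(size=n)
    fake, real = generate(z, w, b), real_batch(n)
    # Discriminator: ascend E[log D(real)] + E[log(1 - D(fake))]
    dr, df = discriminate(real, a, c), discriminate(fake, a, c)
    a += lr * np.mean((1 - dr) * real - df * fake)
    c += lr * np.mean((1 - dr) - df)
    # Generator: ascend E[log D(fake)] against the updated discriminator
    df = discriminate(fake, a, c)
    w += lr * np.mean((1 - df) * a * z)
    b += lr * np.mean((1 - df) * a)
    return w, b, a, c

w, b, a, c = 1.0, 0.0, 1.0, 0.0
for _ in range(500):
    w, b, a, c = adversarial_step(w, b, a, c)
```

After training, the generator's output distribution has been nudged toward whatever the discriminator currently rewards; at scale, with deep networks on both sides, this tug-of-war is what yields images statistically indistinguishable from the training set.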

In our study published in iScience, we showed that a failure to distinguish these artificial faces from the real thing has implications for our online behaviour. Our research suggests the fake images may erode our trust in others and profoundly change the way we communicate online.

My colleagues and I found that people perceived GAN faces to be even more real-looking than genuine photos of actual people's faces. While it's not yet clear why this is, this finding does highlight recent advances in the technology used to generate artificial images.

And we also found an interesting link to attractiveness: faces that were rated as less attractive were also rated as more real. Less attractive faces might be considered more typical and the typical face may be used as a reference against which all faces are evaluated. Therefore, these GAN faces would look more real because they are more similar to mental templates that people have built from everyday life.

But seeing these artificial faces as authentic may also have consequences for the general levels of trust we extend to a circle of unfamiliar people — a concept known as "social trust".

We often read too much into the faces we see, and the first impressions we form guide our social interactions. In a second experiment that formed part of our latest study, we saw that people were more likely to trust information conveyed by faces they had previously judged to be real, even if they were artificially generated.

It is not surprising that people put more trust in faces they believe to be real. But we found that trust was eroded once people were informed about the potential presence of artificial faces in online interactions. They then showed lower levels of trust, overall — independently of whether the faces were real or not.

This outcome could be regarded as useful in some ways, because it made people more suspicious in an environment where fake users may operate. From another perspective, however, it may gradually erode the very nature of how we communicate.

In general, we tend to operate on a default assumption that other people are basically truthful and trustworthy. The growth in fake profiles and other artificial online content raises the question of how much their presence and our knowledge about them can alter this "truth default" state, eventually eroding social trust.

Changing our defaults

The transition to a world where what's real is indistinguishable from what's not could also shift the cultural landscape from being primarily truthful to being primarily artificial and deceptive.

If we are regularly questioning the truthfulness of what we experience online, it might require us to redeploy our mental effort from processing the messages themselves to processing the messenger's identity. In other words, the widespread use of highly realistic, yet artificial, online content could require us to think differently, in ways we hadn't expected to.

In psychology, we use a term called "reality monitoring" for how we correctly identify whether something is coming from the external world or from within our brains. The advance of technologies that can produce fake, yet highly realistic, faces, images and video calls means reality monitoring must be based on information other than our own judgments. It also calls for a broader discussion of whether humankind can still afford to default to truth.

It's crucial for people to be more critical when evaluating digital faces. This can include using reverse image searches to check whether photos are genuine, being wary of social media profiles with little personal information or a large number of followers, and being aware of the potential for deepfake technology to be used for nefarious purposes.

The next frontier for this area should be improved algorithms for detecting fake digital faces. These could then be embedded in social media platforms to help us distinguish the real from the fake when it comes to new connections' faces.

Manos Tsakiris, Professor of Psychology, Director of the Centre for the Politics of Feelings, Royal Holloway University of London

This article is republished from The Conversation under a Creative Commons license.

Read the original article.

Source: The Conversation | Comments (11)




Recent comments on this story
Comment icon #2 Posted by Chaldon 2 years ago
Faking anything is worthwhile only when there is an interested party and something to gain, so this crap is only for the politics (and these are the kind of humans I generally avoid even to think of). A normal person does not need fakes other than just for private fun, and this is the kind of fun which seems wonderful at first but quickly becomes boring. Fakes are boring, because we know they are fake. Nothing compares to a real thing.
Comment icon #3 Posted by Chaldon 2 years ago
Or may be wonderful implications, a new level of freedom of mind for the people. People will finally stop believing any kind of media propaganda and will think for themselves, they will know for sure that everything on the screen may be fake, so they will lose the interest, and so the politics will lose their power over their minds.
Comment icon #4 Posted by and-then 2 years ago
That's a pretty profound observation.  It sums up perfectly how I feel about the information overload and loss of trust in most areas of interaction with our government/media masters.  At some point, the only option is to disconnect and ignore the messaging.  The one characteristic I've come to see as a red flag for propaganda is the near verbatim consistency, across multiple legacy media outlets, of a single talking point.  
Comment icon #5 Posted by qxcontinuum 2 years ago
OK so conspirationists are gaining another point in my agenda whenever they claim that everything online received from officials is fake news.  I'm starting to admire many of them who have a proven track record of connecting the dots together and be able to discern what presented at face value versus what's the truth
Comment icon #6 Posted by spartan max2 2 years ago
Not having any objective way to know what is true is "a new level of freedom". Got it  Y'all are wild 
Comment icon #7 Posted by Skulduggery 2 years ago
DeepNude websites have been around for a little while... “It has often been said: ‘If you give them enough rope, they’ll hang themselves.’ But why should they? Maybe they’ll create an unbelievably elegant arrangement of rope; an impossibly monumental construction of coiled vision which could only be classified as rope sculpture, stretching a timeless strand of understanding between us and posterity.” ―Crosley Bendix (Don Joyce), from the liner notes to Escape From Noise by Negativland
Comment icon #8 Posted by simplybill 2 years ago
I suppose this will affect the modeling industry. Clothing manufacturers and ad agencies can hire one or two deep-fake software engineers and save the expense of contracting real humans.
Comment icon #9 Posted by spartan max2 2 years ago
Makes me curious if we could theoretically replace news reporters. I know in south Korea there are already some E-girl influencers that are not actually real people.
Comment icon #10 Posted by Freez1 2 years ago
And then you throw in a chat bot and here we are. One day soon you will find yourself reading in here things not even written by a human being but by someone’s demented computer chat bot. Good luck! 
Comment icon #11 Posted by TripGun 2 years ago
It doesn't need to have happened, only the belief is needed.

