Anyone with an iPhone can now make deepfakes. We aren't ready for what happens next. (Geoffrey Fowler/Washington Post)


I’ve made George Washington sing disco and Marilyn Monroe blow me a kiss. With just a photo and an iPhone app, I can create a video of any face saying, or singing, whatever I want.

And now so can you. The technology to create “deepfakes” — videos of people doing things that never really happened — has arrived on smartphones. It’s simple, fun … and also troubling.

The past few months have brought advances in this controversial technology that I knew were coming, but am still shocked to see. A few years ago, deepfake videos — named after the “deep learning” artificial intelligence used to generate faces — required a Hollywood studio or at least a crazy powerful computer. Then around 2020 came apps, like one called Reface, that let you map your own face onto a clip of a celebrity.

Now with a single source photo and zero technical expertise, an iPhone app called Avatarify lets you actually control the face of another person like a puppet. Using your phone’s selfie camera, whatever you do with your own face happens on theirs. Avatarify doesn’t make videos as sophisticated as the pro fakes of Tom Cruise that have been circulating on the social network TikTok — but it has been downloaded more than 6 million times since February alone. (See for yourself in the video I made on my phone to accompany this column.)

Another app for iPhone and Android devices called Wombo turns a straight-on photo into a funny lip-sync music video. It generated 100 million clips just in its first two weeks.

And MyHeritage, a genealogy website, lets anyone use deepfake tech to bring old still photos to life. Upload a shot of a long-lost relative or friend, and it produces a remarkably convincing short video of them looking around and smiling. Even the little wrinkles around the eyes look real. They call it “Deep Nostalgia” and have reanimated more than 65 million photos of people in the past four weeks.

These deepfakes may not fool everyone, but it’s still a cultural tipping point we aren’t ready for. Forget laws to keep fakes from running amok; we hardly even have social norms for this stuff.

All three of the latest free services say they’re mostly being used for positive purposes: satire, entertainment and historical re-creations. The problem is, we already know there are plenty of bad uses for deepfakes, too.

“It’s all very cute when we do this with grandpa’s pictures,” says Michigan State University responsible-AI professor Anjana Susarla. “But you can take anyone’s picture from social media and make manipulated images of them. That’s what’s concerning.”

So I spoke to the people making deepfake apps and the ethics experts tracking their rise to see if we can figure out some rules for the road.

“You must make sure that the audience is aware this is synthetic media,” says Gil Perry, the CEO of D-ID, the tech company that powers MyHeritage’s deepfakes. “We have to set the guidelines, the frameworks and the policies for the world to know what is good and what is bad.”

The technology to digitally alter still images — Adobe’s Photoshop editing software — has been around for decades. But deepfake videos pose new problems, like being weaponized, particularly against women, to create humiliating, nonconsensual fake pornography.

In early March, a woman in Bucks County, Pa., was arrested on allegations she sent her daughter’s cheerleading coaches fake photos and video of her rivals to try to get them kicked off the squad. Police say she used deepfake tech to manipulate photos of three girls on the Victory Vipers squad to make them look like they were drinking, smoking and even nude.

“There’s potential harm to the viewer. There’s harm to the subject of the thing. And then there’s a broader harm to society in undermining trust,” says Deborah Johnson, emeritus professor of applied ethics at the University of Virginia.

Social networks say deepfakes haven’t been a major source of problematic content. We shouldn’t wait for them to become one.

It’s probably not realistic to think deepfake tech could be successfully banned. One 2019 effort in Congress to forbid some uses of the technology faltered.

But we can insist on some guardrails from these consumer apps and services, the app stores promoting them and the social networks making the videos popular. And we can start talking about when it is and isn’t okay to make deepfakes — including when that involves reanimating grandpa.

Installing guardrails

Avatarify’s creator, Ali Aliev, a former Samsung engineer in Moscow, told me he’s also concerned that deepfakes could be misused. But he doesn’t believe his current app will cause problems. “I think the technology is not that good at this point,” he told me.

That doesn’t put me at ease. “They will become that good,” says Mutale Nkonde, CEO of the nonprofit AI For the People and a fellow at Stanford University. The way AI systems learn from being trained on new images, she says, “it’s not going to take very long for those deepfakes to be really, really convincing.”

Avatarify’s terms of service say it can’t be used in hateful or obscene ways, but it doesn’t have any systems to check. Moreover, the app itself doesn’t limit what you can make people say or do. “We didn’t limit it because we are looking for use cases — and they are mainly for fun,” Aliev says. “If we are too preventive then we could miss something.”

Hany Farid, a computer science professor at the University of California at Berkeley, says he has heard that move-fast-and-break-things ethos before from companies like Facebook. “If your technology is going to lead to harm — and it’s reasonable to foresee that harm — I think you have to be held liable,” he says.

What guardrails might mitigate harm? Wombo’s CEO Ben-Zion Benkhin says deepfake app makers should be “very careful” about giving people the power to control what comes out of other people’s mouths. His app is limited to deepfake animations from a curated collection of music videos with head and lip movements recorded by actors. “You’re not able to pick something that’s super offensive or that could be misconstrued,” Benkhin says.

MyHeritage won’t let you add lip motion or voices to its videos at all — though it broke its own rule by using its tech to produce an advertisement featuring a fake Abraham Lincoln.

There are also privacy concerns about sharing faces with an app, a lesson we learned from 2019’s controversial FaceApp, a Russian service that needed access to your photos to use AI to make faces look old. Avatarify (also Russian) says it doesn’t ever receive your photos because it works entirely on the phone — but Wombo and MyHeritage do take your photos to process them in the cloud.

App stores that distribute this technology could be doing a lot more to set standards. Apple removed Avatarify from its China App Store, saying it violated unspecified Chinese law. But the app is available in the United States and elsewhere — and Apple says it doesn’t have specific rules for deepfake apps aside from general prohibitions on defamatory, discriminatory or mean-spirited content.

Labels or watermarks that make it clear when you’re looking at a deepfake could help, too. All three of these services include visible watermarks, though Avatarify removes its watermark with a $2.50-per-week premium subscription.

Even better would be hidden watermarks in video files that might be harder to remove, and could help identify fakes. All three creators say they think that’s a good idea — but need somebody to develop the standards.

Social networks, too, will play a key role in making sure deepfakes aren’t used for ill. Their policies generally treat deepfakes like other content that misinforms or could lead to people getting hurt: Facebook and Instagram’s policy is to remove “manipulated media,” though it has an exception for parodies. TikTok’s policy is to remove “digital forgeries” that mislead and cause harm to the subject of the video or society, such as inaccurate health information. YouTube’s “deceptive practices” policy prohibits technically manipulated content that misleads and may pose a serious risk.

But it’s not clear how good a job the social networks can do enforcing their policies when the volume of deepfakes skyrockets. What if, say, a student makes a mean joke deepfake of his math teacher — and then the principal doesn’t immediately understand it’s a fake? All the companies say they’ll continue to evaluate their approaches.

One idea: Social networks could bolster guardrails by making a practice out of automatically labeling deepfakes — a use for those hidden watermarks — even if it’s not immediately obvious they’re causing harm. Facebook and Google have been investing in technology to identify them.

“The burden here has to be on the companies and our government and our regulators,” Farid says.


Whatever steps the industry and government take, deepfakes are also where personal tech meets personal ethics.

You might not think twice about taking or posting a photo of someone else. But making a deepfake of them is different. You’re turning them into a puppet.

“Deepfakes play with identity and agency, because you can take over someone else — you can make them do something that they’ve never done before,” says Wombo’s Benkhin.

Nkonde, who has two teenagers, says families need to talk about norms around this sort of media. “I think our norm should be ask people if you have their permission,” she says.

But that might be easier said than done. Creating a video is a free-speech right. And getting permission isn’t even always practical: One major use of the latest apps is to surprise a friend.

Permission to create a deepfake is also not entirely the point. What matters most is how they’re shared.

“If someone in my family wants to take my childhood picture and make this video, then I would be comfortable with it in the context of a family event,” Susarla says. “But if that person is showing it outside an immediate family circle, that would make it a very uncomfortable proposition.”

The Internet is great at taking things out of context. Once a video is online, you can quickly lose control over how it might get interpreted or misused.

Then there’s a more existential question: How will deepfakes change us?

I discovered deepfake apps as a way to play with my nephew, livening up our Zoom chats by making him look like he’s doing goofy things.

But then I started to wonder: What am I teaching him? Perhaps it’s a useful life lesson to know that even videos can be manipulated — but he’s also going to need to learn how to figure out what he should trust.

Aliev, from Avatarify, says the sooner everyone learns videos can be faked, the better off we’ll all be. “I think that the right approach is to make this technology a commodity like Photoshop,” he said.

Ethicists aren’t so sure.

“What really worries me is what you saw happen over the last few years where any fact that is inconvenient to an individual, a CEO, a politician, they just have to say it’s ‘fake,’” Farid says.

And at the risk of sounding obvious: We don’t want to lose sight of what’s real.

Some people have shared on social media that reanimating the dead with MyHeritage’s videos made them weep with joy. I am sympathetic to that. D-ID says that in its own analysis, only 5 percent of tweets about the service were negative.

But when I tried it with the photo of a friend who died a few years ago, I didn’t feel good at all. I knew my friend didn’t move like that, with the limited range of these computer-generated mannerisms.

“Do I really want these people and this technology messing with my memories?” says Johnson, the U-Va. ethicist. “If I want ghosts in my life, I want real ones.”

Deepfakes are also a form of deception we’re using on ourselves.

