The growing popularity of FaceApp — a photo filter app that delights smartphone users with its ability to transform the features of any face, like tacking on years of wrinkles — has prompted Democratic Sen. Chuck Schumer to call for a federal investigation into the Russia-based company over what he says are potential national security and privacy risks to millions of Americans.
"It would be deeply troubling if the sensitive personal information of U.S. citizens was provided to a hostile foreign power actively engaged in cyber hostilities against the United States," Schumer said in a letter to the FBI and the Federal Trade Commission.
"I ask that the FTC consider whether there are adequate safeguards in place to prevent the privacy of Americans using this application, including government personnel and military service members, from being compromised," the senator wrote.
BIG: Share if you used #FaceApp:
The @FBI & @FTC must look into the national security & privacy risks now
Because millions of Americans have used it
It’s owned by a Russia-based company
And users are required to provide full, irrevocable access to their personal photos & data pic.twitter.com/cejLLwBQcr
— Chuck Schumer (@SenSchumer) July 18, 2019
Even as privacy advocates raised security concerns, FaceApp's mug-morphing powers lured celebrities — or anyone who had their picture saved to their phone — such as Drake and the Jonas Brothers, to try on graying hair and wrinkles. By Wednesday, FaceApp had topped Apple's and Google's app download charts.
Schumer's appeal echoed concerns expressed earlier in the day by the Democratic National Committee, which fears that the app's artificial intelligence technology could expose users' facial recognition data to a country that launched a hacking campaign against the party during the 2016 election.
The DNC has since expanded its cybersecurity efforts, including bringing on chief security officer Bob Lord.
In an email sent to 2020 presidential campaign staff Wednesday, Lord urged "people in the Democratic ecosystem" against using an app that could have access to its users' photos.
"It's not clear at this point what the privacy risks are, but what is clear is that the benefits of avoiding the app outweigh the risks," Lord said in a notice first reported by CNN. "If you or any of your staff have already used the app, we recommend that they delete the app immediately."
Prior to the Democratic warnings, FaceApp began responding to a flood of inquiries about whether the company stores user data and where. FaceApp told TechCrunch in a statement that while its research and development team is based in Russia, no user data are transferred there.
The company behind FaceApp, Wireless Lab, also claims that "most images are deleted from our servers within 48 hours from the upload date."
Founder Yaroslav Goncharov told the website that FaceApp, which is headquartered in St. Petersburg, hosts its app data on Amazon's and Google's cloud platforms, where it processes "most" photos. Uploaded photos, FaceApp said, may be stored temporarily for "performance and traffic" reasons, so that users don't have to repeatedly upload the same photo while the platform completes an edit.
Users have expressed concerns that the app has access to their entire iOS or Android photo library even when they set photo permissions to "never."
But FaceApp told TechCrunch that it only processes photos selected by the user, whether chosen from their photo library or captured within the app. Security researchers have done their own work to back that claim. Will Strafach, a security researcher, said he couldn't find evidence that the app uploads the camera roll to remote servers.
FaceApp also said that 99% of users don't log in and, for that group of users, it doesn't have access to any identifying data.
Many data privacy experts are wary of these kinds of machine-learning apps, especially in a post-Cambridge Analytica era. Last year, Facebook said the personal information of up to 87 million of its users had been compromised by the third-party data analytics firm in an apparent breach of Facebook's policies.
FaceApp's terms of service state that it won't "rent or sell your information to third-parties outside FaceApp (or the group of companies of which FaceApp is a part) without your consent."
But it's that parenthetical clause — giving leeway to an open-ended, unidentified "group of companies" — that raises a red flag for Liz O'Sullivan, a technologist at the Surveillance Technology Oversight Project, and, she said, leaves the door open to another Cambridge Analytica-type scandal.
"My impression of it honestly was shock that so many people were, in this climate, so willing to upload their picture to a seemingly unknown server without really understanding what that data would go to feed," she said.
"For all we know, there could be a military application, there could be a police application," O'Sullivan said of FaceApp.
This app is one of many that could be used to advance facial recognition software, which is often built, unbeknownst to users, from compilations of people's faces.
In many cases, O'Sullivan said, the public doesn't find out what information is being collected about them until personal data are revealed through Freedom of Information Act requests.
This month, as NPR has reported, researchers received records through one such request to find that Immigration and Customs Enforcement mines a massive database of driver's license photos in facial recognition efforts that may be used to target undocumented immigrants. Researchers have concluded facial recognition technology is biased and imperfect, putting innocent people at risk.
O'Sullivan wants to see more regulation in place that's designed to protect consumers.
Like her, many security advocates in the U.S. will be watching Europe, a testing ground where lawsuits against tech giants are playing out under the General Data Protection Regulation, or GDPR, the European Union's sweeping new data privacy law.
SCOTT SIMON, HOST:
If you've seen photos of friends or celebrities posting selfies with silver hair and wrinkles lately, it's not the relentless news cycle. They probably used FaceApp. The app can make your face look older, younger, even style your hair. It can be fun, but lawmakers are warning users to take precautions. The Democratic National Committee warned presidential campaigns against using it, and Senate Minority Leader Chuck Schumer has called on the FBI to investigate. He posted this on social media about FaceApp.
(SOUNDBITE OF ARCHIVED RECORDING)
CHUCK SCHUMER: It allows, quote, "perpetual, irrevocable and worldwide license to your photos, name or likeness." So this is a breathtaking and possibly dangerous level of access.
SIMON: Louise Matsakis is a staff writer at Wired magazine and joins us from New York. Thanks so much for being with us.
LOUISE MATSAKIS: Thanks for having me.
SIMON: In your judgment, is FaceApp really more compromising than Facebook or Instagram, who have billions of users already?
MATSAKIS: Absolutely not. I think that one thing to consider here is that this is a much smaller company, and Facebook has argued in the past that they're better at protecting your data because they, you know, inherently have more resources. But I don't think that this FaceApp is necessarily more controversial or troubling because it happens to be made by developers who are Russian. I think that's where kind of the anxiety from lawmakers comes from here.
FaceApp, like a lot of these free apps, their business model's around advertising. You know, they're sending some of your data to the Google ad networks. That's what they want here. That doesn't mean it's not troubling for other reasons, but I think lawmakers here are maybe fixating on the wrong issue.
SIMON: Why is it troubling for other reasons?
MATSAKIS: I think we should be really careful about who we want to share our faces with. This information is really sensitive. It's not something, like Schumer pointed out, that they're going to get back to you. You know, they maintain the rights to these images, and they can do what they want with them.
But the reality is we're sharing our facial recognition data with so many companies. And a lot of time, we don't have a lot of control over that. So it's really not just this app, even though it's, you know, called FaceApp, so it seems really literal. But when you upload pictures to dating sites, to Facebook, to, you know, photo-sharing sites like Flickr or Instagram, you're probably in a facial recognition database and you don't know it already.
SIMON: I got to tell you. I've been taking pictures of our 16-year-old since she came into our lives. Is it already too late for her?
MATSAKIS: Probably. And it's too late for what Georgetown University estimates is probably over half of Americans who are already in a facial recognition database as it stands. You know, that includes when you get a license, right? You know, that license database might be used by the government to create a facial recognition algorithm that includes maybe a dating profile that might be used to identify your gender or something. You know, researchers from Stanford in 2017 actually took photos from a dating network, and they used it to build a very controversial tool that claimed to identify people's sexuality just based on their photo. So, you know, the people who are on that dating site had no idea something like that was going to happen.
SIMON: Do companies ever lose control of the data? I mean, that happens with credit card information all the time, doesn't it?
MATSAKIS: Sure. And it's not even just losing control. A lot of time, these databases are widely available or they're made public to academic researchers and to corporations in order to foster more scholarship. So a lot of these photos are just out there even if there isn't a data breach to begin with. These big training sets are already available to a, you know, wide array of different actors who want to use them to build tools that they can use to do whatever they want.
SIMON: Is there any way to stop this, any way to take steps, even if it's too late to protect your images?
MATSAKIS: Definitely. I think making sure that your profiles are not public is a good way of doing that. That's really hard because dating profiles, for instance, are inherently public, right? You want other people to see them. But I think really thinking about, do I need to upload this picture right now, is this worth the risk that it poses by uploading it, do I really need to post a lot of pictures of me maybe publicly on Twitter or something like that, or would I rather just have kind of a private Facebook group where I share pictures of my daughter?
SIMON: Even if FaceApp isn't as dangerous as some political figures caution, is it good that this conversation - if you please - has been set off?
MATSAKIS: Definitely. I think this is really great that this conversation is centering on your face, which is such a valuable piece of data for these companies that are building these algorithms, often for troubling purposes, like law enforcement or that sort of thing, or identifying people and perhaps identifying you. And I think that's really an important part of this conversation, even though maybe this app is a little silly.
SIMON: When you say it's troubling that law enforcement could use it, I mean, if you're looking for someone who committed a violent crime against someone you love, you're glad to have that technology. On the other hand, if you're a dissident in Hong Kong, you're not glad to have that technology.
MATSAKIS: Exactly. I think these conversations always involve a trade-off between privacy and security. And we kind of haven't really figured out that balance with facial recognition data yet. And I think another concern is that a lot of these algorithms have been proven to be inaccurate. You know, people have been arrested for crimes they didn't commit because the algorithm, you know, mistakenly identified them. So I think that's another piece that's really important here.
SIMON: Louise Matsakis of Wired, thanks so much for being with us.
MATSAKIS: Thanks again.

Transcript provided by NPR, Copyright NPR.