Fake Faces. INSANE!
Fake Faces. Sounds insane, doesn’t it? Sounds as if you’d be able to spot a fake face, right? It’s a face, surely it’s noticeable. Well, we found out that is surely not the case. There are now businesses that sell fake people: on a website called Generated.Photos, you can buy a ‘unique, worry-free’ fake person for anything from $2.99 up to $1,000!!
You can use the fake people in a video game or to make your company website appear more diverse. You can even get their photos for free on ThisPersonDoesNotExist.com. Furthermore, and get this, you can adjust their likeness as required, making them old or young or even changing their ethnicity! A company called Rosebud.AI will even animate them and make them talk!
The ‘fakers’ have appeared all over the internet and have been used as masks by real people, often with ill intent: spies who don an attractive face in an effort to infiltrate the intelligence community, right-wing propagandists who are said to hide behind fake profiles, and online harassers who troll their targets with a friendly visage.
Fake Faces
The NY Times even created its own A.I. system to show how easy it is to generate different fake faces.
The system sees each face as a complex mathematical figure: a range of values that can be adjusted. Changing the values changes the face as you see fit.
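If you fancy seeing that idea in code, here’s a tiny Python sketch. It’s purely illustrative: the generate_face() renderer is a made-up stand-in for the real model, but systems like this genuinely treat a face as a long vector of numbers that you can nudge.

```python
# A minimal sketch of "a face is a range of values".
# generate_face() is hypothetical; real models work on vectors like this.
import numpy as np

rng = np.random.default_rng(seed=42)

# Each face is described by a list of numbers (a "latent vector").
latent = rng.standard_normal(512)

# Nudge some of those numbers and you get a different face, e.g. an
# older-looking one. Which numbers control which feature is something
# the model learns; it isn't chosen by hand.
older = latent.copy()
older[:8] += 1.5  # a hypothetical "age" direction

# face_a = generate_face(latent)  # hypothetical renderer
# face_b = generate_face(older)   # same person, aged up
```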
The NYT system used a different approach: instead of shifting values one at a time, it generated two sets of values to establish a starting point and an end point, then created the images in between.
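In code, that “in between” trick is just blending two vectors. Here’s a sketch of the idea, reusing the same hypothetical generate_face() renderer from above:

```python
# Blending two latent vectors gives all the faces "in between" them.
import numpy as np

rng = np.random.default_rng(0)
start = rng.standard_normal(512)  # the starting face
end = rng.standard_normal(512)    # the end face

# Step from start to end in equal increments; every blend along the
# way is a brand-new face that never existed.
frames = [(1 - t) * start + t * end for t in np.linspace(0.0, 1.0, 8)]

# for f in frames:
#     show(generate_face(f))  # hypothetical renderer, as above
```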
These images have become possible in recent years thanks to a type of A.I. called a generative adversarial network (GAN). You feed a computer program loads of photos of real people; the program studies them and comes up with its own photos of people, while another part of the system tries to detect which of those photos are fake. This back and forth makes the fake photos ever more indistinguishable from the real thing! The portraits used in the NYT story (which we urge you to check out!) were made with GAN software that was made public by Nvidia, the graphics company.
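To make that back and forth concrete, here’s a toy GAN in Python (using PyTorch). This is our own sketch, not the NYT’s or Nvidia’s code, and instead of photos it learns to fake numbers from a simple bell curve; real face GANs follow the same recipe at a vastly bigger scale.

```python
# A toy GAN: one network fakes, the other detects, and they improve together.
import torch
import torch.nn as nn

latent_dim = 8

# The "artist": turns random noise into a fake sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1)
)

# The "detective": guesses whether a sample is real or fake.
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 2.0 + 5.0   # "real photos": bell curve around 5
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Detective is rewarded for telling real from fake...
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # ...while the artist is rewarded for fooling the detective.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# Mean of the fakes; it should drift toward 5 as the artist improves.
print(generator(torch.randn(1000, latent_dim)).mean().item())
```

The two loss lines are the whole trick: the detective scores points for catching fakes, the artist scores points for getting away with them, and neither can stand still.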
Who knows where this tech could lead? Maybe there could be a whole party of fake people? It will become so difficult to determine who is real and who is fake!
“When the tech first appeared in 2014, it was bad — it looked like the Sims,” says Camille François, a disinformation researcher whose job is to analyze manipulation of social networks. “It’s a reminder of how quickly the technology can evolve. Detection will only get harder over time.”
Fake Faces and Advancing Tech
So, one of the reasons these pics are so impressive is that the tech has advanced so quickly. Tech has become remarkably good at identifying facial features: we use our faces to unlock our phones and to tell photo software who is who in a group of photos, and facial recognition is used to identify criminal suspects. Facial tech is used every single day!
What makes it even more impressive is that a company called Clearview AI scraped billions of public photos from the web, shared by everyday users, to create an app that can recognise a stranger from a single photo.
While this facial recognition tech is impressive, other A.I. systems aren’t as reliable, thanks to underlying bias in the data used to train them. Some systems just aren’t good at recognising people of color. In 2015, an early image detection system developed by Google labeled two Black people as ‘gorillas’, most likely because the system had been fed many more photos of gorillas than of people with dark skin.
Cameras, the eyes of facial recognition systems, aren’t good at capturing people with dark skin either. That unfortunate standard dates back to the early days of film development, when photos were calibrated to best show the faces of light-skinned people. The consequences can be severe: in January, a Black man in Detroit named Robert Williams was arrested for a crime he did not commit because of an incorrect facial-recognition match.
Flawed
A.I. can make our lives easier, though it is flawed. As are we. The thing to remember is that we as humans choose how these A.I. systems are made and what data they’re exposed to. We choose the voices that teach virtual assistants to hear, which is how the systems end up not understanding people with accents. It’s all down to us and the data we feed in: an A.I. system is only reacting to what we put into it and to the instructions we give it. Feed it x and it will show you x, so to speak. So if we designed a computer to predict a person’s criminal behavior by feeding it data about past rulings made by human judges, its predictions would carry the biases of those judges. Humans train computers to see how we want them to see. If we labeled people with glasses as nerds or dweebs, it would associate people with glasses with nerds and dweebs!
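Here’s a tiny, totally made-up example of that glasses-means-nerd effect in Python (using scikit-learn, with invented data): feed a model biased labels and it will faithfully repeat the bias back at you.

```python
# A toy illustration of "feed it x and it will show you x": if every
# training example wearing glasses is labeled "nerd", the model learns
# exactly that association and nothing deeper. Pure made-up data.
from sklearn.tree import DecisionTreeClassifier

# Features: [wears_glasses, likes_sport] -- hypothetical traits.
X = [
    [1, 0], [1, 1], [1, 0], [1, 1],  # everyone with glasses...
    [0, 1], [0, 0], [0, 1], [0, 0],  # ...everyone without...
]
# ...was labeled by a biased human: glasses => "nerd".
y = ["nerd", "nerd", "nerd", "nerd",
     "not nerd", "not nerd", "not nerd", "not nerd"]

model = DecisionTreeClassifier().fit(X, y)

# The model dutifully repeats the bias, whatever else is true:
print(model.predict([[1, 1]]))  # ['nerd'] -- glasses alone decide
```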
Trust
We are all too quick to trust that a computer is hyper-accurate and always right. But studies have shown that in situations where humans and computers must cooperate to make a decision, such as identifying fingerprints or human faces, people consistently made the wrong identification when a computer nudged them to do so. In the early days of GPS systems, drivers famously followed their devices’ directions into lakes or off cliffs.
We really do urge you to check out the New York Times’ interactive blog post on this, because it is fascinating and there’s lots of interactivity on the site too. It really shows what technology is capable of. WOW!
(NYT)
Keep up to date with everything How To Kill An Hour by signing up to our newsletter by clicking here!
Let us know what you think of the show by clicking here!
Click here to subscribe to our YouTube Channel to see more amazing ways to kill time!
Follow us on Twitch by clicking here!