The AI software used to create such faces is freely available and improving quickly. That allows even small start-ups to easily create counterfeits so convincing they can deceive the human eye. The systems train on huge databases of actual faces, then attempt to replicate their features in new designs.
AI experts fear, however, that the counterfeits will empower a new generation of scammers, bots and spies, who could use the photos to build imaginary online personas, mask bias and undermine efforts to bring real diversity to industries. The fact that such software now has a business model could also further erode trust on an Internet already under attack from disinformation campaigns, "deepfake" videos and other deceptive techniques.
Elana Zeide, who studies artificial intelligence, law and policy at UCLA, said the technology "shows how little power and knowledge users have regarding the reality of what they see online."
"There is no objective reality to compare these photos against," she said. "We are used to physical worlds with sensory input ... but we have no instinctive or learned responses for recognizing what is real and what is not. It's exhausting."
Icons8, an Argentina-based design firm that sells digital illustrations and stock photos, launched its website last month, offering "worry-free, versatile, on-demand" model photos made with AI.
The website lets anyone filter fake photos by age (from "infant" to "elderly"), ethnicity (including "white," "latino," "asian" and "black") and emotion ("joy," "neutral," "surprise"), as well as by gender, eye color and hair length. The system has some odd gaps and biases, however: the only skin color available for infants, for example, is white.
The company says its faces could be useful for customers who want to spice up promotional materials, fill in prototypes or illustrate concepts too sensitive to use a human model, such as "embarrassing situations" and "criminal proceedings." Its online guide also promises customers they can "increase diversity" and "reduce bias" by "including many different ethnic backgrounds in your projects."
Companies have embarrassed themselves with clumsy attempts to boost the appearance of diversity, such as when the University of Wisconsin-Madison photoshopped a Black man into an all-white crowd, or when others superimposed women onto group photos of men.
While AI start-ups offer a simple fix that gives companies the illusion of diversity without actually working with diverse people, their systems have a critical flaw: they can only mimic the likenesses they have already seen. Valerie Emanuel, a Los Angeles-based co-founder of the talent agency Role Models Management, said she feared such fake photos could turn the medium into a monoculture in which most faces look alike.
"Going forward, we want to create more diversity and show unique faces in advertising," Emanuel said. "This homogenizes the way people look."
Icons8 first created its faces by shooting tens of thousands of photos of roughly 70 models in studios around the world, said Ivan Braun, the company's founder. Braun's colleagues, who work in the United States, Italy, Israel, Russia and Ukraine, then spent several months preparing the database: cleaning the images, labeling the data and sorting the photos to the computer's exact specifications.
With those images in hand, engineers used an AI system called StyleGAN to churn out a flood of new photos, generating 1 million images in a single day. The team then selected the 100,000 most convincing images to release to the public; more will be generated in the coming months.
The company won three customers in its first week: an American university, a dating app and a human-resources planning firm. Braun declined to name them.
Customers can download up to 10,000 photos per month, with prices starting at $100. The models receive no residuals for the new AI-generated images derived from their photo shoots, Braun said.
Another company, the San Francisco-based start-up Rosebud AI, offers customers a selection of 25,000 photos of "AI-made models of different ethnicities." Company founder Lisha Li, who named the venture after the infinite-money cheat code she loved as a child in the life-simulation game "The Sims," said she first marketed the photos to small businesses selling on online shopping sites, giving them stylish models without the need for expensive photography.
Her company's source images came from online databases of free, non-copyrighted photos, and the system lets customers easily place different faces onto a rotating set of bodies. She pitches the system as a powerful tool for extending photographers' abilities, letting them easily tailor the models in a fashion shoot to a viewer's nationality or ethnicity. "Face is a pain point that technology can solve," she said.
The system is offered only to a limited group of customers, each vetted individually by the company in hopes of keeping out bad actors. Around 2,000 prospective customers are on the waiting list.
Both companies rely on an AI breakthrough known as "generative adversarial networks," which refine their work through dueling algorithms: a generator system outputs a new image, a critic system compares it against the original data, and that feedback informs the generator's next draft. Each iteration typically produces a better fake than the last.
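The generator-critic duel can be sketched in miniature. The toy example below (all names and parameters are illustrative, not drawn from either company's system) trains a one-dimensional GAN in plain NumPy: the "generator" is an affine map from random noise, the "critic" is a logistic-regression discriminator, and the two are updated in alternation so the generator's samples drift toward the real data distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: 1-D samples from N(4, 1.25). The generator must learn
# to map uniform noise into this distribution; the critic must learn to
# tell real samples from generated ones.
def real_batch(n):
    return rng.normal(4.0, 1.25, size=(n, 1))

# Generator: affine map z -> z @ g_w + g_b (toy parameters)
g_w, g_b = np.array([[1.0]]), np.zeros(1)
# Critic: logistic regression x -> sigmoid(x @ d_w + d_b)
d_w, d_b = np.array([[0.1]]), np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, batch = 0.01, 64
for _ in range(2000):
    z = rng.uniform(-1, 1, size=(batch, 1))
    fake = z @ g_w + g_b
    real = real_batch(batch)

    # Critic update: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(x @ d_w + d_b)
        grad = p - label                 # dLoss/dlogit for cross-entropy
        d_w -= lr * (x.T @ grad) / batch
        d_b -= lr * grad.mean(axis=0)

    # Generator update: push D(fake) toward 1, i.e. fool the critic.
    p = sigmoid(fake @ d_w + d_b)
    grad_logit = p - 1.0                 # generator wants label 1
    grad_fake = grad_logit @ d_w.T       # backprop through the critic
    g_w -= lr * (z.T @ grad_fake) / batch
    g_b -= lr * grad_fake.mean(axis=0)

# After training, generated samples should cluster near the real mean (4).
samples = rng.uniform(-1, 1, size=(1000, 1)) @ g_w + g_b
```

Real systems such as StyleGAN follow the same alternating scheme, but with deep convolutional networks on both sides and images instead of scalars.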
But the systems are imperfect artists: untrained in the fundamentals of human anatomy, they can only attempt to match the patterns of the faces they have already processed. Along the way, the AI produces an army of so-called "monsters": nightmare faces with inhuman deformities and surreal mutations. Common examples include hands with too many fingers, expressionless faces and people with multiple mouths.
In recent months, the software has become one of the most striking viral breakthroughs in AI research, dramatically reducing the effort required for artists and researchers to create dreamlike landscapes and fictional people. A seemingly infinite stream of fakes can be seen at thispersondoesnotexist.com, and at its companion site thiscatdoesnotexist.com, where the AI was trained on cat pictures. To test whether people can tell a generated fake from reality, AI researchers at the University of Washington also created the side-by-side website whichfaceisreal.com.
The machine-learning techniques are "open source," meaning virtually anyone can use and build on them. And the software keeps improving: a newer version of StyleGAN, released last month by AI researchers at Nvidia, promises faster generation, higher-quality images and fewer of the glitches and artifacts that gave away older fakes.
Researchers say the images are a gift to disinformation operators because, unlike real photos lifted from elsewhere on the web, they cannot easily be traced back to a source. Such fakes are already in circulation, including on Facebook, where fact-checkers have found the images used to create phony profiles promoting certain pages or political ideas.
In another case, the LinkedIn profile of a young woman named Katie Jones, who had connected with top officials in Washington, was found earlier this year to be using an AI-generated image. Counterintelligence experts told the Associated Press that it bore the hallmarks of foreign espionage.
The technology also underpins the face-swap videos known as deepfakes, used for parody and fake pornography alike. In the past, such systems required mountains of "facial data" to generate a convincing fake. This year, however, researchers published details of "few-shot" techniques that need only a handful of images to achieve a convincing imitation.
Creating AI-generated images at this volume can be prohibitively expensive, because the process demands exceptional computing power in the form of costly servers and graphics cards. But Braun's company, like others, benefits from the cloud-computing competition between Google and Amazon, both of which offer "credits" that let start-ups run heavy AI workloads at deeply discounted rates.
Braun said the fear that AI-generated images will be used for disinformation or abuse is reasonable. "We have to take care of it. The technology is already here, and there is no way back." But solving that problem, he said, is not the responsibility of companies like his; it will take a "combination of societal change, technological change and policy." (The company does not embed authentication measures, such as watermarks, that would let viewers verify whether an image is real or fake.)
Two models who worked with Icons8 said they were told only after their photo shoots that their portraits would be used for AI-generated images. Braun said the first shoots were intended for stock photography and the idea of an AI application came later, adding: "I never saw it as a problem."
Estefanía Massera, a 29-year-old model in Argentina, said her photo shoot centered on expressing different emotions: she was asked to look hungry, angry, tired, and as though she had just been diagnosed with cancer. Looking at some of the AI-created faces, she recognized similarities with her own eyes.
She compared the face-generation software to "designer baby" systems that would let parents select their children's features. But she is less worried about how the technology might affect her work: the world still needs real models, she said. "Today the trend, for companies and brands in general, is to be as real as possible," she added.
Simón Lanza, a 20-year-old student who also sat for an Icons8 shoot, said he could see why people in the business might be alarmed.
"As a model, I think it will take work away from people," he said. "But you can't stop the future."