
AI is using 'fake' data to learn to be less discriminatory and racist

Many AI makers are using 'synthetic' images to train computers on a broader array of people, skin tones, ages or other features

Parmy Olson | Bloomberg
Last Updated: Jun 28 2022 | 1:26 AM IST
Last week Microsoft said it would stop selling software that guesses a person’s mood by looking at their face. The reason: it could be discriminatory. Computer vision software, which is used in self-driving cars and facial recognition, has long been prone to errors that come at the expense of women and people of color. Microsoft’s decision to halt the system entirely is one way of dealing with the problem.

But there’s another, novel approach that tech firms are exploring: training AI on “synthetic” images to make it less biased. The idea is a bit like training pilots. Instead of practicing in unpredictable, real-world conditions, most will spend hundreds of hours using flight simulators designed to cover a broad array of different scenarios they could experience in the air.

A similar approach is being taken to train AI, which relies on carefully labelled data to work properly. Until recently, software used to recognize people was trained on thousands or millions of images of real people, but collecting those images can be time-consuming, invasive and neglectful of large swathes of the population.

Now many AI makers are using fake or “synthetic” images to train computers on a broader array of people, skin tones, ages and other features, essentially flipping the notion that fake data is bad. Used properly, it could not only make software more trustworthy but also transform the economics of data as the “new oil.”

In 2015, Simi Lindgren came up with the idea for a website called Yuty to sell beauty products for all skin types. She wanted to use AI to recommend skin-care products by analyzing selfies, but training a system to do that accurately was difficult. A popular database of 70,000 licensed faces from Flickr, for instance, wasn’t diverse or inclusive enough. It showed facial hair on men but not on women, and she says there weren’t enough melanin-rich, that is, darker-skinned, women for the software to accurately detect skin conditions like acne or fine lines.

She tried crowdsourcing and got just under 1,000 photos of faces from her network of friends and family. But even that wasn’t enough. 

Lindgren’s team then decided to create their own data to plug the gap. The answer was generative adversarial networks, or GANs, a type of neural network designed in 2014 by Ian Goodfellow, an AI researcher now at Alphabet’s DeepMind. The system works by trying to fool itself, and then humans, with new faces. You can test your ability to tell the difference between a fake face and a real one on a website set up by academics at the University of Washington, using a type of GAN.
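To make the adversarial idea concrete, here is a minimal sketch of a GAN training loop in PyTorch: a generator learns to turn random noise into samples that a discriminator cannot distinguish from real data. The network sizes, the placeholder “real” batch and every name below are illustrative assumptions for this column, not Yuty’s actual system; production face generators are far larger.

import torch
import torch.nn as nn

LATENT_DIM = 64   # size of the random noise vector the generator starts from
DATA_DIM = 128    # stand-in for a flattened image

# Generator: noise in, candidate "image" out.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, DATA_DIM), nn.Tanh(),
)
# Discriminator: sample in, raw realness score out (higher = "looks real").
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    # Placeholder batch; a real system would load actual face images here.
    real = torch.randn(32, DATA_DIM)
    fake = generator(torch.randn(32, LATENT_DIM))

    # 1) Teach the discriminator to separate real from fake.
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    opt_d.step()

    # 2) Teach the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()

The two networks improve in tandem: as the discriminator gets better at spotting fakes, the generator is pushed to produce ever more convincing samples, which is what eventually yields photorealistic faces.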

Lindgren used the method to create hundreds of thousands of photorealistic images and says she ended up with “a balanced dataset of diverse people, with diverse skin tones and diverse concerns.”

Currently, about 80 per cent of the faces in Yuty’s database aren’t of real people but are synthetic images, which are labelled and checked by humans, she says, who help assess her platform’s growing accuracy.

Lindgren is not alone in her approach. More than 50 startups currently generate synthetic data as a service, according to StartUs Insights, a market intelligence firm. Microsoft has experimented with it and Google is working with artificially-generated medical histories to help predict insurance fraud. Amazon.com Inc. said in January that it was using synthetic data to train Alexa to overcome privacy concerns.

Remember when Big Tech platforms found themselves in hot water a few years ago for hiring contractors to listen in on random customers in order to train their AI systems? “Fake” data can help solve that problem.

The trend is becoming so pervasive that Gartner estimates 60 per cent of all data used to train AI will be synthetic by 2024, and it will completely overshadow real data for AI training by 2030.  

The market for making synthetic images and videos is roughly divided into companies that use GANs, and those that design 3D graphics from scratch. Datagen Technologies, based in Tel Aviv, Israel, does the latter. Its CGI-style animations train car systems to detect sleepiness.

Carmakers have historically trained their sensors by filming actors pretending to fall asleep at the wheel, says Datagen co-founder Gil Elbaz, but that still yields a limited set of examples.
Fake data isn’t just being used to train vision recognition systems, but also predictive software, like the kinds banks use to decide who should get a loan.

For example, to help design algorithms that distribute loans more fairly to minority groups, Fairgen makes databases of artificial people from minority groups whose average credit scores are closer to those of other groups. One bank in the UK is currently using Fairgen’s data to hone its loan software. Fairgen’s Cohen says manipulating the data that algorithms are trained on can help with positive discrimination and with “recalibrating society.”
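As a rough illustration of that rebalancing idea, the sketch below pads an imbalanced loan dataset with synthetic minority-group records so a credit model sees both groups in comparable numbers. The simple Gaussian resampling, the column names and the numbers are all assumptions made up for this example, not Fairgen’s actual method.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy historical data: the minority group is badly underrepresented.
real = pd.DataFrame({
    "group": ["majority"] * 900 + ["minority"] * 100,
    "credit_score": np.concatenate([
        rng.normal(680, 50, 900),   # majority applicants
        rng.normal(640, 50, 100),   # minority applicants
    ]),
})

def synthesize(df, n):
    """Draw n synthetic rows from a Gaussian fit to one group's scores."""
    mu, sigma = df["credit_score"].mean(), df["credit_score"].std()
    return pd.DataFrame({
        "group": [df["group"].iloc[0]] * n,
        "credit_score": rng.normal(mu, sigma, n),
    })

minority = real[real["group"] == "minority"]
shortfall = (real["group"] == "majority").sum() - len(minority)
balanced = pd.concat([real, synthesize(minority, shortfall)],
                     ignore_index=True)

print(balanced["group"].value_counts())  # both groups now number 900 rows

A fairness-focused generator could go further and deliberately shift the synthetic group’s score distribution toward the majority’s, which is the kind of “recalibration” Cohen describes.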

Strange as it may sound, the growth of fake data is a step in the right direction, and not just because it avoids using people’s personal data. It could also disrupt the dynamics of selling data. Synthetic data won’t eliminate bias completely, though, says Julien Cornebise, an honorary associate professor of computer science at University College London.

“Bias is not only in the data. It’s in the people who develop these tools with their own cultural assumptions,” he says. “That’s the case for everything man-made.”

