Sunday, June 08, 2025 | 08:08 AM IST
Business Standard

AI is using 'fake' data to learn to be less discriminatory and racist

Many AI makers are using 'synthetic' images to train computers on a broader array of people, skin tones, ages or other features


Fake data isn’t just being used to train vision recognition systems, but also predictive software.

Parmy Olson | Bloomberg
Last week Microsoft said it would stop selling software that guesses a person’s mood by looking at their face. The reason: It could be discriminatory. Computer vision software, which is used in self-driving cars and facial recognition, has long had issues with errors that come at the expense of women and people of color. Microsoft’s decision to halt the system entirely is one way of dealing with the problem.

But there’s another, novel approach that tech firms are exploring: training AI on “synthetic” images to make it less biased. The idea is a bit like training pilots. Instead of practicing in