Business Standard


AI is using 'fake' data to learn to be less discriminatory and racist

Many AI makers are using 'synthetic' images to train computers on people across a broader range of skin tones, ages and other features


Fake data isn’t just being used to train vision recognition systems, but also predictive software.

Parmy Olson | Bloomberg
Last week Microsoft said it would stop selling software that guesses a person's mood by looking at their face. The reason: it could be discriminatory. Computer vision software, which is used in self-driving cars and facial recognition, has long had issues with errors that come at the expense of women and people of color. Microsoft's decision to pull the system entirely is one way of dealing with the problem.

But there's another, novel approach that tech firms are exploring: training AI on "synthetic" images to make it less biased. The idea is a bit like training pilots: rather than practicing only in real aircraft, pilots log hours in flight simulators that can safely reproduce a wide range of conditions. In the same way, computer-generated faces can expose an AI system to a broader mix of skin tones, ages and other features than its real-world training data contains.
