AI design for distribution

The split and distribution of benefits is complicated by philosophical and political stances regarding what is "fair"

Devangshu Datta
5 min read Last Updated : Jul 22 2022 | 10:53 PM IST
Computer programs and algorithms have been used for trading and asset allocation for around five decades. But the distribution of public goods and assets is not done algorithmically. A new Artificial Intelligence-based study implies that even this might be done more effectively, and more equitably, via self-learning algorithms.

AI research company DeepMind (an Alphabet subsidiary) is famous for its “AlphaZero” algorithm, which taught itself to play world-class chess and Go, and for AlphaFold, which tackles the esoteric problem of protein folding. DeepMind has just published a paper using human-AI interactions to disburse benefits in ways that seem fairer than traditional methods.

Let’s say many people contribute different amounts to a public fund. The fund generates a return. The contributors should then receive commensurate returns. But what is commensurate?

This is easy enough when talking about pension funds. You contribute x amount, and receive the return generated by the asset classes in which that amount is invested.

It is much more complicated when imposing taxes, and assessing what mix of public goods should be generated by the pooled taxes, and how that mix should be allocated. Behavioural scientists have been known to use an experiment called the “Public Goods Game” to assess such splits.

In this game, players choose how much to invest in a common fund. The pool generates a return, but the payoff may be divided equally or in other ways. If 100 participants contribute an aggregate pool of Rs 100, and the total return on investment is Rs 50, every player may receive Rs 1.5 back, regardless of contribution. That’s called “strict egalitarianism”.

If each receives a return proportional to the absolute value of the contribution, that’s “libertarian”. “Liberal egalitarian” is yet another distribution method, where the return is calculated on the basis of the fraction of total assets contributed. A low-income player who gives a larger share of income will receive more than a high-income player who contributes a smaller share of income.
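The three division rules can be sketched in a few lines of code. This is a minimal one-round illustration, not DeepMind’s implementation; the function names and the 1.5x growth factor (matching the Rs 100 pool earning Rs 50 in the example above) are assumptions for illustration.

```python
def payoffs(endowments, contributions, rule, growth=1.5):
    """Divide the grown pool among players under a given division rule."""
    pool = sum(contributions) * growth
    if rule == "strict_egalitarian":
        # Everyone gets an equal share, regardless of contribution.
        return [pool / len(contributions) for _ in contributions]
    if rule == "libertarian":
        # Shares proportional to the absolute amount contributed.
        total = sum(contributions)
        return [pool * c / total for c in contributions]
    if rule == "liberal_egalitarian":
        # Shares proportional to the fraction of one's endowment contributed.
        ratios = [c / e for c, e in zip(contributions, endowments)]
        total = sum(ratios)
        return [pool * r / total for r in ratios]
    raise ValueError(rule)

# The article's example: 100 players each contribute Re 1; the Rs 100 pool
# earns Rs 50, and strict egalitarianism returns Rs 1.5 to each.
print(payoffs([10] * 100, [1] * 100, "strict_egalitarian")[0])  # prints 1.5
```

Under the liberal egalitarian rule, a player with an endowment of 10 who contributes 5 (half of it) receives more than a player with an endowment of 100 who contributes 10 (a tenth of it), even though the second contribution is larger in absolute terms.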

The total payoff is maximised if everybody contributes a lot. But a “free rider” who contributes zero (or little) may receive something for nothing. If this game is played over multiple rounds, we get a sense of how much the “average” person will tend to contribute and what’s considered a fair return.

This models what happens with taxes and public goods. We pay multiple rounds of taxes. The return of the public goods game may be considered analogous to the distribution of real-world public goods like education, clean air, subsidies, and maybe even universal basic income.

The split and distribution of benefits is complicated by philosophical and political stances regarding what is “fair”. Should a billionaire receive subsidy on gas cylinders? Should a schoolchild from an upper income family receive a free mid-day meal at a government school?

This obviously has wide-ranging implications for designing public goods delivery mechanisms. While some people will always be free-riders and others may contribute more than they are compelled to, if contribution and disbursal satisfy the majority, you have a taxation and public goods system that works, and is popular with voters. On the other hand, if a redistribution scheme is unpopular, it will lead to low contributions, free-riding, unequal outcomes, or tax avoidance.

Using AI, these aspects of human behaviour can be studied across a series of games. Before an AI-based system can reliably deliver on human values, it has to know what those values are. Computer scientists call this “value alignment”. Instead of programmers making guesses about likely human values, DeepMind used principles of deep reinforcement learning and human feedback to create what the paper calls “Human-centred mechanism design with Democratic AI”.

DeepMind put together a series of games with thousands of human participants and virtual agents designed to imitate human behaviour. While comparing contributions and feedback under the human-designed strict egalitarian, libertarian and liberal egalitarian systems, the AI also designed its own mechanism, the “Human Centred Redistribution Mechanism” (HCRM).

In the games played by DeepMind, different players received different initial endowments (thus modelling income inequality) and chose what to contribute. Under HCRM, the payback depended both on the absolute value of the contribution and on the ratio between the contribution and the endowment. The higher the ratio, the larger the return.
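The idea can be illustrated with a hand-written stand-in. The actual HCRM in the paper is a learned neural mechanism, not a fixed formula; the sketch below simply blends a libertarian share (absolute contribution) with a liberal egalitarian share (contribution-to-endowment ratio), and the 50/50 weighting is an assumption for illustration.

```python
def hcrm_like_shares(endowments, contributions, growth=1.5):
    """Illustrative blend of absolute-contribution and contribution-ratio shares."""
    pool = sum(contributions) * growth
    total_c = sum(contributions) or 1
    ratios = [c / e for c, e in zip(contributions, endowments)]
    total_r = sum(ratios) or 1
    # Half the weight tracks the absolute amount contributed (libertarian),
    # half tracks the fraction of endowment contributed (liberal egalitarian).
    weights = [0.5 * c / total_c + 0.5 * r / total_r
               for c, r in zip(contributions, ratios)]
    return [pool * w for w in weights]
```

Note how this toy version reproduces two properties the text describes: a player who contributes a high fraction of a small endowment can out-earn a richer player who contributes more in absolute terms, and a free-rider who contributes nothing scores zero on both components and so receives nothing.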

The HCRM system turned out to be the most popular. The feedback showed the HCRM mechanism was seen as progressive in that it promoted the enfranchisement of those who began at a wealth disadvantage. But it also punished free-riders and encouraged larger contributions relative to endowments.

Obviously, there are unreal elements to this experiment and it’s hard to see Nirmala Sitharaman, or any other current-day, flesh-and-blood finance minister, admitting to using AI to design tax systems and public goods delivery mechanisms. But this could turn out to be a useful tool in the future.

Disclaimer: These are personal views of the writer. They do not necessarily reflect the opinion of www.business-standard.com or the Business Standard newspaper
