Fairness in Reinforcement Learning

Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, Aaron Roth
[arXiv]

We initiate the study of fair learning in Markovian settings, where the actions of a learning algorithm may affect its environment and future rewards. Working in the model of reinforcement learning, we define a fairness constraint requiring that an algorithm never prefers one action over another if the long-term (discounted) reward of choosing the latter action is higher.
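This constraint can be sketched concretely. The check below is a hypothetical illustration (not the paper's algorithm), assuming the long-term discounted values of the available actions are given as a list `q_values`: a randomized action-selection rule `probs` is fair if it never assigns higher probability to one action than to another whose value is strictly greater.

```python
def is_fair(probs, q_values):
    """Illustrative check of the exact fairness constraint.

    probs[i]    -- probability the algorithm plays action i
    q_values[i] -- long-term (discounted) reward of action i

    Fair means: probs[a] > probs[b] is only allowed when
    q_values[a] >= q_values[b].
    """
    for pa, qa in zip(probs, q_values):
        for pb, qb in zip(probs, q_values):
            if pa > pb and qa < qb:
                # The algorithm prefers a lower-value action: unfair.
                return False
    return True
```

For example, weighting actions in the same order as their values (`is_fair([0.5, 0.3, 0.2], [1.0, 0.8, 0.5])`) satisfies the constraint, while favoring a strictly worse action (`is_fair([0.6, 0.4], [0.2, 1.0])`) violates it.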

Our first result is negative: although fairness is consistent with the optimal policy, any learning algorithm satisfying fairness must take a number of rounds exponential in the number of states to achieve a non-trivial approximation of the optimal policy. Our main result is a polynomial-time learning algorithm that is provably fair under an approximate notion of fairness, thus establishing an exponential gap between exact and approximate fairness.
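One natural way to relax the exact constraint (a hedged sketch, not the paper's precise definition) is to allow an action to be preferred as long as its value is within a slack `alpha` of the alternative's, so that only preferences for substantially worse actions are forbidden:

```python
def is_approx_fair(probs, q_values, alpha):
    """Illustrative check of an approximate fairness constraint.

    probs[a] > probs[b] is allowed whenever
    q_values[a] >= q_values[b] - alpha,
    i.e. preferring a near-optimal action incurs no violation.
    """
    for pa, qa in zip(probs, q_values):
        for pb, qb in zip(probs, q_values):
            if pa > pb and qa < qb - alpha:
                # Preferred action is worse by more than alpha: unfair.
                return False
    return True
```

With `alpha = 0.2`, preferring an action of value 0.9 over one of value 1.0 is permitted, while preferring an action of value 0.5 over one of value 1.0 is not; `alpha = 0` recovers the exact constraint.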