Have you ever wondered whether the algorithms that increasingly govern our lives are truly fair? In this blog post, I review an intriguing paper titled "Learning to Be Fair: A Consequentialist Approach to Equitable Decision-Making," published on arXiv. The paper discusses the challenges and potential pitfalls of designing fair machine learning systems, as well as what its authors call a consequentialist approach to addressing these issues. With machine learning systems becoming integral to areas such as banking, criminal justice, and healthcare, it is crucial to build fair and equitable algorithms whose decisions do not cause unintended harm to vulnerable groups.
When creating fair machine learning systems, designers often focus on achieving parity in error rates across groups defined by protected attributes such as race and gender. However, strategies that seem fair on the surface may fail to account for the downstream effects of the decisions they produce, potentially harming the very groups they aim to protect.
For example, gender-blind criminal risk assessments might overestimate the risk that female defendants will recidivate, resulting in increased detention rates for women. In another case, when allocating resources to help individuals attend appointments such as court dates, a strategy that prioritizes those with the largest estimated effect per dollar could inadvertently favor individuals who live closer to the courthouse, since they are cheaper to assist. This can lead to an unfair allocation of resources, as demonstrated in the Santa Clara County case described in the paper, where such a strategy resulted in a higher average spend on white clients ($7.40) than on Vietnamese clients ($5.38).
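To make that failure mode concrete, here is a minimal sketch of a greedy "largest effect per dollar" rule and of how it can skew average spending across groups. The group labels, effects, and costs below are entirely made up for illustration; they are not the paper's Santa Clara data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clients: group label, estimated effect of assistance, and cost
# (e.g., a ride to court, which is cheaper for people who live nearby).
n = 1000
group = rng.choice(["A", "B"], size=n)            # two illustrative groups
effect = rng.uniform(0.05, 0.25, size=n)          # estimated lift in appearance rate
cost = np.where(group == "A",                     # group A lives closer on average
                rng.uniform(3, 8, size=n),
                rng.uniform(6, 14, size=n))

budget = 4000.0

# Greedy rule: fund clients with the largest estimated effect per dollar first.
order = np.argsort(-(effect / cost))
funded = np.zeros(n, dtype=bool)
spent = 0.0
for i in order:
    if spent + cost[i] <= budget:
        funded[i] = True
        spent += cost[i]

# Average spend per group member: the cheaper-to-serve group soaks up the budget.
for g in ["A", "B"]:
    mask = group == g
    print(g, round(cost[mask & funded].sum() / mask.sum(), 2))
```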
To address these issues, the authors propose a consequentialist framework for algorithmic fairness. This approach focuses on the outcomes of decisions rather than on properties of the predictions. It begins by specifying the utility of different possible outcomes, such as efficiency and equity, and then uses linear programming, with weights reflecting stakeholder preferences, to derive optimal decision policies. The framework also pairs naturally with adaptive experimental designs, which offer advantages over static designs such as randomized controlled trials and adapt better to the specific scenario at hand.
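The paper's exact formulation is more general, but a minimal sketch conveys the flavor of the idea: choose, for each group, the fraction of people who receive assistance so as to maximize a weighted sum of efficiency (total expected benefit) and a crude stand-in for equity (a per-person bonus for treating the group that would otherwise be underserved), subject to a budget. All numbers below are made up, and the equity term is my own simplification, not the paper's definition.

```python
from scipy.optimize import linprog

# Hypothetical inputs, not taken from the paper: per-person benefit and cost
# of assistance by group, group sizes, a budget, and a stakeholder-chosen
# weight trading equity off against efficiency.
groups = ["A", "B"]
benefit = {"A": 0.20, "B": 0.15}   # expected gain per treated person
cost = {"A": 5.0, "B": 10.0}       # cost per treated person
size = {"A": 600, "B": 400}
budget = 5000.0
lam = 0.5                          # bonus utility per treated member of group B

# Decision variables: x_g = fraction of group g receiving assistance.
# Maximize efficiency + equity, where
#   efficiency = sum_g size_g * benefit_g * x_g
#   equity     = lam * size_B * x_B   (crude stand-in: reward treating group B)
# linprog minimizes, so the coefficients are negated.
c = [-(size["A"] * benefit["A"]),
     -(size["B"] * benefit["B"] + lam * size["B"])]

# One constraint: total spending stays within the budget.
A_ub = [[size["A"] * cost["A"], size["B"] * cost["B"]]]
b_ub = [budget]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, 1)])
print({g: round(x, 3) for g, x in zip(groups, res.x)})   # e.g. {'A': 0.333, 'B': 1.0}
```

Changing the stakeholder weight `lam` shifts the resulting policy along the efficiency-equity frontier, which is exactly the kind of trade-off the framework is meant to make explicit.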
The paper highlights that "using adaptive experimental designs with our framework yields better outcomes for participants during learning, and often more quickly identifies higher utility decision policies for future use, compared to static experimental approaches like randomized control trials." This finding has significant implications for anyone concerned with fairness in machine learning systems.
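The contrast with a static randomized trial is easiest to see with a bandit-style sketch. The loop below is not the paper's algorithm, just a generic Thompson-sampling routine over two candidate policies with made-up success rates; it illustrates how an adaptive design steers participants toward the better-performing policy while it is still learning, instead of splitting them evenly for the whole experiment.

```python
import numpy as np

rng = np.random.default_rng(1)

true_rate = [0.55, 0.70]        # hypothetical appearance rates under two policies
successes = [0, 0]
failures = [0, 0]
assigned = [0, 0]

for _ in range(2000):
    # Thompson sampling: draw a plausible rate for each policy from its
    # Beta posterior and assign the next participant to the higher draw.
    draws = [rng.beta(successes[k] + 1, failures[k] + 1) for k in range(2)]
    k = int(np.argmax(draws))
    assigned[k] += 1
    if rng.random() < true_rate[k]:
        successes[k] += 1
    else:
        failures[k] += 1

print("assignments:", assigned)   # most participants end up under the better policy
```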
In conclusion, the paper suggests that causal definitions of algorithmic fairness lead to Pareto-dominated policies. In simpler terms, no matter how one weighs efficiency against equity, there is some other policy that does at least as well on both dimensions and strictly better on at least one. Hence, it is crucial for designers of machine learning systems to adopt a consequentialist approach to ensure fair and equitable decision-making that benefits all stakeholders.
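To spell out what Pareto domination means here, a policy is dominated if some alternative scores at least as well on every objective and strictly better on at least one. A tiny check over (efficiency, equity) scores makes that concrete; the two policies and their scores below are purely illustrative, not results from the paper.

```python
def dominates(p, q):
    """True if policy p Pareto-dominates policy q over (efficiency, equity) scores."""
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

constrained = (0.60, 0.70)   # hypothetical policy built to satisfy a fairness constraint
utility_max = (0.68, 0.74)   # hypothetical policy from a consequentialist optimization

print(dominates(utility_max, constrained))  # True: at least as good on both, better on one
```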