June 30th, 2024

The Question of What's Fair Illuminates the Question of What's Hard

Computational complexity theorists are repurposing tools from algorithmic fairness to analyze hard problems. The shift from multiaccuracy to multicalibration deepens our understanding of hard functions, simplifying approximations and strengthening theorems in complexity theory.

Read original article

The article discusses how computational complexity theorists are using tools from algorithmic fairness to analyze hard-to-understand problems. By repurposing fairness tools originally developed for banking and insurance algorithms, researchers can map out the complex parts of a problem and identify what makes it difficult to solve. The article traces the transition in fairness research from multiaccuracy to multicalibration, and shows how these tools can strengthen existing theorems in complexity theory. A trio of researchers from Harvard University established connections between fairness tools and complexity theory, demonstrating that multicalibration can deepen our understanding of hard problems by identifying the specific inputs that are challenging to solve. By applying multicalibration, they simplified the process of approximating hard functions, reducing the number of splits needed to isolate the difficult inputs. This approach has implications both for fairness in algorithmic decision-making and for advancing our understanding of complex computational problems.
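The distinction the article draws between multiaccuracy and multicalibration can be illustrated with a toy sketch. All names, group definitions, and numbers below are hypothetical (not from the article): a predictor that always outputs the population-wide base rate is accurate on average, yet miscalibrated on a subgroup whose true rate differs, which is exactly what a multicalibration check detects.

```python
import random

random.seed(0)

# Toy data: each individual has a true repayment probability and
# belongs to some (possibly overlapping) subgroups.
population = []
for _ in range(10_000):
    in_group_a = random.random() < 0.5
    in_group_b = random.random() < 0.3
    true_p = 0.8 if in_group_a else 0.6  # group A repays more often
    population.append({
        "a": in_group_a,
        "b": in_group_b,
        "outcome": random.random() < true_p,
    })

def predict(person):
    # Predicts the overall base rate (0.5 * 0.8 + 0.5 * 0.6 = 0.7)
    # for everyone, ignoring group membership.
    return 0.7

def calibration_gap(people, predicted_value):
    """Average real outcome among people who receive this prediction,
    minus the prediction itself. Zero means calibrated."""
    subset = [p for p in people if abs(predict(p) - predicted_value) < 1e-9]
    if not subset:
        return 0.0
    rate = sum(p["outcome"] for p in subset) / len(subset)
    return rate - predicted_value

# A plain calibration check on the whole population passes, but
# multicalibration demands the same check inside every subgroup,
# at every prediction value -- and group A fails it.
groups = {
    "everyone": population,
    "group_a": [p for p in population if p["a"]],
    "group_b": [p for p in population if p["b"]],
}
for name, members in groups.items():
    print(f"{name}: calibration gap {calibration_gap(members, 0.7):+.3f}")
```

The gap is near zero for the full population but around +0.1 on group A, so the constant predictor is accurate overall yet not multicalibrated; fixing such gaps, group by group, is the "splitting" process the article describes.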

2 comments
By @austin-cheney - 4 months
I suppose this explains why developers don't measure things. The things the typical developer needs to measure are tiny, immediate, and beyond trivial, and typically covered by tools already available.

The problem with measuring things, though, is that you are forced to compare data and make original decisions that may well overturn your personal opinion of a deeply held approach. Most developers bind their careers to a few technical conventions, which means a potentially catastrophic emotional/ethical impasse.

The fairness described in the article is about validating algorithms against human fairness in finance and insurance, which implies validation against demographic averages. That can mean challenging social presumptions in order to defeat bias, as a quality-control measure of algorithmic durability.

In either case the enemy is bias, and the challenge is choosing between data objectivity and current approaches. Even when the problem is made as simple as can be imagined, people still find this harder than an outside observer would expect.

By @throwaway22032 - 4 months
> Now consider an algorithm that predicts whether loan applicants are likely to make all their payments. It’s not good enough to have an algorithm that predicts the correct general rate — the 40% chance of rain in our example above. It needs to predict the rate for specific individuals across different population groups in a way that’s both accurate and fair.

It's not clear to me what is meant by the term "fair" in this context. I would make the claim that an accurate prediction is fair by any reasonable definition.

If you're too strict with it, you will immediately run into the problem that, say, someone with a well-paying career is more likely to pay their loan back than someone who works behind a checkout, but that's "unfair" because it's correlated with family wealth, intelligence, absence of genetic disease, and so on. Your algorithm can't be fair in the 'equality for all' sense, because if it were it would have close to zero predictive value.