The Question of What's Fair Illuminates the Question of What's Hard
Computational complexity theorists repurpose fairness tools to analyze complex problems. Transitioning from multiaccuracy to multicalibration enhances understanding of hard functions, simplifying approximations and strengthening theorems in complexity theory.
The article discusses how computational complexity theorists are using tools from algorithmic fairness to analyze hard-to-understand problems. By repurposing fairness tools originally developed for banking and insurance algorithms, researchers can map out the complex parts of a problem and identify what makes it difficult to solve. The article traces the shift in fairness research from multiaccuracy to multicalibration and shows how these tools can strengthen existing theorems in complexity theory. A trio of researchers from Harvard University established connections between fairness tools and complexity theory, demonstrating that multicalibration deepens our understanding of hard problems by pinpointing the specific inputs that are challenging to solve. Applying multicalibration let the researchers simplify the approximation of hard functions, reducing the number of splits needed to identify difficult inputs. The approach has implications both for fairness in algorithmic decision-making and for advancing our understanding of complex computational problems.
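To make the distinction concrete, here is a minimal, illustrative sketch (not the researchers' actual construction): multiaccuracy asks a predictor's bias to be small on every audited subgroup, while multicalibration additionally requires that bias bound within each level set of the predictor's own outputs. The `alpha` tolerance and the bucketing of predictions are assumptions of this toy version.

```python
def group_error(preds, labels, members):
    """Average signed bias (prediction - label) over the members of a subgroup."""
    idx = [i for i in range(len(preds)) if members[i]]
    if not idx:
        return 0.0
    return sum(preds[i] - labels[i] for i in idx) / len(idx)

def is_multiaccurate(preds, labels, groups, alpha=0.1):
    """Multiaccuracy: bias within every audited group is at most alpha."""
    return all(abs(group_error(preds, labels, g)) <= alpha for g in groups)

def is_multicalibrated(preds, labels, groups, alpha=0.1, buckets=4):
    """Multicalibration: the same bias bound must hold within every group,
    restricted to each level set (prediction bucket) of the predictor."""
    for g in groups:
        for b in range(buckets):
            lo, hi = b / buckets, (b + 1) / buckets
            members = [g[i] and lo <= preds[i] < hi for i in range(len(preds))]
            if abs(group_error(preds, labels, members)) > alpha:
                return False
    return True
```

A predictor can satisfy the weaker notion but not the stronger one: predictions `[0.9, 0.1]` against labels `[0, 1]` average out to zero bias on the whole group (multiaccurate), yet within each prediction bucket the errors are large (not multicalibrated). This finer-grained check is what lets the level sets "point at" the inputs where the predictor is systematically wrong.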
Related
An Anatomy of Algorithm Aversion
The study delves into algorithm aversion, where people favor human judgment over algorithms despite their superior performance. Factors include agency desire, emotional reactions, and ignorance. Addressing these could enhance algorithm acceptance.
Why do we still teach people to calculate?
Conrad Wolfram promotes modernizing math education by integrating computers, problem-solving emphasis, and adapting to technological advancements. His innovative approach challenges traditional methods to better prepare students for real-world challenges.
Six things to keep in mind while reading biology ML papers
The article outlines considerations for reading biology machine learning papers, cautioning against blindly accepting results, emphasizing critical evaluation, understanding limitations, and recognizing biases. It promotes a nuanced and informed reading approach.
The Magic of Participatory Randomness
Randomness is vital in cryptography, gaming, and civic processes. Techniques like "Finger Dice" enable fair outcomes through participatory randomness, ensuring transparency and trust in provably fair games.
Misconceptions about loops in C
The paper emphasizes loop analysis in program tools, addressing challenges during transition to production. Late-discovered bugs stress the need for accurate analysis. Examples and references aid developers in improving software verification.
The problem with measuring things, though, is that you are forced to compare data and reach conclusions that may well overturn your personal opinions about a deeply held approach. Most developers tie their careers to a few technical conventions, so this can create a serious emotional and ethical impasse.
The fairness described in the article is about validating algorithms for fairness toward people in finance and insurance, which implies validation against demographic averages. That can mean challenging social presumptions in order to defeat bias, as a quality-control measure of algorithmic robustness.
In either case the enemy is bias, and the challenge is choosing between data objectivity and current approaches. Even when the problem is made as simple as imaginable, people still find this harder than an outside observer would expect.
It's not clear to me what is meant by the term "fair" in this context. I would make the claim that an accurate prediction is fair by any reasonable definition.
If you're too strict with it, you will immediately run into the problem that, say, someone with a well-paying career is more likely to repay their loan than someone who works behind a checkout, but that's "unfair" because it's correlated with family wealth, intelligence, lack of genetic disease, etc. Your algorithm can't be fair in the 'equality for all' sense, because if it were it would have close to zero predictive value.