Blog 3: Exceptions to Data-Driven Rules
Published on:
Towards equitable exceptions
The case study argues that when algorithms are used for decisions, individuals have a right to be an exception to the data-driven rule. It proposes a way of evaluating whether an algorithmic recommendation is appropriate for a given individual.
A data-driven rule is an algorithm that guides decision making. An exception to a data-driven rule is an individual unit (a person or otherwise) to which applying the algorithm is inappropriate.
Data-driven rules can be applied in contexts where they are not useful. For example, while working with job seekers without college education, I noticed that they did not adhere to elitist notions of what a resume should look like. Despite more than adequate qualifications and work ethic, applicant tracking systems were not equipped to read their resumes properly.
The benefit of moving towards individualisation is that it serves people with non-standard profiles, like the elderly and migrants, who usually fall outside the intended user profile and who may face language barriers in addition to tech illiteracy.
The downside is that human bias could undermine notions of impartiality, but this is a weak argument, since the objectivity of models is itself only ostensible. I would also weigh the invasiveness and awkwardness of interrogating users in order to evaluate exceptions.
- Uncertainty is a helpful critical tool. It is the degree to which the model cannot ascertain a decision with appropriate confidence. To center uncertainty is to foreground the simple truth that all models are wrong, and that, eventually, a human actor must face the ethical decision of accepting or rejecting a unit while evaluating the consequences of their actions.
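The idea of centering uncertainty can be made concrete with a small sketch. This is a hypothetical illustration, not the case study's actual method: a rule that acts only when the model's score is far from the decision threshold, and otherwise defers to a human, treating near-threshold cases as potential exceptions. The function name and the `margin` parameter are my own invention.

```python
# Hypothetical sketch: a data-driven rule that defers low-confidence
# cases to a human reviewer instead of deciding automatically.

def decide(score: float, threshold: float = 0.5, margin: float = 0.15) -> str:
    """Accept or reject when the model is confident; otherwise defer.

    `score` is the model's estimated probability of a positive outcome.
    Cases whose score falls within `margin` of the threshold are treated
    as potential exceptions and routed to a human decision maker.
    """
    if abs(score - threshold) < margin:
        return "defer"  # uncertain: a human must weigh the consequences
    return "accept" if score >= threshold else "reject"

print(decide(0.92))  # confident accept
print(decide(0.55))  # near the threshold: defer to a human
print(decide(0.10))  # confident reject
```

The width of the deferral band is itself an ethical choice: a wider `margin` sends more borderline people to human review at the cost of more manual work.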
Should we use data-driven rules to begin with?
I understand that they are an attractive method with the advent of mass society. Evaluating each individual in times of unprecedented population growth is eased by computation. However, I wonder whether traditional modes of decision making can still apply today by localising both positions and decisions.
This was a helpful article for thinking about algorithmic fairness. Using models to influence people's lives can be a double-edged sword, and careful evaluation frameworks must accompany any effort of this nature.