A guest post by Dr Sebastian Smart, a research fellow in the Centre for Access to Justice and Inclusion at Anglia Ruskin University
From welfare policies to credit scoring, job recruitment and even law enforcement practices, complex algorithms are a key part of the systems that impact many aspects of our everyday lives.
As more and more decision-making responsibilities once entrusted solely to humans are delegated to automated systems, we are also observing a rise in algorithmic discrimination.
While algorithms offer advantages by improving efficiency and – in some cases – objectivity, they also inherit biases from their training data or design choices. This gives rise to concerns about fairness, equality and justice in areas such as welfare and employment opportunities.
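To make the concern concrete, bias of this kind is often surfaced by comparing a system's outcomes across groups. The short Python sketch below, using entirely hypothetical data and group labels, computes the "disparate impact ratio" sometimes used as a rough screen in US employment contexts; it is an illustration of the idea, not a legal test.

```python
# Minimal sketch: surface possible bias by comparing selection rates across
# groups. All data, group names and the 0.8 threshold are illustrative only.
from collections import defaultdict

# Hypothetical (group, model_decision) pairs from an automated screening tool.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
selected = defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    selected[group] += decision

# Selection rate per group: the share of applicants the system approved.
rates = {g: selected[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# Disparate impact ratio: lowest selection rate divided by highest.
# A common US rule of thumb flags ratios below 0.8 for further review.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33 -> flagged
```

A ratio this far below parity does not prove discrimination on its own, but it shows how quickly a skew in training data or design choices becomes measurable in a system's outputs.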
We have all seen from the Post Office scandal the extensive damage that can occur when technology is misapplied.
Tackling this pressing issue is proving difficult, not least because different jurisdictions are taking their own varied approaches to the problem.
Given the global nature of many technology platforms, the absence of universally accepted definitions and regulations for algorithmic discrimination means we are seeing varying interpretations and degrees of enforcement across the world. What is needed is a universal approach.
While anti-discrimination laws are well established in many jurisdictions, laws such as the UK Equality Act or the US Civil Rights Act do not directly address biases ingrained in algorithms.
Algorithmic discrimination is not properly defined in law; there is little clarity about how an algorithm reaches its decisions, and the data it relies on is often obscure. There may also be no person in the loop when these decisions are made, which raises the question of who can be held responsible for a discriminatory act.
This makes it difficult to establish guidelines or regulations that can be applied globally. The considerations around decision-making systems are complex, varying not just with technological advances and uses but also with the cultural, social and legal contexts in which the algorithms operate.
In the US, for example, the COMPAS algorithm used in sentencing decisions was reported to be biased against Black defendants. Despite this, the Wisconsin Supreme Court upheld its use, concluding that the sentencing court had exercised due caution in considering its results.
In Europe, the Dutch tax authorities employed an algorithm to identify incorrect child benefit claims and potential fraud. It used nationality as a risk factor, which resulted in higher risk scores for non-Dutch nationals, and its use was found unlawful by the Dutch courts.
The ruling against the government’s use of the algorithmic system cited privacy concerns and the potential for discrimination. It emphasised the fundamental rights of individuals and placed limitations on the state’s use of algorithms, especially where they might disproportionately impact vulnerable groups.
This reflects a growing concern in European jurisdictions about the implications of automated decision-making for human rights and civil liberties.
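Mechanically, the problem with such a system is easy to demonstrate: once a protected attribute enters a model as a scored feature, otherwise-identical cases receive different risk scores. The Python sketch below is purely illustrative, with invented weights and feature names; it is not a reconstruction of the actual Dutch system.

```python
# Minimal sketch (hypothetical weights, not the real Dutch system) of why
# scoring a protected attribute is directly discriminatory: two claims that
# are identical on every substantive feature get different risk scores.

def risk_score(claim: dict) -> float:
    """Toy linear risk score; weights and features are invented."""
    score = 0.0
    score += 0.3 * claim["amount_claimed_deviation"]          # substantive signal
    score += 0.2 * claim["missing_documents"]                 # substantive signal
    score += 0.5 * (0.0 if claim["dutch_national"] else 1.0)  # the problem
    return score

claim_a = {"amount_claimed_deviation": 0.1, "missing_documents": 0, "dutch_national": True}
claim_b = {"amount_claimed_deviation": 0.1, "missing_documents": 0, "dutch_national": False}

print(f"{risk_score(claim_a):.2f}")  # 0.03
print(f"{risk_score(claim_b):.2f}")  # 0.53 -- flagged purely by nationality
```

Even where the attribute is only a proxy rather than an explicit feature, the effect on the scored population is the same, which is why courts have focused on outcomes as well as design.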
In Australia, the ‘Robodebt’ scandal saw the government use an automated system to calculate welfare overpayments, matching annual income data from the tax office against fortnightly welfare declarations to identify discrepancies.
This led to thousands of false debt notices being issued. The Federal Court of Australia ruled against the government, finding that the income-averaging approach used by the automated system did not provide a lawful basis for raising a debt.
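The arithmetic at the heart of the scheme's failure is simple to show: averaging a year's income evenly across fortnights manufactures "discrepancies" whenever earnings were uneven. The Python sketch below uses hypothetical figures and is a simplification of the actual data-matching process.

```python
# Minimal sketch of why income averaging misfires, with hypothetical figures.
# Robodebt compared fortnightly welfare declarations against annual tax-office
# income averaged evenly across the year, even when that income was actually
# earned in only part of it.

FORTNIGHTS_PER_YEAR = 26

# Hypothetical person: earned A$1,300 per fortnight for the first 10
# fortnights, then was unemployed and correctly declared A$0 on benefits.
actual_fortnightly_income = [1300] * 10 + [0] * 16
annual_income = sum(actual_fortnightly_income)  # A$13,000 reported to the tax office

# The flawed approach: spread annual income evenly over every fortnight.
averaged = annual_income / FORTNIGHTS_PER_YEAR  # A$500 per fortnight

# Any fortnight where the averaged figure exceeds the declared figure looks
# like undeclared income, even though every declaration was accurate.
false_flags = sum(1 for declared in actual_fortnightly_income if averaged > declared)
print(f"Averaged income per fortnight: ${averaged:.2f}")
print(f"Fortnights wrongly flagged: {false_flags} of {FORTNIGHTS_PER_YEAR}")
# -> 16 of 26 fortnights flagged despite fully accurate declarations.
```

The court did not need this level of detail to find the scheme unlawful, but it illustrates how a single design choice, applied automatically at scale, can generate errors by the thousand.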
Comparing this with the US and Dutch cases, we can see notable differences in judicial perspectives and decisions, underscoring an evolving legal landscape in which courts are still grappling with the nuances of algorithmic bias and discrimination.
International legal bodies and organisations have a pivotal role to play in moving towards a much-needed, harmonised global approach. The United Nations’ Global Digital Compact will present its key work on the impacts of artificial intelligence in September, including potential agreement on what algorithmic discrimination means on a global scale.
We also expect the EU’s AI Act, approved in December 2023, to become the first regional AI regulation this year. In Latin America, at least eight countries are in the process of regulating AI, and the African Union is likely to release an AI strategy for the continent in 2024.
A unified, global understanding of algorithmic discrimination is crucial in an era where technology transcends borders, and its impact is felt universally.
Without a standardised definition, efforts to regulate and mitigate the negative consequences of algorithmic bias will continue to be fragmented and inconsistent, leading to a patchwork of responses that fail to address the problem comprehensively.