The rise of automated workers: Part 2 – do robots discriminate?

Jasmine Patel and Jessica Hunt from the employment and discrimination team discuss the rise of automated workers and what it means for employment in this two-part blog series.

Robot workers
Jasmine is a solicitor working in the firm’s employment and discrimination team. Jessica is a legal executive in the employment team.
In part 1 of our blog on the rise of automated workers, we discussed what is happening in the world of robotics, AI and employment, and how it is affecting jobs. In part 2, we look at the problems that arise when discrimination is built into automated systems and how this can be prevented.


Do robots discriminate?


Whilst we may be able to adapt and find new jobs, other concerns accompany the rise of technology and artificial intelligence, as automation takes on jobs which many thought were the preserve of human beings alone. A particularly worrying feature is that these robots have been found, like their human programmers, to discriminate.

In October 2018, Amazon discovered that its AI recruitment tool did not like women and was forced to terminate the project. The aim of the tool was to automate recruitment, using algorithms to score applicants. However, it became clear that the tool was discriminating against women. It reportedly scored down applications which mentioned certain words such as “women’s” (e.g. “women’s chess club captain”), and assigned value to verbs more commonly found in men’s CVs, such as “executed” and “captured”.

Earlier this year, it was revealed that Facebook had algorithms in place which prevented minorities, women and the elderly from seeing certain advertisements, resulting in unlawful discrimination: another example of the concerning nature of these tools.

Further, a recent report found that Apple’s Siri and Amazon’s Alexa are rooted in gender bias. UNESCO found that the female-voiced technology often gives submissive responses to queries, and as such is embedding harmful gender biases. UNESCO’s report ‘I’d Blush if I Could’ said that “because the speech of most voice assistants is female, it sends a signal that women are obliging, docile and eager-to-please helpers, available at the touch of a button or with a blunt voice command like ‘hey’ or ‘OK’”.

The issue, of course, is that these AI tools are being created and programmed by humans who have their own in-built prejudices, whether conscious or not. Most AI tools are created by a predominantly male technology and engineering industry, in some instances using historical data which reflect patterns that favour men. In May 2018 the Science and Technology Committee, in its 4th Report, Algorithms and Decision Making, put forward the opinion that if historical recruitment data are fed into a company’s algorithm, the company will “continue hiring in that manner, as it will assume that male candidates are better equipped. The bias is then built and reinforces with each decision.”
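The feedback loop the Committee describes can be sketched in a few lines of Python. This is a toy illustration with made-up CVs and a deliberately naive word-scoring rule, not a reconstruction of any real system: the code itself is neutral, but because the historical “hired” labels skew male, the word “women’s” ends up penalised.

```python
from collections import Counter

# Tiny synthetic dataset: historical CVs labelled by past (biased) hiring
# decisions. The bias lives in the data, not in the scoring code below.
hired = [
    "executed project plan captured market share",
    "executed trading strategy led team",
    "captured key accounts executed rollout",
]
rejected = [
    "women's chess club captain organised events",
    "women's society treasurer managed budget",
    "organised charity events managed volunteers",
]

def word_scores(hired, rejected):
    """Naive scoring: how much more often each word appears in hired CVs."""
    h = Counter(w for cv in hired for w in cv.split())
    r = Counter(w for cv in rejected for w in cv.split())
    return {w: h[w] - r[w] for w in set(h) | set(r)}

def score_cv(cv, scores):
    """Score a new CV by summing the learned per-word scores."""
    return sum(scores.get(w, 0) for w in cv.split())

scores = word_scores(hired, rejected)
# "executed" is rewarded, "women's" is penalised -- purely because of the labels.
print(scores["executed"], scores["women's"])
```

If the system’s own decisions are then fed back in as new training labels, each round of hiring reinforces the original skew, which is exactly the “built and reinforces with each decision” dynamic the report warns about.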

Whilst corrections can be made, it is uncertain whether these types of AI tools would simply learn new ways to discriminate. Perhaps, as long as there are human programmers, this issue will be impossible to overcome.


How can discrimination be prevented? 


A wide representation of people and data is essential when formulating algorithms in order to reduce biased views and prejudices. As reported by the Science and Technology Committee in its 4th Report, “the importance of including individuals from diverse backgrounds, experiences, and identities… is one of the most critical and high-priority challenges for computer science and AI”. We need to ensure that we understand current systemic discrimination to avoid replicating it in machines.

In 2018 the World Economic Forum published a White Paper which provided background context for understanding the potential risks of discrimination in machine learning applications and how to prevent them. The report’s guideline principles on safeguarding against discrimination included diversity of input (broader datasets used to train the software will mitigate bias), fairness, decision-making explanations which are understandable, and access to redress.

If the correct approach is taken in the development of AI, with sufficient human control, representation and a regulatory framework, we could be looking towards a more advanced future that will hopefully assist and enhance, in a more objective and fair way, what we as humans can do.


The future 


The rise in automation shows no signs of slowing down. One can barely go a week without hearing a news item discussing automation and the dangers and risks humans may face as more advanced technology is created. If we cannot stop it, it is clear that we need to find ways to ensure it helps us.

The most important factor, in our view, is to continue having discussions about automation and to continue to identify and challenge discriminatory practices when we see them. Luckily we are adaptable creatures, and hopefully this will mean a comfortable path can be navigated, ensuring the robots work for us, and we don’t end up working for them.
