Coders’ dilemmas: The challenge of developing unbiased algorithms

[By Samarth Gupta]

The 21st century has seen the rapid rise of algorithms to control, manage, analyze, and perform any number of operations on data and information. Some believe that decision-making processes can be significantly improved by delegating tasks to algorithms, thereby minimizing human cognitive biases. As Tobias Baer argues:

“The rise of algorithms is motivated by both operational cost savings and higher quality decisions […]. To achieve this efficiency and consistency, algorithms are designed to remove many human cognitive biases, such as confirmation bias, overconfidence, anchoring on irrelevant reference points (which mislead our conscious reasoning), and social and interest biases (which cause us to override proper reasoning due to competing personal interests).”

However, there are increasing concerns that algorithmic decisions may be at least as biased and flawed as human decisions. One worry is that they may exhibit ‘race’ or ‘gender’ bias, as Amazon’s Artificial Intelligence (AI)-based recruitment tool demonstrated, compelling the company to scrap it altogether. According to a Reuters article, this was because “Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.”

With the boom in AI, algorithms are being viewed with renewed scrutiny given their sweeping impact on our everyday lives, including how they will shape the future of work. As a student of computer science, I have been actively coding for the last five years. I have been involved in various open source communities where programmers develop, test, and debug software. Moreover, having done internships at different institutions across India, I have closely interacted with programmers from different geographies and socio-economic backgrounds, learning about their perspectives. Reading the ongoing news on algorithmic bias, ethics, and how these systems can discriminate got me thinking. I wanted to understand how these biases find their way into algorithms. Are coders even aware of the possibility of bias in programming? What are the different (technical as well as non-technical) approaches they use to eliminate these biases? Do they receive any kind of ethical or value-based training from the companies they work for? Have they faced dilemmas about programming something they didn’t want to? If so, what was their approach to resolving the issue?

To explore these questions, I conducted a pilot study by interviewing software engineers working in different tech corporations across India. The participants were recruited from my school alumni group as well as from programming groups where I contribute to the software development cycle. The purpose was to explore the notions of ethics and human values coded into algorithms from a programmer’s perspective. The interviewees were selected with certain diversity parameters in mind – their level in the company, the kind of work they do, and their years of experience. The companies, too, ranged in size from start-ups to software giants.

Image credit: Pixnio

The responses were revealing. Firstly, they suggest that none of the organizations conducts any kind of ethical or moral training sessions for the software engineers. In the words of one of the participants:

“No bro…there weren’t any such trainings […]. There were classes on corporate communication and email etiquettes. But majority of the training was focussed on Java and Kubernetes (Computer Software) […]. The manager and other teammates are always there for help in case of any difficulty.”

It was also noted that most of these companies are dominated by male workers, as has been highlighted in the media. According to a diversity report by Google, women make up only about 20% of its engineering staff. This lack of representation inevitably allows stereotypes about women to be reflected in the software.

Tech companies often develop innovative products that cater to extremely large audiences. For instance, WhatsApp has close to two billion active users across 180 countries. Developing a product of such massive scale depends on complex databases, data structures, and algorithms. Even basic software programmes are technically intricate and involve teams of designers, managers, and engineers with different specializations. However, the code is very often subdivided into smaller modules, and specialized teams work on those modules independently of one another. In the words of one of the interviewees:

“Actually, everything happens in a hierarchical manner… first the senior management decides what all features should be there in the product…after that the product managers refine it […] finally the software engineers are asked to code for the features. For example, if I am developer engineer I will only code for the GPS route optimization… testing people will do different types of testing for checking if the code is stable […] designers will redefine and redesign the UI/UX […] and so on.”

This subdivision of tasks according to employees’ hyper-specialization prevents them from understanding the broader context, as they are limited to their own ‘tiny’ specialized job. This micro-division of work has two major consequences:

  1. Employees often feel alienated from the bigger picture as they are confined to their specialized task – debugging the code, testing it against pre-decided parameters, or developing a single module of the software. This alienation may be reflected in the software they develop and thereby affect its users. For instance, a programmer who is hesitant to communicate with others may favour implementing a help feature as an AI-powered chatbot rather than as an algorithm that asks other users for help.
  2. It becomes extremely hard to implement ethics in this institutional system. The engineers most often have no contextual awareness of the product they are developing. They focus only on the efficiency, scalability, and design of their code, with little regard for ethics or human values.

Most software companies and start-ups proudly claim that they have a casual working environment and provide various perks, flexibility, recreation facilities, and other benefits to their employees. The organization might have a culture of empathy, diversity, and collaboration – showing empathy by caring for employees’ families, for instance, or inclusiveness by recruiting specifically for women and LGBTQ groups. In the words of one of my interviewees:

“The company has a very sweet, family-like atmosphere […]. There are lots of fun activities, no restrictions on the dress code or fixed work timings…to the extent that on Friday evenings, the whole team goes out for some fun activity…But friend, there is no value of our suggestions…See, what happens is that in the weekly meetings we are asked to give our suggestions on the product, but you understand the working of corporate […] it’s just for formality so that we don’t feel excluded…strict top down orders as given by the senior management are followed […] we are just for coding their desirable features.”

This shows that when it comes to business, companies may become strictly hierarchical, with junior employees having no say in the products they code for. It clearly reflects the contradiction between the culture a company envisages and what its employees actually experience on the job.

Another dilemma that programmers face is a conflict of interest. Even when they encounter unfair work practices, they must choose between raising their voices against them, risking their jobs, or staying on the safe side and just keeping things moving forward. As one interviewee remarked, when asked whether (s)he had faced any such dilemma:

“Yes…this happened while I was working as an intern with [name of organization]. My task was to analyse the data of the financial transactions…I found out later that the analysis that I was doing was being used to gauge the consumer’s behaviour, but users were not informed…I felt like raising my voice…talked to my friends there [who were also working as interns]…finally decided not to do anything and continue with my project…as nothing was going to change with my objection and I would also lose my PPO [Pre placement offer]…company’s CTC [Cost to company] was very good…why should I interfere with it?”

The final dilemma that emerged in the interviews had to do with designing ethical AI algorithms. Programmers are careful not to explicitly insert any bias into their algorithms. For instance, no programmer would insert a line like “if the gender is female, then assign a lower score to the resume”. The bias creeps in at the time of ‘nurturing’ the algorithm – i.e., training it on data. Real-world data reflects human limitations and judgments, and these invariably get inscribed into the algorithm when it is trained on such data.
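A deliberately simplified sketch can make this concrete. The toy data, the word-frequency scorer, and the candidate résumés below are all hypothetical illustrations (not Amazon’s actual system): the model merely weights words by how often they appear in past hires’ résumés, with no rule mentioning gender anywhere, yet it penalizes a candidate whose résumé contains words absent from the historically skewed data.

```python
from collections import Counter

# Hypothetical historical data: résumés of past hires. As in the
# Amazon case described above, the pool skews towards one group.
past_hires = [
    "java kubernetes football captain",
    "java python football club",
    "kubernetes java chess club",
    "python java football",
]

# Naive "trained" scorer: a word's weight is simply how often it
# appears among past hires. No rule refers to gender anywhere.
weights = Counter(word for resume in past_hires for word in resume.split())

def score(resume: str) -> int:
    """Score a résumé by summing the learned word weights."""
    return sum(weights[word] for word in resume.split())

# Two candidates with identical technical skills but different hobbies:
score_a = score("java kubernetes football")  # hobby present in past data
score_b = score("java kubernetes netball")   # hobby absent from past data

print(score_a, score_b)  # prints: 9 6 – candidate B scores lower purely
                         # because 'netball' never appears in past hires
```

The point of the sketch is that the disparity emerges entirely from the training data, not from any explicit line of code – which is exactly why auditing the data matters as much as auditing the algorithm.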

Hence, the need of the hour is to sensitise data annotators and collectors to this problem, diversify data to make it more representative, and subject it to rigorous testing to eliminate stereotypes and biases. The platform-based gig economy has been rising consistently, as reported by the Economic Times, and will hence tremendously affect the future of work. Algorithmic bias plays an important role here. At Uber Eats, for instance, gig workers blame the algorithmic system for unexplained changes that led to cuts in their work and pay. Many women are excluded from the gig workforce due to the lack of value-based design for women on digital platforms, as reported by the Observer Research Foundation, an Indian think tank.

Algorithms (and code in general) are increasingly impacting our lives in this age of the platform society. If these tools are to work for everyone in a fair and just manner, values of fairness and justice should be part of their framework. It is not enough to appeal to programmers through measures such as ethical guidelines. Algorithmic biases need to be met in the same complex and wide-ranging way they affect us – from individuals to organizations, regulators, and society at large.