
Coders’ dilemmas: The challenge of developing unbiased algorithms

[By Samarth Gupta]

The 21st century has seen the rapid rise of algorithms to control, manage, analyze, and perform any number of operations on data and information. Some believe that decision-making processes can be significantly improved by delegating tasks to algorithms, thereby minimizing human cognitive biases. As Tobias Baer argues:

“The rise of algorithms is motivated by both operational cost savings and higher quality decisions […]. To achieve this efficiency and consistency, algorithms are designed to remove many human cognitive biases, such as confirmation bias, overconfidence, anchoring on irrelevant reference points (which mislead our conscious reasoning), and social and interest biases (which cause us to override proper reasoning due to competing personal interests).”

However, there are increasing concerns that algorithmic decisions may be at least as biased and flawed as human decisions. One worry is that they may exhibit ‘race’ or ‘gender’ bias, as Amazon’s Artificial Intelligence (AI) based recruitment tool demonstrated, compelling the company to scrap it altogether. According to a Reuters article, this was because “Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.”

With the boom in AI, algorithms are coming under renewed scrutiny, given their sweeping impact on our everyday lives, including how they will shape the future of work. As a student of computer science, I have been actively coding for the last five years. I have been involved in various open source communities where programmers develop, test, and debug software. Moreover, having done internships at different institutions across India, I have closely interacted with programmers from different geographies and socio-economic backgrounds and learned about their perspectives. Following the ongoing news on algorithmic bias, ethics, and how these systems can discriminate got me thinking. I wanted to understand how these biases find their way into algorithms. Are coders even aware of the possibility of bias in programming? What different approaches, technical as well as non-technical, do they use to eliminate these biases? Do they receive any kind of ethical or value-based training from the company they work for? Have they faced dilemmas about programming something they didn’t want to? If so, how did they approach the issue?

To explore these questions, I did a pilot study by interviewing software engineers working in different tech corporations across India. The participants were recruited from my school alumni group as well as from programming groups where I contribute to the software development cycle. The purpose was to explore the notions of ethics and human values coded into algorithms from the programmers’ perspective. The interviewees were selected keeping certain diversity parameters in mind: their level in the company, the kind of work they do, and their years of experience. The companies, too, ranged in size from start-ups to software giants.

Image credit: Pixnio

The responses were revealing. Firstly, they suggest that none of the organizations conducts any kind of ethical or moral training sessions for the software engineers. In the words of one of the participants:

“No bro…there weren’t any such trainings […]. There were classes on corporate communication and email etiquettes. But majority of the training was focussed on Java and Kubernetes (Computer Software) […]. The manager and other teammates are always there for help in case of any difficulty.”

It was also noted that most of these companies are dominated by male workers, as has been highlighted in the media. According to a diversity report by Google, women make up only about 20% of its engineering staff. This lack of representation inevitably finds its way into the software as stereotypes about women.

Tech companies often develop innovative products that cater to extremely large audiences. For instance, WhatsApp has close to two billion active users across 180 countries. Developing a product at such a massive scale depends on complex databases, data structures, and algorithms. Even basic software programmes are technically intricate and involve teams of designers, managers, and engineers with different specializations. Very often, the code is subdivided into smaller modules, and specialized teams work on these modules independently of one another. In the words of one of the interviewees:

“Actually, everything happens in a hierarchical manner… first the senior management decides what all features should be there in the product…after that the product managers refine it […] finally the software engineers are asked to code for the features. For example, if I am developer engineer I will only code for the GPS route optimization… testing people will do different types of testing for checking if the code is stable […] designers will redefine and redesign the UI/UX […] and so on.”

This subdivision of tasks according to employees’ hyper-specialization prevents them from getting to know the broader context, as they are limited to their own ‘tiny’ specialized jobs. This micro-division of work has two major consequences:

  1. Employees often feel alienated from the bigger picture as they are confined to their specialized task – debugging the code, testing it against pre-decided parameters, or developing a single module of the software. This alienation may be reflected in the software they develop and thereby affect its users. For instance, a programmer who is hesitant to communicate with others may favour building an AI-powered chatbot for the help functionality in their software instead of a feature that works by asking other users for help.
  2. It becomes extremely hard to implement ethics in this institutional system. The engineers most often have no contextual awareness of the product they are developing. They focus only on the efficiency of their code, its scalability, and its design, without any regard for ethics or human values.

Most software companies and start-ups proudly claim that they have a casual working environment and provide various perks, flexibility, recreation facilities, and many other benefits to their employees. The organization might have a culture of empathy, diversity, and collaboration. It may show empathy by caring for an employee’s family, or inclusiveness through recruitment drives specifically for women and LGBTQ groups, for instance. In the words of one of the interviewees:

“The company has a very sweet, family-like atmosphere […]. There are lots of fun activities, no restrictions on the dress code or fixed work timings…to the extent that on Friday evenings, the whole team goes out for some fun activity…But friend, there is no value of our suggestions…See, what happens is that in the weekly meetings we are asked to give our suggestions on the product, but you understand the working of corporate […] it’s just for formality so that we don’t feel excluded…strict top down orders as given by the senior management are followed […] we are just for coding their desirable features.”

This shows that when it comes to business, companies may become strictly hierarchical, with the junior employees having no say in any of the products that they code for. It clearly reflects the contradiction between the culture that a company envisages and what the employees actually experience on the job.

Another dilemma that programmers face is a conflict of interest. Even if they encounter unfair work practices, they must choose between raising their voice against them, at the risk of losing their job, and staying on the safe side and simply keeping things moving. As one interviewee remarked, when asked whether (s)he had faced any such dilemma:

“Yes…this happened while I was working as an intern with [name of organization]. My task was to analyse the data of the financial transactions…I found out later that the analysis that I was doing was being used to gauge the consumer’s behaviour, but users were not informed…I felt like raising my voice…talked to my friends there [who were also working as interns]…finally decided not to do anything and continue with my project…as nothing was going to change with my objection and I would also lose my PPO [Pre placement offer]…company’s CTC [Cost to company] was very good…why should I interfere with it?”

The final dilemma that emerged in the interviews had to do with designing ethical AI algorithms. Programmers are careful not to explicitly insert any bias into their algorithms. For instance, no programmer would write a line like “if the gender is female then assign a lower score to the resume”. The bias creeps in at the time of ‘nurturing’ the algorithm – i.e. training it on data. Real-world data reflects human limitations and judgments, and these are invariably inscribed into the algorithm during training.
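
To make this mechanism concrete, here is a minimal, purely illustrative sketch in Python. The data is synthetic and the variable names are hypothetical; this is not any company’s actual system. The code never references gender when scoring candidates, yet a model trained on historically biased hiring decisions ends up recommending women less often, because a feature that merely correlates with gender carries the bias:

    # Purely synthetic, illustrative example -- not any real hiring system.
    # Gender is never used as a feature, yet the trained model reproduces the
    # bias present in the historical labels through a correlated proxy feature.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    gender = rng.integers(0, 2, n)                  # 0 = man, 1 = woman (hypothetical labels)
    experience = rng.normal(5, 2, n)                # years of experience
    proxy = (gender == 1) + rng.normal(0, 0.3, n)   # e.g. a resume keyword correlated with gender

    # Historical hiring decisions were biased: equally experienced women were hired less often.
    hired = (experience - 5 - 1.0 * gender + rng.normal(0, 1, n)) > 0

    # The model sees only experience and the proxy -- no explicit "if female" rule anywhere.
    X = np.column_stack([experience, proxy])
    model = LogisticRegression().fit(X, hired)

    pred = model.predict(X)
    print("Predicted hire rate, men:  ", round(pred[gender == 0].mean(), 3))
    print("Predicted hire rate, women:", round(pred[gender == 1].mean(), 3))

The point is not the specific numbers but the mechanism: the discriminatory rule appears nowhere in the code, yet it is learned from the data.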

Hence, the need of the hour is to sensitise data annotators and collectors to this problem, to diversify datasets so that they become more representative, and to subject models to rigorous testing to eliminate stereotypes and biases. The platform-based gig economy has been rising consistently, as reported by the Economic Times, and will therefore tremendously affect the future of work. Algorithmic bias plays an important role here. At Uber Eats, for instance, gig workers blame the algorithmic system for unexplained changes that slashed their work and pay. Many women are excluded from the gig workforce due to the lack of value-based design for women in digital platforms, as reported by the Observer Research Foundation, an Indian think tank.
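
As a rough illustration of what such testing can look like in practice, the sketch below compares a model’s selection rates across groups before it ships, one of the simplest checks an engineering team could run. The predictions, group labels, and tolerance here are hypothetical placeholders, not a prescribed standard:

    # A minimal bias check: compare positive-prediction ("selection") rates across groups.
    # The predictions, group labels, and tolerance below are hypothetical placeholders.
    import numpy as np

    def selection_rate_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
        """Largest difference in selection rate between any two groups."""
        rates = [predictions[groups == g].mean() for g in np.unique(groups)]
        return float(max(rates) - min(rates))

    preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])                      # model outputs
    group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])  # applicant groups

    gap = selection_rate_gap(preds, group)
    print(f"Selection-rate gap: {gap:.2f}")
    if gap > 0.1:  # an arbitrary, illustrative tolerance
        print("Flag for review: the model selects one group noticeably more often than another.")

Real audits would go much further, with multiple fairness metrics, intersectional groups, and held-out representative data, but even a check this simple forces the broader context back into the engineer’s narrow module.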

Algorithms (and code in general) increasingly shape our lives in this age of the platform society. If these tools are to work for everyone in a fair and just manner, values of fairness and justice should be part of their framework. It is not enough to appeal to programmers through measures such as ethical guidelines. Algorithmic biases need to be met in the same complex and wide-ranging way in which they affect us – from individuals to organizations, regulators, and society at large.


Design for one: Centering the inadequacy of technology

[By Chinar Mehta]

The first computer in my own home, years ago, was kept in the only room with air conditioning: my parents’ bedroom. The tangle of wires positioned my parents as the authority in the household – there was nothing we could do on the “personal” computer that could escape their notice. Few of us can deny the significance of the place of the computer, the television, or the telephone in our homes. Chairs and sofas point towards the television, and the location of the telephone censors whom we call and how we talk to them. Even my first interactions with a mobile phone, a device otherwise designed for the individual, were on my mother’s phone – a cause of minor annoyance for her when the ringtone would be altered without her consent. My mobile phone today, however, has become much harder to share because of all that I need to protect. I keep it locked with my fingerprint. It contains access to my social networks, fusing a kind of personal identity with my phone. Even more importantly, my phone maps my bank account to me through apps like BHIM or Google Pay, DigiLocker allows me to carry valid digital copies of my identity documents, and Aarogya Setu declares me safe or unsafe from SARS-CoV-2 infection. The phone (and its accessory technologies) is increasingly designed not just to be on my person at all times, but to be me. While this may seem like a runaway train at this point to many “digital natives”, we must be willing to denaturalise the mobile phone as a “private” object before we can arrive at design strategies that will be universally acceptable.

Phone users in India: Design for one? (Image credit: UN Women Gallery)

The design context of the smartphone is one where the positioning of the imagined user is severely limited. Many research studies speak to the wide digital divide between men and women in South Asian countries, particularly India, Pakistan, and Bangladesh. In India, men are 33% more likely to own a phone than women, according to a study conducted by the John F. Kennedy School of Government at Harvard University. The study attempts to explain this divide with reference to the gender norms that govern acceptable usage of mobile phones by women, positing the device as “challenging traditional gender norms”. Of the women who used mobile phones, 47% borrowed them, and the study claims that this poses significant constraints on diversification and independence, especially when women borrow the phones from their husbands. Moreover, the study also suggests that many more women use phones for a simple activity such as making a call than for something more complex, like using social media websites. These conclusions are worrying in light of research suggesting that access to internet technologies exposes women in developing nations to a wide range of benefits: empowering rural women in India and South Africa, helping Iranian women participate in the national discourse via blogging, and aiding women entrepreneurs in Indonesia through social media, among others.

While it may be fruitful to examine the various gender norms that guide the rules and contexts of mobile phone use in India, what generally escapes scrutiny is the assumption that sharing a technology such as a mobile phone does not allow it to be used in the best way possible. The technology itself evades examination. This points to a predilection of some social science research where questions “tend to be framed in terms of what is wrong with the person who is experiencing the problem, rather than in terms of what it is about the current social order that makes the problem likely”. This criticism of the framing of questions also holds true when the design of the technology is not adequately addressed, but contextual social norms are. The assumption here is that developing societies have restrictive social norms that do not allow for the use of technology that is seemingly designed for universal use. Internet technologies are driven by commercial or state interests and are seldom analysed as inadequate for use by marginalised identities. The users of most technologies are imagined by their developers to be very different from those our team at FemLab.Co imagines as part of this project – normatively, an “ungendered” white user. In this way, technologies fit into the larger ecosystem of a neoliberal economy with a deeply gendered culture and design. The mobile phone, in particular, becomes by design a technology that each individual must have access to and use in a certain way.

With this context in mind, revisiting the mobile phone as a private object becomes crucial. The personal computer of the 1990s, and even the feature phone to a certain extent, was used in a manner that kept the connections between individual identity and the machine loose. It lent itself to shared use in a household, and it was used as such. The smartphone, in this regard, precludes shared use by design. Social media applications add to this predicament: all of them are designed for use by one person on one device. And yet, surprisingly, research suggests that economic constraint is not the only reason why people may share mobile phones. Social norms in some communities make it acceptable to share mobile phones between members of a family or even friends. Even relatively privileged women sometimes have little interest in owning a smartphone. When women do use smartphones, many of them do not make use of social media, which may require a particular kind of technical skill. This kind of skill can be assumed for those familiar with technology prior to the smartphone, like a computer, but not for others. Again, this is not simply a question of teaching individuals to use technologies. Rather, I would like to pose a different set of questions: what is it about a technology that prevents or limits its use? What do women workers in diverse industrial sectors in Hyderabad find lacking in any media technology? And, as far as technical skill is concerned, are there opportunities to learn with shared devices within groups?

We should be concerned about the lack of ownership of mobile phones, but one way to mitigate the problem is to redesign smartphones as devices that become part of an existing pattern of media use (or communicative ecology). Therefore, I argue that if many women are not able to derive the full benefits of a smartphone because (among other reasons) they are sharing the phone with someone else, this stems partly from the design of the smartphone (and its applications) itself. It is essential to centre this inadequacy. Similarly, if women are negotiating with the intended use of the mobile phone, we have a lot to learn from these negotiations. It is in this spirit that we, at FemLab.Co, will attempt to look at the communicative ecologies of women workers in Hyderabad.