
Is feminist design a solution to platform workers’ problems?

[By Pallavi Bansal]

Imagine a scenario in which you do not get shortlisted for a job interview – not because you are underqualified – but because the algorithms were trained on data sets that excluded or underrepresented your gender for that particular position. Similarly, you find out that you are consistently paid less than your colleagues in a sales job – not because of any inability to bring in clients or customers for the company – but because the reward algorithms favoured clients belonging to a certain religion, race or ethnicity. Further, you are asked to leave the company immediately, without any notice or opportunity to speak to your manager – not because you made a mistake – but because clients rated you low based on prejudice.

While these biases, favouritism and discrimination could soon become a reality in mainstream workplaces due to the exponential growth of decision-making algorithms, they are already causing disruption in the online gig economy. Recently, researchers at George Washington University found social bias in the dynamic-pricing algorithms used by the ride-hailing platforms Uber, Lyft and Via in Chicago, US. The study found that “fares increased for neighbourhoods with a lower percentage of people above 40, a lower percentage of below median house price homes, a lower percentage of individuals with a high-school diploma or less, or a higher percentage of non-white individuals.” The authors of the paper, Akshat Pandey and Aylin Caliskan, told the American technology website VentureBeat, “when machine learning is applied to social data, the algorithms learn the statistical regularities of the historical injustices and social biases embedded in these data sets.”

These irregularities have also been spotted in relation to gender. A study by Stanford researchers, using a database of a million Uber drivers in the United States, documented a 7% pay gap in favour of men. The researchers attributed the gap to differences in experience on the platform, constraints over where to work (drive), and preferences for driving speed – but the study arguably highlighted an even bigger problem. Cambridge University researcher Jennifer Cobbe told Forbes, “rather than showing that the pay gap is a natural consequence of our gendered differences, they have actually shown that systems designed to insistently ignore differences tend to become normed to the preferences of those who create them.” She said the researchers shifted the blame to women drivers for not driving fast enough and ignored why performance is evaluated on the basis of speed rather than other parameters such as safety. Further, in the context of women workers in the Indian gig economy, it is imperative to understand whether these biases are socially inherent. For instance, if platform companies segregate occupations based on gender, then the resulting pool will inherently lack gender variation. This also compels us to ponder whether the concentration of female labour in beauty and wellness services, cleaning or formalised care work is a result of inherent social bias or of technical bias.

To make sense of all of this and understand how we can improve the design of these digital labour platforms, I spoke to Uday Keith, a Senior AI Developer with Wipro Digital in Bengaluru. His responses drew my attention to informatics scholar Shaowen Bardzell’s feminist human-computer interaction design paradigm, which I use to contextualize them.

Illustration by Pallavi Bansal

PB: How can we overcome biases in algorithms?

UK: First of all, algorithms are not biased; it is the datasets that are biased. Imbalances in the datasets can be corrected via a method known as SMOTE (Synthetic Minority Over-sampling Technique), where researchers recommend over-sampling the minority class and under-sampling the majority class. To achieve this, we need to bring diversity to our training datasets and identify all the missing demographic categories. If any category is underrepresented, models developed with this data will fail to generalize properly. At the same time, it is essential for AI developers to continuously monitor and flag these issues, as population demographics are dynamic in nature.
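
To make this concrete, here is a minimal sketch using the open-source imbalanced-learn library, combining SMOTE over-sampling of the minority class with random under-sampling of the majority class. The toy dataset and the resampling ratios are illustrative assumptions, not any company’s actual pipeline.

```python
# Minimal sketch: rebalance a skewed dataset with SMOTE + random under-sampling.
# The synthetic dataset below stands in for real (e.g. hiring) data.
from collections import Counter

from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification

# Toy binary dataset with roughly a 9:1 class imbalance.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
print("before:", Counter(y))

# 1) Over-sample the minority class up to 50% of the majority class size.
X_over, y_over = SMOTE(sampling_strategy=0.5, random_state=0).fit_resample(X, y)

# 2) Under-sample the majority class until the two classes are balanced 1:1.
X_res, y_res = RandomUnderSampler(sampling_strategy=1.0, random_state=0).fit_resample(X_over, y_over)
print("after:", Counter(y_res))
```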

This points us toward two of the core qualities proposed by Bardzell – Pluralism and Ecology. According to her, it is important to investigate and nurture the marginal while resisting a universal or totalizing viewpoint; she stresses the need to consider cultural, social, regional, and national differences when developing technology. The quality of ecology further urges designers to consider the broadest contexts of design artifacts and to be aware of the widest range of stakeholders. This means AI developers cannot afford to leave any stakeholder out of the design process and should also consider whether their algorithms would reproduce any social bias.

PB: Can there be a substitute for the gamification model?

UK: To simplify the process and ensure equity in the gig economy, platform companies can ask AI developers to introduce a “rule”. This would mean fixing a minimum number of rides or tasks a platform worker gets in a day, which can also help ensure a minimum wage and provide a certain level of income security. The introduction of a fixed rule can even eliminate social biases, as it would prevent a particular gender or social group from getting less work. Further, the reward system can undergo a major overhaul. For instance, rather than incentivizing drivers to drive more and indulge in compulsive game-playing, platform companies can build algorithms that provide financial rewards when drivers follow traffic rules and regulations, drive within permissible speed limits, and ensure a safe riding experience. In fact, we can even give customers the option of receiving discount coupons if they allow drivers to take short breaks.
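
A hypothetical sketch of what such a “rule” and a safety-based reward might look like in code; the minimum-ride threshold, bonus amount and field names are invented for illustration and are not drawn from any platform.

```python
# Hypothetical sketch: a fixed allocation rule plus a safety-oriented bonus.
from dataclasses import dataclass

MIN_DAILY_RIDES = 8        # assumed guaranteed minimum tasks per active driver
SPEED_LIMIT_KMPH = 60      # assumed permissible speed limit

@dataclass
class DayRecord:
    rides_completed: int
    traffic_violations: int
    max_speed_kmph: float
    safety_complaints: int

def needs_priority_allocation(record: DayRecord) -> bool:
    """Drivers below the daily minimum are moved up the dispatch queue."""
    return record.rides_completed < MIN_DAILY_RIDES

def safety_bonus(record: DayRecord, base_bonus: float = 150.0) -> float:
    """Reward safe driving instead of sheer ride volume."""
    if (record.traffic_violations == 0
            and record.max_speed_kmph <= SPEED_LIMIT_KMPH
            and record.safety_complaints == 0):
        return base_bonus
    return 0.0

today = DayRecord(rides_completed=5, traffic_violations=0,
                  max_speed_kmph=55.0, safety_complaints=0)
print(needs_priority_allocation(today), safety_bonus(today))
```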

Elaborating on participation, Bardzell suggests an ongoing dialogue between designers and users to explore and understand the work practices that could inform design. This also means that if platform companies and AI developers are oblivious to the needs and concerns of labour, they may end up designing technology that unintentionally sabotages users. Secondly, an advocacy position should be taken up carefully. In the earlier example, “driving fast” rather than “safety” was treated as the performance measure, which happens because designers run the risk of imposing their own “male-oriented” values on users.

PB: How can work allocation be made more transparent?

UK: Well, the deep learning algorithms used by various companies have, to a certain extent, a “black box” property. These algorithms are dynamic in nature as they keep learning from new data during use. One can only make sense of this by continuously recording the weightage assigned to the pre-decided variables.

The quality of self-disclosure recommended by Bardzell calls for users’ awareness of how they are being computed by the system: the design should make visible the ways in which it affects people as subjects. For instance, platform companies can display the variables and the corresponding algorithmic weightage per assigned task on the worker’s smartphone screen. So, if a platform driver has not been allocated a certain ride due to their past behaviour, the technology should be transparent enough to reveal that information to them. Uncovering the weightage given by the various decision-making algorithms will enable platform workers to reform their behaviour and give them a chance to communicate back to the companies in case of discrepancies or issues.
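
As a rough illustration of this kind of self-disclosure, the sketch below logs, for each allocation decision, the variables, their weights and their contributions, so the same record could be surfaced in the worker’s app or used to contest a decision. All variable names and weights are hypothetical.

```python
# Hypothetical sketch: record and expose the per-decision variables and weights.
import json
from datetime import datetime, timezone

def explain_allocation(driver_id: str, features: dict, weights: dict) -> dict:
    contributions = {k: round(features[k] * weights[k], 3) for k in weights}
    record = {
        "driver_id": driver_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "weights": weights,               # what the system currently values
        "contributions": contributions,   # how each variable affected this decision
        "score": round(sum(contributions.values()), 3),
    }
    print(json.dumps(record, indent=2))   # in production: log it and push it to the driver app
    return record

explain_allocation(
    "driver_042",
    features={"proximity_km": 1.2, "rating": 4.7, "acceptance_rate": 0.88},
    weights={"proximity_km": -0.4, "rating": 0.3, "acceptance_rate": 0.5},
)
```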

PB: How can we improve the rating systems?

UK: Platform companies have started using qualitative labels that could help users rate workers better. However, we do need to see whether sufficient options are listed and suggest changes accordingly. Moreover, if we want to avoid the numerical rating system entirely, we can ask users to always describe their feedback in a sentence or two. This can be analysed using Natural Language Processing (NLP), a subfield of Artificial Intelligence that helps machines understand human language and derive meaning from it.
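
A minimal sketch of such an analysis, using NLTK’s off-the-shelf VADER sentiment analyser on invented feedback comments, might look like this:

```python
# Minimal sketch: score free-text feedback with NLTK's VADER sentiment analyser.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download
analyser = SentimentIntensityAnalyzer()

comments = [
    "Driver was polite and took a safe, comfortable route.",
    "Cab was late and the driver was on the phone the whole ride.",
]
for text in comments:
    scores = analyser.polarity_scores(text)
    # 'compound' ranges from -1 (very negative) to +1 (very positive)
    print(f"{scores['compound']:+.2f}  {text}")
```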

Bardzell writes about the quality of embodiment with respect to meaningful interactions with technology and acknowledging the whole humanness of individuals, so as to create products that do not discriminate based on gender, religion, race, age, physical ability, or other human features. This concept should also be applied to how users rate workers and whether they discriminate on the basis of appearance or other factors. Hence, there is a strong need to include qualitative rating systems alongside quantitative ones.

Additionally, Uday Keith recommends defining “ethics” and conducting frequent ethics-based training sessions, since data science teams are made up of people from diverse backgrounds and, in an urban Indian city like Bengaluru, comprise roughly 10% women. He concluded by remarking that the issues in the platform economy are more a fault of system design than of algorithmic design – the companies consciously want to operate in a certain way and hence do not adopt the above recommendations.

These pointers make the case for the adoption of a feminist design framework that could bring about inclusive labour reforms in the platform economy. As Bardzell says, “feminism has far more to offer than pointing out instances of sexism,” because it is committed to issues such as agency, fulfilment, identity, equity, empowerment, and social justice.


Platform drivers: From algorithmizing humans to humanizing algorithms

[By Pallavi Bansal]

I remember getting stranded in the middle of the road a few years ago when an Ola cab driver remarked that my trip had stopped abruptly and he could not take me to my destination. Frantic, I still requested him to drop me home, but he refused, saying he could not complete the ride since the app had stopped working. On another unfortunate day, I was unable to find a cab back home as the drivers kept refusing to take up what they saw as a long ride. When I eventually found a cab, the driver complained throughout about how multiple short rides benefit him more. I tried to tip him after he finished the ride, but instead he requested me to book the same cab again, for a few kilometres, as that would reap more rewards. While I wanted to oblige, I couldn’t find the same driver, even though he had parked his car right outside my house. In yet another incident, I spent the entire night at the airport as I was terrified to book a cab at that late hour. I regretted not checking the flight timings before confirming the booking, having overlooked the fact that women need to be cautious about these things.

Image credit: Pixabay / Pexels

Although my first response was to blame the cab drivers for what I saw as an unprofessional attitude, it slowly dawned on me that they have their own constraints. In the first scenario, the app had actually stopped working, so he couldn’t complete the ride due to the fear of getting penalized, which also resulted in a bad rating by me. In the second situation, I wondered why the algorithms reward shorter rides rather than longer ones. Moreover, how do they assign drivers if proximity isn’t the only factor and why was my driver not aware of that? In the third instance, why couldn’t I be assigned a woman driver to make me feel safer when traveling late at night?

I spoke to a few senior managers and executives working at popular ride-sharing apps in India to find the answers.

Constant tracking

A senior manager of a well-known ride-sharing platform explained their tracking practices on condition of anonymity:

“The location of driver-partners is tracked every two-three seconds and if they deviate from their assigned destination, our system detects it immediately. Besides ensuring safety, this is done so that the drivers do not spoof their locations. It has been noticed that some drivers use counterfeit location technology to give fake information about their location – they could be sitting at their homes and their location would be miles away. If the system identifies anomalies in their geo-ping, we block the payment of the drivers.”

While this appears to be a legitimate strategy to address fraud, there is no clarity on how a driver can produce evidence when there is an actual GPS malfunction. Another interviewee, a person in a top management position at a ride-sharing company, said, “it is difficult to establish trust between platform companies and driver-partners, especially when we hear about drivers coming up with new strategies to outwit the system every second day.” For instance, some drivers brought a technical hacker on board so that bookings could be made via a computer rather than a smartphone, or artificially surged the price by collaborating with other drivers and turning their apps off and on again simultaneously.

Though the ‘frauds’ committed by drivers are out in the public domain, it is seldom discussed how constant surveillance reduces productivity and amplifies frustration, resulting in ‘clever ways’ of fighting back. Drivers are continuously tracked by ride-sharing apps, and if they fail to follow any of the instructions provided by these apps, they either get penalized or banned from the platform. This technology-mediated scrutiny can intensify drivers’ negativity and have adverse effects on their mental health and psychological well-being.

Algorithmic management

Algorithms control several aspects of the job for the drivers – from allocating rides to tracking workers’ behaviour and evaluating their performance. This lack of personal contact with the supervisors and other colleagues can be dehumanizing and disempowering and can result in the weakening of worker solidarities.

When asked if the algorithms can adjust the route for drivers, especially women, if they need to use the restroom, a platform executive said, “They always have the option not to accept the ride if there is a need to use the washroom. The customers cannot wait if the driver stops the car for a restroom break, and at the same time, who will pay for the waiting time?”

Image credit: Antonio Batinić / Pexels

While this makes sense at first glance, in reality the algorithms of a few ride-sharing platforms like Lyft penalize drivers in such cases by lowering their assignment acceptance rate (the number of ride requests accepted by the driver divided by the total number of requests received). Lee and colleagues, HCI (Human-Computer Interaction) scholars from Carnegie Mellon University, explored the impact of algorithmic management on human workers in the context of ride-sharing platforms and found:

 “The regulation of the acceptance rate threshold encouraged drivers to accept most requests, enabling more passengers to get rides. Keeping the assignment acceptance rate high was important, placing pressure on drivers. For example, P13 [one of the drivers] stated in response to why he accepted a particular request: ‘Because my acceptance rating has to be really high, and there’s lots of pressure to do that. […] I had no reason not to accept it, so […] I did. Because if, you know, you miss those pings, it kind of really affects that rating and Lyft doesn’t like that.’”

Uber no longer displays the assignment acceptance rate in the app and states that it does not have an impact on drivers’ promotions. Ola India’s terms and conditions state that “the driver has sole and complete discretion to accept or reject each request for Service”, without mentioning the acceptance rate. However, Ola Australia indicates the following on its website: “Build your acceptance rate quickly to get prioritised for booking! The sooner and more often you accept rides (as soon as you are on-boarded), the greater the priority and access to MORE ride bookings!”
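
To illustrate the mechanic, here is a small sketch of how an acceptance rate might be computed and checked against a threshold; the 80% cut-off and the numbers are hypothetical, not any platform’s published rule.

```python
# Hypothetical sketch: accepted requests / total requests, checked against a threshold.
def acceptance_rate(accepted: int, total: int) -> float:
    return accepted / total if total else 0.0

THRESHOLD = 0.80           # assumed cut-off for illustration only
requests_received = 50

for rejected in (2, 5, 12):
    rate = acceptance_rate(requests_received - rejected, requests_received)
    status = "ok" if rate >= THRESHOLD else "at risk of deprioritisation"
    print(f"rejected {rejected:>2}/{requests_received}: rate = {rate:.0%} -> {status}")
```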

This lack of information, coupled with ambiguity, complicates the situation for drivers, who try not to reject rides under any circumstances. Moreover, the algorithms are designed to put persistent pressure on drivers through psychological tricks, as pointed out by Noam Scheiber in an article for The New York Times:

“To keep drivers on the road, the company has exploited some people’s tendency to set earnings goals — alerting them that they are ever so close to hitting a precious target when they try to log off. It has even concocted an algorithm similar to a Netflix feature that automatically loads the next program, which many experts believe encourages binge-watching. In Uber’s case, this means sending drivers their next fare opportunity before their current ride is even over.”

The algorithmic decision-making also directs our attention to how the rides are allocated. The product manager of a popular ride-sharing app said:

“Apart from proximity, the algorithms keep in mind various parameters for assigning rides, such as past performance of the drivers, their loyalty towards the platform, feedback from the customers, whether the drivers made enough money during the day, etc. The weightage of these parameters keeps changing and hence cannot be revealed.”
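
None of the interviewees would reveal the actual weightage, but a purely hypothetical sketch can illustrate how such a weighted, multi-parameter dispatch score might rank candidate drivers; all factor names and weights below are assumptions.

```python
# Hypothetical sketch: a weighted score over the factors the product manager listed.
WEIGHTS = {
    "proximity": 0.40,          # closer drivers score higher
    "past_performance": 0.25,
    "loyalty": 0.15,
    "customer_feedback": 0.15,
    "earnings_gap": 0.05,       # drivers short of a typical day's earnings get a nudge
}

def dispatch_score(driver: dict) -> float:
    """Each factor is assumed to be pre-normalised to the 0-1 range."""
    return sum(WEIGHTS[k] * driver[k] for k in WEIGHTS)

candidates = [
    {"id": "A", "proximity": 0.9, "past_performance": 0.7, "loyalty": 0.8,
     "customer_feedback": 0.9, "earnings_gap": 0.2},
    {"id": "B", "proximity": 0.6, "past_performance": 0.9, "loyalty": 0.9,
     "customer_feedback": 0.8, "earnings_gap": 0.9},
]
best = max(candidates, key=dispatch_score)
print("assign ride to driver", best["id"])
```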

All four people interviewed said that the number of women driving professionally is considerably low. This makes it difficult for the algorithms to match women passengers with women drivers. It may also delay ride allocation for women passengers, as the algorithms will first try to locate women drivers.

A lack of understanding of how algorithms assign tasks makes it difficult to hold these systems accountable. Consequently, a group of UK Uber drivers has decided to launch a legal bid to uncover how the app’s algorithms work – how rides are allocated, who gets the short rides, who gets the nice rides. As reported in The Guardian, the drivers’ claim says:

“Uber uses tags on drivers’ profiles, for example ‘inappropriate behaviour’ or simply ‘police tag’. Reports relate to ‘navigation – late arrival / missed ETA’ and ‘professionalism – cancelled on rider, inappropriate behaviour, attitude’. The drivers complain they were not being provided with this data or information on the underlying logic of how it was used. They want to [know] how that processing affects them, including on their driver score.”

Multiple, conflicting algorithms also affect drivers’ trust in algorithms, as elaborated in an ongoing study of ‘human-algorithm’ relationships. The researchers discovered that Uber’s algorithms often conflict with each other while assigning tasks: for example, drivers were expected to cover the airport area but, at the same time, received requests from a 20-mile radius. “The algorithm that emphasizes the driver’s role to cover the airport was at odds with the algorithm that emphasizes the driver’s duty to help all customers, resulting in a tug o’ war shuffling drivers back and forth.” Similarly, a conflict often arises when drivers are in a surge area and get pings to serve customers somewhere out of the way.

Ultimately, we need to shift from self-optimization as the end goal for workers towards humane algorithms – algorithms that centre workers’ pressures, stress, and concerns in this gig economy. This would also change the attitudes of passengers, who need to see platform drivers as human beings facing challenges at work, like the rest of us.


Platformizing women’s labour: Towards algorithms of empowerment

[By Pallavi Bansal]

As the fifth-born daughter of a poverty-stricken couple in a small village in Karnataka, Rinky would consider herself fortunate on days she did not have to sleep on an empty stomach. Her parents pressurised her to take care of her younger brother while they struggled to make ends meet. As the siblings grew up, the brother started going to a nearby school, while Rinky managed the household chores along with her sisters. A curious teenager, Rinky cajoled her brother into teaching her every now and then, including how to operate the smartphone the family had recently acquired. When she turned 19, she mustered the courage to move to Bengaluru in search of a better life. In the beginning, she survived by doing menial jobs such as cleaning houses, cooking and washing dishes. She earned a mere Rs 15,000 (about 200 USD) a month, just enough to get by. She felt disrespected, having to deal with constant humiliation, until someone in the neighbourhood advised her to learn to drive and partner with the ride-hailing platform Ola.

Image credit: Renate Köppel, Pixabay

While there were initial hiccups in procuring the vehicle and learning how the app works, this move dramatically changed her life, turning her into a micro-entrepreneur with a lucrative take-home income of Rs 60,000 every month. Besides allowing flexible work hours, the job provided her a sense of independence that was missing when she worked for others. She wasn’t really bothered about how rides were assigned to her, though she always worried about her safety when picking up male passengers. At the same time, she was unable to comprehend how some of her colleagues earned more than her despite driving for a similar number of hours.

“Rinky” is a composite character, but she represents the stories of many women for whom the platform-based economy has opened up a plethora of employment opportunities. The interesting aspect is that women workers are no longer confined to the stereotypical jobs of salon or care workers; they are venturing into hitherto male domains such as cab driving and delivery services as well. The Babajob platform recorded a 153 per cent increase in women’s applications for driver jobs in fiscal 2016. According to the Road Transport Yearbook for 2015-16 (the latest such report available), 17.73% of the 15 million licensed female drivers drive professionally. Though there are no distinct figures available for how many women are registered as drivers with Ola and Uber, ride-hailing app Ola confirmed a rise of 40% every quarter in the number of female drivers on its platform. Moreover, cab aggregator Uber announced a tie-up with a Singapore-based company to train 50,000 women taxi drivers in India by this year.

Clearly, the ride-sharing economy is helping Indian women break the shackles of patriarchy and improve their livelihoods. However, the potential of these platforms cannot be fully utilized unless researchers turn their attention to the algorithms that govern them. These algorithms not only act as digital matchmakers assigning passengers to drivers but regulate almost all aspects of the job – from monitoring workers’ behaviour to evaluating their performance. These machines often fail to treat workers as humans – as people who fall sick, need a leisure break, socialise with others to stay motivated, de-route to pick up their kids from school, attend to an emergency at home, lose their temper occasionally, or come to work after facing physical abuse at home. In a normal work environment, employers tend to understand their team members and often respond with compassion during tough times.

Image credit: Satvik Shahapur

Research shows how these data-driven management systems, especially in the context of ride-sharing apps, impact human workers negatively because they lack human-centred design. Researchers found that female drivers sometimes did not accept male passengers without profile pictures at night, only to be penalized by the algorithms later. Moreover, drivers complain of rude passengers, which is seldom taken into consideration by platform companies and only lowers the driver’s acceptance rate and ratings.

Technology creators need to ask themselves how to ensure that algorithms are designed to enable workers and are not just optimized for customer satisfaction. Alternatively, they need to see workers’ satisfaction as an extension of customer satisfaction, given that these two realms reinforce one another. By being sensitive to the needs of women like Rinky, who are perhaps stepping out into this male-dominated world for the first time, programmers could create a more empowering pathway for such women workers. With entrenched gender norms burdening women with familial duties and limiting their access to education and skills training, intervention by platform designers can promise genuine change. While cultural change often takes a long time, designers can accelerate this shift by placing women at the centre.

More concretely, what if platform companies did the following:

  • Create a feedback/resolution system that accounts for rejections and safeguards ratings when women drivers turn down passengers they consider a potential threat.
  • Institute flexibility for drivers who want to go home early, without translating this into ‘lower incentives’ – after all, flexibility is the premise of the gig economy.
  • Design AI to promote workers’ well-being: following a demanding or intensive piece of work (a long ride, in this case), it could recommend a relatively easier task for the driver.
  • Ensure transparency in how wages are allocated to different people and in how the autonomous systems impact ratings, along with a system of redressal, i.e. one that allows for corrections.
  • Encourage a community-building culture rather than an individualistic one – social incentives could be given to drivers who pick up rides when an assigned driver is unable to reach the destination, instead of penalizing him or her.
  • While the in-built GPS system in apps can help drivers track public toilets and other places that could be used for restroom breaks, algorithms could be trained to adjust routes according to drivers’ needs and the availability of amenities.
  • Moreover, popular ride-sharing platforms like Ola and Uber can consider assigning women passengers to women drivers, especially during the night-time (a minimal sketch of such a rule follows this list). This move can make both parties feel secure, considering that women dread boarding a taxi in an ‘unsafe’ country like India.
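
As promised above, here is a minimal sketch of the night-time matching rule from the last point: prefer available women drivers for women passengers late at night, and fall back to the nearest driver otherwise. The hours, field names and fallback behaviour are hypothetical assumptions.

```python
# Hypothetical sketch: prefer women drivers for women passengers at night.
from datetime import time

NIGHT_START, NIGHT_END = time(21, 0), time(6, 0)   # assumed "night" window

def is_night(t: time) -> bool:
    return t >= NIGHT_START or t <= NIGHT_END

def pick_driver(passenger: dict, drivers: list[dict], booking_time: time) -> dict:
    pool = drivers
    if passenger.get("gender") == "female" and is_night(booking_time):
        women = [d for d in drivers if d["gender"] == "female"]
        if women:                       # only narrow the pool if women drivers are online
            pool = women
    return min(pool, key=lambda d: d["distance_km"])   # otherwise, nearest driver

drivers = [
    {"id": "D1", "gender": "male", "distance_km": 0.8},
    {"id": "D2", "gender": "female", "distance_km": 2.1},
]
print(pick_driver({"gender": "female"}, drivers, time(23, 30)))
```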

Across disciplines, if we brainstorm on reimagining these platforms as cooperative instead of competitive spaces, as human-centred rather than optimization-centred, and as feminist-oriented and not just male-oriented, there may be more promise for our digital wellbeing.