Disparate Impact Diminishes Consumer Trust Even for Advantaged Users

Abstract

Systems aiming to aid consumers in their decision-making (e.g., by implementing persuasive techniques) are more likely to be effective when consumers trust them. However, recent research has demonstrated that the machine learning algorithms that often underlie such technology can act unfairly towards specific groups (e.g., by making more favorable predictions for men than for women). An undesired disparate impact resulting from this kind of algorithmic unfairness could diminish consumer trust and thereby undermine the purpose of the system. We studied this effect by conducting a between-subjects user study investigating how (gender-related) disparate impact affected consumer trust in an app designed to improve consumers’ financial decision-making. Our results show that disparate impact decreased consumers’ trust in the system and made them less likely to use it. Moreover, we find that trust was affected to the same degree across consumer groups (i.e., advantaged and disadvantaged users) despite both of these consumer groups recognizing their respective levels of personal benefit. Our findings highlight the importance of fairness in consumer-oriented artificial intelligence systems.

Keywords:
disparate impact · algorithmic fairness · consumer trust

1 Introduction

Applications that seek to advise or nudge consumers into better decision-making (e.g., concerning personal health or finance) can only be effective when consumers trust their guidance. Trustworthiness is an essential aspect in the design of such persuasive technology (PT), i.e., technology aiming to change attitudes or behaviors without using coercion or deception [Nickel2012, Oinas-Kukkonen2008, Verbeek2006], because consumers are unlikely to use (or be persuaded by) systems that they do not trust [Muir1996, Sattarov2019]. Recent research has identified several factors that affect consumer trust in this context, including consumers’ emotional states [Ahmad2018] as well as the system’s reliability [Nickel2012] and transparency [Sattarov2019]. Moreover, it has been argued that trust also depends on moral expectations that consumers have towards the technology they use [Nickel2012, Sattarov2019]. Consumer trust could increasingly depend on such moral expectations as more systems implement machine learning algorithms (e.g., in personal health [Purpura2011, Sattarov2019] or finance [Lieber2014] applications) that make them harder to scrutinize.

A specific moral expectation that acts as a requirement for trust in this context may be fairness [Varshney2019]. When nudges and advice are tailored to the individual consumer using machine learning, consumers may expect that the system acts fairly towards different consumer groups (e.g., concerning race or gender). Nudging or advising such that the degree of positive impact that the system has on consumers’ lives varies with group membership could constitute an undesired disparate impact. For example, a robo-advisor (i.e., PT designed to improve consumers’ financial situation [Lieber2014]) could have a disparate impact by systematically recommending “safer”, lower-risk investments to female consumers compared to male consumers, yielding them lower returns. Such disparate impact would violate the moral expectation of fairness and thereby undermine consumer trust.

Employing machine learning in consumer-oriented applications often holds the promise of increasing their usefulness to the individual consumer [Purpura2011, Yang2018a] but also bears a greater vulnerability to disparate impact. Recent research has demonstrated that machine learning algorithms may unfairly discriminate based on group membership [angwin2019machine, Barocas2016, Ntoutsi2020]. Such discrimination is referred to as algorithmic unfairness if a pre-defined notion of fairness is violated [Ntoutsi2020, Verma2018] and can easily lead to an undesired disparate impact [Barocas2016, Feldman2015]. For example, outcomes in advice from robo-advisors may differ between groups, given that financial advice has historically been gender-biased to the disadvantage of female consumers [Baeckstrom2018, Mullainathan2012] and algorithmic unfairness often results from disparities in the historical data that are used to train the algorithm [Ntoutsi2020]. Although several methods have been developed to mitigate algorithmic unfairness [Bellamy2019], in many cases it is currently not possible to do so to a satisfactory degree [Corbett-Davies2018].
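
To make the notion quantitative: a common way to measure disparate impact is the ratio of favorable-outcome rates between the disadvantaged and the advantaged group [Feldman2015]. The sketch below illustrates this on simulated robo-advisor outcomes; the data, group labels, and the notion of a “favorable” recommendation are illustrative assumptions, not part of our study.

```python
import numpy as np

def disparate_impact_ratio(favorable: np.ndarray, group: np.ndarray) -> float:
    """Ratio of favorable-outcome rates: disadvantaged group over advantaged group.

    favorable: boolean array, True if the outcome was favorable (e.g., high-return advice).
    group: array of group labels, here "female" (disadvantaged) and "male" (advantaged).
    """
    rate_female = favorable[group == "female"].mean()
    rate_male = favorable[group == "male"].mean()
    return rate_female / rate_male

# Hypothetical outcomes of a robo-advisor's recommendations.
rng = np.random.default_rng(0)
group = rng.choice(["male", "female"], size=1000)
# Simulate a biased advisor: men receive high-return advice more often than women.
p_favorable = np.where(group == "male", 0.8, 0.5)
favorable = rng.random(1000) < p_favorable

ratio = disparate_impact_ratio(favorable, group)
print(f"Disparate impact ratio: {ratio:.2f}")  # values far below 1 indicate disparate impact
```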

Disparate impact is thus a realistic issue that could undermine the efficacy of consumer-oriented artificial intelligence (AI) systems. It has been argued that fairness plays a key role in fostering trust in AI [Arnold2019, Rossi2019, Toreini2020, Varshney2019, Varshney2020]. However, to the best of our knowledge, no previous work has studied the influence of undesired disparate impact (i.e., as a result of algorithmic unfairness) on consumer trust. It is further unclear whether unfairly advantaged consumers are affected to the same degree as disadvantaged consumers in this context. That is, the influence of disparate impact on consumer trust may depend on perceived personal benefit (i.e., advantaged users trusting the system’s advice despite disparate impact as long as they personally benefit) or not (i.e., advantaged users losing trust in lockstep with disadvantaged users despite a perceived personal benefit). We study the effect of disparate impact on consumer trust at the use case of gender bias in robo-advisors by investigating the following research questions:

  • RQ1. Does an apparent disparate impact of a robo-advisor affect the degree of trust that consumers place in it?

  • RQ2. Does disparate impact affect the trust of unfairly advantaged consumers to a different degree than that of unfairly disadvantaged consumers?

To answer these questions, we conducted a between-subjects user study where we exposed participants to varying degrees of disparate impact of a robo-advisor (i.e., advantaging male users; see Section 3). Our results show that disparate impact negatively affected consumers’ trust in the robo-advisor and decreased their willingness to use it (see Section 4). Furthermore, we find that, despite both groups recognizing their respective personal (dis)advantage, both the disadvantaged group (women) and the advantaged group (men) experienced the same decrease in trust when they learned about a disparate impact of the robo-advisor. Our findings underline the importance of ensuring algorithmic fairness in consumer-oriented (AI) systems when aiming to maintain consumer trust.

2 Background and Related Work

We study the effect of disparate impact on consumer trust at the use case of gender bias in robo-advisors. Our reasons for choosing the financial domain here are threefold. First, algorithmic decision-making is already widespread in consumer-oriented financial applications (e.g., in robo-advisors) [Lieber2014]. Second, algorithmic decision-making in such systems is highly impactful: it directly affects consumers’ financial situations and thereby their life quality. Third, (human) financial advice has traditionally been gender-biased, underestimating and disadvantaging female consumers [Mullainathan2012, Baeckstrom2018]. Historical data on financial advice thus contain these biases. If the algorithms that underlie robo-advisors are trained using these data, robo-advisors may have according disparate impact.

Trust in AI systems. Consumers do not use systems that they do not trust [Muir1996]. That is why trust is an important aspect in the interaction between consumer-oriented AI systems (e.g., those implementing PT) and consumers [Ahmad2018, Nickel2012, Sattarov2019, Verbeek2006]. Recent research has linked trust in such systems to the reliability [Nickel2012] and transparency [Sattarov2019] of the system at hand as well as consumers’ emotional states [Ahmad2018] and moral expectations [Nickel2012, Sattarov2019]. Such moral expectations may gain in importance as systems increasingly rely on machine learning algorithms [Lieber2014, Orji2018, Purpura2011, Sattarov2019, Yang2018a]. Moreover, whereas in some cases consumers fall prey to automation bias (i.e., a tendency to prefer automated over human decisions) [Cummings2004], in other cases, they experience what has been referred to as algorithm aversion: a tendency to prefer human over algorithmic advice [Diab2011, Onkal2009, Promberger2006]. Research has shown that algorithm aversion can be the result of witnessing how an algorithm errs [Dietvorst2015]. Especially in cases where a machine learning algorithm acted unfairly, leading to an undesired disparate impact (i.e., violating consumers’ moral expectations and reflecting erroneous decision-making), consumer trust could thus be diminished.

Measuring and mitigating algorithmic unfairness. Research has demonstrated that machine learning algorithms can make biased (unfair) predictions to the disadvantage of specific groups [angwin2019machine, Barocas2016, Ntoutsi2020, vigdor2019apple]. For instance, AI systems may discriminate between white and black defendants in predicting their likelihood of re-offending [angwin2019machine] and between male and female consumers in predicting their creditworthiness [vigdor2019apple]. Several methods have been proposed to measure and mitigate biases in algorithmic decision-making [Bellamy2019, Hardt2016, Mary2019, Mehrabi2019, Ntoutsi2020, Zafar2017]. Despite these efforts, the measurement and mitigation of algorithmic bias remain challenging [Corbett-Davies2018, Ntoutsi2020].

Disparate impact and trust. Algorithmic fairness has been identified as a core building block of trustworthy AI systems [Arnold2019, Rossi2019, Varshney2019, Toreini2020, Varshney2020], yet few studies directly investigate the relationship between algorithmic fairness (or disparate impact) and consumer trust. Participants in one study reported that learning about algorithmic unfairness induced negative feelings and that it might cause them to lose trust in a company or product [Woodruff2018]. Consumers have further expressed general concerns about disparate impact of AI on a societal level [Araujo2020] and are more likely to judge decisions as less fair and trustworthy if they are made by an algorithm as opposed to a human [Lee2018]. However, it has also been shown that the degree to which people are concerned about disparate impact depends on their personal biases [Otterbacher2018, Smith2020]. What remains unclear is to what extent disparate impact (as a result of algorithmic unfairness) affects consumer trust and whether unfairly advantaged and disadvantaged consumers are affected differently.

3 Method

To investigate the two research questions identified in Section 1, we conducted a between-subjects user study. The setting of this study was a fictional scenario in which a bank offers a robo-advisor – called the AI Advisor – to its customers. We aimed to perform a granular analysis of the effect of disparate impact on consumer trust by exposing participants to different degrees of disparate impact supposedly caused by the AI Advisor and measuring their attitudes towards this system. Specifically, we analyzed whether the different degrees of disparate impact affected participants’ trust (i.e., whether they believed that the AI Advisor would make correct predictions and therefore benefit its users). To differentiate between this general notion of trust and related attitudes, we also measured willingness to use and perceived personal benefit concerning the AI Advisor.

3.1 Operationalization

Dependent Variables.

Our experiment involved measuring participants’ attitudes towards the AI Advisor; specifically trust, willingness to use, and perceived personal benefit. Each variable was measured twice: once after participants saw general user statistics (Step 1; see Section 3.3) and once after participants saw gender-specific user statistics on the AI Advisor (Step 2). We computed difference scores from these two measurements that reflected how seeing the gender-specific statistics affected participants’ attitudes as compared to their baseline attitudes.

  • Change in Trust (Continuous). Participants rated their trust by responding to the item “In general, the AI advisor can be trusted to make correct recommendations” on a 7-point Likert scale. We coded all responses on an ordinal scale ranging from 1 (strongly disagree) to 7 (strongly agree) and subtracted the second measurement from the first to compute the change in trust. Values could thus range from -6 to 6.

  • Change in Willingness to Use (Categorical). Participants could respond to the item “I would personally use the AI Advisor” with either “yes” or “no”. We recorded whether their answer had changed (i.e., “yes” to “no” or vice versa) or stayed the same in the second measurement. This variable thus encompassed three categories.

  • Change in Perceived Personal Benefit (Continuous). Participants rated their perceived personal benefit by responding to the item “I would personally benefit from using the AI advisor” on a 7-point Likert scale. To compute the change in perceived personal benefit, we again subtracted the second measurement from the first. Values could thus range from -6 to 6.
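
For illustration, the difference scores and the change categories described above could be computed as in the following sketch; the data frame, column names, and example responses are hypothetical placeholders, not our actual analysis code.

```python
import pandas as pd

# Hypothetical per-participant responses (column names are assumptions for illustration).
df = pd.DataFrame({
    "trust_step1":   [5, 6, 4],   # 7-point Likert, Step 1 (general statistics)
    "trust_step2":   [5, 3, 2],   # 7-point Likert, Step 2 (gender-specific statistics)
    "use_step1":     ["yes", "yes", "yes"],
    "use_step2":     ["yes", "no", "yes"],
    "benefit_step1": [6, 6, 5],
    "benefit_step2": [6, 4, 2],
})

# Continuous difference scores (possible range: -6 to +6 on a 7-point scale).
df["change_trust"] = df["trust_step1"] - df["trust_step2"]
df["change_benefit"] = df["benefit_step1"] - df["benefit_step2"]

# Categorical change in willingness to use: "yes -> no", "no change", or "no -> yes".
def use_change(row):
    if row["use_step1"] == row["use_step2"]:
        return "no change"
    return f'{row["use_step1"]} -> {row["use_step2"]}'

df["change_use"] = df.apply(use_change, axis=1)
print(df[["change_trust", "change_benefit", "change_use"]])
```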

Independent Variable.

Our experiment varied depending on the condition that a participant was placed in (see Section 3.3):

  • Condition. During the experiment, we showed participants a table with user statistics of bank customers. These statistics, supposedly showing the average change in bank account balance for users and non-users of the AI Advisor, split by gender, differed depending on the condition a participant had been placed in. Each participant saw only one of four conditions: the control condition (in which the statistics were balanced across genders, reflecting an absence of disparate impact) or one of three experimental conditions – which we call little bias, strong bias, and extreme bias – that reflected varying degrees of disparate impact in favor of male consumers. Specifically, these different degrees of disparate impact represented scenarios in which female users of the AI advisor were disadvantaged but still benefited from using the AI advisor (little bias), did not benefit from the AI advisor (strong bias), or would in fact benefit from not using the AI advisor (extreme bias). Table 1 shows the numbers displayed in the second statistics table for each condition.

Condition      Male users   Female users   All users   Non-users (any gender)
Control        20%          20%            20%         10%
Little bias    25%          15%            20%         10%
Strong bias    30%          10%            20%         10%
Extreme bias   35%          5%             20%         10%
Table 1: Fictional gender-specific user statistics (average yearly change in bank account balance) shown to participants during the second step of the study. Only the statistics for male and female users of the AI Advisor differed across conditions, reflecting varying degrees of disparate impact; the statistics for non-users were identical across genders and conditions.
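
For reference, the sketch below restates the user statistics from Table 1 and derives, per condition, the gap between male and female users as well as the net benefit female users obtain relative to not using the AI Advisor; all numbers come directly from Table 1, and the code itself is purely illustrative.

```python
# Yearly change in account balance for *users* of the AI Advisor, as shown in Table 1.
# Non-users' balances increased by 10% in every condition, regardless of gender.
conditions = {
    "control":      {"male": 0.20, "female": 0.20},
    "little bias":  {"male": 0.25, "female": 0.15},
    "strong bias":  {"male": 0.30, "female": 0.10},
    "extreme bias": {"male": 0.35, "female": 0.05},
}

for name, returns in conditions.items():
    gap = returns["male"] - returns["female"]   # advantage of male over female users
    female_net = returns["female"] - 0.10       # female benefit relative to not using the AI Advisor
    print(f"{name:>13}: male-female gap = {gap:.0%}, female net benefit = {female_net:+.0%}")
```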

Individual Differences and Descriptive Statistics.

We took two additional measurements to enable more fine-grained analyses and describe our sample:

  • Gender. Participants could state which gender they identified with by picking from the options “male”, “female”, and “other / not specified”.

  • Age. Participants could write their age in an open text field.

3.2 Hypotheses

Based on the research questions RQ1 and RQ2 introduced in Section 1, the related work from Section 2, and the experimental setup described in this section, we formulated several hypotheses. We expected that disparate impact would decrease consumer trust (H1a) and that consumers would be less likely to use the AI Advisor (H1b) if it has disparate impact (i.e., the stronger the disparate impact, the lower consumer trust and willingness to use the AI Advisor). We predicted that disparate impact would affect the perceived personal benefit of male consumers differently compared to female consumers (i.e., following what the displayed statistics suggest; H2a). Accordingly, we further expected that the decrease in trust described in H1a would be moderated by gender (H2b). That is, we predicted that the trust of advantaged consumers (i.e., men) would be affected differently compared to disadvantaged consumers (i.e., women).

  • H1a. Consumers who are exposed to statistics that reveal a disparate impact of a robo-advisor in favor of male users will trust this system less to give correct recommendations compared to consumers who are exposed to balanced statistics.

  • H1b. Consumers who are exposed to statistics that reveal a disparate impact of a robo-advisor in favor of male users will be less likely to use this system compared to consumers who are exposed to balanced statistics.

  • H2a. The effect of statistics suggesting a disparate impact of a robo-advisor in favor of men on perceived personal benefit is moderated by gender.

  • H2b. The effect of statistics suggesting a disparate impact of a robo-advisor in favor of men on consumer trust is moderated by gender.

3.3 Procedure

We set up our user study by creating a task on the online study platform Figure Eight (see footnote 3). Before commencing with the experiment, participants were shown a short introduction and asked to state their gender and age. The experiment consisted of two steps. Whereas Step 1 was the same for all participants, Step 2 differed depending on which one of four conditions a participant had been assigned to.

Step 1.

We introduced participants to a fictional scenario in which they could activate a robo-advisor – called the AI advisor – in their banking app:


“Imagine your bank offers a digital assistant called the ‘AI advisor’. If you activate the AI advisor in your banking app, it will monitor your financial situation and give you relevant recommendations that may improve your financial situation. For example, it may suggest saving strategies or recommend investments.”

Additionally, to promote the idea that the AI Advisor is generally reliable, participants were shown overall statistics indicating that people benefit from using the AI Advisor:

“Overall statistics suggest that people benefit from using the AI advisor. The bank account balance of bank customers who use the AI advisor increases by an average of 20% every year, whereas the balance of customers who don’t use the AI advisor increases by an average of only 10% per year.”

Below was a table displaying the mentioned statistics. We then measured trust, willingness to use, and perceived personal benefit concerning the AI Advisor.

Step 2.

Participants were led to a new page for the second step of the experiment. Here we provided additional information on the AI advisor:

“Next to general statistics on all bank customers, we can also look at how the AI advisor performs for subgroups of bank customers. Below you can see the change in bank account balance for men and women in particular.”

Below this text was a table similar to the table in Step 1, but with two added rows that showed the average change in bank account balance per year for men and women in particular (see Table 1). Whereas the statistics for all bank customers overall, as well as for men and women not using the AI advisor, were the same in all conditions, the statistics for men and women using the AI advisor varied depending on the condition a participant had been assigned to (see Section 3.1). Table 1 shows the displayed statistics for male and female users per condition. We then again measured trust, willingness to use, and perceived personal benefit.

3.4 Statistical Analyses

Testing H1a and H2b.

To test whether there is an effect of disparate impact on consumer trust (H1a) that is moderated by gender (H2b), we conducted a classical ANOVA with condition and gender as between-subjects factors and change in trust as the dependent variable. A significant main effect of condition on change in trust in this analysis would suggest that change in trust differed between conditions (H1a). In this case, we would perform posthoc analyses to investigate the differences between the conditions in more detail. A significant interaction effect between condition and gender would suggest that the conditions had a different effect for the disadvantaged group (i.e., female participants) compared to the advantaged group (i.e., male participants; H2b).
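
A minimal sketch of such a two-way between-subjects ANOVA, using statsmodels on simulated placeholder data (our actual analysis may have been run with different software or settings), could look as follows:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Simulated stand-in data: the real data frame holds one row per participant.
rng = np.random.default_rng(1)
n = 489
df = pd.DataFrame({
    "condition": rng.choice(["control", "little bias", "strong bias", "extreme bias"], size=n),
    "gender": rng.choice(["male", "female"], size=n),
})
df["change_trust"] = rng.integers(-6, 7, size=n)  # placeholder difference scores

model = ols("change_trust ~ C(condition) * C(gender)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)  # Type II sums of squares
print(anova_table)
# The C(condition) row tests the main effect of condition (H1a);
# the C(condition):C(gender) row tests the interaction with gender (H2b).
```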

We further conducted a Bayesian ANOVA according to the protocol proposed by van den Bergh et al. [VanDenBergh2020]. Bayesian hypothesis tests involve the computation of the Bayes factor, a quantitative comparison of the predictive power of two competing statistical models [Wagenmakers2018]. The Bayes factor weighs the evidence provided by the data and thus allows for direct model comparison. Practically, comparing different models (i.e., including or excluding an interaction effect of condition and gender) this way allowed for a richer interpretation of our results. We performed the Bayesian ANOVA using the software JASP [JASP2020] with default settings. We computed Bayes factors (BFs) by comparing the models of interest to a null model (see footnote 4) and interpret them according to the guidelines proposed by Lee and Wagenmakers [Lee2013], who adopted them from Jeffreys [jeffreys1939].
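
We ran the Bayesian ANOVA itself in JASP. Purely to illustrate the model-comparison idea, the sketch below approximates a Bayes factor from BIC values (the rough approximation BF ≈ exp(ΔBIC/2)); this is an analogue of, not a substitute for, JASP’s default-prior analysis, and the data are again simulated placeholders.

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
n = 489
df = pd.DataFrame({
    "condition": rng.choice(["control", "little bias", "strong bias", "extreme bias"], size=n),
    "gender": rng.choice(["male", "female"], size=n),
    "change_trust": rng.integers(-6, 7, size=n),  # placeholder difference scores
})

m_main = ols("change_trust ~ C(condition) + C(gender)", data=df).fit()
m_full = ols("change_trust ~ C(condition) * C(gender)", data=df).fit()

# Rough BIC-based approximation of a Bayes factor favoring the main-effects model over
# the interaction model: BF ~ exp((BIC_full - BIC_main) / 2). JASP's Bayesian ANOVA uses
# proper default priors, so this is only a back-of-the-envelope analogue.
bf_main_vs_full = np.exp((m_full.bic - m_main.bic) / 2)
print(f"Approximate BF (main-effects vs. interaction model): {bf_main_vs_full:.2f}")
```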

Testing H1b.

We tested whether disparate impact affected participants’ willingness to use the AI Advisor by conducting a chi-squared test between condition and change in willingness to use. A significant result in this analysis would suggest that the number of participants who changed their willingness to use the AI Advisor differed across conditions.
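
As an illustration, such a chi-squared test of independence between condition and change in willingness to use can be run on the contingency counts reported in Table 3; this is a sketch with scipy, not our original analysis script.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: "yes" -> "no", no change, "no" -> "yes";
# columns: control, little bias, strong bias, extreme bias (counts as reported in Table 3).
table = np.array([
    [1,   9,   23,  16],
    [121, 111, 97,  105],
    [2,   1,   1,   1],
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```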

Testing H2a.

We conducted another ANOVA with condition and gender as between-subjects factors and change in perceived personal benefit as dependent variable to test whether gender acted as a moderator here. A significant interaction effect in this analysis would indicate that this was the case.

Significance Threshold and Correction for Multiple Testing.

In all classical analyses we conducted, we aimed for a type 1 error probability of no more than 0.05. However, by conducting our planned analyses we automatically tested a total of seven hypotheses: three in each ANOVA (i.e., two main effects and one interaction) and one in the chi-squared test. This meant that the probability of committing a type 1 error rose considerably [Cramer2016]. Therefore, we adjusted our significance threshold by applying a Bonferroni correction, where the desired type 1 error rate is divided by the number of hypotheses that are tested [Napierala2012]. In our main analyses we thus handled a significance threshold of 0.05/7 ≈ 0.007 and only regarded results as statistically significant if their p-value fell below this adjusted threshold. The same procedure was applied for posthoc analyses comparing each of the four conditions with each other, as this meant conducting six hypothesis tests (i.e., adjusting the threshold to 0.05/6 ≈ 0.008).
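
Spelled out, the corrected thresholds amount to nothing more than dividing the desired error rate by the number of tests, as in this small sketch:

```python
alpha = 0.05

# Planned analyses: two ANOVAs (two main effects + one interaction each) and one chi-squared test.
n_planned = 2 * 3 + 1
threshold_planned = alpha / n_planned   # 0.05 / 7 ~ 0.0071

# Post-hoc pairwise comparisons between the four conditions: 4 choose 2 = 6 tests.
n_posthoc = 6
threshold_posthoc = alpha / n_posthoc   # 0.05 / 6 ~ 0.0083

print(f"planned: {threshold_planned:.4f}, post-hoc: {threshold_posthoc:.4f}")
```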

3.5 Participants

We recruited 567 participants via the Figure Eight pool of contributors (554) and direct contacts (13). Seventy-three participants were excluded from the study because they either filled at least one of the obligatory text fields with less than 10 characters, took less than 60 seconds to complete the task, or took more than 10 minutes to complete the task. Furthermore, we did not analyze data of five participants who stated “other / not specified” as their gender because our study involved a disparate impact between male and female consumers.

After exclusion, 489 participants remained. Of those, 238 (49%) were male and 251 (51%) were female; with a mean age of (sd = ). Participants recruited by Figure Eight received $0.10 as payment for participation. Random allocation to the four conditions resulted in 124, 121, 121, and 122 participants in the control, little bias, strong bias, and extreme bias conditions, respectively.

4 Results

H1a: Disparate Impact Decreased Consumer Trust.

As hypothesized, change in trust differed across conditions (F = 6.906, ; see the left-hand panel of Figure 1). The results from the Bayesian ANOVA confirm this result, showing strong evidence for a main effect of condition (, see Table 2). To test for differences between the individual conditions, we conducted posthoc analyses (i.e., Mann-Whitney tests). Only the difference between the control and extreme bias conditions was significant ( = 9368, ). This suggests that participants lost trust due to disparate impact, but also that the unfairness needed to be comparatively extreme for this effect to occur.

Models                                     P(M)    P(M|data)   BF_M     BF_10     error %
Null model                                 0.200   8.209e-4    0.003    1.000
condition                                  0.200   0.057       0.244    70.024    0.001
gender                                     0.200   0.004       0.018    5.413     1.533e-6
condition + gender                         0.200   0.735       11.116   895.772   1.638
condition + gender + condition * gender    0.200   0.202       1.012    245.894   1.978
Table 2: Bayesian ANOVA with change in trust as the dependent variable. BF_10 compares each model against the null model.
Figure 1: Change in trust across conditions for all participants (left-hand panel) and split by gender (right-hand panel). The error bars represent 95% confidence intervals.

H1b: Disparate Impact Decreased Willingness to Use.

In accordance with disparate impact negatively affecting trust (H1a), it decreased participants’ willingness to use the AI Advisor (see Table 3). The increasing proportion of participants who changed their attitude from “yes” to “no” as conditions reflected stronger disparate impact was statistically significant (, ).

Change in willingness to use    Control   Little bias   Strong bias   Extreme bias
"yes" to "no"                   1         9             23            16
no change                       121       111           97            105
"no" to "yes"                   2         1             1             1
Total                           124       121           121           122
Table 3: Change in willingness to use the AI Advisor across conditions.

H2a: Gender Moderated the Effect of Disparate Impact on Perceived Personal Benefit.

As expected, the results from the second ANOVA show a significant interaction effect of condition and gender on change in perceived personal benefit (F = 8.525, ). This means that male participants’ perceived personal benefit was affected differently compared to that of female participants. More specifically, Figure 2 shows that whereas men’s perceived personal benefit did not change due to seeing the gender-specific user statistics across conditions, female participants perceived increasingly lower levels of personal benefit as disparate impact (to their disadvantage) became more severe.

H2b: Male Consumers Experienced the Same Decrease in Trust as Female Consumers.

In contrast to what we hypothesized, we do not find a significant interaction effect of condition and gender on change in trust (, ; see the right-hand panel of Figure 1). We can therefore not conclude that the conditions had a different effect on male participants’ change in trust compared to that of female participants. The Bayesian ANOVA confirms this result: the model containing just the two main effects of condition and gender explains the data best (BF = 895.77; see Table 2), roughly four times better than the model that includes the interaction effect (BF = 245.89). This suggests that unfairly advantaged and disadvantaged participants (i.e., men and women, respectively) experienced the same decrease in trust due to algorithmic unfairness despite diverging levels of perceived personal benefit (H2a).

Figure 2: Change in perceived personal benefit across conditions and split by gender. The error bars represent 95% confidence intervals.

5 Discussion

In this paper, we presented a between-subjects user study that aimed to investigate the influence of algorithmically-driven disparate impact on consumer trust at the use case of gender bias in robo-advisors. Our results suggest that disparate impact – at least when it is extreme – decreases trust and makes consumers less likely to use such systems. We further find that, although disadvantaged and advantaged users recognize their respective levels of personal benefit in scenarios of disparate impact, both experience the same decrease in trust when they learn about a disparate impact caused by the system at hand. Our work contributes to a growing body of literature that highlights the importance of ensuring fairness and avoiding disparate impact of consumer-oriented AI systems.

5.1 Implications

Our findings have implications for consumers as well as industry. Consumers should be aware that machine-learning-based applications can be biased. If disparate impact is an important factor for consumer trust, consumers need to think critically when using such systems. One potential way forward for consumers would be to demand that companies publish independently conducted research into the (algorithmic) fairness and impact of their products.

Publishers of consumer-oriented AI systems need to establish algorithmic fairness in their products and avoid disparate impact to serve consumers effectively. Our findings show that failing to do so may lead to a decrease in consumers’ trust and willingness to use such systems.

5.2 Limitations and Future Work

Our study is subject to at least four important limitations. First, we studied the effect of disparate impact on consumer trust at a specific use case: a binary gender bias in robo-advisors. This makes our results difficult to generalize because many other forms of bias (including those based on race, religion, or sexual orientation) as well as other AI systems (e.g., for recommendations of medical treatment, tourist attractions, or movies) exist. It is easy to imagine how consumer trust could be affected differently when, for example, disparate impact concerns small minorities, multitudes of gender identities (or another consumer characteristic), a chosen group membership such as consumers’ profession, or a system that is less impactful on consumers’ personal lives than a robo-advisor. On a related note, we here placed women at the disadvantage and men at the advantage (i.e., the setting that corresponds to biases in human financial advice), but it is not certain whether we would obtain the same results if the (dis)advantage were distributed the other way around. Future work could explore these different scenarios to help generalize and better understand the effect of disparate impact on consumer trust.

Second, our finding that advantaged and disadvantaged users experienced the same decrease in trust appears to run counter to previous research suggesting that people make stronger fairness judgments when they are personally affected [Ham2008a]. However, it is not clear from our results to what degree advantaged users (i.e., men) felt personally affected; e.g., because they have women in their lives who they deeply care about. The role of personal relevance in the effect of disparate impact on consumer trust thus remains to be clarified by future research.

Third, our results show a decreasing trend in consumer trust as conditions become more extreme, but show a statistically significant difference only between the control and extreme bias conditions. Future work could examine these differences (also across domains) in more detail to establish the relationship between the level of disparate impact and consumer trust (e.g., to determine what lies within and beyond an “acceptable margin” of disparate impact).

Fourth, we studied fairness related to group membership (i.e., gender), which might elicit a different (moral) evaluation than fairness on the individual level. Our results show that trust can decrease despite perceived personal benefit. However, this effect might have been caused by a sense of loyalty towards the disadvantaged group. An interesting direction for future work is to study whether similar patterns emerge when disparate impact concerns individuals; e.g., when advantaged and disadvantaged subjects are randomly chosen.

6 Conclusion

We presented a user study investigating the effect of algorithmically-driven disparate impact (i.e., when algorithm outcomes adversely affect one group of consumers compared to another) on consumer trust. Specifically, we studied the effect of gender bias in an application that aimed to persuade consumers to make better financial decisions. We found that disparate impact decreased participants’ trust and willingness to use the application. Furthermore, our results show that the trust of unfairly advantaged participants was just as affected as that of disadvantaged participants. These findings imply that disparate impact (i.e., as a result of algorithmic unfairness) can undermine trust in consumer-oriented AI systems and should therefore be avoided or mitigated when aiming to create trustworthy technology.

Acknowledgements

This research has been supported by the Think Forward Initiative (a partnership between ING Bank, Deloitte, Dell Technologies, Amazon Web Services, IBM, and the Center for Economic Policy Research – CEPR). The views and opinions expressed in this paper are solely those of the authors and do not necessarily reflect the official policy or position of the Think Forward Initiative or any of its partners.

References

Footnotes

  1. Delft University of Technology; email: t.a.draws@tudelft.nl
  2. myTomorrows; email: zoltan.szlavik@mytomorrows.com
  3. Since conducting this study in June 2019, Figure Eight has been renamed to Appen. More information can be found at https://appen.com.
  4. The null model in this procedure consisted of only an intercept.