Using Resource-Rational Analysis to Understand Cognitive Biases in Interactive Data Visualizations


Abstract

Cognitive biases are systematic errors in judgment. Data visualization researchers have explored whether cognitive biases transfer to decision-making tasks with interactive data visualizations. At the same time, cognitive scientists have reinterpreted cognitive biases as the product of resource-rational strategies under finite time and computational costs. In this paper, we argue for integrating resource-rational analysis, via constrained Bayesian cognitive modeling, into the study of cognitive biases in data visualizations. This integration would yield a more realistic “bounded rationality” representation of data visualization users and provide a research roadmap for studying cognitive biases in data visualizations through a feedback loop between future experiments and theory.

Index Terms: Human-centered computing → Visualization → Visualization theory, concepts and paradigms


Introduction

Recently, data visualization researchers have investigated whether cognitive biases transfer to decision-making with interactive visual interfaces and have explored strategies to mitigate them [41, 2, 4, 1, 42]. A large portion of this work collects and analyzes empirical evidence on the effects of different cognitive biases through user experiments. These studies are generally motivated by the classical psychological approach to cognitive biases, i.e., the “heuristics and biases” framework [31] introduced by Tversky and Kahneman [32, 33]. While these studies provide great value to the visualization community by illuminating the effects of cognitive biases on visual analysis tasks, they do not include quantitative cognitive models that yield explicit, testable hypotheses predicting users’ behavior under different experimental conditions. Moreover, they tend to ignore the critique that heuristics leading to biased judgments may actually reflect the rational use of limited cognitive resources [8].

Adopting cognitive modeling for data visualization research can provide many opportunities to accelerate innovation, improve validity, and facilitate replication efforts [22]. Early attempts to understand cognitive processes during visualization use consider sensemaking approaches [9] or visual attention coupled with decision-making [23]. However, a drawback of past visualization cognitive frameworks [9, 25, 23] is that they are typically descriptive or “process” diagrams. As such, they lack the level of detail necessary to make quantitative predictions or to generate strong hypotheses about behavior [7]. This complicates efforts to predict when cognitive biases will affect how people interpret and make decisions from visualizations.

One area of opportunity is Bayesian cognitive modeling for data visualizations [43, 14, 12]. These models rest on the claim that people reason under uncertainty in accordance with the principles of Bayesian inference [10]. This approach is appealing because it provides a normative framework for how people should reason and make decisions from information under uncertainty. However, in practice people may behave in ways that are inconsistent with the predictions of Bayesian models, often due to well-known limits on cognitive capacity, including time constraints, which are common in many visualization studies and tasks. Existing applications of Bayesian cognitive modeling to information visualization have not yet accounted for these limits, i.e., bounded rationality [29, 30].
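To make this concrete, consider a minimal sketch of the normative model these studies build on. The example below is ours, not code from the cited papers: a user holds a Beta prior over an uncertain proportion, a visualization presents new data, and a fully Bayesian user would report the conjugate posterior. All parameter values are illustrative assumptions.

```python
from scipy import stats

# Normative Bayesian update for an uncertain proportion (e.g., the
# fraction of a population with some attribute shown in a chart).
prior_alpha, prior_beta = 2.0, 8.0   # illustrative elicited prior belief

# Data shown in the visualization: k "successes" out of n observations.
k, n = 30, 100

# Conjugate update: Beta prior + binomial likelihood -> Beta posterior.
post_alpha = prior_alpha + k
post_beta = prior_beta + (n - k)

prior_mean = prior_alpha / (prior_alpha + prior_beta)
post_mean = post_alpha / (post_alpha + post_beta)
print(f"prior mean {prior_mean:.3f} -> normative posterior mean {post_mean:.3f}")

# The 95% credible interval a fully Bayesian user should report.
lo, hi = stats.beta.ppf([0.025, 0.975], post_alpha, post_beta)
print(f"95% credible interval: [{lo:.3f}, {hi:.3f}]")
```

Deviations between a user's reported beliefs and this normative posterior are exactly where the time and capacity constraints discussed next enter the picture.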

Specifically, we note gaps and disconnects among current efforts to study the effects of cognitive biases in interactive data visualizations. A promising approach from cognitive science is the resource-rational analysis of cognitive biases, which frames them as rational trade-offs between judgment accuracy and the mind’s limited resources, including fixed time [10, 17, 16]. In this paper, we argue that resource-rational analysis can provide a quantitative theoretical framework, or “research roadmap,” for many cognitive biases in data visualizations, one that enables a feedback loop to add realism through further constraints. Such a roadmap may not only better identify the effects of cognitive biases in data visualization decision-making but may also provide a means for mitigating these biases before they occur.

1 Cognitive Bias in Interactive Data Visualizations

Cognitive biases are systematic errors (or deviations) in judgment [33, 16]. They have been studied by cognitive psychologists and social scientists to understand how and why individuals sometimes make consistent errors in decision-making. Recently, data visualization researchers have explored whether cognitive biases transfer to data visualization decision-making [41, 4, 36] and, if such biases can be identified, how these findings could inform the design of visualization systems that debias or mitigate such effects [26, 5, 2, 24]. If a well-designed system can help users find the right explore-exploit mix [11], it would ideally safeguard against possible forking-path problems [27, 44], mitigate systematic errors, and enable better decision-making.

Data visualization research on cognitive biases tends to focus on either empirical studies or frameworks, with little interaction between the two. Empirical studies try to demonstrate evidence of traditional cognitive biases through data visualization user studies, typically by analyzing users’ interaction behaviors or decisions [3, 2, 1, 42, 35, 13]. Alternatively, general descriptive frameworks (like taxonomies) have been introduced for cognitive biases [4, 41]; however, these tend to cover many human biases broadly [36, 40] and are limited in their ability to provide testable predictions for empirical studies.

Cognitive science has a long history of studying visualization cognition as a subset of visuospatial reasoning, examining how individuals derive meaning from visual (external) representations [34]. Typically, these models focus on perception, prior knowledge, or both [23]. More recently, data visualization researchers have integrated similar ideas to understand visualization cognitive processes through insight-based approaches [9] and top-down modeling [19, 25]. However, past visualization cognitive models tend to be verbal “process” diagrams rather than quantitative models that yield explicit, testable hypotheses. Without quantitative predictions, the implications of these models can be vague, difficult to simulate, and even more difficult to test and refine.

Bayesian cognitive modeling is a promising approach to studying cognitive biases [43]. Building on work in cognitive science, Wu et al. [43] first argued that Bayesian cognitive modeling provides a means to model many seemingly irrational behaviors, like cognitive biases, in a “principled” way. Building on their work, Kim et al. [14] and Karduni et al. [12] have extended Bayesian cognitive modeling for data visualizations. In particular, by eliciting each user’s prior belief about an uncertain relationship, these studies have used Bayesian models to predict how people should update those beliefs in response to data visualizations. Although these two studies provide novel elicitation methods with Bayesian cognitive models in data visualization, they do not directly connect such approaches with experimental designs that identify cognitive biases. Moreover, they do not incorporate realistic constraints on users (e.g., time or memory limits) in their modeling or experiments. This is where resource-rational analysis may remedy these shortcomings.
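One simple way to quantify such deviations, sketched below under our own assumptions (the cited studies do not use this exact metric), is a linear updating coefficient: locate the user's elicited posterior mean on the line from the prior mean to the normative posterior mean, so that 0 means the data were ignored, 1 means a fully Bayesian update, and intermediate values indicate conservative updating.

```python
def update_coefficient(prior_mean, normative_mean, elicited_mean):
    """Where the user's reported belief lies between the prior (0)
    and the normative Bayesian posterior (1)."""
    denom = normative_mean - prior_mean
    if abs(denom) < 1e-9:
        return float("nan")   # the data carried no information
    return (elicited_mean - prior_mean) / denom

# Illustrative numbers: the normative posterior mean is 0.29, but the
# user only moves from 0.20 to 0.23 -> conservative under-updating.
print(update_coefficient(0.20, 0.29, 0.23))   # ~0.33
```

Under resource-rational analysis, such under-updating would not be a free parameter estimated after the fact; it should fall out of the costs and constraints the user faces.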

2 Resource-Rational Analysis

Classical approaches to understanding rationality [45, 21] assume individuals use utility theory [37] to maximize their expected utility. Simon [29, 30] challenged this notion with bounded rationality: the idea that rational decisions must be framed in the context of the environment and one’s limited cognitive resources. Whereas normative rational models exist at Marr’s computational level [20] (i.e., the structure of the problem), bounded rationality connects Marr’s computational level with the algorithmic level (e.g., representation and transformation), as human cognition involves approximating a normative rational model [10, 15]. The problem is that studying each level separately is insufficient to explain the underlying mechanisms of human intelligence [15].

To address this problem, Lieder and Griffiths [17, 15, 16] introduced resource-rational analysis, in which rational models bridge the idealized, unbounded computational level and a more realistic, highly resource-constrained algorithmic level. Figure 1 outlines the five steps of resource-rational analysis. Like other rational theories, resource-rational analysis posits that there exists some optimal solution given by expected utility theory, Bayesian inference, and the standard rules of logic (Step 1 in Fig. 1). However, bounded rationality limits the space of feasible decisions given cognitive constraints, leading to approximate models of rationality (Step 2). Resource rationality identifies the optimal algorithm under these constraints (Step 3), which then yields testable predictions (Step 4). As an iterative process, the model is refined over time to move closer to individuals’ true cognitive resources and processes (Step 5). In this way, resource-rational analysis reinterprets cognitive biases as an optimal (rational) trade-off between external task demands and internal cognitive constraints (e.g., the cost of an error in judgment vs. the time cost to reduce that error) [18]. This interpretation is consistent with Gigerenzer’s criticism of the heuristics-and-biases program, namely that heuristics reflect a rational, rather than irrational, use of limited resources [8].
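Read as an algorithm, the loop in Figure 1 can be instantiated on a toy problem. The sketch below is our paraphrase of the process, not Lieder and Griffiths' model: the computational-level problem is estimating a quantity from noisy evidence, the constraint is that each mental sample costs time, and the resource-rational solution is the sample size that minimizes total expected cost.

```python
import numpy as np

# Step 1 -- computational-level problem: estimate an uncertain quantity
# from noisy evidence, minimizing squared error plus the cost of thinking.
SIGMA = 4.0   # noise per piece of evidence (illustrative)

# Step 2 -- resource constraint: each mental sample costs time,
# expressed here in error-equivalent units.
def expected_total_cost(n, cost_per_sample):
    # Expected squared error of the mean of n noisy samples is SIGMA^2 / n.
    return SIGMA**2 / n + cost_per_sample * n

# Step 3 -- resource-rational algorithm: choose the sample size that
# optimally trades accuracy against time.
candidates = np.arange(1, 50)

# Step 4 -- testable prediction: raising the time cost lowers the optimal
# number of samples, producing noisier, more "biased-looking" estimates.
for cost in (0.1, 0.5, 2.0):
    costs = [expected_total_cost(n, cost) for n in candidates]
    print(f"time cost {cost}: resource-rational sample size = "
          f"{candidates[int(np.argmin(costs))]}")

# Step 5 -- compare predictions with observed behavior and refine the
# assumed constraints (the iterative part, not simulated here).
```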

Figure 1: Flow diagram of five steps of the resource-rational analysis process adapted from Lieder and Griffiths [15].

2.1 Example: Anchoring Bias

One popular cognitive bias that has been studied in multiple visualization experiments is anchoring bias [1, 42, 39, 35]. Anchoring bias is the tendency for an initial piece of information, relevant or not, to affect a decision-making process [33]. Typically, the anchor is followed by an adjustment in response to new information that falls short of the normative judgment (the anchoring-and-adjustment effect). The anchoring-and-adjustment account posits a two-step process [6]. In the first step, a person develops an estimate, or anchor, for an open-ended question. In the second step, the person adjusts that estimate as new information is processed. Error occurs when she fails to adjust sufficiently toward the correct answer.

Lieder and Griffiths [17] examined anchoring bias through the lens of resource-rational analysis. Following Figure 1, they formulate the problem through Bayesian decision theory for numerical estimation, the classical task associated with anchoring-and-adjustment [33, 6]. They assume that the mind approximates Bayesian inference through sampling algorithms, which represent probabilistic beliefs by a small number of randomly selected hypotheses drawn in proportion to their plausibility [38]. More specifically, they posit that sampling occurs through Markov chain Monte Carlo (MCMC), a popular family of algorithms in statistics and artificial intelligence.
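A minimal sketch of that idea appears below, with all numbers our own illustrative choices rather than the fitted model from [17]: estimation starts at the anchor and runs a short Metropolis-Hastings chain toward the posterior, so stopping the chain early leaves the estimate biased toward the anchor.

```python
import numpy as np

rng = np.random.default_rng(42)

def log_posterior(x, mean=120.0, sd=15.0):
    # Stand-in (Gaussian) belief about the quantity being estimated.
    return -0.5 * ((x - mean) / sd) ** 2

def anchor_and_adjust(anchor, n_adjustments, step_sd=5.0):
    """Metropolis-Hastings chain started at the anchor and stopped after
    a fixed number of adjustments (the resource constraint)."""
    x = anchor
    for _ in range(n_adjustments):
        proposal = x + rng.normal(0.0, step_sd)
        if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(x):
            x = proposal   # accept the adjustment
    return x

# Few adjustments -> estimates cluster near the anchor (anchoring bias);
# many adjustments -> estimates converge toward the posterior mean (120).
for n in (0, 5, 50, 500):
    estimates = [anchor_and_adjust(anchor=40.0, n_adjustments=n)
                 for _ in range(200)]
    print(f"{n:4d} adjustments: mean estimate {np.mean(estimates):6.1f}")
```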

The advantage of this approach is that it provides testable predictions that can be evaluated empirically through controlled experimentation (Step 4 in Fig. 1). The model predicts that scenarios with high time costs and no error costs produce the highest degree of anchoring bias: each adjustment is costly and there is little concern for accuracy, so participants make few or no adjustments and stay close to their anchor (provided or self-generated), yielding larger bias (absolute distance from the correct answer). To test this model, Lieder et al. [18] developed an empirical experiment on MTurk in which participants estimated bus arrival times under four different cost scenarios. They found strong evidence for resource-rational adjustment, as the degree of anchoring bias varied with time and error costs. Moreover, they found that incentives can be effective at reducing anchoring bias with both self-generated and provided anchors, contrary to Epley and Gilovich [6].
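The qualitative prediction can be reproduced with a toy cost model (our simplification, not the paper's fitted model): assume the remaining bias toward the anchor shrinks geometrically with each adjustment, charge a time cost per adjustment and an error cost on the remaining bias, and choose the number of adjustments that minimizes the total.

```python
import numpy as np

def optimal_adjustments(time_cost, error_cost, initial_bias=80.0,
                        decay=0.9, max_steps=200):
    """Number of adjustments minimizing total cost, assuming the bias
    toward the anchor shrinks by a factor of `decay` per adjustment."""
    steps = np.arange(max_steps + 1)
    bias = initial_bias * decay**steps
    total_cost = error_cost * bias + time_cost * steps
    best = int(np.argmin(total_cost))
    return best, float(bias[best])

# High time cost, no error cost -> zero adjustments, maximal anchoring.
print(optimal_adjustments(time_cost=1.0, error_cost=0.0))   # (0, 80.0)
# No time cost, high error cost -> many adjustments, little anchoring.
print(optimal_adjustments(time_cost=0.0, error_cost=1.0))
```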

3 Future Work and Conclusion

Resource-rational analysis could be beneficial in data visualization studies in which users face meaningful cost-benefit trade-offs in interpreting the visualization; in other words, experiments where additional effort leads to more accurate decisions from the data. This would especially be the case for systems in which sampling occurs over time, either by directly sampling information from a display or by sampling alternative states or outcomes in the user’s mental model. In the context of visualizing hurricane paths [28], users might at first overweight the risks of salient negative outcomes (e.g., a direct hit on New Orleans) but, with more time (or a different visualization design), arrive at a better-calibrated estimate of the risk.
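As a concrete instance of what such a model might look like (entirely speculative on our part, not an implemented system), suppose the user estimates the probability of a direct hit by mentally sampling possible storm outcomes, but the first few samples oversample the vivid catastrophic outcome; with more samples, i.e., more time with the visualization, the estimate becomes better calibrated.

```python
import numpy as np

rng = np.random.default_rng(7)

TRUE_HIT_PROB = 0.15    # assumed ground-truth chance of a direct hit
SALIENCE_BOOST = 4.0    # early on, hits come to mind this much more easily
N_SALIENT = 5           # number of initial salience-biased samples

def mental_samples(n):
    """Simulated mental samples of hit (True) vs. miss (False); the first
    N_SALIENT draws oversample the vivid hit outcome, the rest are fair."""
    p_biased = (SALIENCE_BOOST * TRUE_HIT_PROB /
                (SALIENCE_BOOST * TRUE_HIT_PROB + (1 - TRUE_HIT_PROB)))
    probs = [p_biased if i < N_SALIENT else TRUE_HIT_PROB for i in range(n)]
    return rng.random(n) < np.array(probs)

for n in (5, 20, 200):
    est = np.mean([mental_samples(n).mean() for _ in range(2000)])
    print(f"{n:3d} samples: mean risk estimate {est:.3f} (truth {TRUE_HIT_PROB})")
```

Resource-rational analysis would then ask which stopping point is optimal given the user's time costs, and predict how design changes that lower sampling costs should shift users' risk estimates.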

References

  1. I. Cho, R. Wesslen, A. Karduni, S. Santhanam, S. Shaikh, and W. Dou. The anchoring effect in decision-making with visual analytics. In IEEE Conference on Visual Analytics Science and Technology (VAST), 2017.
  2. E. Dimara, G. Bailly, A. Bezerianos, and S. Franconeri. Mitigating the attraction effect with visualizations. IEEE Transactions on Visualization and Computer Graphics, 2018.
  3. E. Dimara, A. Bezerianos, and P. Dragicevic. The attraction effect in information visualization. IEEE Transactions on Visualization and Computer Graphics, 23(1), 2017.
  4. E. Dimara, S. Franconeri, C. Plaisant, A. Bezerianos, and P. Dragicevic. A task-based taxonomy of cognitive biases for information visualization. IEEE Transactions on Visualization and Computer Graphics, 2018.
  5. G. Ellis. So, what are cognitive biases? In Cognitive Biases in Visualizations, pp. 1–10. Springer, 2018.
  6. N. Epley and T. Gilovich. The anchoring-and-adjustment heuristic: Why the adjustments are insufficient. Psychological Science, 17(4):311–318, 2006.
  7. S. Farrell and S. Lewandowsky. Computational models as aids to better reasoning in psychology. Current Directions in Psychological Science, 19(5):329–335, 2010. doi: 10.1177/0963721410386677
  8. G. Gigerenzer and W. Gaissmaier. Heuristic decision making. Annual Review of Psychology, 62:451–482, 2011.
  9. T. M. Green, W. Ribarsky, and B. Fisher. Building and applying a human cognition model for visual analytics. Information Visualization, 8(1):1–13, 2009.
  10. T. L. Griffiths, F. Lieder, and N. D. Goodman. Rational use of cognitive resources: Levels of analysis between the computational and the algorithmic. Topics in Cognitive Science, 7(2):217–229, 2015.
  11. T. T. Hills, P. M. Todd, D. Lazer, A. D. Redish, I. D. Couzin, C. S. R. Group, et al. Exploration versus exploitation in space, mind, and society. Trends in cognitive sciences, 19(1):46–54, 2015.
  12. A. Karduni, D. Markant, R. Wesslen, and W. Dou. A bayesian cognition approach for belief updating of correlation judgments through uncertainty visualizations. In IEEE Conference on Information Visualization (InfoVis), 2020.
  13. A. Karduni, R. Wesslen, S. Santhanam, I. Cho, S. Volkova, D. Arendt, S. Shaikh, and W. Dou. Can you verifi this? studying uncertainty and decision-making about misinformation in visual analytics. The 12th International AAAI Conference on Web and Social Media (ICWSM), 2018.
  14. Y.-S. Kim, L. A. Walls, P. Krafft, and J. Hullman. A bayesian cognition approach to improve data visualization. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–14, 2019.
  15. F. Lieder and T. L. Griffiths. Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources. Psychological Science, 2(6):396–408, 2018.
  16. F. Lieder and T. L. Griffiths. Resource-rational analysis: understanding human cognition as the optimal use of limited computational resources. Behavioral and Brain Sciences, 43, 2020.
  17. F. Lieder, T. L. Griffiths, Q. J. Huys, and N. D. Goodman. The anchoring bias reflects rational use of cognitive resources. Psychonomic bulletin & review, 25(1):322–349, 2018.
  18. F. Lieder, T. L. Griffiths, Q. J. Huys, and N. D. Goodman. Empirical evidence for resource-rational anchoring and adjustment. Psychonomic Bulletin & Review, 25(2):775–784, 2018.
  19. Z. Liu and J. Stasko. Mental models, visual reasoning and interaction in information visualization: A top-down perspective. IEEE Transactions on Visualization & Computer Graphics, (6):999–1008, 2010.
  20. D. Marr. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. WH Freeman, 1982.
  21. R. Nozick. The nature of rationality. Princeton University Press, 1994.
  22. L. M. Padilla. A case for cognitive models in visualization research: Position paper. In 2018 IEEE Evaluation and Beyond-Methodological Approaches for Visualization (BELIV), pp. 69–77. IEEE, 2018.
  23. L. M. Padilla, S. H. Creem-Regehr, M. Hegarty, and J. K. Stefanucci. Decision making with visualizations: a cognitive framework across disciplines. Cognitive research: principles and implications, 3(1):29, 2018.
  24. P. Parsons. Promoting representational fluency for cognitive bias mitigation in information visualization. In Cognitive Biases in Visualizations, pp. 137–147. Springer, 2018.
  25. R. E. Patterson, L. M. Blaha, G. G. Grinstein, K. K. Liggett, D. E. Kaveney, K. C. Sheldon, P. R. Havig, and J. A. Moore. A human cognition framework for information visualization. Computers & Graphics, 42:42–58, 2014.
  26. M. Pohl, L.-C. Winter, C. Pallaris, S. Attfield, and B. W. Wong. Sensemaking and cognitive bias mitigation in visual analytics. In Intelligence and Security Informatics Conference (JISIC), 2014 IEEE Joint, pp. 323–323. IEEE, 2014.
  27. X. Pu and M. Kay. The garden of forking paths in visualization: A design space for reliable exploratory visual analytics. 2018.
  28. I. T. Ruginski, A. P. Boone, L. M. Padilla, L. Liu, N. Heydari, H. S. Kramer, M. Hegarty, W. B. Thompson, D. H. House, and S. H. Creem-Regehr. Non-expert interpretations of hurricane forecast uncertainty visualizations. Spatial Cognition & Computation, 16(2):154–172, 2016.
  29. H. A. Simon. A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1):99–118, 1955.
  30. H. A. Simon. Rational choice and the structure of the environment. Psychological Review, 63(2):129, 1956.
  31. D. Streeb, M. Chen, and D. A. Keim. The biases of thinking fast and thinking slow. In Cognitive Biases in Visualizations, pp. 97–107. Springer, 2018.
  32. A. Tversky and D. Kahneman. Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2):207–232, 1973.
  33. A. Tversky and D. Kahneman. Judgment under uncertainty: Heuristics and biases. Science, 185(4157):1124–1131, 1974.
  34. B. Tversky. Visuospatial reasoning. The Cambridge Handbook of Thinking and Reasoning, pp. 209–240, 2005.
  35. A. C. Valdez, M. Ziefle, and M. Sedlmair. Priming and anchoring effects in visualization. IEEE Transactions on Visualization and Computer Graphics, 24(1):584–594, 2018.
  36. A. C. Valdez, M. Ziefle, and M. Sedlmair. Studying biases in visualization research: Framework and methods. In Cognitive Biases in Visualizations, pp. 13–27. Springer, 2018.
  37. J. von Neumann, O. Morgenstern, H. W. Kuhn, and A. Rubinstein. Theory of Games and Economic Behavior (60th Anniversary Commemorative Edition). Princeton University Press, 1944.
  38. E. Vul. Sampling in human cognition. PhD thesis, Massachusetts Institute of Technology, 2010.
  39. E. Wall, L. Blaha, C. Paul, and A. Endert. A formative study of interactive bias metrics in visual analytics using anchoring bias. In IFIP Conference on Human-Computer Interaction, pp. 555–575. Springer, 2019.
  40. E. Wall, L. Blaha, C. L. Paul, K. Cook, and A. Endert. Four perspectives on human bias in visual analytics. In DECISIVe: Workshop on Dealing with Cognitive Biases in Visualizations, 2017.
  41. E. Wall, L. M. Blaha, L. Franklin, and A. Endert. Warning, bias may occur: A proposed approach to detecting cognitive bias in interactive visual analytics. In IEEE Conference on Visual Analytics Science and Technology (VAST), 2017.
  42. R. Wesslen, S. Santhanam, A. Karduni, I. Cho, S. Shaikh, and W. Dou. Investigating effects of visual anchors on decision-making about misinformation. In Computer Graphics Forum, vol. 38, pp. 161–171, 2019.
  43. Y. Wu, L. Xu, R. Chang, and E. Wu. Towards a Bayesian model of data visualization cognition, 2017.
  44. E. Zgraggen, Z. Zhao, R. Zeleznik, and T. Kraska. Investigating the effect of the multiple comparisons problem in visual analysis. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, p. 479. ACM, 2018.
  45. M. Zouboulakis. The varieties of economic rationality: From Adam Smith to contemporary behavioural and evolutionary economics. Routledge, 2014.