Meaningful Models: Utilizing Conceptual Structure to Improve Machine Learning Interpretability

Abstract

The last decade has seen huge progress in the development of advanced machine learning models; however, those models are powerless unless human users can interpret them. Here we show how the mind's construction of concepts and meaning can be used to create more interpretable machine learning models. By proposing a novel method of classifying concepts in terms of 'form' and 'function', we elucidate the nature of meaning and offer proposals to improve model understandability. As machine learning begins to permeate daily life, interpretable models may serve as a bridge between domain-expert authors and non-expert users.

1 Introduction

In the last decade, machine learning algorithms have made huge strides, producing state-of-the-art results across a number of domains including image recognition, speech recognition, and natural language processing. However, while such results are exciting, there currently exists a gap between data modeling and knowledge extraction [28]. Machine learning models are rendered powerless unless they can be interpreted; thus, for knowledge to be extracted from a model, we must account for the human cognitive factors involved in that process. Interpretation must therefore be accounted for in machine learning processes, as shown in Figure 1. In addition to promoting more transparent results, interpretable models enable non-experts to utilize machine learning tools. For example, a business manager is more likely to accept a model's recommendations if its results can be presented in business terms [4]. As an ever-growing number of professionals come to rely on machine learning tools, the most successful models will provide an elegant user experience, presenting users with information and intelligence that are easily interpretable.

Figure 1: This illustration demonstrates the role of human interpretability in the development of a machine learning model. Without interpretable results, a human expert will not be able to accurately or efficiently modify their model or their datasets. This illustration is based on a previously published diagram.

In the formal-logic sense, an interpretation is a mapping from a formal construct to the entities and relations it represents [24]. Less formally, interpretability can be seen as a signaling problem: a model must present its output such that a specific meaning is conveyed to its user. To understand how to convey meaning, we must first understand the nature of meaning itself. Therefore, in order to design models for interpretability, we must first investigate the processes by which humans assign meaning to symbols, and how the mind extracts knowledge from information.

Whereas previous investigations into machine learning interpretability have largely focused on the relation between accuracy and interpretability, algorithm and feature selection, and model visualizations (e.g., [24]), we will instead focus on the psychology of human concept learning. Using a relational model of meaning, we will propose a novel method of classifying concepts according to their structure and function within a given context. Based upon that method, we will offer several proposals to improve non-expert understanding of machine learning tools at a conceptual level.

2 Implicit Learning and Feature Extraction

Humans are organisms that have evolved to learn from experience, evaluating novel stimuli through a process of comparison to previously stored stimuli. While learning, in the traditional sense of schooling and education, is an active process, to investigate the basis of knowledge we must begin at the sub-conceptual and subconscious level.

The mind constantly and implicitly processes complex information in an incidental manner, without direct awareness of what has been learned [27]. This process of passive knowledge acquisition is known as implicit learning.

Implicit learning began as a field of study with A. S. Reber's work in the late 1960s, and has been proposed as an evolutionary ancestor of explicit thought [22]. This process occurs automatically, and represents the subtle yet constant re-wiring of the brain's neurons as they adapt in response to new stimuli [25]. Most importantly, implicit learning occurs at the subconscious, or pre-conscious, level; therefore, the knowledge gained is sub-conceptual, which is to say the patterns learned are not immediately associated with a reference symbol [18]. Instead, this process extracts relevant features from the local environment via the mind's lower-level perceptual processes [26]. A feature is an individual measurable property of a phenomenon being observed [3]. Features may be continuous or categorical, and they comprise the most basic building blocks of human knowledge [26].

3 From Features to Concepts

The process of feature extraction is constant and unconscious; bringing this knowledge into the conscious domain requires conceptualization [13]. A concept is an abstract system composed of a set of features paired with a symbolic representation. In many ways, conceptualization mirrors a simple dictionary structure, where the symbol acts as the key and its associated feature set is the value. The symbolic representation can be any real or abstract token, including images, sounds, and smells. However, the most common form of symbolic representation is a word, a character or combination of characters. For example, the concept of a dog might contain the features [furry: yes, ears: 2, legs: 4, tail: yes] and would be denoted by the character string 'dog'. Since concepts are composed of a multi-dimensional set of features, they are inherently complex symbolic objects.
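To make the dictionary analogy concrete, the following sketch stores each concept as a symbol keyed to its feature set and compares two concepts by their feature overlap. The concept names, feature names, and values are purely illustrative, not drawn from any dataset or proposed representation.

```python
# A minimal sketch of the dictionary view of a concept: the symbol is the key,
# the feature set is the value. All entries here are hypothetical.
concepts = {
    "dog":  {"furry": True,  "ears": 2, "legs": 4, "tail": True},
    "cat":  {"furry": True,  "ears": 2, "legs": 4, "tail": True},
    "bird": {"furry": False, "ears": 0, "legs": 2, "tail": True},
}

def feature_overlap(a: str, b: str) -> float:
    """Fraction of features on which two concepts agree."""
    shared = [k for k in concepts[a] if concepts[a][k] == concepts[b].get(k)]
    return len(shared) / len(concepts[a])

print(feature_overlap("dog", "cat"))   # 1.0 -- identical on these coarse features
print(feature_overlap("dog", "bird"))  # 0.25 -- only the 'tail' feature agrees
```

On this toy representation, the similarity of two concepts falls out of their shared features rather than from anything intrinsic to the symbols themselves, which is the point developed in the following sections.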

Concepts are abstract, meaning they can be applied to novel stimuli, and concept learning relies on incremental assumptions [17]. The mind, as a concept formation system, accepts a stream of observations (i.e. events, objects, instances), and discovers a classification scheme over the data stream. Learning occurs not as a single event but as a continuous process; the mind‘s classification scheme evolves and changes as new observations are processed [9]. Figure 2 [8] demonstrates this incremental learning process by which an agent adapts to its environment, organizing experiences to improve its performance [9].

Figure 2: This flow chart illustrates the act of learning as a continuous incremental process, by which an organism adapts to improve its fitness within a given environment.

This view of learning demonstrates that learning is not a discrete act but a continuous process by which new information contributes to the evolution of existing concepts and the formation of new ones. Furthermore, it aligns with the heuristic of interpretability in [24], which states that "people tend to find those things understandable, that they already know". Thus, when building a meaningful model, the intended audience must be taken into account when structuring output. If the output can be phrased or structured in a familiar way, users will be more likely to implicitly trust and utilize the information.

4 From Concepts to Meaning

Having established concepts as a system composed of a [key, value] pair, where the key is a symbol and the value is the associated feature set, we can now examine meaning. The word "meaning" is used in a variety of ways, from Plato's physically irreducible mystical essences to ideas of how words are used [20]. Here, we offer a view that finds its roots in connectionist psychological models but until recently was unrealized at scale [21]. This view holds that since words simply denote clusters of features, words themselves have no inherent meaning; stripped of its associated features, a word is a meaningless symbol. Instead, meaning arises from the cognitive mapping of a word (or symbol) onto an underlying feature map [19]. For example, to someone with no knowledge of the English language, the word 'tree' would mean nothing, as their mind has not mapped the symbol to a set of features. To a native speaker, however, not only would 'tree' have meaning, but they could likely identify 'forest' as a similar concept, due to their overlapping feature sets. This theory of meaning has gained support from the rise of latent semantic analysis (LSA) techniques, which construct models from the implicit relational structure of a text. This 'map' does not exist in itself; it is an abstraction, an infinite number of point-to-point distances computed by triangulation from earlier established points [19]. Nevertheless, models created in this manner have proven highly accurate, and overlaying word-symbols on such maps has produced highly intuitive results.
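To illustrate the relational view concretely, the sketch below builds a small LSA-style semantic space. It assumes scikit-learn is available; the tiny corpus, the default vectorizer settings, and the choice of two latent dimensions are illustrative only, not the configuration of any published LSA model.

```python
# A minimal LSA-style sketch: term-document statistics are reduced to a
# low-rank latent space, and 'meaning' is read off as relative position.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "the tree grew in the forest",
    "a forest is full of trees",
    "the dog chased the ball",
    "a puppy played with a ball",
]

tfidf = TfidfVectorizer().fit_transform(corpus)             # term-document matrix
latent = TruncatedSVD(n_components=2).fit_transform(tfidf)  # low-rank relational 'map'

# On this toy corpus, documents that share vocabulary (e.g. 'forest', 'ball')
# tend to land near one another in the latent space; similarity is relational,
# not read from any definition of the words.
print(cosine_similarity(latent[:1], latent[1:]))
```

The point of the sketch is that the semantic space is derived entirely from co-occurrence relations; no feature of any word is specified by hand.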

Using this approach, we can view 'meaning' as a fundamentally relational property: a word's relation to the semantic system in which it exists defines its meaning. Importantly, this leads us to realize that to efficiently convey meaning, we must start at the sub-conceptual level, identifying the specific information we hope to convey and then crafting a message that conveys the intended features given the context of the audience. Given this theory, we will use the word "meaning" as shorthand for "the set of features associated with a symbol, given context".

5 The Form and Function of Concepts

The relational theory of meaning holds that a symbol, say a word or an image, may hold different meanings in different contexts, because it interacts with each context differently. While this might seem to imply that words cannot be assigned any true meaning, in practice this is not the case. Through shared communication protocols such as language, individually relative meanings solidify into a statistically canonical cultural form [12].

Nevertheless, this theory lacks a direct explanation of the relationship between a symbol and its meaning. We posit that this relationship can be best understood in terms of form and function. The function of a concept is its meaning, given context, and it represents how the concept interacts with its larger semantic context. Concepts that share a function are synonyms [16]. The form is the specific instance of the class of objects defined by the object's function. For example, compare the following three phrases: "I'm going to the store", "I'm heading to the store", and "I'm heading the soccer ball". Given the context of the first two phrases, "going" and "heading" share the same meaning, and can thus be considered different forms, or instances, of the same conceptual function, or class. Given the context of the last two phrases, the conceptual form, "heading", is the same, but its function differs.

In some respects, this categorization of concepts by form and function extends the "theory theory" of concepts, in which concepts are composed of core and peripheral features (see [5]). An object's core features are its causally deepest properties, whereas peripheral features are incidental features of a concept that do not directly define its nature. These descriptions of features as either core or peripheral are useful in qualitative description, but are difficult to translate into more technical contexts. Instead, we propose that function best encapsulates the meaning of core features, and form best encapsulates the meaning of peripheral features. The essence, or core, of a concept is its meaning, defined by the concept's function within a given context. Peripheral qualities, or form, are in turn best understood as the characteristics of a specific object. For example, within the simple context presented in Figure 3, the rock interacts with a piece of paper by resting on top of it. While the form of the rock may be a small, grey, 2 lb stone, within the given context its function is to apply downward force on the paper; therefore its meaning is 'paper weight'. Similarly, as the paper supports the rock, from the rock's perspective the function of the paper is support, so its meaning is 'ground'. Forms can change without altering the operation of a system, so long as the object retains its function.

Since an object's meaning is defined by its interaction with its context, and that interaction can be viewed as a function, a relationship between inputs and outputs, meaning can be understood as a function within a larger process of interaction.
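As a toy illustration of this view, the rock-and-paper scenario of Figure 3 can be written as a lookup from an object and a perspective to a meaning. The function below is a hypothetical sketch of the idea, not a proposed algorithm; the object and perspective labels come directly from the figure.

```python
# 'Meaning as a function of interaction': the same object receives a different
# meaning depending on the perspective of the object it interacts with.
def meaning(obj: str, perspective: str) -> str:
    """Meaning assigned to obj, given the perspective of the interacting object."""
    interactions = {
        ("rock", "paper"): "paper weight",  # the rock applies downward force on the paper
        ("paper", "rock"): "ground",        # the paper supports the rock
    }
    return interactions.get((obj, perspective), obj)  # fall back to the bare symbol

print(meaning("rock", "paper"))   # -> 'paper weight'
print(meaning("paper", "rock"))   # -> 'ground'
```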

Figure 3: This diagram displays how meaning arises through interaction. This diagram also reveals that meaning is a function of perspective: from the perspective of the paper, the rock is simply a weight, whereas to the rock, the paper may as well be the ground.

6 Proposals to Improve Meaningful Models

6.1 Clearly Outline a Model's Function

The function of a concept is defined by the change it enacts on its context, and thus represents a transition from an initial state to an output state. To improve interpretability, then, models should clearly define their input requirements and output goals. For example, doctors might be supplied with several models that perform different tasks, including quality-of-life (QOL) assessments, anomaly detection, and DNA sequence mining [7]. Authors of such models should clearly state the purpose and intended applications of their work. If a model is only intended to perform exploratory data analysis, the author should emphasize in their discussion that confirmatory data analysis is required [11]. Furthermore, authors should directly address the transportability of the model, i.e., which aspects of the method can be used directly in novel situations, and which must be tuned for further application.
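One lightweight way to make a model's function explicit is to declare the input schema, the output, and the intended scope alongside the model itself. The sketch below assumes a Python implementation; the field names, value ranges, and scoring rule are hypothetical placeholders, not a published QOL model.

```python
# A hedged sketch of stating a model's function in code: inputs, outputs, and
# scope-of-use are documented next to the model. All specifics are illustrative.
from dataclasses import dataclass

@dataclass
class QOLInput:
    """Required inputs for a hypothetical quality-of-life (QOL) assessment model."""
    age_years: int          # assumed range: 18-100
    mobility_score: float   # 0.0 (immobile) to 1.0 (fully mobile)
    pain_score: float       # 0.0 (no pain) to 1.0 (severe pain)

@dataclass
class QOLOutput:
    """Output: an exploratory QOL estimate, not a confirmed diagnosis."""
    qol_estimate: float     # 0.0-1.0, higher is better
    note: str = "Exploratory analysis only; confirmatory study required."

def assess_qol(x: QOLInput) -> QOLOutput:
    # Placeholder scoring rule standing in for a trained model.
    score = 0.5 * x.mobility_score + 0.5 * (1.0 - x.pain_score)
    return QOLOutput(qol_estimate=score)

print(assess_qol(QOLInput(age_years=70, mobility_score=0.8, pain_score=0.3)))
```

Declaring inputs and outputs this explicitly gives a non-expert user a concrete statement of what the model expects and what its answer does and does not claim.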

Additionally, authors should minimize the number of attributes in their classifiers. Minimizing attributes creates a simpler, and therefore more interpretable, form of the model, and also decreases the risk of overfitting, especially in smaller studies. One approach to limiting attributes is variable ranking (see [2]); another viable method, proposed by [29], pares down variables using a weight-elimination algorithm.
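A minimal sketch of attribute reduction via variable ranking is shown below, assuming scikit-learn is available. Univariate ranking with SelectKBest stands in generically for the ranking approaches cited above; it does not reproduce the specific methods of [2] or [29], and the data is synthetic.

```python
# Rank features by a univariate score and keep only the top k, yielding a
# smaller and more interpretable classifier. Synthetic data for illustration.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=200, n_features=30, n_informative=5,
                           random_state=0)

model = make_pipeline(
    SelectKBest(score_func=f_classif, k=5),   # keep the 5 highest-ranked features
    LogisticRegression(max_iter=1000),
)
model.fit(X, y)
print(model.score(X, y))  # training accuracy using only 5 of the 30 attributes
```

A model with 5 attributes is far easier to explain to a domain expert than one with 30, at a usually modest cost in accuracy on problems where most attributes are uninformative.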

6.2 Place the Model in Context

In addition to specifying the purpose and scope of use of a model, authors should attempt to construct models such that they complement and expedite existing processes. In doing so, the "meaning" of the model will be elucidated by its context in the existing process. For example, in the early stages of developing a medical diagnostic imaging application, it is impossible to conclusively prove that the application works, but it is possible to prove that it does not work [11]. If the latter is the case, it is best to discover this quickly, so that new processes and applications may be developed. A model in this process becomes more meaningful by virtue of having a clearly defined function within the scope of a larger system. Additionally, incorporating models into existing processes forces those models to incorporate some level of domain knowledge and to serve as useful tools rather than complete solutions unto themselves.

6.3 Design for User Experience

Finally, when developing models that aim to solve specific problems within a given domain, thought should be given to preparing a front-end for users within that domain. A well-designed front-end would ideally accomplish the above proposals by clearly specifying required inputs, presenting coherent outputs, and positioning the model as a tool within a larger process or framework. Machine learning platforms such as Google Cloud Platform, Amazon Machine Learning, Microsoft Azure, and H2O.ai have made strong progress in this regard, combining powerful models with intuitive representations.

While the algorithms and structure of the model itself account for the model's function, a cohesive front-end provides an overlaid form for the information conveyed. Essentially, this front-end can be viewed as a translation layer between the direct model output and a non-expert user. This translation should capitalize on the fundamentals of human concept acquisition by providing information in a familiar format along with context. To this end, authors should focus on key user-experience metrics, such as: Will users recommend the tool? Does the tool create a more efficient or effective process? What are its most significant usability problems? Are usability improvements being made from one version to the next [1]? These questions place an emphasis on considering the understandability of a model during the design of its algorithms. Interpretability is difficult to achieve as a post-processing step; the relationship between understandability and accuracy must be accounted for from the start [24].

7 Conclusion

We have analyzed the psychology of human concept learning and identified how the mind's construction of concepts and meaning can be used to create more interpretable machine learning models. Meaning arises from the interaction of a concept within a specified context. Furthermore, the identity of an object and its meaning can be fully described by two traits: form and function. Form describes the exact qualities and structure of an object, while function describes the object's meaning in terms of its interaction with its context. This promotes a view of concepts as functions in context, which allows them to be conceptualized as a relationship between input and output.

Thus, the interpretability of a model at the conceptual level can be bolstered by clearly indicating the model's input requirements and output goals, and by providing context for the model within a larger process. Additionally, these goals may be combined through the development of a cohesive front-end that presents information in a familiar format and expands the usability of a model to non-expert users.

References

1. Albert, William and Tullis, Thomas. Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics.
2. Bekkerman, Ron, El-Yaniv, Ran, Tishby, Naftali, and Winter, Yoad. Distributional word clusters vs. words for text categorization. The Journal of Machine Learning Research.
3. Bishop, Christopher M. Pattern recognition. Machine Learning.
4. Bose, Indranil and Mahapatra, Radha K. Business data mining: a machine learning perspective. Information & Management.
5. Carey, Susan. Conceptual Change in Childhood.
6. Caruana, Rich and De Sa, Virginia R. Benefitting from the variables that variable selection discards. The Journal of Machine Learning Research.
7. Cleophas, Ton J and Zwinderman, Aeilko H. Machine Learning in Medicine.
8. Dietterich, T. G., London, R. L., Clarkson, K., and Dromey, G. Learning and inductive inference. In Feigenbaum, E. and Barr, Avron (eds.), The Handbook of Artificial Intelligence, chapter XIV, pp. 323–512. W. Kaufmann, Los Altos, CA, 1982.
9. Fisher, D. H., Pazzani, M. J., and Langley, P. Concept Formation: Knowledge and Experience in Unsupervised Learning.
10. Forman, George. An extensive empirical study of feature selection metrics for text classification. The Journal of Machine Learning Research.
11. Foster, Kenneth R, Koprowski, Robert, and Skufca, Joseph D. Machine learning, medical diagnosis, and biomedical engineering research-commentary. Biomedical Engineering Online.
12. Goldstone, Robert L and Rogosky, Brian J. Using relations within conceptual systems to translate across conceptual systems. Cognition.
13. Goodman, Noah D, Tenenbaum, Joshua B, Feldman, Jacob, and Griffiths, Thomas L. A rational analysis of rule-based concept learning. Cognitive Science.
14. Gopnik, Alison and Meltzoff, Andrew N. Words, Thoughts, and Theories.
15. Ishibuchi, Hisao and Nojima, Yusuke. Analysis of interpretability-accuracy tradeoff of fuzzy systems by multiobjective fuzzy genetics-based machine learning. International Journal of Approximate Reasoning.
16. Kao, Anne and Poteet, Steve R. Natural Language Processing and Text Mining.
17. Katz, Jeffrey S, Wright, Anthony A, and Bodily, Kent D. Issues in the comparative cognition of abstract-concept learning. Comparative Cognition & Behavior Reviews.
18. Kihlstrom, John F. The cognitive unconscious. Science.
19. Landauer, Thomas K, McNamara, Danielle S, Dennis, Simon, and Kintsch, Walter. Handbook of Latent Semantic Analysis.
20. Wittgenstein, Ludwig. Philosophical Investigations. Basil Blackwell, London.
21. O'Reilly, Randall C and Munakata, Yuko. Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain.
22. Reber, Arthur S. Implicit learning of artificial grammars. Journal of Verbal Learning and Verbal Behavior.
23. Reber, Arthur S. The cognitive unconscious: An evolutionary perspective. Consciousness and Cognition.
24. Rüping, Stefan. Learning Interpretable Models.
25. Sanders, Raymond E, Gonzalez, Eulalio G, Murphy, Martin D, Liddle, Cherie L, and Vitina, John R. Frequency of occurrence and the criteria for automatic processing. Journal of Experimental Psychology: Learning, Memory, and Cognition.
26. Schyns, Philippe G, Goldstone, Robert L, and Thibaut, Jean-Pierre. The development of features in object concepts. Behavioral and Brain Sciences.
27. Seger, Carol Augart. Implicit learning. Psychological Bulletin.
28. Vellido, Alfredo, Martin-Guerrero, Jose David, and Lisboa, Paulo JG. Making machine learning models interpretable. In ESANN, volume 12, pp. 163–172. Citeseer, 2012.
29. Weigend, Andreas S, Rumelhart, David E, and Huberman, Bernardo A. Generalization by weight-elimination with application to forecasting. In NIPS, volume 90, pp. 875–882, 1990.
30. Weston, Jason, Elisseeff, André, Schölkopf, Bernhard, and Tipping, Mike. Use of the zero norm with linear models and kernel methods. The Journal of Machine Learning Research.