The Latent Relation Mapping Engine: Algorithm and Experiments


Peter D. Turney (peter.turney@nrc-cnrc.gc.ca)
Institute for Information Technology
National Research Council Canada
Ottawa, Ontario, Canada, K1A 0R6
Abstract

Many AI researchers and cognitive scientists have argued that analogy is the core of cognition. The most influential work on computational modeling of analogy-making is Structure Mapping Theory (SMT) and its implementation in the Structure Mapping Engine (SME). A limitation of SME is the requirement for complex hand-coded representations. We introduce the Latent Relation Mapping Engine (LRME), which combines ideas from SME and Latent Relational Analysis (LRA) in order to remove the requirement for hand-coded representations. LRME builds analogical mappings between lists of words, using a large corpus of raw text to automatically discover the semantic relations among the words. We evaluate LRME on a set of twenty analogical mapping problems, ten based on scientific analogies and ten based on common metaphors. LRME achieves human-level performance on the twenty problems. We compare LRME with a variety of alternative approaches and find that they are not able to reach the same level of performance.


1 Introduction

When we are faced with a problem, we try to recall similar problems that we have faced in the past, so that we can transfer our knowledge from past experience to the current problem. We make an analogy between the past situation and the current situation, and we use the analogy to transfer knowledge (?, ?, ?, ?, ?).

In his survey of the computational modeling of analogy-making, French (?) cites Structure Mapping Theory (SMT) (?) and its implementation in the Structure Mapping Engine (SME) (?) as the most influential work on modeling of analogy-making. In SME, an analogical mapping is from a source to a target . The source is more familiar, more known, or more concrete, whereas the target is relatively unfamiliar, unknown, or abstract. The analogical mapping is used to transfer knowledge from the source to the target.

Gentner (?) argues that there are two kinds of similarity, attributional similarity and relational similarity. The distinction between attributes and relations may be understood in terms of predicate logic. An attribute is a predicate with one argument, such as large(x), meaning x is large. A relation is a predicate with two or more arguments, such as collides_with(x, y), meaning x collides with y.

The Structure Mapping Engine prefers mappings based on relational similarity over mappings based on attributional similarity (?). For example, SME is able to build a mapping from a representation of the solar system (the source) to a representation of the Rutherford-Bohr model of the atom (the target). The sun is mapped to the nucleus, planets are mapped to electrons, and mass is mapped to charge. Note that this mapping emphasizes relational similarity. The sun and the nucleus are very different in terms of their attributes: the sun is very large and the nucleus is very small. Likewise, planets and electrons have little attributional similarity. On the other hand, planets revolve around the sun like electrons revolve around the nucleus. The mass of the sun attracts the mass of the planets like the charge of the nucleus attracts the charge of the electrons.

Gentner (?) provides evidence that children rely primarily on attributional similarity for mapping, gradually switching over to relational similarity as they mature. She uses the terms mere appearance to refer to mapping based mostly on attributional similarity, analogy to refer to mapping based mostly on relational similarity, and literal similarity to refer to a mixture of attributional and relational similarity. Since we use analogical mappings to solve problems and make predictions, we should focus on structure, especially causal relations, and look beyond the surface attributes of things (?). The analogy between the solar system and the Rutherford-Bohr model of the atom illustrates the importance of going beyond mere appearance, to the underlying structures.

Figures 1 and 2 show the LISP representations used by SME as input for the analogy between the solar system and the atom (?). Chalmers, French, and Hofstadter (?) criticize SME’s requirement for complex hand-coded representations. They argue that most of the hard work is done by the human who creates these high-level hand-coded representations, rather than by SME.

(defEntity sun :type inanimate)
(defEntity planet :type inanimate)
(defDescription solar-system
         entities (sun planet)
         expressions (((mass sun) :name mass-sun)
                ((mass planet) :name mass-planet)
                ((greater mass-sun mass-planet) :name >mass)
                ((attracts sun planet) :name attracts-form)
                ((revolve-around planet sun) :name revolve)
                ((and >mass attracts-form) :name and1)
                ((cause and1 revolve) :name cause-revolve)
                ((temperature sun) :name temp-sun)
                ((temperature planet) :name temp-planet)
                ((greater temp-sun temp-planet) :name >temp)
                ((gravity mass-sun mass-planet) :name force-gravity)
                ((cause force-gravity attracts-form) :name why-attracts)))
Figure 1: The representation of the solar system in SME (?).
(defEntity nucleus :type inanimate)
(defEntity electron :type inanimate)
(defDescription rutherford-atom
         entities (nucleus electron)
         expressions (((mass nucleus) :name mass-n)
                ((mass electron) :name mass-e)
                ((greater mass-n mass-e) :name >mass)
                ((attracts nucleus electron) :name attracts-form)
                ((revolve-around electron nucleus) :name revolve)
                ((charge electron) :name q-electron)
                ((charge nucleus) :name q-nucleus)
                ((opposite-sign q-nucleus q-electron) :name >charge)
                ((cause >charge attracts-form) :name why-attracts)))
Figure 2: The Rutherford-Bohr model of the atom in SME (?).

Gentner, Forbus, and their colleagues have attempted to avoid hand-coding in their recent work with SME (Dedre Gentner, personal communication, October 29, 2008). The CogSketch system can generate LISP representations from simple sketches (?). The Gizmo system can generate LISP representations from qualitative physics models (?). The Learning Reader system can generate LISP representations from natural language text (?). These systems do not require LISP input.

However, the CogSketch user interface requires the person who draws the sketch to identify the basic components in the sketch and hand-label them with terms from a knowledge base derived from OpenCyc. Forbus et al. (?) note that OpenCyc contains more than 58,000 hand-coded concepts, and they have added further hand-coded concepts to OpenCyc, in order to support CogSketch. The Gizmo system requires the user to hand-code a physical model, using the methods of qualitative physics (?). Learning Reader uses more than 28,000 phrasal patterns, which were derived from ResearchCyc (?). It is evident that SME still requires substantial hand-coded knowledge.

The work we present in this paper is an effort to avoid complex hand-coded representations. Our approach is to combine ideas from SME (?) and Latent Relational Analysis (LRA) (?). We call the resulting algorithm the Latent Relation Mapping Engine (LRME). We represent the semantic relation between two terms using a vector, in which the elements are derived from pattern frequencies in a large corpus of raw text. Because the semantic relations are automatically derived from a corpus, LRME does not require hand-coded representations of relations. It only needs a list of terms from the source and a list of terms from the target. Given these two lists, LRME uses the corpus to build representations of the relations among the terms, and then it constructs a mapping between the two lists.

Tables 1 and 2 show the input and output of LRME for the analogy between the solar system and the Rutherford-Bohr model of the atom. Although some human effort is involved in constructing the input lists, it is considerably less effort than SME requires for its input (contrast Figures 1 and 2 with Table 1).

Source: planet, attracts, revolves, sun, gravity, solar system, mass
Target: revolves, atom, attracts, electromagnetism, nucleus, charge, electron
Table 1: The representation of the input in LRME.
Source   Mapping   Target
solar system   →   atom
sun   →   nucleus
planet   →   electron
mass   →   charge
attracts   →   attracts
revolves   →   revolves
gravity   →   electromagnetism
Table 2: The representation of the output in LRME.

Scientific analogies, such as the analogy between the solar system and the Rutherford-Bohr model of the atom, may seem esoteric, but we believe analogy-making is ubiquitous in our daily lives. A potential practical application for this work is the task of identifying semantic roles (?). Since roles are relations, not attributes, it is appropriate to treat semantic role labeling as an analogical mapping problem.

For example, the Judgement semantic frame contains semantic roles such as judge, evaluee, and reason, and the Statement frame contains roles such as speaker, addressee, message, topic, and medium (?). The task of identifying semantic roles is to automatically label sentences with their roles, as in the following examples (?):

  • [Judge She] blames [Evaluee the Government] [Reason for failing to do enough to help].

  • [Speaker We] talked [Topic about the proposal] [Medium over the phone].

If we have a training set of labeled sentences and a testing set of unlabeled sentences, then we may view the task of labeling the testing sentences as a problem of creating analogical mappings between the training sentences (sources) and the testing sentences (targets). Table 3 shows how “She blames the Government for failing to do enough to help.” might be mapped to “They blame the company for polluting the environment.” Once a mapping has been found, we can transfer knowledge, in the form of semantic role labels, from the source to the target.

Source   Mapping   Target
she   →   they
blames   →   blame
government   →   company
failing   →   polluting
help   →   environment
Table 3: Semantic role labeling as analogical mapping.

In Section 2, we briefly discuss the hypotheses behind the design of LRME. We then precisely define the task that is performed by LRME, a specific form of analogical mapping, in Section 3. LRME builds on Latent Relational Analysis (LRA), hence we summarize LRA in Section 4. We discuss potential applications of LRME in Section 5.

To evaluate LRME, we created twenty analogical mapping problems, ten science analogy problems (?) and ten common metaphor problems (?). Table 1 is one of the science analogy problems. Our intended solution is given in Table 2. To validate our intended solutions, we gave our colleagues the lists of terms (as in Table 1) and asked them to generate mappings between the lists. Section 6 presents the results of this experiment. Across the twenty problems, the average agreement with our intended solutions (as in Table 2) was 87.6%.

The LRME algorithm is outlined in Section 7, along with its evaluation on the twenty mapping problems. LRME achieves an accuracy of 91.5%. The difference between this performance and the human average of 87.6% is not statistically significant.

Section 8 examines a variety of alternative approaches to the analogy mapping task. The best approach achieves an accuracy of 76.8%, but this approach requires hand-coded part-of-speech tags. This performance is significantly below LRME and human performance.

In Section 9, we discuss some questions that are raised by the results in the preceding sections. Related work is described in Section 10, future work and limitations are considered in Section 11, and we conclude in Section 12.

2 Guiding Hypotheses

In this section, we list some of the assumptions that have guided the design of LRME. The results we present in this paper do not necessarily require these assumptions, but they may help the reader to understand the reasoning behind our approach.

  1. Analogies and semantic relations: Analogies are based on semantic relations (?). For example, the analogy between the solar system and the Rutherford-Bohr model of the atom is based on the similarity of the semantic relations among the concepts involved in our understanding of the solar system to the semantic relations among the concepts involved in the Rutherford-Bohr model of the atom.

  2. Co-occurrences and semantic relations: Two terms have an interesting, significant semantic relation if and only if they tend to co-occur within a relatively small window (e.g., five words) in a relatively large corpus. Having an interesting semantic relation causes co-occurrence, and co-occurrence is a reliable indicator of an interesting semantic relation (?).

  3. Meanings and semantic relations: Meaning has more to do with relations among words than with individual words. Individual words tend to be ambiguous and polysemous. By putting two words into a pair, we constrain their possible meanings. By putting words into a sentence, with multiple relations among the words in the sentence, we constrain the possible meanings further. If we focus on word pairs (or tuples), instead of individual words, word sense disambiguation is less problematic. Perhaps a word has no sense apart from its relations with other words (?).

  4. Pattern distributions and semantic relations: There is a many-to-many mapping between semantic relations and the patterns in which two terms co-occur. For example, the causal relation between X and Y may be expressed as "X causes Y", "Y from X", "Y due to X", "Y because of X", and so on. Likewise, the pattern "Y from X" may be an expression of causation ("sick from bacteria") or of origin ("oranges from Spain"). However, for a given X and Y, the statistical distribution of the patterns in which X and Y co-occur is a reliable signature of the semantic relations between X and Y (?).

To the extent that LRME works, we believe its success lends some support to these hypotheses.

3 The Task

In this paper, we examine algorithms that generate analogical mappings. For simplicity, we restrict the task to generating bijective mappings; that is, mappings that are both injective (one-to-one; there is no instance in which two terms in the source map to the same term in the target) and surjective (onto; the source terms cover all of the target terms; there is no target term that is left out of the mapping). We assume that the entities that are to be mapped are given as input. Formally, the input for the algorithms is two sets of terms, A (the source) and B (the target).

$A = \{a_1, a_2, \ldots, a_m\}$   (1)
$B = \{b_1, b_2, \ldots, b_n\}$   (2)

Since the mappings are bijective, A and B must contain the same number of terms, |A| = |B|.

$m = n$   (3)

A term, a_i or b_j, may consist of a single word (planet) or a compound of two or more words (solar system). The words may be any part of speech (nouns, verbs, adjectives, or adverbs). The output is a bijective mapping M from A to B.

$M : A \rightarrow B$   (4)
$M(a_i) = b_j, \text{ where } a_i \in A \text{ and } b_j \in B$   (5)
$M(A) = \{M(a_1), M(a_2), \ldots, M(a_n)\} = B$   (6)

The algorithms that we consider here can accept a batch of multiple independent mapping problems as input and generate a mapping for each one as output.

$\text{Input} = \langle \langle A_1, B_1 \rangle, \langle A_2, B_2 \rangle, \ldots, \langle A_N, B_N \rangle \rangle$   (7)
$\text{Output} = \langle M_1, M_2, \ldots, M_N \rangle$   (8)

Suppose the terms in A are in some arbitrary order.

$\langle a_1, a_2, \ldots, a_n \rangle$   (9)

The mapping function M, given this ordering of A, determines a unique ordering of B.

$\langle M(a_1), M(a_2), \ldots, M(a_n) \rangle$   (10)

Likewise, an ordering of B, given the ordering of A, defines a unique mapping function M. Since there are n! possible orderings of B, there are also n! possible mappings from A to B. The task is to search through the n! mappings and find the best one. (Section 6 shows that there is a relatively high degree of consensus about which mappings are best.)

Let P be the set of all bijective mappings from A to B. (P stands for permutation, since each mapping corresponds to a permutation of B.)

$P = \{M_1, M_2, \ldots, M_{n!}\}$   (11)
$M_i : A \rightarrow B$   (12)
$|P| = n!$   (13)

In the following experiments, n is 7.0 on average and 9 at most, so |P| = n! is usually around 7! = 5,040 and at most 9! = 362,880. It is feasible for us to exhaustively search P.

We explore two basic kinds of algorithms for generating analogical mappings, algorithms based on attributional similarity and algorithms based on relational similarity (?). The attributional similarity between two words, sim_a(a, b), depends on the degree of correspondence between the properties of a and b. The more correspondence there is, the greater their attributional similarity. The relational similarity between two pairs of words, sim_r(a:b, c:d), depends on the degree of correspondence between the relations of a:b and the relations of c:d. The more correspondence there is, the greater their relational similarity. For example, dog and wolf have a relatively high degree of attributional similarity, whereas dog:bark and cat:meow have a relatively high degree of relational similarity.

Attributional mapping algorithms seek the mapping (or mappings) M_a that maximizes the sum of the attributional similarities between the terms in A and the corresponding terms in B. (When there are multiple mappings that maximize the sum, we break the tie by randomly choosing one of them.)

$M_a = \arg\max_{M \in P} \sum_{i=1}^{n} \mathrm{sim}_a(a_i, M(a_i))$   (14)

Relational mapping algorithms seek the mapping (or mappings) M_r that maximizes the sum of the relational similarities.

$M_r = \arg\max_{M \in P} \sum_{i < j} \mathrm{sim}_r(a_i{:}a_j, M(a_i){:}M(a_j))$   (15)

In (15), we assume that sim_r is symmetrical. For example, the degree of relational similarity between dog:bark and cat:meow is the same as the degree of relational similarity between bark:dog and meow:cat.

$\mathrm{sim}_r(a{:}b, c{:}d) = \mathrm{sim}_r(b{:}a, d{:}c)$   (16)

We also assume that the self-similarity sim_r(a_i:a_i, M(a_i):M(a_i)) is not interesting; for example, it may be some constant value for all a_i and M. Therefore (15) is designed so that i is always less than j.

Let score_r(M) and score_a(M) be defined as follows.

$\mathrm{score}_r(M) = \sum_{i < j} \mathrm{sim}_r(a_i{:}a_j, M(a_i){:}M(a_j))$   (17)
$\mathrm{score}_a(M) = \sum_{i=1}^{n} \mathrm{sim}_a(a_i, M(a_i))$   (18)

Now M_r and M_a may be defined in terms of score_r and score_a.

$M_r = \arg\max_{M \in P} \mathrm{score}_r(M)$   (19)
$M_a = \arg\max_{M \in P} \mathrm{score}_a(M)$   (20)

M_r is the best mapping according to score_r and M_a is the best mapping according to score_a.

Recall Gentner’s (?) terms, discussed in Section 1, mere appearance (mostly attributional similarity), analogy (mostly relational similarity), and literal similarity (a mixture of attributional and relational similarity). We take it that M_r is an abstract model of mapping based on analogy and M_a is a model of mere appearance. For literal similarity, we can combine score_r and score_a, but we should take care to normalize them before we combine them. (We experiment with combining them in Section 9.2.)
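To make the search concrete, here is a minimal Python sketch of the exhaustive search (ours, for illustration; it is not the Perl implementation described in Section 7.2, and the relational similarity function sim_r is assumed to be supplied by the caller, for LRME from the corpus-based procedure of Section 7.1).

from itertools import permutations

def best_relational_mapping(source, target, sim_r):
    # Exhaustive search over the set P of bijective mappings, as in (15)/(19):
    # score each permutation of the target terms by the sum of relational
    # similarities over all index pairs i < j, and keep the best mapping.
    best_score, best_map = float("-inf"), None
    for perm in permutations(target):          # n! candidate mappings
        score = sum(sim_r(source[i], source[j], perm[i], perm[j])
                    for i in range(len(source))
                    for j in range(i + 1, len(source)))
        if score > best_score:
            best_score, best_map = score, dict(zip(source, perm))
    return best_map, best_score

The attributional search in (14) is the same loop with the double sum replaced by a single sum of sim_a(source[i], perm[i]).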

4 Latent Relational Analysis

LRME uses a simplified form of Latent Relational Analysis (LRA) (?, ?) to calculate the relational similarity between pairs of words. We will briefly describe past work with LRA before we present LRME.

LRA takes as input a set of word pairs and generates as output the relational similarity between any two pairs in the input.

$\text{Input} = \{a_1{:}b_1, a_2{:}b_2, a_3{:}b_3, \ldots\}$   (21)
$\text{Output} = \mathrm{sim}_r(a_i{:}b_i, a_j{:}b_j) \in \Re$, for any two input pairs $a_i{:}b_i$ and $a_j{:}b_j$   (22)

LRA was designed to evaluate proportional analogies. Proportional analogies have the form a:b::c:d, which means "a is to b as c is to d". For example, mason:stone::carpenter:wood means "mason is to stone as carpenter is to wood". A mason is an artisan who works with stone and a carpenter is an artisan who works with wood.

We consider proportional analogies to be a special case of bijective analogical mapping, as defined in Section 3, in which n = 2. For example, mason:stone::carpenter:wood is equivalent to the mapping M in (23).

$A = \{\text{mason}, \text{stone}\}, \quad B = \{\text{carpenter}, \text{wood}\}, \quad M(\text{mason}) = \text{carpenter}, \quad M(\text{stone}) = \text{wood}$   (23)

From the definition of score_r in (17), we have the following result for this mapping.

$\mathrm{score}_r(M) = \mathrm{sim}_r(\text{mason}{:}\text{stone}, \text{carpenter}{:}\text{wood})$   (24)

That is, the quality of the proportional analogy mason:stone::carpenter:wood is given by sim_r(mason:stone, carpenter:wood).

Proportional analogies may also be evaluated using attributional similarity. From the definition of score_a in (18), we have the following result.

$\mathrm{score}_a(M) = \mathrm{sim}_a(\text{mason}, \text{carpenter}) + \mathrm{sim}_a(\text{stone}, \text{wood})$   (25)

For attributional similarity, the quality of the proportional analogy mason:stone::carpenter:wood is given by sim_a(mason, carpenter) + sim_a(stone, wood).

LRA only handles proportional analogies. The main contribution of LRME is to extend LRA beyond proportional analogies to bijective analogies for which n > 2.

Turney (?) describes ten potential applications of LRA: recognizing proportional analogies, structure mapping theory, modeling metaphor, classifying semantic relations, word sense disambiguation, information extraction, question answering, automatic thesaurus generation, information retrieval, and identifying semantic roles. Two of these applications (evaluating proportional analogies and classifying semantic relations) are experimentally evaluated, with state-of-the-art results.

Turney (?) compares the performance of relational similarity (24) and attributional similarity (25) on the task of solving 374 multiple-choice proportional analogy questions from the SAT college entrance test. LRA is used to measure relational similarity and a variety of lexicon-based and corpus-based algorithms are used to measure attributional similarity. LRA achieves an accuracy of 56% on the 374 SAT questions, which is not significantly different from the average human score of 57%. On the other hand, the best performance by attributional similarity is 35%. The results show that attributional similarity is better than random guessing, but not as good as relational similarity. This result is consistent with Gentner’s (?) theory of the maturation of human similarity judgments.

Turney (?) also applies LRA to the task of classifying semantic relations in noun-modifier expressions. A noun-modifier expression is a phrase, such as laser printer, in which the head noun (printer) is preceded by a modifier (laser). The task is to identify the semantic relation between the noun and the modifier. In this case, the relation is instrument; the laser is an instrument used by the printer. On a set of 600 hand-labeled noun-modifier pairs with five different classes of semantic relations, LRA attains 58% accuracy.

Turney (?) employs a variation of LRA for solving four different language tests, achieving 52% accuracy on SAT analogy questions, 76% accuracy on TOEFL synonym questions, 75% accuracy on the task of distinguishing synonyms from antonyms, and 77% accuracy on the task of distinguishing words that are similar, words that are associated, and words that are both similar and associated. The same core algorithm is used for all four tests, with no tuning of the parameters to the particular test.

5 Applications for LRME

Since LRME is an extension of LRA, every potential application of LRA is also a potential application of LRME. The advantage of LRME over LRA is the ability to handle bijective analogies when n > 2 (where n = |A| = |B|). In this section, we consider the kinds of applications that might benefit from this ability.

In Section 7.2, we evaluate LRME on science analogies and common metaphors, which supports the claim that these two applications benefit from the ability to handle larger sets of terms. In Section 1, we saw that identifying semantic roles (?) also involves more than two terms, and we believe that LRME will be superior to LRA for semantic role labeling.

Semantic relation classification usually assumes that the relations are binary; that is, a semantic relation is a connection between two terms (?, ?, ?, ?). Yuret observed that binary relations may be linked by underlying n-ary relations (Deniz Yuret, personal communication, February 13, 2007; this observation was made in the context of our work on building the datasets for SemEval 2007 Task 4 (?)). For example, Nastase and Szpakowicz (?) defined a taxonomy of 30 binary semantic relations. Table 4 shows how six binary relations from Nastase and Szpakowicz (?) can be covered by one 5-ary relation, Agent:Tool:Action:Affected:Theme. An Agent uses a Tool to perform an Action. Somebody or something is Affected by the Action. The whole event can be summarized by its Theme.

Relation (Nastase and Szpakowicz, ?)   Example   Fragment of Agent:Tool:Action:Affected:Theme
agent student protest Agent:Action
purpose concert hall Theme:Tool
beneficiary student discount Affected:Action
instrument laser printer Tool:Agent
object metal separator Affected:Tool
object property sunken ship Action:Affected
Table 4: How six binary semantic relations from Nastase and Szpakowicz (?) can be viewed as different fragments of one 5-ary semantic relation.

In SemEval 2007 Task 4, we found it easier to manually tag the datasets when we expanded binary relations to their underlying n-ary relations (?). We believe that this expansion would also facilitate automatic classification of semantic relations. The results in Section 9.3 suggest that all of the applications for LRA that we discussed in Section 4 might benefit from being able to handle bijective analogies when n > 2.

6 The Mapping Problems

To evaluate our algorithms for analogical mapping, we created twenty mapping problems, given in Appendix A. The twenty problems consist of ten science analogy problems, based on examples of analogy in science from Chapter 8 of Holyoak and Thagard (?), and ten common metaphor problems, derived from Lakoff and Johnson (?).

The tables in Appendix A show our intended mappings for each of the twenty problems. To validate these mappings, we invited our colleagues in the Institute for Information Technology to participate in an experiment. The experiment was hosted on a web server (only accessible inside our institute) and people participated anonymously, using their web browsers in their offices. There were 39 volunteers who began the experiment and 22 who went all the way to the end. In our analysis, we use only the data from the 22 participants who completed all of the mapping problems.

The instructions for the participants are in Appendix A. The sequence of the problems and the order of the terms within a problem were randomized separately for each participant, to remove any effects due to order. Table 5 shows the agreement between our intended mapping and the mappings generated by the participants. Across the twenty problems, the average agreement was 87.6%, which is higher than the agreement figures for many linguistic annotation tasks. This agreement is impressive, given that the participants had minimal instructions and no training.

Type   Mapping   Source   Target   Agreement (%)   n
A1 solar system atom 90.9 7
A2 water flow heat transfer 86.9 8
A3 waves sounds 81.8 8
A4 combustion respiration 79.0 8
science A5 sound light 79.2 7
analogies A6 projectile planet 97.4 7
A7 artificial selection natural selection 74.7 7
A8 billiard balls gas molecules 88.1 8
A9 computer mind 84.3 9
A10 slot machine bacterial mutation 83.6 5
M1 war argument 93.5 7
M2 buying an item accepting a belief 96.1 7
M3 grounds for a building reasons for a theory 87.9 6
M4 impediments to travel difficulties 100.0 7
common M5 money time 77.3 6
metaphors M6 seeds ideas 89.0 7
M7 machine mind 98.7 7
M8 object idea 89.1 5
M9 following understanding 96.6 8
M10 seeing understanding 78.8 6
Average 87.6 7.0
Table 5: The average agreement between our intended mappings and the mappings of the 22 participants. See Appendix A for the details.

The column labeled n gives the number of terms in the set A of source terms for each mapping problem (which is equal to the number of terms in the set B of target terms). For the average problem, n = 7. The Source and Target columns in Table 5 give a mnemonic that summarizes the mapping (e.g., solar system → atom). Note that the mnemonic is not used as input for any of the algorithms, nor was the mnemonic shown to the participants in the experiment.

The agreement figures in Table 5 for each individual mapping problem are averages over the n mappings for each problem. Appendix A gives a more detailed view, showing the agreement for each individual mapping in the n mappings. The twenty problems contain a total of 140 individual mappings (the sum of the n column in Table 5). Appendix A shows that every one of these 140 mappings has an agreement of 50% or higher. That is, in every case, the majority of the participants agreed with our intended mapping. (There are two cases where the agreement is exactly 50%. See problems A5 in Table 14 and M5 in Table 16 in Appendix A.)

If we select the mapping that is chosen by the majority of the 22 participants, then we will get a perfect score on all twenty problems. More precisely, if we try all n! mappings for each problem, and select the mapping that maximizes the sum of the number of participants who agree with each individual mapping in the n mappings, then we will have a score of 100% on all twenty problems. This is strong support for the intended mappings that are given in Appendix A.

In Section 3, we applied Gentner’s (?) categories – mere appearance (mostly attributional similarity), analogy (mostly relational similarity), and literal similarity (a mixture of attributional and relational similarity) – to the mappings M_r and M_a, where M_r is the best mapping according to score_r and M_a is the best mapping according to score_a. The twenty mapping problems were chosen as analogy problems; that is, the intended mappings in Appendix A are meant to be relational mappings, M_r; that is, mappings that maximize relational similarity, score_r. We have tried to avoid mere appearance and literal similarity.

In Section 7 we use the twenty mapping problems to evaluate a relational mapping algorithm (LRME), and in Section 8 we use them to evaluate several different attributional mapping algorithms. Our hypothesis is that LRME will perform significantly better than any of the attributional mapping algorithms on the twenty mapping problems, because they are analogy problems (not mere appearance problems and not literal similarity problems). We expect relational and attributional mapping algorithms would perform approximately equally well on literal similarity problems, and we expect that mere appearance problems would favour attributional algorithms over relational algorithms, but we do not test these latter two hypotheses, because our primary interest in this paper is analogy-making.

Our goal is to test the hypothesis that there is a real, practical, effective, measurable difference between the output of LRME and the output of the various attributional mapping algorithms. A skeptic might claim that relational similarity can be reduced to attributional similarity; therefore our relational mapping algorithm is a complicated solution to an illusory problem. A slightly less skeptical claim is that relational similarity versus attributional similarity is a valid distinction in cognitive psychology, but our relational mapping algorithm does not capture this distinction. To test our hypothesis and refute these skeptical claims, we have created twenty analogical mapping problems, and we will show that LRME handles these problems significantly better than the various attributional mapping algorithms.

7 The Latent Relation Mapping Engine

The Latent Relation Mapping Engine (LRME) seeks the mapping M_r that maximizes the sum of the relational similarities.

$M_r = \arg\max_{M \in P} \sum_{i < j} \mathrm{sim}_r(a_i{:}a_j, M(a_i){:}M(a_j))$   (26)

We search for M_r by exhaustively evaluating all of the possibilities in P. Ties are broken randomly. We use a simplified form of LRA (?) to calculate sim_r.

7.1 Algorithm

Briefly, the idea of LRME is to build a pair-pattern matrix, in which the rows correspond to pairs of terms and the columns correspond to patterns. For example, a row might correspond to the pair of terms sun:solar system and a column might correspond to the pattern "X centered Y", where X and Y mark the positions of the two terms. In these patterns, "*" is a wild card, which can match any single word. The value of an element in the matrix is based on the frequency of the pattern for its column, when X and Y are instantiated by the terms in the pair for its row. For example, if we take the pattern "X centered Y" and instantiate X and Y with the pair sun:solar system, then we have the pattern "sun centered solar system", and thus the value of the element is based on the frequency of "sun centered solar system" in the corpus. The matrix is smoothed with a truncated singular value decomposition (SVD) (?) and the relational similarity between two pairs of terms is given by the cosine of the angle between the two corresponding row vectors in the smoothed matrix.

In more detail, LRME takes as input a set of mapping problems and generates as output a corresponding set of mappings.

$\text{Input} = \langle \langle A_1, B_1 \rangle, \langle A_2, B_2 \rangle, \ldots, \langle A_N, B_N \rangle \rangle$   (27)
$\text{Output} = \langle M_1, M_2, \ldots, M_N \rangle$   (28)

In the following experiments, all twenty mapping problems (Appendix A) are processed in one batch (N = 20).

The first step is to make a list L that contains all pairs of terms in the input. For each mapping problem ⟨A, B⟩ in the input, we add to L all pairs a:b, such that a and b are distinct members of A, and all pairs c:d, such that c and d are distinct members of B. If |A| = |B| = n, then there are n(n−1) pairs from A and n(n−1) pairs from B. (We have n(n−1) pairs here, not n(n−1)/2, because we need the pairs in both orders. We only want to calculate sim_r for one order of the pairs, because i is always less than j in (26); however, to ensure that sim_r is symmetrical, as in (16), we need to make the matrix symmetrical, by having rows in the matrix for both orders of every pair.) A typical pair in L would be sun:solar system. We do not allow duplicates in L; L is a list of pair types, not pair tokens. For our twenty mapping problems, L is a list of 1,694 pairs.
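As an illustration of this step, the following Python sketch (ours, not the paper's Perl code) collects the ordered pairs of distinct terms for one mapping problem; the list L is the union of these pairs over all problems in the batch, with duplicate pair types removed.

def pairs_for_problem(source, target):
    # n(n-1) ordered pairs from the source terms and n(n-1) from the target
    # terms; both orders are kept so that the pair-pattern matrix can later
    # be made symmetrical.
    def ordered_pairs(terms):
        return [(a, b) for a in terms for b in terms if a != b]
    return ordered_pairs(source) + ordered_pairs(target)

Accumulating these pairs over the twenty problems and removing duplicates (e.g., with a set) yields the 1,694 pair types mentioned above.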

For each pair a:b in L, we make a list of the phrases in the corpus that contain the pair. We search in the corpus for all phrases of the following form:

"[0 to 1 words] a [0 to 3 words] b [0 to 1 words]"   (29)

If a:b is in L, then b:a is also in L, so we find phrases with the members of the pairs in both orders. The search template (29) is the same as used by Turney (?).

In the following experiments, we search in a large corpus of English text (about 280 GB of plain text), consisting of web pages gathered by a web crawler. (The corpus was collected by Charles Clarke at the University of Waterloo. We can provide copies of the corpus on request.) To retrieve phrases from the corpus, we use Wumpus (?), an efficient search engine for passage retrieval from large corpora. (Wumpus was developed by Stefan Büttcher and it is available at http://www.wumpus-search.org/.)

With the 1,694 pairs in L, we find a total of 1,996,464 phrases in the corpus, an average of about 1,180 phrases per pair. For the pair sun:solar system, a typical phrase would be "a sun centered solar system illustrates".

Next we make a list Q of patterns, based on the phrases we have found. For each pair a:b in L, for each phrase we found for that pair, we replace a in the phrase with X and we replace b with Y. The remaining words may be either left as they are or replaced with the wild card symbol "*". We then replace a in the phrase with Y and b with X, and again either leave the remaining words as they are or replace them with wild cards. If there are w remaining words in the phrase, after a and b are replaced, then we generate 2^w patterns from the phrase for each of the two assignments of X and Y, and we add these patterns to Q. We only add new patterns to Q; that is, Q is a list of pattern types, not pattern tokens; there are no duplicates in Q.

For example, for the pair sun:solar system, we found the phrase "a sun centered solar system illustrates". When we replace sun with X and solar system with Y, we have "a X centered Y illustrates". There are three remaining words, so we can generate eight patterns, such as "a X centered Y illustrates", "a X * Y illustrates", and "* X centered Y *". Each of these patterns is added to Q. Then we replace sun with Y and solar system with X, yielding "a Y centered X illustrates", which gives us another eight patterns, such as "a Y * X illustrates". Thus the phrase "a sun centered solar system illustrates" generates a total of sixteen patterns, which we add to Q.
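The pattern generation for a single phrase can be sketched as follows (an illustrative Python sketch, not the paper's implementation; for simplicity it assumes that each term occurs exactly once in the phrase and that multi-word terms such as "solar system" have been joined into single tokens).

from itertools import product

def patterns_from_phrase(phrase, term_a, term_b):
    # Replace the pair terms with X and Y (in both assignments) and replace
    # each remaining word with itself or with the wild card "*", giving
    # 2 * 2^w patterns for a phrase with w remaining words.
    words = phrase.split()
    patterns = set()
    for x, y in ((term_a, term_b), (term_b, term_a)):
        template = ["X" if w == x else "Y" if w == y else w for w in words]
        free = [i for i, w in enumerate(template) if w not in ("X", "Y")]
        for choice in product((False, True), repeat=len(free)):
            pattern = list(template)
            for i, use_wildcard in zip(free, choice):
                if use_wildcard:
                    pattern[i] = "*"
            patterns.add(" ".join(pattern))
    return patterns

For example, patterns_from_phrase("a sun centered solar_system illustrates", "sun", "solar_system") yields sixteen patterns, including "a X centered Y illustrates" and "* X * Y *".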

Now we revise L, to make the list of pairs that will correspond to rows in the frequency matrix. We remove any pair from L for which no phrases were found in the corpus, with the terms in either order. We remove such pairs because they would correspond to zero vectors in the matrix. This reduces L from 1,694 pairs to 1,662 pairs. Let n_rows be the number of pairs in L (n_rows = 1,662).

Next we revise Q, to make the list of patterns that will correspond to columns in the frequency matrix. In the following experiments, at this stage, Q contains millions of patterns, too many for efficient processing with a standard desktop computer. We need to reduce Q to a more manageable size. We select the patterns that are shared by the most pairs. Let q be a pattern in Q and let a:b be a pair in L. If there is a phrase for a:b that generates a pattern identical to q, then we say that a:b is one of the pairs that generated q. We sort the patterns in Q in descending order of the number of pairs in L that generated each pattern, and we select the top c · n_rows patterns from this sorted list. Following Turney (?), we set the parameter c to 20; hence Q is reduced to the top 33,240 patterns (20 × 1,662 = 33,240). Let n_cols be the number of patterns in Q (n_cols = 33,240).

Now that the rows and columns are defined, we can build the frequency matrix F. Let a:b be the i-th pair of terms in L (e.g., sun:solar system) and let q be the j-th pattern in Q (e.g., "X centered Y"). We instantiate X and Y in the pattern q with the terms of the pair a:b ("sun centered solar system"). The element f_ij in F is the frequency of this instantiated pattern in the corpus.

Note that we do not need to search the corpus again for the instantiated pattern, in order to find its frequency. In the process of creating each pattern, we can keep track of how many phrases generated the pattern, for each pair. We can get the frequency for f_ij by checking our record of the patterns that were generated by the phrases for the i-th pair.
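A compact sketch of this bookkeeping (illustrative Python, ours; at the scale of the paper the matrix is large and sparse, so a sparse representation would be used instead of a dense array):

from collections import Counter
import numpy as np

def build_frequency_matrix(pair_phrases, top_patterns, patterns_from_phrase):
    # pair_phrases: dict mapping each pair (a, b) to its retrieved phrases.
    # top_patterns: the selected column patterns.
    # patterns_from_phrase: callable returning the set of patterns for one phrase.
    # The frequency of a pattern for a pair is taken to be the number of
    # retrieved phrases for that pair which generate the pattern, so no
    # second pass over the corpus is needed.
    col = {pat: j for j, pat in enumerate(top_patterns)}
    F = np.zeros((len(pair_phrases), len(top_patterns)))
    for i, ((a, b), phrases) in enumerate(pair_phrases.items()):
        counts = Counter()
        for phrase in phrases:
            counts.update(patterns_from_phrase(phrase, a, b))
        for pat, freq in counts.items():
            if pat in col:
                F[i, col[pat]] = freq
    return F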

The next step is to transform the matrix of raw frequencies into a form that enhances the similarity measurement. Turney (?) used the log entropy transformation, as suggested by Landauer and Dumais (?). This is a kind of tf-idf (term frequency times inverse document frequency) transformation, which gives more weight to elements in the matrix that are statistically surprising. However, Bullinaria and Levy (?) recently achieved good results with a new transformation, called PPMIC (Positive Pointwise Mutual Information with Cosine); therefore LRME uses PPMIC. The raw frequencies in F are used to calculate probabilities, from which we can calculate the pointwise mutual information (PMI) of each element in the matrix. Any element with a negative PMI is then set to zero.

$p_{ij} = \frac{f_{ij}}{\sum_{u} \sum_{v} f_{uv}}$   (30)
$p_{i*} = \frac{\sum_{v} f_{iv}}{\sum_{u} \sum_{v} f_{uv}}$   (31)
$p_{*j} = \frac{\sum_{u} f_{uj}}{\sum_{u} \sum_{v} f_{uv}}$   (32)
$\mathrm{pmi}_{ij} = \log \left( \frac{p_{ij}}{p_{i*} \, p_{*j}} \right)$   (33)
$x_{ij} = \begin{cases} \mathrm{pmi}_{ij} & \text{if } \mathrm{pmi}_{ij} > 0 \\ 0 & \text{otherwise} \end{cases}$   (34)

Let a:b be the i-th pair of terms in L (e.g., sun:solar system) and let q be the j-th pattern in Q (e.g., "X centered Y"). In (33), p_ij is the estimated probability of the pattern q instantiated with the pair a:b ("sun centered solar system"), p_i* is the estimated probability of the i-th pair, and p_*j is the estimated probability of the j-th pattern. If the pair and the pattern are statistically independent, then p_ij = p_i* · p_*j (by the definition of independence), and thus pmi_ij is zero (since log(1) = 0). If there is an interesting semantic relation between the terms in the pair, and the pattern captures an aspect of that semantic relation, then we should expect p_ij to be larger than it would be if the pair and the pattern were independent; hence we should find that p_ij > p_i* · p_*j, and thus pmi_ij is positive. (See Hypothesis 2 in Section 2.) On the other hand, terms from completely different domains may avoid each other, in which case we should find that pmi_ij is negative. PPMIC is designed to give a high value to x_ij when the pattern captures an aspect of the semantic relation between the terms in the pair; otherwise, x_ij should have a value of zero, indicating that the pattern tells us nothing about the semantic relation between the terms in the pair. Let X be the resulting matrix, with elements x_ij.
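The transformation in (30)-(34) is straightforward to express with numpy (an illustrative sketch, ours; the "cosine" part of PPMIC appears later, when the smoothed row vectors are compared):

import numpy as np

def ppmic_transform(F):
    # Estimate joint and marginal probabilities from the raw frequencies,
    # take the log ratio (PMI), and keep only the positive values.
    total = F.sum()
    p_ij = F / total
    p_i = F.sum(axis=1, keepdims=True) / total   # row (pair) marginals
    p_j = F.sum(axis=0, keepdims=True) / total   # column (pattern) marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p_ij / (p_i * p_j))
    pmi[~np.isfinite(pmi)] = 0.0                 # zero cells: treat as no information
    return np.maximum(pmi, 0.0)                  # clip negative PMI to zero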

In our experiments, F has a density of 4.6% (the percentage of nonzero elements) and X has a density of 3.8%. The lower density of X is due to elements with a negative PMI, which are transformed to zero by PPMIC.

Now we smooth X by applying a truncated singular value decomposition (SVD) (?). We use SVDLIBC to calculate the SVD of X. (SVDLIBC is the work of Doug Rohde and it is available at http://tedlab.mit.edu/dr/svdlibc/. SVDLIBC is designed for sparse (low density) matrices.) SVD decomposes X into the product of three matrices, X = U Σ V^T, where U and V are in column orthonormal form (i.e., the columns are orthogonal and have unit length, U^T U = V^T V = I) and Σ is a diagonal matrix of singular values (?). If X is of rank r, then Σ is also of rank r. Let Σ_k, where k < r, be the diagonal matrix formed from the top k singular values, and let U_k and V_k be the matrices produced by selecting the corresponding columns from U and V. The matrix U_k Σ_k V_k^T is the matrix of rank k that best approximates the original matrix X, in the sense that it minimizes the approximation errors. That is, U_k Σ_k V_k^T minimizes ||Y − X||_F over all matrices Y of rank k, where ||·||_F denotes the Frobenius norm (?). We may think of this matrix as a smoothed or compressed version of the original matrix X. Following Turney (?), we set the parameter k to 300.

The relational similarity between two pairs in L is the inner product of the two corresponding rows in U_k Σ_k, after the rows have been normalized to unit length. We can simplify calculations by dropping V_k (?). We take the matrix U_k Σ_k and normalize each row to unit length. Let W be the resulting matrix. Now let C be W W^T, a square matrix of size n_rows × n_rows. This matrix contains the cosines of all combinations of two pairs in L.
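The smoothing and cosine steps can be sketched with a dense SVD (illustrative Python, ours; the paper uses SVDLIBC on the sparse matrix, which is far more efficient at this scale):

import numpy as np

def pair_cosine_matrix(X, k=300):
    # Rank-k truncated SVD of the PPMI matrix X, followed by cosines between
    # all pairs of rows of U_k * Sigma_k (V_k is dropped, as described above).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    W = U[:, :k] * s[:k]
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    W = W / np.where(norms == 0, 1.0, norms)     # normalize rows to unit length
    return W @ W.T                               # C: cosines of all pair combinations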

For a mapping problem ⟨A, B⟩ in the input, let a_i:a_j be a pair of terms from A and let M(a_i):M(a_j) be the corresponding pair of terms from B. Suppose that a_i:a_j and M(a_i):M(a_j) are the u-th and v-th pairs in L. Then sim_r(a_i:a_j, M(a_i):M(a_j)) = c_uv, where c_uv is the element in the u-th row and v-th column of C. If either pair is not in L, because no phrases were found for it in the corpus, then we set the similarity to zero. Finally, for each mapping problem in the input, we output the mapping M_r that maximizes the sum of the relational similarities.

$M_r = \arg\max_{M \in P} \sum_{i < j} \mathrm{sim}_r(a_i{:}a_j, M(a_i){:}M(a_j))$   (35)

The simplified form of LRA used here to calculate sim_r differs from the LRA of Turney (?) in several ways. In LRME, there is no use of synonyms to generate alternate forms of the pairs of terms. In LRME, there is no morphological processing of the terms. LRME uses PPMIC (?) to process the raw frequencies, instead of log entropy. Following Turney (?), LRME uses a slightly different search template (29), and LRME sets the number of columns to a multiple of the number of rows (c · n_rows), instead of using a constant. In Section 7.2, we evaluate the impact of two of these changes (PPMIC and the number of columns), but we have not tested the other changes, which were mainly motivated by a desire for increased efficiency and simplicity.

7.2 Experiments

We implemented LRME in Perl, making external calls to Wumpus for searching the corpus and to SVDLIBC for calculating SVD. We used the Perl Net::Telnet package for interprocess communication with Wumpus, the PDL (Perl Data Language) package for matrix manipulations (e.g., calculating cosines), and the List::Permutor package to generate permutations (i.e., to loop through P).

We ran the following experiments on a dual core AMD Opteron 64 computer, running 64 bit Linux. Most of the running time is spent searching the corpus for phrases. It took 16 hours and 27 minutes for Wumpus to fetch the 1,996,464 phrases. The remaining steps took 52 minutes, of which SVD took 10 minutes. The running time could be cut in half by using RAID 0 to speed up disk access.

Table 6 shows the performance of LRME in its baseline configuration. For comparison, the agreement of the 22 volunteers with our intended mapping has been copied from Table 5. The difference between the performance of LRME (91.5%) and the human participants (87.6%) is not statistically significant (paired t-test, 95% confidence level).

Accuracy
Mapping Source Target LRME Humans
A1 solar system atom 100.0 90.9
A2 water flow heat transfer 100.0 86.9
A3 waves sounds 100.0 81.8
A4 combustion respiration 100.0 79.0
A5 sound light 71.4 79.2
A6 projectile planet 100.0 97.4
A7 artificial selection natural selection 71.4 74.7
A8 billiard balls gas molecules 100.0 88.1
A9 computer mind 55.6 84.3
A10 slot machine bacterial mutation 100.0 83.6
M1 war argument 71.4 93.5
M2 buying an item accepting a belief 100.0 96.1
M3 grounds for a building reasons for a theory 100.0 87.9
M4 impediments to travel difficulties 100.0 100.0
M5 money time 100.0 77.3
M6 seeds ideas 100.0 89.0
M7 machine mind 100.0 98.7
M8 object idea 60.0 89.1
M9 following understanding 100.0 96.6
M10 seeing understanding 100.0 78.8
Average 91.5 87.6
Table 6: LRME in its baseline configuration, compared with human performance.

In Table 6, the column labeled Humans is the average of 22 people, whereas the LRME column is only one algorithm (it is not an average). Comparing an average of several scores to an individual score (whether the individual is a human or an algorithm) may give a misleading impression. In the results for any individual person, there are typically several 100% scores and a few scores in the 55-75% range. The average mapping problem has seven terms. It is not possible to have exactly one term mapped incorrectly; if there are any incorrect mappings, then there must be two or more incorrect mappings. This follows from the nature of bijections. Therefore a score of 5/7 (71.4%) is not uncommon.

Table 7 looks at the results from another perspective. The column labeled LRME wrong gives the number of incorrect mappings made by LRME for each of the twenty problems. The five columns labeled Number of people with x wrong show, for various values of x, how many of the 22 people made x incorrect mappings. For the average mapping problem, 15 out of 22 participants had a perfect score (x = 0); of the remaining 7 participants, 5 made only two mistakes (x = 2). Table 7 shows more clearly than Table 6 that LRME’s performance is not significantly different from (individual) human performance. (For yet another perspective, see Section 9.1).

          LRME      Number of people with x wrong
Mapping   wrong   x = 0   x = 1   x = 2   x = 3   x ≥ 4    n
A1 0 16 0 4 2 0 7
A2 0 14 0 5 0 3 8
A3 0 9 0 9 2 2 8
A4 0 9 0 9 0 4 8
A5 2 10 0 7 2 3 7
A6 0 20 0 2 0 0 7
A7 2 8 0 6 6 2 7
A8 0 13 0 8 0 1 8
A9 4 11 0 7 2 2 9
A10 0 13 0 9 0 0 5
M1 2 17 0 5 0 0 7
M2 0 19 0 3 0 0 7
M3 0 14 0 8 0 0 6
M4 0 22 0 0 0 0 7
M5 0 9 0 11 0 2 6
M6 0 15 0 4 3 0 7
M7 0 21 0 1 0 0 7
M8 2 18 0 2 1 1 5
M9 0 19 0 3 0 0 8
M10 0 13 0 3 3 3 6
Average 1 15 0 5 1 1 7
Table 7: Another way of viewing LRME versus human performance.

In Table 8, we examine the sensitivity of LRME to the parameter settings. The first row shows the accuracy of the baseline configuration, as in Table 6. The next eight rows show the impact of varying k, the dimensionality of the truncated singular value decomposition, from 50 to 400. The eight rows after that show the effect of varying c, the column factor, from 5 to 40. The number of columns in the matrix (n_cols) is given by the number of rows (n_rows = 1,662) multiplied by c. The second last row shows the effect of eliminating the singular value decomposition from LRME. This is equivalent to setting k to 1,662, the number of rows in the matrix. The final row gives the result when PPMIC (?) is replaced with log entropy (?). LRME is not sensitive to any of these manipulations: None of the variations in Table 8 perform significantly differently from the baseline configuration (paired t-test, 95% confidence level). (This does not necessarily mean that the manipulations have no effect; rather, it suggests that a larger sample of problems would be needed to show a significant effect.)

Experiment   k   c   n_cols   Accuracy
baseline configuration 300 20 33,240 91.5
varying k 50 20 33,240 89.3
100 20 33,240 92.8
150 20 33,240 91.3
200 20 33,240 92.6
250 20 33,240 90.6
300 20 33,240 91.5
350 20 33,240 90.6
400 20 33,240 90.6
varying c 300 5 8,310 86.9
300 10 16,620 94.0
300 15 24,930 94.0
300 20 33,240 91.5
300 25 41,550 90.1
300 30 49,860 90.6
300 35 58,170 89.5
300 40 66,480 91.7
dropping SVD 1662 20 33,240 89.7
log entropy 300 20 33,240 83.9
Table 8: Exploring the sensitivity of LRME to various parameter settings and modifications.

8 Attribute Mapping Approaches

In this section, we explore a variety of attribute mapping approaches for the twenty mapping problems. All of these approaches seek the mapping M_a that maximizes the sum of the attributional similarities.

$M_a = \arg\max_{M \in P} \sum_{i=1}^{n} \mathrm{sim}_a(a_i, M(a_i))$   (36)

We search for M_a by exhaustively evaluating all of the possibilities in P. Ties are broken randomly. We use a variety of different algorithms to calculate sim_a.

8.1 Algorithms

In the following experiments, we test five lexicon-based attributional similarity measures that use WordNet (WordNet was developed by a team at Princeton and it is available at http://wordnet.princeton.edu/): HSO (?), JC (?), LC (?), LIN (?), and RES (?). All five are implemented in the Perl package WordNet::Similarity (Ted Pedersen's WordNet::Similarity package is at http://www.d.umn.edu/tpederse/similarity.html), which builds on the WordNet::QueryData package (Jason Rennie's WordNet::QueryData package is at http://people.csail.mit.edu/jrennie/WordNet/). The core idea behind them is to treat WordNet as a graph and measure the semantic distance between two terms by the length of the shortest path between them in the graph. Similarity increases as distance decreases.

HSO works with nouns, verbs, adjectives, and adverbs, but JC, LC, LIN, and RES only work with nouns and verbs. We used WordNet::Similarity to try all possible parts of speech and all possible senses for each input word. Many adjectives, such as true and valuable, also have noun and verb senses in WordNet, so JC, LC, LIN, and RES are still able to calculate similarity for them. When the raw form of a word is not found in WordNet, WordNet::Similarity searches for morphological variations of the word. When there are multiple similarity scores, for multiple parts of speech and multiple senses, we select the highest similarity score. When there is no similarity score, because a word is not in WordNet, or because JC, LC, LIN, or RES could not find an alternative noun or verb form for an adjective or adverb, we set the score to zero.

We also evaluate two corpus-based attributional similarity measures: PMI-IR (?) and LSA (?). The core idea behind them is that “a word is characterized by the company it keeps” (?). The similarity of two terms is measured by the similarity of their statistical distributions in a corpus. We used the corpus of Section 7 along with Wumpus to implement PMI-IR (Pointwise Mutual Information with Information Retrieval). For LSA (Latent Semantic Analysis), we used the online demonstration. (The online demonstration of LSA is the work of a team at the University of Colorado at Boulder. It is available at http://lsa.colorado.edu/.) We selected the Matrix Comparison option with the General Reading up to 1st year college (300 factors) topic space and the term-to-term comparison type. PMI-IR and LSA work with all parts of speech.

Our eighth similarity measure is based on the observation that our intended mappings map terms that have the same part of speech (see Appendix A). Let pos(a) be the part-of-speech tag assigned to the term a. We use part-of-speech tags to define a measure of attributional similarity, sim_pos, as follows.

$\mathrm{sim}_{pos}(a, b) = \begin{cases} 100 & \text{if } pos(a) = pos(b) \\ 0 & \text{otherwise} \end{cases}$   (37)

We hand-labeled the terms in the mapping problems with part-of-speech tags (?). Automatic taggers assume that the words that are to be tagged are embedded in a sentence, but the terms in our mapping problems are not in sentences, so their tags are ambiguous. We used our knowledge of the intended mappings to manually disambiguate the part-of-speech tags for the terms, thus guaranteeing that corresponding terms in the intended mapping always have the same tags.

For each of the first seven attributional similarity measures above, we created seven more similarity measures by combining them with sim_pos. For example, let sim_hso be the Hirst and St-Onge (?) similarity measure. We combine sim_hso and sim_pos by simply adding them.

$\mathrm{sim}_{hso+pos}(a, b) = \mathrm{sim}_{hso}(a, b) + \mathrm{sim}_{pos}(a, b)$   (38)

The values returned by sim_pos range from 0 to 100, whereas the values returned by the other similarity measures are much smaller. We chose large values in (37) so that getting POS tags to match up has more weight than any of the other similarity measures. The manual POS tags and the high weight of sim_pos give an unfair advantage to the attributional mapping approach, but the relational mapping approach can afford to be generous.
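As a concrete illustration, the combined measure can be sketched as follows (a minimal Python sketch; the two-valued scoring in sim_pos is our reading of (37), and the base measure is passed in as a function).

def sim_pos(a, b, pos_tags, match_value=100.0):
    # A large constant when the hand-labeled tags match, zero otherwise
    # (our reading of eq. 37).
    return match_value if pos_tags[a] == pos_tags[b] else 0.0

def combined_similarity(a, b, base_sim, pos_tags):
    # Eq. (38): add a WordNet or corpus-based measure to the POS bonus, so
    # that matching POS tags outweighs the much smaller base similarity.
    return base_sim(a, b) + sim_pos(a, b, pos_tags)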

8.2 Experiments

Table 9 presents the accuracy of the various measures of attributional similarity. The best result without POS labels is 55.9% (HSO). The best result with POS labels is 76.8% (LIN+POS). The 91.5% accuracy of LRME (see Table 6) is significantly higher than the 76.8% accuracy of LIN+POS (and thus, of course, significantly higher than everything else in Table 9; paired t-test, 95% confidence level). The average human performance of 87.6% (see Table 5) is also significantly higher than the 76.8% accuracy of LIN+POS (paired t-test, 95% confidence level). In summary, humans and LRME perform significantly better than all of the variations of attributional mapping approaches that were tested.

Algorithm Reference Accuracy
HSO Hirst and St-Onge (?) 55.9
JC Jiang and Conrath (?) 54.7
LC Leacock and Chodorow (?) 48.5
LIN Lin (?) 48.2
RES Resnik (?) 43.8
PMI-IR Turney (?) 54.4
LSA Landauer and Dumais (?) 39.6
POS (hand-labeled) Santorini (?) 44.8
HSO+POS Hirst and St-Onge (?) 71.1
JC+POS Jiang and Conrath (?) 73.6
LC+POS Leacock and Chodorow (?) 69.5
LIN+POS Lin (?) 76.8
RES+POS Resnik (?) 71.6
PMI-IR+POS Turney (?) 72.8
LSA+POS Landauer and Dumais (?) 65.8
Table 9: The accuracy of attribute mapping approaches for a wide variety of measures of attributional similarity.

9 Discussion

In this section, we examine three questions that are suggested by the preceding results. Is there a difference between the science analogy problems and the common metaphor problems? Is there an advantage to combining the relational and attributional mapping approaches? What is the advantage of the relational mapping approach over the attributional mapping approach?

9.1 Science Analogies versus Common Metaphors

Table 5 suggests that science analogies may be more difficult than common metaphors. This is supported by Table 10, which shows how the agreement of the 22 participants with our intended mapping (see Section 6) varies between the science problems and the metaphor problems. The science problems have a lower average performance and greater variation in performance. The difference between the science problems and the metaphor problems is statistically significant (paired t-test, 95% confidence level).

Average Accuracy
Participant All 20 10 Science 10 Metaphor
1 72.6 59.9 85.4
2 88.2 85.9 90.5
3 90.0 86.3 93.8
4 71.8 56.4 87.1
5 95.7 94.2 97.1
6 83.4 83.9 82.9
7 79.6 73.6 85.7
8 91.9 95.0 88.8
9 89.7 90.0 89.3
10 80.7 81.4 80.0
11 94.5 95.7 93.3
12 90.6 87.4 93.8
13 93.2 89.6 96.7
14 97.1 94.3 100.0
15 86.6 88.5 84.8
16 80.5 80.2 80.7
17 93.3 89.9 96.7
18 86.5 78.9 94.2
19 92.9 96.0 89.8
20 90.4 84.1 96.7
21 82.7 74.9 90.5
22 96.2 94.9 97.5
Average 87.6 84.6 90.7
Standard deviation 7.2 10.8 5.8
Table 10: A comparison of the difficulty of the science problems versus the metaphor problems for the 22 participants. The numbers in bold font are the scores that are above the scores of LRME.

The average science problem has more terms (7.4) than the average metaphor problem (6.6), which might contribute to the difficulty of the science problems. However, Table 11 shows that there is no clear relation between the number of terms in a problem ( in Table 5) and the level of agreement. We believe that people find the metaphor problems easier than the science problems because these common metaphors are entrenched in our language, whereas the science analogies are more peripheral.

Num terms Agreement
5 86.4
6 81.3
7 91.1
8 86.5
9 84.3
Table 11: The average agreement among the 22 participants as a function of the number of terms in the problems.

Table 12 shows that the 16 algorithms studied here perform slightly worse on the science problems than on the metaphor problems, but the difference is not statistically significant (paired t-test, 95% confidence level). We hypothesize that the attributional mapping approaches are not performing well enough to be sensitive to subtle differences between science analogies and common metaphors.

Average Accuracy
Algorithm All 20 10 Science 10 Metaphor
LRME 91.5 89.8 93.1
HSO 55.9 57.4 54.3
JC 54.7 57.4 52.1
LC 48.5 49.6 47.5
LIN 48.2 46.7 49.7
RES 43.8 39.0 48.6
PMI-IR 54.4 49.5 59.2
LSA 39.6 37.3 41.9
POS 44.8 42.1 47.4
HSO+POS 71.1 66.9 75.2
JC+POS 73.6 78.1 69.2
LC+POS 69.5 70.8 68.2
LIN+POS 76.8 68.8 84.8
RES+POS 71.6 70.3 72.9
PMI-IR+POS 72.8 65.7 79.9
LSA+POS 65.8 69.1 62.4
Average 61.4 59.9 62.9
Standard deviation 14.7 15.0 15.3
Table 12: A comparison of the difficulty of the science problems versus the metaphor problems for the 16 algorithms.

Incidentally, these tables give us another view of the performance of LRME in comparison to human performance. The first row in Table 12 shows the performance of LRME on the science and metaphor problems. In Table 10, we have marked in bold font the cases where human scores are greater than LRME’s scores. For all 20 problems, there are 8 such cases; for the 10 science problems, there are 8 such cases; for the 10 metaphor problems, there are 10 such cases. This is further evidence that LRME’s performance is not significantly different from human performance. LRME is near the middle of the range of performance of the 22 human participants.

9.2 Hybrid Relational-Attributional Approaches

Recall the definitions of score_r and score_a given in Section 3.

$\mathrm{score}_r(M) = \sum_{i < j} \mathrm{sim}_r(a_i{:}a_j, M(a_i){:}M(a_j))$   (39)
$\mathrm{score}_a(M) = \sum_{i=1}^{n} \mathrm{sim}_a(a_i, M(a_i))$   (40)

We can combine the scores by simply adding them or multiplying them, but score_r and score_a may be quite different in the scales and distributions of their values; therefore we first normalize them to probabilities.

$p_r(M) = \frac{\mathrm{score}_r(M)}{\sum_{M' \in P} \mathrm{score}_r(M')}$   (41)
$p_a(M) = \frac{\mathrm{score}_a(M)}{\sum_{M' \in P} \mathrm{score}_a(M')}$   (42)

For these probability estimates, we assume that score_r(M) ≥ 0 and score_a(M) ≥ 0 for all M. If necessary, a constant value may be added to the scores, to ensure that they are not negative. Now we can combine the scores by adding or multiplying the probabilities.

$M_{r+a} = \arg\max_{M \in P} \left( p_r(M) + p_a(M) \right)$   (43)
$M_{r \times a} = \arg\max_{M \in P} \left( p_r(M) \cdot p_a(M) \right)$   (44)
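A minimal sketch of the combination (illustrative Python, ours; the score functions are assumed to be supplied, to take a tuple of target terms aligned with the source terms, and to return non-negative values that do not all sum to zero):

from itertools import permutations

def best_hybrid_mapping(source, target, score_r, score_a, combine="add"):
    # Normalize the relational and attributional scores of every candidate
    # mapping to probabilities (eqs. 41-42) and pick the mapping with the
    # best combined value (eqs. 43-44).
    maps = list(permutations(target))
    r = [score_r(m) for m in maps]
    a = [score_a(m) for m in maps]
    total_r, total_a = sum(r), sum(a)
    p_r = [x / total_r for x in r]
    p_a = [x / total_a for x in a]
    if combine == "add":
        combined = [x + y for x, y in zip(p_r, p_a)]
    else:  # "multiply"
        combined = [x * y for x, y in zip(p_r, p_a)]
    best = max(range(len(maps)), key=combined.__getitem__)
    return dict(zip(source, maps[best]))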

Table 13 shows the accuracy when LRME is combined with LIN+POS (the best attributional mapping algorithm in Table 9, with an accuracy of 76.8%) or with HSO (the best attributional mapping algorithm that does not use the manual POS tags, with an accuracy of 55.9%). We try both adding and multiplying probabilities. On its own, LRME has an accuracy of 91.5%. Combining LRME with LIN+POS increases the accuracy to 94.0%, but this improvement is not statistically significant (paired t-test, 95% confidence level). Combining LRME with HSO results in a decrease in accuracy. The decrease is not significant when the probabilities are multiplied (85.4%), but it is significant when the probabilities are added (78.5%).

Components
Relational Attributional Combination Accuracy
LRME LIN+POS add probabilities 94.0
LRME LIN+POS multiply probabilities 94.0
LRME HSO add probabilities 78.5
LRME HSO multiply probabilities 85.4
Table 13: The performance of four different hybrids of relational and attributional mapping approaches.

In summary, the experiments show no significant advantage to combining LRME with attributional mapping. However, it is possible that a larger sample of problems would show a significant advantage. Also, the combination methods we explored (addition and multiplication of probabilities) are elementary. A more sophisticated approach, such as a weighted combination, may perform better.

9.3 Coherent Relations

We hypothesize that LRME benefits from a kind of coherence among the relations. On the other hand, attributional mapping approaches do not involve this kind of coherence.

Suppose we swap two of the terms in a mapping. Let M_1 be the original mapping and let M_2 be the new mapping, where M_2(a_i) = M_1(a_j), M_2(a_j) = M_1(a_i), and M_2(a_k) = M_1(a_k) for all k ≠ i, j. With attributional similarity, the impact of this swap on the score of the mapping is limited. Part of the score is not affected.

$\mathrm{score}_a(M_1) = \sum_{k \neq i,j} \mathrm{sim}_a(a_k, M_1(a_k)) + \mathrm{sim}_a(a_i, M_1(a_i)) + \mathrm{sim}_a(a_j, M_1(a_j))$   (45)
$\mathrm{score}_a(M_2) = \sum_{k \neq i,j} \mathrm{sim}_a(a_k, M_1(a_k)) + \mathrm{sim}_a(a_i, M_2(a_i)) + \mathrm{sim}_a(a_j, M_2(a_j))$   (46)

On the other hand, with relational similarity, the impact of a swap is not limited in this way. A change to any part of the mapping affects the whole score. There is a kind of global coherence to relational similarity that is lacking in attributional similarity.

Testing the hypothesis that LRME benefits from coherence is somewhat complicated, because we need to design the experiment so that the coherence effect is isolated from any other effects. To do this, we move some of the terms outside of the accuracy calculation.

Let ⟨A, B⟩ be one of our twenty mapping problems, where M is our intended mapping and M(A) = B. Let A_s be a randomly selected subset of A of size s. Let B_s be M(A_s), the subset of B to which M maps A_s.

$A_s \subseteq A$   (47)
$|A_s| = s$   (48)
$B_s = M(A_s) = \{ M(a) \mid a \in A_s \}$   (49)
$B_s \subseteq B$   (50)
$|B_s| = s$   (51)

There are two ways that we might use LRME to generate a mapping for this new reduced mapping problem, internal coherence and total coherence.

  1. Internal coherence: We can select a mapping M_s from A_s to B_s based on A_s and B_s alone.

    $P_s = \{ M_s \mid M_s : A_s \rightarrow B_s \text{ is bijective} \}$   (52)
    $\mathrm{score}_r(M_s) = \sum_{a_i, a_j \in A_s,\ i < j} \mathrm{sim}_r(a_i{:}a_j, M_s(a_i){:}M_s(a_j))$   (53)
    $M_{\mathrm{int}} = \arg\max_{M_s \in P_s} \mathrm{score}_r(M_s)$   (54)

    In this case, the mapping M_int is chosen based only on the relations that are internal to A_s and B_s.

  2. Total coherence: We can select a mapping based on all of A and B and the knowledge that the mapping must satisfy the constraint that A_s maps onto B_s. (This knowledge is also embedded in internal coherence.)

    $P_t = \{ M \mid M : A \rightarrow B \text{ is bijective and } M(A_s) = B_s \}$   (55)
    $M_{\mathrm{tot}} = \arg\max_{M \in P_t} \mathrm{score}_r(M)$   (56)
    $M_s(a) = M_{\mathrm{tot}}(a) \text{ for all } a \in A_s$   (57)