Aligning Intraobserver Agreement by Transitivity

Abstract

Annotation reproducibility and accuracy rely on good consistency within annotators. We propose a novel method for measuring within-annotator consistency, or annotator Intraobserver Agreement (IA). The proposed approach is based on transitivity, a property that has been thoroughly studied in the context of rational decision-making. In contrast with the commonly used test-retest strategy for measuring annotator IA, the transitivity measure is less sensitive to the several types of bias introduced by the test-retest strategy. We present a representation theorem to the effect that relative judgement data that meet transitivity can be mapped to a scale (in the sense of measurement theory). We also discuss a further application of transitivity as part of data collection design, addressing the problem of the quadratic complexity of collecting relative judgements.

1 Introduction

Annotation reliability plays a pivotal role in data reliability. Krippendorff, in his prominent book (Krippendorff, 1980), delineates three types of reliability: stability, reproducibility and accuracy. Although IA is, strictly speaking, a measure of stability, it plays an essential role in reproducibility, which is measured by Inter-Annotator Agreement (IAA), and in accuracy, which is measured by calculating deviations from a given standard. Both reproducibility and accuracy are negatively affected by a low IA.

The standard method for calculating IA is the test-retest strategy, which is based on resubmitting some items to the annotators after some time has elapsed. That is, an annotator has to re-assess the same items at a later point, and the comparison of the two annotations of the same items provides a measure of annotator consistency. The test-retest strategy measures the consistency of each annotator with themselves over time, and it has the drawback of being costly in both time and money. Furthermore, as suggested by Krippendorff (1980, p. 215), the test-retest strategy can increase various types of bias which are not strictly related to the annotation task, such as carelessness, openness to distractions, or the tendency to relax performance standards when tired, all of which are amplified by the increase in annotation time.

In this paper we introduce a new measure of IA based on the concept of transitivity. Such a measure can be used in the case of relative annotations but not in the case of absolute annotations. We recall that, in absolute annotations, human annotators are asked to annotate an item on some given ranking or Likert scale. For example, in the case of NLG evaluation, this could involve measuring the grammaticality of a sentence by rating it with a number between 1 and 5. In relative annotations, the human annotators are asked to assess a preference between the subjects under analysis based on some criterion. For example, in the case of NLG evaluation, this could involve choosing between two sentences based on a grammatical preference judgement. Absolute annotations have the advantage of supporting a more fine-grained analysis, for example by using numeric scales, which is not immediately accessible to relative annotations. However, relative annotations, besides having the convenience of being more intuitive and quicker than absolute annotations (Carterette et al., 2008), have some features which make them very attractive. Firstly, relative annotations allow us to attain a higher IAA than absolute annotations do (Jekaterina et al., 2018; Belz and Kow, 2010). Secondly, in the case of NLG evaluation, absolute annotations, although designed to assess the quality of a single system, are often used to compare quality across systems. As we show in Section 5, absolute annotations can be obtained, under some conditions, from relative annotations in a constructive way.

This work is work in progress, and it presents the theoretical construction of our paradigm. Future developments are presented in Section 7.

2 Related work

In this paper, we suggest considering a new kind of annotator consistency for annotations based on preference choices, as is the case, for instance, in relative judgement evaluations. We propose to use the property of transitivity as a measure of annotation stability. To the best of our knowledge this is an original contribution. However, several papers inspired our proposal.

The idea of transitivity as a measure of consistency is not new and can be found for example in Siegel and Castellan (1988).

The classical concept of rationality as defined in Decision Theory (see, for example, Luce and Raiffa (1957)) takes the property of transitivity as basic to the concept of rational preferences: following classical Decision Theory, rational preferences have to be transitive. Because transitivity is ordinal in nature, our proposal is introduced with regard to relative annotation methodologies. Some advantages of using relative annotations instead of absolute annotations are presented, for example, by Carterette et al. (2008), Belz and Kow (2010) and Jekaterina et al. (2018). As shown in Carterette et al. (2008) and Belz and Kow (2010), such methodologies are more intuitive and quicker than absolute annotations. Furthermore, as shown in Jekaterina et al. (2018) and Belz and Kow (2010), relative annotations reach a higher IAA than absolute annotations. Recently, preference annotations have been investigated as an alternative to absolute annotations in the areas of machine translation and information retrieval: see, for example, Vilar et al. (2007), Rorvig (1990), Carterette et al. (2008), Song et al. (2011) and Bashir et al. (2013).

Finally, taking inspiration from Measurement Theory (Roberts, 1985), we show how to infer absolute annotations from relative annotations. A similar result was introduced by Rorvig (1990) using the concept of Simple Scalability. Whereas Simple Scalability uses the properties of transitivity, substitutability and independence, the representation theorem we present uses transitivity and completeness.

3 Using transitivity to measure IA

The test-retest strategy is aimed at determining what we can refer to as logical consistency: if an annotator prefers subject a over subject b, then (s)he is not consistent if at the same time (s)he prefers subject b over subject a. From a more general point of view, we can state that an annotator is inconsistent if his(her) claims are not compatible with each other, where this incompatibility can also be of a different nature than the logical one. For example, given the subjects a, b and c, the following claims are incompatible with each other:

  1. a is preferred to b;

  2. b is preferred to c;

  3. c is preferred to a.

The property in question is known as transitivity. Given the subjects a, b and c, transitivity states that if a is preferred to b, and b is preferred to c, then a is preferred to c. In classical decision theory (see, for example, Luce and Raiffa (1957)), transitivity plays a pivotal role in defining the concept of rational preference under the normative interpretation: following this interpretation, a rational agent should make judgements that are transitive. Although transitivity has raised several discussions about its adequacy as a property of rationality (see, for example, Fishburn (1982) and Regenwetter et al. (2011)), we believe that it can safely be used as a measure of the annotators’ internal consistency.

It is important to note that, in the annotation tasks we are interested in in this paper, annotators have to assess items based on the same criteria. For instance, suppose that the annotation consists of relative annotations about the sentences a, b and c. It could happen that an annotator prefers sentence a to sentence b and sentence b to sentence c based on the sentences’ grammaticality, and at the same time sentence c to a based on the sentences’ fluency. However, in this case, the annotator is assessing the quality of the three sentences based on different criteria, which compromises the object of the specific evaluation, for example, evaluating which system generated grammatically better sentences. Note that, because our use of transitivity assumes that preferences are expressed using fixed criteria, our measure is less appropriate where the annotation criteria are very general (e.g. overall quality) and prone to unstable interpretation.

As the above example highlights, the property of transitivity can be used to assess the IA. Indeed, let us suppose that we have a non-transitive preference in an evaluation about grammaticality: an annotator declares that a is more grammatical than b, and b is more grammatical than c, but c is more grammatical than a. In this case we cannot reach consistent conclusions about the grammaticality of the sentences a, b and c. If there are many inconsistencies in the annotator’s preferences, this weakens the basis for comparing systems based on those preferences.
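This check can be sketched in a few lines of code. The sketch below is our own illustration, not the authors’ implementation; the encoding of judgements as a dictionary and the function name are assumptions. A triplet’s pairwise judgements are transitive exactly when no cycle like the one above occurs:

```python
from itertools import permutations

def is_transitive(judgements):
    """judgements maps each ordered pair (x, y) of a triplet to True
    when the annotator judged x at least as good as y."""
    items = {p[0] for p in judgements} | {p[1] for p in judgements}
    # For every ordered triple, x >= y and y >= z must imply x >= z.
    return all(
        not (judgements[(x, y)] and judgements[(y, z)]) or judgements[(x, z)]
        for x, y, z in permutations(items, 3)
    )

# The intransitive grammaticality example: a over b, b over c, but c over a.
cycle = {("a", "b"): True, ("b", "a"): False,
         ("b", "c"): True, ("c", "b"): False,
         ("c", "a"): True, ("a", "c"): False}
print(is_transitive(cycle))  # False
```

The same function returns True for any cycle-free assignment, so it can be applied directly to each annotated triplet.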

Differently from the test-retest strategy, transitivity is measured within a single test/annotation scenario, which reduces the cost of the annotation. It can also reduce subjective bias linked to the time elapsed, for instance the tendency to relax performance standards when tired. This is especially true compared with a procedure that resubmits items during the same annotation.1 Such a procedure uses part of the annotator’s time to judge again items already annotated. When using transitivity, we can instead measure the IA directly by taking triplets of subjects for which annotators have given pairwise preferences during the annotation. In this case we can save the time and money usually involved in the second annotation.

Additionally, the property of transitivity can play an important role in the evaluation of NLG systems. As we show in Section 5, transitivity can be used, alongside the property of completeness,2 to obtain absolute annotations from relative annotations.

3.1 How to calculate the IA with transitivity

Preference annotation can be strict or weak. In strict preference annotation, given two subjects a and b, the annotator is requested to express one and only one of the following preferences: either a is preferred to b or b is preferred to a. In the case of weak preference, the two alternatives can be chosen together, that is, a is preferred to b and b is preferred to a; in this case the annotator expresses an equal preference between the two subjects. In this section we consider the case of weak preference, which gives more freedom to the annotators: in the case of strict preference, the annotators are forced to give a preference even when they do not have a clear preference between two subjects. In order to measure transitivity, all the judgements are performed as pairwise preferences within triplets of subjects a, b and c, that is, over the pairs (a, b), (b, c) and (a, c).

The standard methods used for measuring IAA and IA can be used in our case. Since the pivotal work of Carletta (1996), kappa coefficients have been used for measuring annotation agreement. Using the more general formulation, as given by Carletta (1996), the kappa coefficient can be expressed as follows:

κ = (P(A) − P(E)) / (1 − P(E))

where P(A) is the proportion of times the annotators agree, whereas P(E) is the proportion of times the annotators would be expected to agree by chance. In our case, the IA is calculated for each annotator based on each triplet of items between which (s)he has to express his(her) preferences. Consequently, P(A) is the proportion of times that the annotator is transitive, and P(E) is the proportion of times that (s)he would be transitive by chance, that is, 13/27. This number is calculated by counting the proportion of preference combinations that are transitive versus not transitive. In a preference annotation task, given two subjects a and b, an annotator can express three preferences: a is preferred to b (in symbols, a ≻ b), b is preferred to a (in symbols, b ≻ a), or a and b are equally preferred (in symbols, a ∼ b). For the sake of simplicity, let us merge the symbols ≻ and ∼ into the symbol ⪰. Given three subjects a, b and c, there are eight possible preference judgements.

  1. a ⪰ b and b ⪰ c and a ⪰ c.

  2. a ⪰ b and c ⪰ b and a ⪰ c.

  3. a ⪰ b and b ⪰ c and c ⪰ a.

  4. b ⪰ a and b ⪰ c and a ⪰ c.

  5. b ⪰ a and b ⪰ c and c ⪰ a.

  6. b ⪰ a and c ⪰ b and a ⪰ c.

  7. a ⪰ b and c ⪰ b and c ⪰ a.

  8. b ⪰ a and c ⪰ b and c ⪰ a.

We note that, in each of the combinations above, the symbol ⪰ has two possible meanings: it can be interpreted either as ≻ or as ∼. Consequently, each of the eight points above can be split into 8 different combinations of ≻ and ∼, which means that we have 64 possible combinations in total. Among these there are 37 repetitions: for example, from both point 1 and point 2 we get the combination a ≻ b and b ∼ c and a ≻ c. This leaves 27 distinct combinations, of which 13 are transitive. That is, 13/27 is the probability that an annotator is transitive by chance.

We note that, in the case of strict preference, the value of P(E) has to be taken as 6/8. Indeed, by using the same methodology we can see that there is a total of 6 transitive assessments out of 8 possibilities: if we only use the symbol ≻, only points 3 and 6 above are not transitive. This high chance probability makes our measure less suitable for the case of strict preferences. We note that this can explain the conclusion reached by Hui and Berberich (2017), where strict preference is considered transitive across annotators, whereas weak preference is considered not transitive.
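The counts 13/27 and 6/8 can be verified by brute-force enumeration. The sketch below (our own illustration; all names are assumptions) assigns ≻, ≺ or ∼ to each of the three pairs of a triplet and counts the transitive assignments:

```python
from itertools import product, permutations

ITEMS = ("a", "b", "c")
PAIRS = [("a", "b"), ("b", "c"), ("a", "c")]

def transitive(rel):
    # rel maps each pair in PAIRS to '>', '<' or '='.
    def geq(x, y):  # x weakly preferred to y
        if x == y:
            return True
        if (x, y) in rel:
            return rel[(x, y)] in (">", "=")
        return rel[(y, x)] in ("<", "=")
    return all(not (geq(x, y) and geq(y, z)) or geq(x, z)
               for x, y, z in permutations(ITEMS))

def count_transitive(symbols):
    combos = [dict(zip(PAIRS, c)) for c in product(symbols, repeat=3)]
    return sum(map(transitive, combos)), len(combos)

print(count_transitive((">", "<", "=")))  # weak preference: (13, 27)
print(count_transitive((">", "<")))       # strict preference: (6, 8)
```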

Let us give an example of how to calculate the IA. Recall that, in order to measure transitivity, all the judgements are performed as pairwise preferences within triplets of subjects. Suppose the annotation is performed by three annotators A1, A2 and A3 on three triplets of items t1, t2 and t3. This means that each annotator has to give a preference between each pair of subjects in the triplets t1, t2 and t3. Table 1 reports the artificial annotations.

Annotator   t1   t2   t3
A1          T    T    T
A2          T    NT   T
A3          NT   T    NT

Table 1: Annotators’ preference assessments. T means transitive preferences and NT means non-transitive preferences.

The annotators’ IA values are reported in the last column of Table 2.

Annotator   P(A)   P(E)   κ
A1          3/3    0.48   1
A2          2/3    0.48   0.36
A3          1/3    0.48   -0.29

Table 2: Annotators’ IA. Recall that κ values range from 1 (perfect agreement) down to -1.
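The IA values in Table 2 can be reproduced with a short script. This is a sketch under our own naming assumptions (the annotator labels and function names are ours), using P(E) = 13/27 for weak preferences:

```python
# Chance probability of a transitive triplet under weak preferences.
P_E = 13 / 27

def kappa(p_a, p_e=P_E):
    """Kappa coefficient: (P(A) - P(E)) / (1 - P(E))."""
    return (p_a - p_e) / (1 - p_e)

# Transitive triplets out of three, per annotator.
for name, n_transitive in (("A1", 3), ("A2", 2), ("A3", 1)):
    print(name, round(kappa(n_transitive / 3), 2))
```

An annotator who is transitive on every triplet gets κ = 1; one who is transitive less often than chance gets a negative value.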

4 The use of transitivity in experimental design

The transitivity property can also be considered from a normative point of view, as in Decision Theory. This allows us to think of it as a condition that guarantees a specific notion of annotator consistency as stability. In this case transitivity is not used to check the annotators’ internal consistency, but rather it is assumed. Of course, we may only want to make such an assumption based on evidence. For instance, we can first test annotators’ consistency on a sample of our dataset. Once we have established consistency on the sample (or eliminated inconsistent annotators), we can then work with the assumption of consistency for the remaining data; this practice is also common in the application of IAA.

If we assume consistency, we can drastically reduce the number of annotations we request from annotators. As an example, we can imagine an interactive piece of software which removes from the set of pairs any pair whose order can be deduced from the assessments made before. For instance, coming back to the example with the three sentences a, b and c, let us suppose that an annotator prefers sentence a over sentence b and sentence b over sentence c. Then the software does not present the pair of sentences a and c, and instead infers from the previous annotations that the annotator prefers sentence a over sentence c. This annotation task design, besides guaranteeing the transitivity of the evaluation, reduces the problem of the quadratic explosion of the possible alternatives, which is present in relative annotation evaluation (Carterette et al., 2008). This has the advantage of reducing the time required to complete the annotation.
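A minimal sketch of such an interactive design follows (our own illustration; `ask` stands in for the hypothetical query shown to the annotator). It keeps a set of strict preferences and skips any pair already decided by their transitive closure:

```python
def transitive_closure(prefs):
    """Close a set of (winner, loser) pairs under transitivity."""
    closure = set(prefs)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(closure):
            for (y2, z) in list(closure):
                if y == y2 and x != z and (x, z) not in closure:
                    closure.add((x, z))
                    changed = True
    return closure

def collect(pairs, ask):
    """Query ask(x, y) (returns the preferred item) only for pairs
    whose order is not already implied by earlier answers."""
    prefs, queries = set(), 0
    for x, y in pairs:
        closure = transitive_closure(prefs)
        if (x, y) in closure or (y, x) in closure:
            continue  # order deducible: do not query the annotator
        queries += 1
        winner = ask(x, y)
        prefs.add((winner, y if winner == x else x))
    return transitive_closure(prefs), queries

# Simulated annotator who prefers the alphabetically earlier sentence:
prefs, queries = collect([("a", "b"), ("b", "c"), ("a", "c")], min)
print(queries)  # 2 -- the pair (a, c) was inferred, not asked
```

With the three pairs of a triplet, one of the three queries is saved whenever the first two answers form a chain; over larger item sets the saving grows accordingly.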

5 From relative to absolute human annotations

Inspired by Measurement Theory (Roberts, 1985), in this section, we show how to extract absolute annotations from relative annotations.

Let us begin by giving a definition.

Definition 1

Let X be a set. A binary relation R on X is:

  • Strongly complete if, for each a, b ∈ X such that a ≠ b, a R b or b R a;

  • Transitive if a R b and b R c imply a R c, for each a, b, c ∈ X.

An example of a strongly complete relation is the strict order (<) between natural numbers.

It is commonly believed that it is possible to measure some entity if we can associate, in a standard fashion, a number with it. More generally, in order to measure a set of entities, we would like to have a function which associates a real number with each of them. For example, given a set of people, suppose we are interested in measuring their weight. In this case we would like to have a tool, for instance a weighing scale, which associates a number with each person in the set; such a number will be interpreted as the weight of that person. Suppose now that, instead of associating a number with each person, we measure their weight by ordering them from lightest to heaviest. Both approaches are informative about the weight of the people. The questions that arise are: are these two approaches correlated, and if so, how? A goal of Measurement Theory is to explain under which conditions it is possible to answer these questions. The general idea is to find properties (technically called axioms) satisfied by the order, such that it is possible to define a numerical function which mirrors the order. The proof that sanctions this result is called a representation theorem. Theorem 1 is an example of a representation theorem.

Theorem 1

Let X be a finite set and R a binary relation on X. There exists a real-valued function u on X which, for each a, b ∈ X, satisfies

a R b if and only if u(a) > u(b)

if and only if R is transitive and strongly complete.

Although we do not report the proof, which can be found in (Roberts, 1985, p. 107), we present the construction of u. The function u is defined as follows:

u(a) = the number of b in X such that a R b.

In words, u(a) counts how many elements of X the element a is preferred to under R. An example can illustrate this definition. Let X be the set {a, b} and let R be the relation {(a, b)}; then u is defined by:

  • u(a) = 1;

  • u(b) = 0.

The function u can be interpreted as an absolute annotation. We note that any positive linear transformation of u also satisfies Theorem 1: for example, n · u satisfies Theorem 1 for each natural number n different from 0. Given its construction, we suggest that the function u presented above be considered as the absolute annotation derived from a relative one.
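The construction of u is straightforward to implement. In this sketch (the function name and the three-element example relation are our own), u(x) simply counts the elements that x is preferred to:

```python
def derive_scores(items, relation):
    """u(x) = number of y in items with (x, y) in the relation."""
    return {x: sum(1 for y in items if (x, y) in relation) for x in items}

# a preferred to b and to c, and b preferred to c:
R = {("a", "b"), ("a", "c"), ("b", "c")}
u = derive_scores(("a", "b", "c"), R)
print(u)  # {'a': 2, 'b': 1, 'c': 0}
```

On this example the scores mirror the relation as the theorem requires: (x, y) is in R exactly when u(x) > u(y).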

6 Limits of the present proposal

We have shown some advantages of the property of transitivity as a measure of IA. Let us now present a couple of the limits of our proposal.

On the one hand, transitivity as a measure of IA can be used only in the case of relative annotations, not for absolute annotations. Although several papers (see, for example, Jekaterina et al. (2018), Belz and Kow (2010) and Carterette et al. (2008)) demonstrate advantages of relative over absolute annotations, the latter are used much more widely. On the other hand, as we observed in Section 3, a further limit is linked to the fuzziness of the annotated criterion. As with the sorites paradox (or the paradox of the heap), this problem arises from vague predicates: if the criterion is too general, then transitivity can fail due to differences in the aspects used by the annotators to assess different items. This cannot strictly be considered a failure of annotator internal consistency as stability.

7 Conclusion and future work

In this paper we have introduced a new approach for checking annotator IA. Inspired by the concept of rational preference as defined in classical decision theory, we suggested using the property of transitivity to check the annotators’ stability. We presented some advantages of transitivity with respect to the test-retest strategy, among them the possibility of using results from Measurement Theory to constructively derive absolute annotations from relative annotations. Furthermore, from a normative point of view, assuming transitivity allows for a more efficient collection of human annotations (avoiding the quadratic explosion of annotations that would otherwise be required), as observed originally by Carterette et al. (2008) in the context of relevance annotations for information retrieval.

This paper presents the theoretical construction of our paradigm. Future developments consist of:

  1. an extensive study and evaluation of our theoretical ideas;

  2. the use of weights in the preference annotations, representing the intensity of annotators’ preferences.

Footnotes

  1. In the preference annotation tasks we are interested in, an item is made up of a couple of subjects between which annotators have to express a preference.
  2. This property ensures that, given two distinct subjects a and b, the annotator has to prefer a over b, or prefer b over a, or express indifference between a and b. Given the finiteness of the set of sentences to be assessed, and although the total number of sentences to be split between the annotators can be quite large, completeness seems a reasonable requirement for annotation tasks.

References

  1. Bashir et al. (2013). A document rating system for preference judgements. In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 909–912.
  2. Belz and Kow (2010). Comparing rating scales and preference judgements in language evaluation. In Proceedings of the Sixth International Natural Language Generation Conference, pp. 7–9.
  3. Carletta (1996). Assessing agreement on classification tasks: the kappa statistic. Computational Linguistics 22, pp. 249–254.
  4. Carterette et al. (2008). Here or there: preference judgments for relevance. Computer Science Department Faculty Publication Series 46.
  5. Fishburn (1982). Nontransitive measurable utility. Journal of Mathematical Psychology 26, pp. 31–67.
  6. Hui and Berberich (2017). Transitivity, time consumption, and quality of preference judgments in crowdsourcing. In European Conference on Information Retrieval, pp. 239–251.
  7. Jekaterina et al. (2018). RankME: reliable human ratings for natural language generation. In Proceedings of NAACL-HLT, pp. 72–78.
  8. Krippendorff (1980). Content analysis: an introduction to its methodology. Sage Publications, Beverly Hills, CA.
  9. Luce and Raiffa (1957). Games and decisions: introduction and critical survey. Wiley, New York.
  10. Regenwetter et al. (2011). Transitivity of preferences. Psychological Review 118 (1), pp. 42–56.
  11. Roberts (1985). Measurement theory, with applications to decisionmaking, utility, and the social sciences. Cambridge University Press, Cambridge, UK.
  12. Rorvig (1990). The simple scalability of documents. Journal of the Association for Information Science and Technology 41 (8), pp. 590–598.
  13. Siegel and Castellan (1988). Nonparametric statistics for the behavioral sciences. McGraw-Hill, New York.
  14. Song et al. (2011). Select-the-best-ones: a new way to judge relative relevance. Information Processing & Management 47 (1), pp. 37–52.
  15. Vilar et al. (2007). Human evaluation of machine translation through binary system comparisons. In Proceedings of the Second Workshop on Statistical Machine Translation, pp. 96–103.