Footnote 1: This article is a private translation of the article published in the JSSST journal Computer Software: Choi, E., Fujiwara, K., Yoshida, N., and Hayashi, S.: A Survey of Refactoring Detection Techniques Based on Change History Analysis, Vol. 32, No. 1 (2015), pp. 47–59. The electronic copy of the original version can be obtained from http://doi.org/10.11309/jssst.32.1_47.

Footnote 2: Notice for the use of this material: The copyright of this material is retained by the Japan Society for Software Science and Technology (JSSST). This material is published on this web site with the agreement of the JSSST. Please comply with the Copyright Law of Japan if you wish to reproduce, make derivative works of, distribute, or make available to the public any part or the whole of this material.

A Survey of Refactoring Detection Techniques Based on Change History Analysis

Abstract

Refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure. Not only researchers, but also practitioners, need to know about past refactoring instances performed in a software development project. So far, a number of techniques have been proposed for the automatic detection of refactoring instances. These techniques have been presented at various international conferences and in journals; however, it is difficult for researchers and practitioners to grasp the current status of studies on refactoring detection techniques. In this survey paper, we review various refactoring detection techniques, especially techniques based on change history analysis. First, we give the definition and categorization of refactoring detection methods used in this paper, and then introduce refactoring detection techniques based on change history analysis. Finally, we discuss possible future research directions for refactoring detection.

1 Introduction

Refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure [16, 45]. Refactoring is performed for various reasons [16]. For example, it can help prevent the introduction of new defects into source code by improving the maintainability of source code with high complexity or low readability. Not only researchers, but also practitioners, are interested in detecting refactoring instances, and many books and papers on refactoring detection techniques have been published [16, 29, 37].

Both practitioners and researchers need to know about past refactoring instances performed in a software development project. Their needs are as follows:

  • Practitioners want to use refactoring information to determine whether and how to keep track of the software they maintain, by understanding the refactorings implemented in the libraries, frameworks, and Application Programming Interfaces (APIs) being used [11, 59].

  • Researchers want to collect this information to conduct empirical studies of refactoring and its effects, and to support the development of techniques using collected refactoring instances [9, 30].

However, when refactoring changes are saved together with non-refactoring changes, it takes much time to determine whether the source code was modified by refactoring [18, 23, 42]. Since large-scale software projects often have thousands of modifications in their change histories, it is difficult to check whether refactoring was performed by manually analyzing all the changes.

Several techniques for automatically detecting refactoring instances, hereafter referred to as ‘refactoring detection techniques’, have been proposed. These techniques are published in various journals and international conferences, making it difficult to review all of these techniques. In 2004, several papers surveying the research and techniques for refactoring detection were published [13, 37, 38]. However, since 2004, many additional papers on refactoring detection techniques have been published. Therefore, it is difficult to grasp the current trend of research on refactoring detection techniques from the 2004 survey papers.

In this paper, we introduce refactoring detection techniques based on change history analysis. Section 2 defines the refactoring terms used in this paper. Section 3 classifies refactoring detection techniques, and then introduces the techniques based on change history analysis. Section 4 presents refactoring detection techniques based on change history analysis of artifacts, while Section 5 discusses directions for future research. Section 6, finally, concludes the paper with a brief summary.

2 Definition of Refactoring Detection Terms

In general, details of refactoring instances are listed in refactoring catalogs. Each entry in a refactoring catalog includes the preconditions, postconditions, detailed procedures, and other parts of a refactoring operation, along with the name of the refactoring operation. Some catalogs describe refactoring operations as software patterns, in which case these patterns are called refactoring patterns. For example, the refactoring operation that moves a method belonging to one class into another class is called Move Method, and this pattern is listed in a catalog along with the preconditions for performing it and the other parts of the operation. Refactoring catalogs are usually published as books or on the web [15, 16, 29].

In this paper, we define refactoring detection as follows. When a pair of versions (v_i, v_j) extracted from the version sequence of a software product is given, we denote the set of changes from v_i to v_j as Δ(v_i, v_j). We then define refactoring detection as inferring whether a refactoring operation, included in a refactoring catalog, is contained in a non-empty subset of the change set Δ(v_i, v_j). In general, tools for refactoring detection output information such as "Pull Up Method and Move Field are performed from version v_i to v_j" when the pair of versions is input. However, some tools do not output specific refactoring names, but only suggest the existence of refactoring instances [53].

A refactoring instance does not always exist as an individual change in a pair of versions; it may exist along with other modifications [25, 42, 46]. Görg and Weißgerber called a change that mixes refactoring and non-refactoring modifications an impure refactoring [20]. Unlike pure refactoring, a pair of versions related by impure refactoring does not always keep the external behavior fixed. Murphy-Hill et al. pointed out that refactoring is often performed while also adding features and fixing bugs, and they call such refactoring floss refactoring [42, 43]. In contrast to root-canal refactoring, in which refactoring is performed separately from other changes, floss refactoring generates pairs of versions that include both refactoring and other, non-refactoring modifications. Murphy-Hill et al. reported that floss refactoring is performed often [42]. Also, Herzig and Zeller reported that there are tangled code changes containing various kinds of changes [26].

Since compound changes often occur in real software development as mentioned above, it is necessary for refactoring detection techniques to detect the performance of refactoring even when the changes from version v_i to version v_j include not only refactoring operations, but also bug fixes and/or feature additions [25, 42]. In this paper, we include refactoring detection for these compound changes in our survey.

              context                       fidelity
  explicit    A1: commit log mining         A3: tool usage logs
  implicit    A2: developer observation     A4: analyzing histories

Table 1: Research methods of refactoring detection techniques

Refactoring detection techniques share their technical background with several differential analysis techniques. For example, research on adding a well-known name to a set of changes, such as systematic change detection has been conducted [32, 33]. Moreover, several origin analyses that identify the correspondence between code fragment in a certain version and in previous version include techniques that recognize when the name of a program entity is replaced [19, 34]. Similarly, some techniques used to recognize comprehensive differences in source code or software models have analysis methods similar to those used in refactoring detection [14]. In this paper, we do not cover all these differential analysis techniques because our main purpose is investigating techniques that detect refactoring operations listed in refactoring catalogs. As an exception, we do survey techniques used to verify consistency of program behavior for refactoring detection, even though these techniques do not identify concrete refactoring patterns.

3 Categorization of Refactoring Detection Techniques

This section classifies the refactoring detection techniques described in the previous section into four different research methods, and then describes the target of this survey, refactoring detection techniques based on change history analysis.

Murphy-Hill et al. categorized refactoring detection techniques into four research methods based on two perspectives, context versus fidelity and explicit versus implicit information [41]. Table 1 shows their four research methods. For one axis, the categorization depends on whether or not refactoring instances are identified using explicit records of refactoring events (explicit or implicit). For the other axis, it depends on whether refactoring instances are determined by subjective judgments or by observable facts (context or fidelity). Next, we discuss the details of each of these four research methods.

First, the A1: commit log mining set of techniques identifies refactoring by analyzing the commit logs of version control systems [44, 51, 58]. If a developer has noted the performance of refactoring in the commit log, refactoring instances can be identified by extracting the relevant log entries. These techniques therefore search the commit logs for words expressing refactoring activities, such as 'refactor' or 'extract'. A characteristic of these techniques is their use of explicit records of refactoring left by developers, and they can be applied to any software system whose development history is kept in a version control system. However, the accuracy of the resulting refactoring information depends highly on the subjective judgment of the developers, and the locations of performed refactorings may be missed. Murphy-Hill et al. compared commit logs of version control systems with actually performed refactorings and found that commit logs contain unreliable information about refactoring [42]. Therefore, researchers who use this method to investigate the refactorings performed by developers should take into consideration that it provides biased information about refactoring.

Next, in the A2: developer observation set of techniques, researchers identify past refactorings by directly observing developers' work or by using screen-capturing tools for indirect observation [6, 40, 48]. A concrete example is a technique that periodically captures developers' screen activities during development and then identifies refactoring instances from the recorded information. The records used in this method do not provide explicit information about the performed refactorings. Moreover, since the researcher determines whether a developer conducted refactoring or not, the refactoring information obtained by this method is based on the researcher's subjective judgments. Although the applicability of this method is limited, it provides detailed information about development histories.

Third, the techniques classified as A3: tool usage logs identify refactoring operations by collecting the logs of refactoring support tools [12, 39, 52]. These tools enable the automatic application of representative refactoring patterns in an integrated development environment, and developers adopt them explicitly in order to conduct refactoring. This method can collect information about pure refactoring, since refactoring support tools guarantee the preservation of external behavior. However, it only captures the kinds of refactoring patterns that the support tools implement.

Finally, techniques classified as A4: analyzing histories identify refactoring instances by analyzing a sequence of versions of software development artifacts. In this method, refactoring is not determined by the subjective judgments of developers or researchers, but by observable facts based on changes in the artifacts such as source code. However, this method might miss past refactoring instances performed by developers.

In this paper, we mainly introduce techniques in the A4 method. Recently, recording software change histories has become very popular in software development companies and open source software projects. Therefore, these techniques can be widely applied, and it is expected that more refactoring instances can be identified than with the techniques classified as A1 to A3. This paper focuses on these techniques as a target because they can also identify refactoring instances applied to libraries and frameworks, and these techniques support empirical research on refactoring and its impacts.

4 Refactoring Detection Techniques

4.1 Techniques Based on Change History Analysis

In this study, from the four methods described in Section 3, we selected techniques based on change history analysis as our target. We selected this set of techniques because performed refactorings always leave traces in the change history, so change history analysis can identify more refactoring instances than the other methods.

Furthermore, we investigated papers on refactoring detection techniques based on change history analysis that have been published in major international conferences on software engineering (APSEC, ASE, CSMR, FSE, ICSE, ICSM, MSR, OOPSLA, SCAM, and WCRE) and journals (IEEE Transactions on Software Engineering, Information and Software Technology, Journal of Systems and Software, and Journal of Software: Evolution and Process) and then analyzed their approaches. Based on our results, we categorized the techniques into the following six types:

  • Rule-based approach,

  • Code clone analysis-based approach,

  • Metrics-based approach,

  • Dynamic analysis-based approach,

  • Graph matching-based approach, and

  • Search-based approach.

Table 2 categorizes a number of refactoring detection approaches based on change history analysis into these six types. The approaches introduced in this paper were selected from papers published in the target international conferences and journals based on their importance in each category; as far as we know, all the important papers are included. In Sections 4.2 to 4.7, we describe the details of the refactoring detection techniques in these categories. Note that techniques that use multiple approaches to detect refactoring instances may be included in multiple categories.

Year  Study                   Approach
2000  Demeyer [10]            Metrics
2004  Antoniol [2]            Rule
2005  Görg [20]               Rule
2005  Xing [62, 63, 64]       Rule
2006  Advani [1]              Rule
2006  Weißgerber [61]         Rule, Code clone analysis
2006  Dig [11]                Rule
2007  Pérez [47]              Search
2007  Taneja [59]             Rule
2008  Hayashi [24, 25]        Search
2010  Kim [31], Prete [50]    Rule
2011  Biegel [5]              Code clone analysis
2011  Soares [53]             Dynamic analysis
2011  Kehrer [28]             Graph matching
2011  Thangthumachit [60]     Search
2012  Fadhel [3]              Search
2013  Mahouachi [36]          Metrics, Search
2013  Soetens [57]            Graph matching
2013  Fujiwara [17]           Rule

Table 2: Categorization of refactoring detection approaches

4.2 Rule-Based Approach

Figure 1: Process of a rule-based refactoring detection approach

In this approach, rules are the criteria used to determine whether refactoring was performed, based on changes (e.g., additions, deletions, and movements) to program elements (e.g., classes, methods, and parameters) and on the similarity of the elements between two versions. For example, to detect Extract Method refactoring instances, a technique proposed by Prete et al. extracts program elements as facts and then computes the similarities of the facts between two versions [31, 50]. If the computed results match a predefined rule stating that "the source code of a new method is extracted from a changed method in the old version, the new method calls the old method, and the source code of the new and old methods are similar to each other," then the target source code is detected as an Extract Method refactoring instance. Figure 1 summarizes the process of this detection approach. The advantage of this approach is that it is easy to describe a detection rule for each refactoring pattern, because the approach can directly and declaratively express the changes between two versions. The disadvantages are that the detection accuracy is low if the predefined rules are inadequate, and that the approach is not well suited to detecting impure refactoring instances containing both refactoring and non-refactoring changes.
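As a rough illustration of such a rule, the following sketch (our own toy encoding, not Ref-Finder's implementation) represents each version as a map from method names to token lists and callee sets, and flags an Extract Method instance when a newly added method is called by a surviving method whose old body is similar to the new method's body. The fact format, the Jaccard token similarity, the call direction, and the threshold are all illustrative assumptions.

```python
def token_similarity(a, b):
    """Jaccard similarity over token sets (a stand-in for real code similarity)."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def detect_extract_method(old, new, threshold=0.5):
    """old/new map method names to {"tokens": [...], "calls": set(...)}.
    Reports (survivor, extracted) pairs matching the Extract Method rule."""
    instances = []
    added = set(new) - set(old)            # methods that exist only in new
    for m in set(old) & set(new):          # methods surviving the change
        for extracted in added:
            # Rule: the surviving method now calls the new method, and the
            # new method's body resembles the old body of the survivor.
            if (extracted in new[m]["calls"]
                    and token_similarity(old[m]["tokens"],
                                         new[extracted]["tokens"]) >= threshold):
                instances.append((m, extracted))
    return instances
```

Real rule-based detectors express such conditions declaratively (e.g., as logic predicates) rather than hard-coding them, but the matching structure is the same.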

The techniques proposed by Antoniol et al. and Advani et al. detect refactoring instances based on Fowler's definitions of refactoring patterns [1, 2]. Antoniol et al. presented a technique for detecting refactoring instances at the class level, such as Class Extraction and Class Split, based on predefined conditions, which they used to investigate the evolution of classes in Java software systems [2]. Their technique extracts identifiers from each class and weights the extracted identifiers by Term Frequency-Inverse Document Frequency (TF-IDF). Next, it converts the classes in each version into vectors based on these weights, and finally determines the refactoring instances according to conditions based on the changes in each class (e.g., a newly added class) together with the cosine of the angle between the two vectors representing the classes. They applied this technique to 40 releases of dnsjava and identified Class Replacement, Merge and Split, and Factor Out refactorings in these releases.
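The vector-space step of this technique can be sketched as follows (a toy reconstruction; Antoniol et al.'s actual weighting scheme and decision conditions are more elaborate). Classes are treated as bags of identifiers, weighted by TF-IDF, and compared by cosine similarity:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: {class name: [identifiers]} -> {class name: {term: TF-IDF weight}}."""
    n = len(docs)
    # Document frequency: in how many classes does each identifier occur?
    df = Counter(t for ids in docs.values() for t in set(ids))
    vecs = {}
    for name, ids in docs.items():
        tf = Counter(ids)                                  # term frequency
        vecs[name] = {t: tf[t] * math.log(n / df[t]) for t in tf}
    return vecs

def cosine(u, v):
    """Cosine of the angle between two sparse vectors (dicts)."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

A high cosine between a class in the old version and a class in the new version suggests they are the same (possibly renamed or split) entity; the class names and identifier sets in any usage are, of course, project-specific.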

Advani et al. developed a tool for detecting refactoring instances according to predefined criteria aimed at investigating whether certain refactoring patterns are related [1]. This tool reports refactoring instances when predefined criteria for 15 refactoring patterns are matched by changes in the class entities, e.g., methods and fields. By applying this tool to seven open source software projects, this study found that the Rename Method, Rename Field, Move Method, and Move Field refactoring patterns are frequently related with other refactoring patterns.

Görg and Weißgerber implemented a tool called REFVIS for detecting refactoring instances based on changes (additions, removals, and no change) in the signatures of classes and methods between two versions [20]. REFVIS also provides a feature that visualizes the detection results at the class and method levels. Weißgerber and Diehl presented a technique for detecting refactoring instances based on added, changed, or removed classes, interfaces, methods, and fields between two versions [61]. Their technique then ranks the refactoring instances based on similarities in the source code between the two versions using CCFinder, a token-based code clone detection tool. This technique is able to detect similar source code fragments as refactoring instances, whereas REFVIS only reports exactly matching pairs of source code as refactoring instances.

Xing and Stroulia’s UMLDiff detects refactoring instances between two versions [62, 63, 64]. UMLDiff extracts logical elements such as the types, names, and modifiers of the packages and classes from two input program versions. It then computes their similarities based on changes, additions, movements, and deletions. Finally, if the computed similarities are matched with rules representing a specific refactoring pattern, it identifies the target source code as a refactoring instance.

Prete et al. developed an Eclipse plug-in called Ref-Finder that detects refactoring instances of 63 refactoring patterns between two program versions based on predefined rules [31, 50]. Ref-Finder extracts code elements (e.g., packages, classes, and interfaces), structural dependencies (e.g., containment and overriding relationships), and the contents of the code elements (e.g., if-then-else control structures) as facts from two input program versions. It then computes the differences in these facts between the two program versions. Finally, it determines the refactoring instances based on the predefined rules of the refactoring patterns [49]. Ref-Finder detects both atomic refactorings and complex refactorings that use other atomic refactorings as prerequisites. Furthermore, it can detect more refactoring patterns than UMLDiff by using code information such as conditional branches and exception handling.

A technique proposed by Fujiwara et al. detects refactoring instances in multiple revisions [17]. This technique speeds up the refactoring detection by extracting code elements from each revision and matching them using Historage [22].

A rule-based approach can also be used to detect refactoring instances from the change histories of software components. However, because backward compatibility typically requires obsolete source code to coexist with its newer counterpart until the older code is removed, it is difficult to detect refactoring instances from such histories. To address this problem, Dig et al. and Taneja et al. presented techniques for detecting refactoring instances between two versions of components based on predefined rules [11, 59]. Dig et al. developed an Eclipse plug-in called RefactoringCrawler [11], which identifies similar pairs of entities (e.g., methods and classes) in two versions of components. This plug-in uses Shingles [7] to find refactoring candidates, and then analyzes references among the source code entities in each of the two versions of the components to detect real refactoring instances. Taneja et al. developed a tool called RefacLib, which extracts similar entities from the source code of two API versions and then reports refactoring instances based on syntactic analysis, comparison of the similarities and sizes of the entities, and information regarding obsolete entities.

4.3 Code Clone Analysis-Based Approach

Research has also been done on detecting refactoring using code clone detection tools, which identify pairs or sets of duplicated code fragments in source code [4, 27]. For refactoring detection, code clone detection tools can be used to identify the movement and extraction of code fragments between versions. The code clone analysis-based approach can identify not only identical code fragments but also fragments that are slightly changed between versions. However, it is difficult for this approach to detect refactoring instances that include various other sorts of changes (e.g., impure refactoring).

As mentioned in Section 4.2, Weißgerber et al. [61] proposed a technique to detect refactoring instances using a clone detection tool named CCFinder [27]. Their technique categorizes code changes between versions, based on code similarity, into three exclusive categories: EQUAL (exact match), CLONE (CCFinder-based match), and NONCLONE (all others). The authors also investigated the relationship between these categories and refactoring, and their results showed that restricting detection to the EQUAL and CLONE cases increased precision without decreasing recall. Biegel et al. defined three types of code similarity and then compared the performance of refactoring detection using each of them [5]. Two of the types were based on the similarity of token sequences and of abstract syntax trees, as measured by the code clone detection tools CCFinder and JCCD [4], respectively. The third was string similarity, using shingles [7] to represent the distance between strings. Their investigation could not confirm any significant difference between the three types of similarity in terms of precision and recall, and the authors reported that only a limited number of refactoring instances required one specific similarity measure to be detected. In other words, the detected refactoring instances were mostly common to the three types of similarity.
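The shingles idea referenced above can be sketched as follows (a minimal illustration of [7]; real implementations hash the shingles and keep only a bounded subset for efficiency). A token sequence is mapped to the set of its overlapping k-token windows, and two code fragments are compared by the Jaccard overlap of those sets:

```python
def shingles(tokens, k=2):
    """All overlapping k-token windows of a token sequence."""
    return {tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1)}

def shingle_similarity(a, b, k=2):
    """Jaccard overlap of the two shingle sets; 1.0 means identical shingles."""
    sa, sb = shingles(a, k), shingles(b, k)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0
```

Because shingles are order-sensitive windows rather than single tokens, a renamed identifier perturbs only the windows that contain it, which is what makes the measure tolerant of small edits between versions.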

4.4 Metrics-Based Approach

The metrics-based approach detects refactoring instances from differences in metric values between two versions. It detects instances rapidly because it relies on a lightweight analysis compared to the rule-based approach described in Section 4.2. However, this rapid approach has lower accuracy in refactoring detection because it does not analyze source code syntactically. The technique described by Demeyer et al. selects metrics such as method size, class size, and inheritance measures from the metrics suites of Chidamber & Kemerer [8] and Lorenz & Kidd [35], and then uses combinations of these metrics as heuristics to detect refactoring instances such as the splitting of methods and the merging and splitting of child or parent classes [10].
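A Demeyer-style heuristic can be sketched as follows (our own simplification with an invented threshold, not their exact heuristics): a method whose size drops sharply while its class simultaneously gains methods is reported as a possible Split Method candidate.

```python
def split_method_candidates(old_cls, new_cls, drop_ratio=0.5):
    """old_cls/new_cls: {"methods": {method name: LOC}}.
    Returns methods that may have been split, per the metric heuristic."""
    old_m, new_m = old_cls["methods"], new_cls["methods"]
    gained_methods = len(new_m) > len(old_m)       # class grew new methods
    candidates = []
    for name in old_m.keys() & new_m.keys():       # methods present in both
        shrunk = new_m[name] <= old_m[name] * drop_ratio
        if gained_methods and shrunk:
            candidates.append(name)
    return candidates
```

Note how the heuristic never looks at the code itself, only at counts; this is the source of both the speed and the lower accuracy mentioned above.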

Moreover, Mahouachi et al. proposed a search-based technique that detects refactoring based on the differences in structural metrics between two versions [36]. Their technique is built around a search-based process that minimizes the difference in metrics using a genetic algorithm; it is also described in Section 4.7.

4.5 Dynamic Analysis-Based Approach

Research has also been done on detecting refactoring by exploiting the assumption that program behavior is maintained while refactoring. The dynamic analysis-based approach verifies that program behavior is maintained by executing test cases related to the modifications and examining the consistency of the results. This approach can only detect pure refactoring that is free of any impure changes. A further disadvantage of the dynamic analysis-based approach is that the kind of refactoring performed cannot be identified from the dynamic analysis information alone.

Soares et al. proposed a refactoring detection technique that uses a tool called SafeRefactor [55] to detect changes in program behavior during refactoring [53]. SafeRefactor automatically generates unit tests for the methods that remain unchanged between versions. It then executes the generated tests and identifies any change in behavior revealed by failing tests. Their technique inputs an original version of the source code and a modified version to SafeRefactor and identifies the change as refactoring if SafeRefactor verifies that program behavior is maintained.
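The core dynamic check can be sketched as follows (a toy stand-in for SafeRefactor, which actually generates JUnit tests for Java methods; the example functions here are hypothetical): the same inputs are executed against both versions, and the change is classified as behavior-preserving only if every output matches.

```python
def behavior_preserved(old_fn, new_fn, test_inputs):
    """Run the same inputs against both versions; any mismatch reveals a
    behavioral change, so the modification cannot be a pure refactoring."""
    for args in test_inputs:
        if old_fn(*args) != new_fn(*args):
            return False
    return True

# Hypothetical example functions: v2 is a behavior-preserving rewrite of v1,
# while v3 silently changes the result.
def double_inc_v1(x): return x * 2 + 2
def double_inc_v2(x): return 2 * (x + 1)
def double_inc_v3(x): return 2 * x + 1
```

As the section notes, a passing check says only that some behavior-preserving change occurred; it cannot name which refactoring pattern was applied.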

4.6 Graph Matching-Based Approach

Some researchers have proposed techniques to detect refactoring instances by regarding a program or a program change as a graph structure and by checking whether patterns of refactoring operations are included in that graph as subgraphs. Since most software design models, such as UML class diagrams, can be regarded as graphs, it is straightforward to use graph matching to detect model refactorings. Handling the refactorings to be detected as patterns keeps the definition of the detection mechanism simple. In addition, handling a code change as a graph enables refactorings to be detected as subgraphs even when they are mixed with other changes. However, this approach also has disadvantages, such as the difficulty of defining patterns for complicated models and for some refactoring types.
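The subgraph-matching idea can be sketched as follows (a deliberately small matcher of our own; real tools use graph transformation engines and are far more general). A change is a set of labeled edges representing primitive operations, a refactoring pattern is a list of edges with variables (here, labels starting with '?'), and detection succeeds when the variables can be bound so that every pattern edge occurs in the change:

```python
def match_pattern(change, pattern):
    """Bind pattern variables (labels starting with '?') to concrete labels
    so that every pattern edge occurs in the change graph; None if impossible."""
    def unify(p_edge, c_edge, env):
        if len(p_edge) != len(c_edge):
            return None
        env = dict(env)
        for p, c in zip(p_edge, c_edge):
            if p.startswith("?"):
                if env.get(p, c) != c:      # variable already bound elsewhere
                    return None
                env[p] = c
            elif p != c:                    # constant labels must match exactly
                return None
        return env

    def search(remaining, env):
        if not remaining:
            return env                      # all pattern edges matched
        head, *rest = remaining
        for c_edge in change:
            new_env = unify(head, c_edge, env)
            if new_env is not None:
                result = search(rest, new_env)
                if result is not None:
                    return result
        return None

    return search(list(pattern), {})

# A tangled change: an Extract Method-like pair of operations plus an
# unrelated deletion (hypothetical operation labels).
change = {
    ("add_method", "helper"),
    ("add_call", "m", "helper"),
    ("del_call", "m", "log"),
}
# Pattern: some new method ?n is added, and some method ?m now calls it.
pattern = [("add_method", "?n"), ("add_call", "?m", "?n")]
```

Because the unrelated `del_call` edge is simply ignored, the pattern is still found inside the tangled change, which illustrates why this representation helps with floss refactoring.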

Kehrer et al. proposed a technique to raise the abstraction level of a model difference by extracting high-level changes, such as refactoring operations, from an operation sequence of EMF models, i.e., a model difference [28]. Changes between versions of a model can be expressed as primitive operations on a graph, such as additions or deletions of nodes or edges. High-level changes included in such a graph are detected as subgraphs and grouped to identify a representation of the change at a higher abstraction level. Kehrer et al. called this grouping operation semantic lifting and automated the process.

Soetens et al. proposed a technique for detecting floss refactoring based on matching over an operation history [57]. Since floss refactoring is performed together with other changes, the information obtained from the versions may mix multiple modifications, which makes it difficult to detect refactoring instances in it. In their technique, a graph representing code changes is constructed from the edit operation history of the source code as recorded by a tool named ChEOPSJ [56]. The technique then checks whether this graph contains subgraph patterns representing refactoring operations, using a graph transformation tool; if it does, the corresponding refactorings are reported as having been performed. The authors claimed that refactoring patterns such as Rename Method and Move Method can be detected more accurately from operation histories than by other existing techniques.

4.7 Search-Based Approach

It is important to properly detect compound refactoring and floss refactoring, i.e., refactoring operations mixed together with other changes. When impure refactoring is performed, the changes made between the versions before and after the refactoring session are affected not only by the single refactoring, but also by other refactoring and/or non-refactoring operations. In such cases, refactoring instances cannot be correctly detected by only looking for the pre- and post-conditions of a single refactoring instance, because the difference between the versions before and after the changes will not correspond to those conditions.

In Search-Based Software Engineering (SBSE) [21], a software engineering problem is regarded as an optimization problem, and results are obtained using search techniques. There are several applications of SBSE to refactoring detection. In search-based refactoring detection, a program and a refactoring application are regarded as a state and a state-transition operator, respectively, and an optimal sequence of operators representing the changes in the program between versions is discovered. The search progresses by repeatedly invoking a refactoring on the program as an operator application, obtaining a new program. On the one hand, the search-based approach has the advantage that it does not require detection rules for impure refactoring, because it can indirectly handle intermediate program states in which only some of the mixed changes have been applied. On the other hand, its disadvantages include the large computational time that some search techniques require.

As an example of this approach, Pérez and Crespo proposed a search-based technique to identify refactoring operations from the structural differences in a program, such as changes in a UML class diagram [47]. In this technique, a depth-first search finds a sequence of refactorings by repeatedly invoking the detected refactoring candidates on the program.

Hayashi et al. proposed a technique to detect refactoring operations using the A* search [24, 25]. In this technique, refactoring detection is formulated as a path search problem, regarding the size of the structural differences in the program as a heuristic distance, the weighted count of the applied refactorings as the path distance, and the sum of these as the evaluation function. The solution path is then discovered using the A* search.
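This formulation can be sketched as follows (our own toy instance of the path-search idea, not Hayashi et al.'s implementation): a program is a frozenset of (method, class) facts, Move Method is the only operator, the heuristic is half the symmetric difference to the goal (admissible, since one move fixes at most two differing facts), and A* returns the operator sequence explaining the change.

```python
import heapq
from itertools import count

def moves(state, classes):
    """Enumerate all Move Method applications from the given program state."""
    for method, cls in state:
        for target in classes:
            if target != cls:
                nxt = (state - {(method, cls)}) | {(method, target)}
                yield ("move", method, cls, target), frozenset(nxt)

def astar_refactorings(start, goal, classes):
    """A*: path cost = number of refactorings, heuristic = structural diff."""
    h = lambda s: len(s ^ goal) / 2          # admissible heuristic distance
    tie = count()                            # tie-breaker; avoids comparing states
    frontier = [(h(start), next(tie), start, [])]
    seen = {start}                           # simplified closed set
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path                      # sequence of refactoring operators
        for op, nxt in moves(state, classes):
            if nxt not in seen:
                seen.add(nxt)
                g = len(path) + 1            # unit cost per applied refactoring
                heapq.heappush(frontier, (g + h(nxt), next(tie), nxt, path + [op]))
    return None
```

The real technique weights different refactoring kinds and searches over far richer program representations; the point of the sketch is only the state/operator/heuristic decomposition.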

This approach also includes applications of genetic algorithms. In such techniques, a refactoring sequence is represented as a chromosome consisting of multiple genes. The algorithm then finds an optimal chromosome, i.e., one that maximizes the fitness function, as the most appropriate sequence of refactorings explaining the changes between versions, by iteratively applying update operators such as selection, crossover, and mutation. Fadhel et al. proposed a technique to detect model refactorings using a genetic algorithm [3]. Mahouachi et al. applied a similar approach to source code to obtain a sequence of code refactorings [36]; in their technique, the fitness function is defined so as to minimize the differences in product metrics between versions.
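A minimal genetic-algorithm sketch of this scheme (an illustrative encoding of our own, not Fadhel's or Mahouachi's): a chromosome is a fixed-length sequence of Move Method genes, fitness counts how closely applying the sequence to the old program reproduces the new one, and selection, one-point crossover, and mutation are iterated.

```python
import random

# Toy program vocabulary (assumed for the example).
METHODS, CLASSES = ["m", "n"], ["A", "B"]

def random_gene(rng):
    return (rng.choice(METHODS), rng.choice(CLASSES))

def apply_chromosome(program, chromo):
    """program: {method: class}. Each gene moves one method to a class."""
    state = dict(program)
    for method, target in chromo:
        state[method] = target
    return state

def fitness(chromo, old, new):
    """Number of methods whose final class matches the new version."""
    result = apply_chromosome(old, chromo)
    return sum(result[m] == new[m] for m in new)

def evolve(old, new, length=2, pop=20, gens=30, seed=0):
    rng = random.Random(seed)
    population = [[random_gene(rng) for _ in range(length)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: fitness(c, old, new), reverse=True)
        best = population[: pop // 2]          # selection: keep the fitter half
        children = []
        while len(best) + len(children) < pop:
            a, b = rng.sample(best, 2)
            cut = rng.randrange(1, length) if length > 1 else 0
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.2:             # mutation
                child[rng.randrange(length)] = random_gene(rng)
            children.append(child)
        population = best + children
    return max(population, key=lambda c: fitness(c, old, new))
```

In the published techniques the fitness is computed over product metrics or model differences rather than exact fact matches, but the chromosome/fitness/operator loop is the same.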

Thangthumachit et al. proposed a technique to detect refactoring operations based on the similarity of child and referencing elements in an abstract syntax tree [60]. In this technique, refactoring detection is performed at each level of granularity, such as package, file, class, and method, and refactorings detected at a coarser-grained level are applied to the target program before detection is attempted at a finer-grained level. Refactorings that fail to apply are excluded from the detection result. By repeating detection and application in this manner, an accurate detection result is achieved even for a difference in which multiple refactoring operations are mixed. In addition, a tool has also been proposed to visualize the sequence of refactorings obtained in this way [23].
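This coarse-to-fine loop can be sketched as follows, under a toy model: a program is a mapping from class names to method lists, and the two detectors below are hypothetical stand-ins, not the AST-similarity matching of the original technique.

```python
def detect_refactorings(old, new, detectors):
    """Run detectors from the coarsest to the finest level; applicable
    refactorings are applied to `old` before finer-grained detection,
    and candidates that fail to apply are excluded from the result."""
    accepted = []
    for level, detect in detectors:
        for label, apply_fn in detect(old, new):
            try:
                old = apply_fn(old)
                accepted.append((level, label))
            except ValueError:
                pass                     # failed refactoring: excluded
    return accepted, old

def class_renames(old, new):
    """Hypothetical class-level detector: pair up renamed classes."""
    for before, after in zip(sorted(set(old) - set(new)),
                             sorted(set(new) - set(old))):
        def apply_fn(prog, b=before, a=after):
            if b not in prog:
                raise ValueError("not applicable")
            prog = dict(prog)
            prog[a] = prog.pop(b)
            return prog
        yield f"Rename Class {before} -> {after}", apply_fn

def method_renames(old, new):
    """Hypothetical method-level detector: pair up renamed methods per class."""
    for cls in sorted(set(old) & set(new)):
        for before, after in zip(sorted(set(old[cls]) - set(new[cls])),
                                 sorted(set(new[cls]) - set(old[cls]))):
            def apply_fn(prog, c=cls, b=before, a=after):
                if b not in prog.get(c, []):
                    raise ValueError("not applicable")
                prog = dict(prog)
                prog[c] = [a if m == b else m for m in prog[c]]
                return prog
            yield f"Rename Method {cls}.{before} -> {cls}.{after}", apply_fn
```

Note how the class-level rename must be applied first: only then do the method lists of the old and new versions line up under the same class name, allowing the method-level detector to find the remaining rename.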

5 Future Directions

5.1 Combination of Multiple Techniques

In future work, first, techniques for refactoring detection should be combined and evaluated. In their paper on quantitative comparison, Soares et al. suggested the quantitative comparison of combined refactoring detection techniques as future work [54].

As can be seen in Table 2, very little research has been done on combined approaches so far. For example, as future work, a search-based approach could be combined not only with a rule-based approach that considers program structures, but also with a clone-detection-based approach that identifies the movement of code fragments between versions. These combined approaches should then be compared with other existing techniques. Similarly, very little research has been done on detection based on dynamic analysis. Since a static analysis-based approach, such as one based on code clone analysis, cannot identify changes in external behavior, it should be combined with a dynamic analysis-based approach.

5.2 Quantitative Evaluation

Second, very little research has been done on the quantitative comparison of refactoring detection techniques. One reason is that it is difficult to define which refactoring instances should be detected. To enable such comparisons, the research community should provide datasets for quantitative comparison, and then set up a mechanism for sharing them. Soares et al. performed a quantitative comparison of Ref-Finder [31, 50], a dynamic analysis-based technique, and a commit-log-based technique, and published the results on their website [54]. To improve the techniques, it is necessary to compare the various refactoring detection techniques and publish the comparison results. In such a dataset, the definition of the refactoring instances should be made explicit, e.g., whether the dataset includes impure/floss refactoring instances or not. In addition to the quantitative evaluation of detection performance, quantitative evaluation of scalability with respect to the number of revisions is also needed. Since most existing techniques aim to detect refactoring instances between a pair of revisions by analyzing all of the source code of each revision, analyzing a longitudinal sequence of revisions incurs a large computational cost. For large-scale empirical studies of refactoring, a scalable approach is needed that completes the detection process within a practical amount of time by analyzing only the differences and associated code among revisions. As future work, then, once scalability with respect to the number of revisions has been evaluated quantitatively, a scalable tool needs to be developed.

6 Conclusion

In this paper, we surveyed refactoring detection techniques, focusing mainly on the analysis of change histories, which has been well studied recently. First, we explained the definitions of the refactoring detection terms used in this paper, and classified refactoring detection techniques into four categories: mining commit logs, observing developers, analyzing tool usage logs, and analyzing change histories. Next, we classified the techniques based on change history analysis into six subcategories and introduced the techniques belonging to each subcategory. Finally, we discussed two directions for future research: combinations of multiple techniques, and quantitative evaluation of the techniques.

We hope that this paper will help encourage further improvements in refactoring detection techniques.

Acknowledgments

This work was partially supported by MEXT/JSPS KAKENHI Grant Numbers 26730036 and 23700030.

Footnotes

  1. This article is a private translation of the article published in the JSSST journal Computer Software: Choi, E., Fujiwara, K., Yoshida, N., and Hayashi, S.: A Survey of Refactoring Detection Techniques Based on Change History Analysis, Vol. 32, No. 1(2015), pp. 47–59. The electronic copy of the original version can be obtained from
    http://doi.org/10.11309/jssst.32.1_47.
  2. Notice for the use of this material: The copyright of this material is retained by the Japan Society for Software Science and Technology (JSSST). This material is published on this web site with the agreement of the JSSST. Please comply with the Copyright Law of Japan if you wish to reproduce, make derivative works of, distribute, or make available to the public any part or the whole thereof.
  3. The author is currently with Graduate School of Science and Technology, Nara Institute of Science and Technology, Japan. Email: choi@is.naist.jp
  4. The author is currently with National Institute of Technology, Toyota College, Japan. Email: fujiwara@toyota-ct.ac.jp
  5. The author is currently with Center for Embedded Computing Systems, Graduate School of Informatics, Nagoya University, Japan. Email: yoshida@ertl.jp
  6. The author is currently with School of Computing, Tokyo Institute of Technology, Japan. Email: hayashi@c.titech.ac.jp

References

  1. Advani, D., Hassoun, Y., and Counsell, S.: Extracting Refactoring Trends from Open-source Software and a Possible Solution to the ‘Related Refactoring’ Conundrum, in Proc. of the 21st ACM Symposium on Applied Computing (SAC’06), 2006, pp.  1713–1720.
  2. Antoniol, G., Di Penta, M., and Merlo, E.: An Automatic Approach to Identify Class Evolution Discontinuities, in Proc. of the 7th International Workshop on Principles of Software Evolution (IWPSE’04), 2004, pp.  31–40.
  3. ben Fadhel, A., Kessentini, M., Langer, P., and Wimmer, M.: Search-based Detection of High-level Model Changes, in Proc. of the 28th IEEE International Conference on Software Maintenance (ICSM’12), 2012, pp.  212–221.
  4. Biegel, B. and Diehl, S.: Highly Configurable and Extensible Code Clone Detection, in Proc. of the 17th Working Conference on Reverse Engineering (WCRE’10), 2010, pp.  237–241.
  5. Biegel, B., Soetens, Q. D., Hornig, W., Diehl, S., and Demeyer, S.: Comparison of Similarity Metrics for Refactoring Detection, in Proc. of the 8th Working Conference on Mining Software Repositories (MSR’11), 2011, pp.  53–62.
  6. Boshernitsan, M., Graham, S. L., and Hearst, M. A.: Aligning Development Tools with the Way Programmers Think About Code Changes, in Proc. of the SIGCHI Conference on Human Factors in Computing Systems (CHI’07), 2007, pp.  567–576.
  7. Broder, A. Z.: On the Resemblance and Containment of Documents, in Proc. of the Compression and Complexity of Sequences (SEQUENCES’97), 1997, pp.  21–29.
  8. Chidamber, S. R. and Kemerer, C. F.: A Metrics Suite for Object Oriented Design, IEEE Transactions on Software Engineering, Vol. 20, No. 6(1994), pp. 476–493.
  9. Choi, E., Yoshida, N., and Inoue, K.: An Investigation into the Characteristics of Merged Code Clones during Software Evolution, IEICE Transactions on Information and Systems, Vol. E97-D, No. 5(2014), pp. 1244–1253.
  10. Demeyer, S., Ducasse, S., and Nierstrasz, O.: Finding Refactorings via Change Metrics, in Proc. of the 15th ACM SIGPLAN Conference on Object-oriented Programming, Systems, Languages, and Applications (OOPSLA’00), 2000, pp.  166–177.
  11. Dig, D., Comertoglu, C., Marinov, D., and Johnson, R.: Automated Detection of Refactorings in Evolving Components, in Proc. of the 20th European Conference on Object-Oriented Programming (ECOOP’06), 2006, pp.  404–428.
  12. Dig, D., Manzoor, K., Johnson, R., and Nguyen, T. N.: Refactoring-Aware Configuration Management for Object-Oriented Programs, in Proc. of the 29th International Conference on Software Engineering (ICSE’07), 2007, pp.  427–436.
  13. Du Bois, B., Van Gorp, P., Amsel, A., Van Eetvelde, N., Stenten, H., Demeyer, S., and Mens, T.: A Discussion of Refactoring in Research and Practice, Technical report, University of Antwerp, 2004.
  14. Fluri, B., Würsch, M., Pinzger, M., and Gall, H. C.: Change Distilling: Tree Differencing for Fine-Grained Source Code Change Extraction, IEEE Transactions on Software Engineering, Vol. 33, No. 11(2007), pp. 725–743.
  15. Fowler, M.: Refactoring, http://refactoring.com/.
  16. Fowler, M.: Refactoring: Improving the Design of Existing Code, Addison Wesley, 1999.
  17. Fujiwara, K., Yoshida, N., and Iida, H.: An Approach for Fine-Grained Detection of Refactoring Instances from a Software Repository, in Foundation of Software Engineering XX (Proc. FOSE’13), 2013, pp.  101–106.
  18. Ge, X., Sarkar, S., and Murphy-Hill, E.: Towards Refactoring-Aware Code Review, in Proc. of the 7th International Workshop on Cooperative and Human Aspects of Software Engineering (CHASE’14), 2014, pp.  99–102.
  19. Godfrey, M. W. and Zou, L.: Using Origin Analysis to Detect Merging and Splitting of Source Code Entities, IEEE Transactions on Software Engineering, Vol. 31, No. 2(2005), pp. 166–181.
  20. Görg, C. and Weißgerber, P.: Detecting and Visualizing Refactorings from Software Archives, in Proc. of the 13th International Workshop on Program Comprehension (IWPC’05), 2005, pp.  205–214.
  21. Harman, M.: Software Engineering Meets Evolutionary Computation, IEEE Computer, Vol. 44, No. 10(2011), pp. 31–39.
  22. Hata, H., Mizuno, O., and Kikuno, T.: Historage: Fine-grained Version Control System for Java, in Proc. of the 12th International Workshop on Principles on Software Evolution and 7th ERCIM Workshop on Software Evolution (IWPSE-EVOL’11), 2011, pp.  96–100.
  23. Hayashi, S., Thangthumachit, S., and Saeki, M.: REdiffs: Refactoring-Aware Difference Viewer for Java, in Proc. of the 20th Working Conference on Reverse Engineering (WCRE’13), 2013, pp.  487–488.
  24. Hayashi, S., Tsuda, Y., and Saeki, M.: Detecting Occurrences of Refactoring with Heuristic Search, in Proc. of the 15th Asia-Pacific Software Engineering Conference (APSEC’08), 2008, pp.  453–460.
  25. Hayashi, S., Tsuda, Y., and Saeki, M.: Search-Based Refactoring Detection from Source Code Revisions, IEICE Transactions on Information and Systems, Vol. E93-D, No. 4(2010), pp. 754–762.
  26. Herzig, K. and Zeller, A.: The Impact of Tangled Code Changes, in Proc. of the 10th Working Conference on Mining Software Repositories (MSR’13), 2013, pp.  121–130.
  27. Kamiya, T., Kusumoto, S., and Inoue, K.: CCFinder: A Multilinguistic Token-Based Code Clone Detection System for Large Scale Source Code, IEEE Transactions on Software Engineering, Vol. 28, No. 7(2002), pp. 654–670.
  28. Kehrer, T., Kelter, U., and Taentzer, G.: A Rule-Based Approach to the Semantic Lifting of Model Differences in the Context of Model Versioning, in Proc. of the 26th IEEE/ACM International Conference on Automated Software Engineering (ASE’11), 2011, pp.  163–172.
  29. Kerievsky, J.: Refactoring to Patterns, Addison-Wesley, 2005.
  30. Kim, M., Cai, D., and Kim, S.: An Empirical Investigation into the Role of API-Level Refactorings during Software Evolution, in Proc. of the 33rd International Conference on Software Engineering (ICSE’11), 2011, pp.  151–160.
  31. Kim, M., Gee, M., Loh, A., and Rachatasumrit, N.: Ref-Finder: A Refactoring Reconstruction Tool Based on Logic Query Templates, in Proc. of the 18th ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE’10), 2010, pp.  371–372.
  32. Kim, M. and Notkin, D.: Discovering and Representing Systematic Code Changes, in Proc. of the 31st International Conference on Software Engineering (ICSE’09), 2009, pp.  309–319.
  33. Kim, M., Notkin, D., and Grossman, D.: Automatic Inference of Structural Changes for Matching across Program Versions, in Proc. of the 29th International Conference on Software Engineering (ICSE’07), 2007, pp.  333–343.
  34. Kim, S., Pan, K., and Whitehead, Jr., E. J.: When Functions Change Their Names: Automatic Detection of Origin Relationships, in Proc. of the 12th Working Conference on Reverse Engineering (WCRE’05), 2005, pp. 143–152.
  35. Lorenz, M. and Kidd, J.: Object-Oriented Software Metrics: A Practical Approach, Prentice Hall, 1994.
  36. Mahouachi, R., Kessentini, M., and Cinnéide, M. Ó.: Search-Based Refactoring Detection Using Software Metrics Variation, in Proc. of the 5th International Symposium on Search-Based Software Engineering (SSBSE’13), 2013, pp.  126–140.
  37. Mens, T. and Tourwé, T.: A Survey of Software Refactoring, IEEE Transactions on Software Engineering, Vol. 30, No. 2(2004), pp. 126–139.
  38. Mens, T. and Van Deursen, A.: Refactoring: Emerging trends and open problems, in Proc. of the 1st International Workshop on REFactoring: Achievements, Challenges, Effects (REFACE’03), 2003.
  39. Murphy, G. C., Kersten, M., and Findlater, L.: How Are Java Software Developers Using the Eclipse IDE?, IEEE Software, Vol. 23, No. 4(2006), pp. 76–83.
  40. Murphy-Hill, E. and Black, A. P.: Breaking the Barriers to Successful Refactoring: Observations and Tools for Extract Method, in Proc. of the 30th International Conference on Software Engineering (ICSE’08), 2008, pp.  421–430.
  41. Murphy-Hill, E., Black, A. P., Dig, D., and Parnin, C.: Gathering Refactoring Data: A Comparison of Four Methods, in Proc. of the 2nd ACM Workshop on Refactoring Tools (WRT’08), 2008.
  42. Murphy-Hill, E., Parnin, C., and Black, A. P.: How We Refactor, and How We Know It, IEEE Transactions on Software Engineering, Vol. 38, No. 1(2012), pp. 5–18.
  43. Murphy-Hill, E. R. and Black, A. P.: Why Don’t People Use Refactoring Tools?, in Proc. of the 1st ACM Workshop on Refactoring Tools (WRT’07), 2007, pp.  60–61.
  44. Oba, S.: Improvement of a Technique for Detecting Refactorings Performed between Software Versions (in Japanese), Master’s thesis, Shinshu University, Graduate School of Science and Technology, 2013.
  45. Opdyke, W. F.: Refactoring Object-oriented Frameworks, PhD Thesis, University of Illinois, 1992.
  46. Parnin, C. and Görg, C.: Lightweight Visualizations for Inspecting Code Smells, in Proc. of the 2006 ACM Symposium on Software Visualization (SoftVis’06), 2006, pp.  171–172.
  47. Pérez, J. and Crespo, Y.: Exploring a Method to Detect Behaviour-Preserving Evolution Using Graph Transformation, in Proc. of the 3rd International ERCIM Workshop on Software Evolution (EVOL’07), 2007, pp.  114–122.
  48. Pizka, M.: Straightening Spaghetti-Code with Refactoring?, in Proc. of the 2004 International Conference on Software Engineering Research and Practice (SERP’04), 2004, pp.  846–852.
  49. Prete, K., Rachatasumrit, N., and Kim, M.: A Catalogue of Template Refactoring Rules, Technical Report UTAUSTINECE-TR-041610, The University of Texas at Austin, 2010.
  50. Prete, K., Rachatasumrit, N., Sudan, N., and Kim, M.: Template-based Reconstruction of Complex Refactorings, in Proc. of the 26th International Conference on Software Maintenance (ICSM’10), 2010.
  51. Ratzinger, J., Sigmund, T., and Gall, H. C.: On the Relation of Refactorings and Software Defect Prediction, in Proc. of the 5th Working Conference on Mining Software Repositories (MSR’08), 2008, pp.  35–38.
  52. Robbes, R. and Lanza, M.: SpyWare: A Change-aware Development Toolset, in Proc. of the 30th International Conference on Software Engineering (ICSE’08), 2008, pp.  847–850.
  53. Soares, G., Catao, B., Varjao, C., Aguiar, S., Gheyi, R., and Massoni, T.: Analyzing Refactorings on Software Repositories, in Proc. of the 2011 25th Brazilian Symposium on Software Engineering (SBSE’11), 2011, pp.  164–173.
  54. Soares, G., Gheyi, R., Murphy-Hill, E., and Johnson, B.: Comparing Approaches to Analyze Refactoring Activity on Software Repositories, Journal of Systems and Software, Vol. 86, No. 4(2013), pp. 1006–1022.
  55. Soares, G., Gheyi, R., Serey, D., and Massoni, T.: Making Program Refactoring Safer, IEEE Software, Vol. 27, No. 4(2010), pp. 52–57.
  56. Soetens, Q. and Demeyer, S.: ChEOPSJ: Change-Based Test Optimization, in Proc. of the 16th European Conference on Software Maintenance and Reengineering (CSMR’12), 2012, pp.  535–538.
  57. Soetens, Q. D., Pérez, J., and Demeyer, S.: An Initial Investigation into Change-Based Reconstruction of Floss-Refactorings, in Proc. of the 29th IEEE International Conference on Software Maintenance (ICSM’13), 2013, pp.  384–387.
  58. Stroggylos, K. and Spinellis, D.: Refactoring–Does It Improve Software Quality?, in Proc. of the 5th International Workshop on Software Quality (WoSQ’07), 2007.
  59. Taneja, K., Dig, D., and Xie, T.: Automated Detection of API Refactorings in Libraries, in Proc. of the 22nd IEEE/ACM International Conference on Automated Software Engineering (ASE’07), 2007, pp.  377–380.
  60. Thangthumachit, S., Hayashi, S., and Saeki, M.: Understanding Source Code Differences by Separating Refactoring Effects, in Proc. of the 18th Asia-Pacific Software Engineering Conference (APSEC’11), 2011, pp.  339–347.
  61. Weißgerber, P. and Diehl, S.: Identifying Refactorings from Source-Code Changes, in Proc. of the 21st International Conference on Automated Software Engineering (ASE’06), 2006, pp.  231–240.
  62. Xing, Z. and Stroulia, E.: UMLDiff: An Algorithm for Object-oriented Design Differencing, in Proc. of the 20th International Conference on Automated Software Engineering (ASE’05), 2005, pp.  54–65.
  63. Xing, Z. and Stroulia, E.: Refactoring Detection Based on UMLDiff Change-Facts Queries, in Proc. of the 13th Working Conference on Reverse Engineering (WCRE’06), 2006, pp.  263–274.
  64. Xing, Z. and Stroulia, E.: Differencing logical UML models, Automated Software Engineering, Vol. 14, No. 2(2007), pp. 215–259.