Abstract

Pain assessment through observational pain scales is necessary for special categories of patients, such as neonates, patients with dementia and critically ill patients. The recently introduced Prkachin-Solomon score allows pain assessment directly from facial images, opening the path for multiple assistive applications. In this paper, we introduce the Histograms of Topographical (HoT) features, a generalization of the topographical primal sketch, for the description of the face parts contributing to the mentioned score. We propose a semi-supervised, clustering-oriented, self–taught learning procedure developed on the emotion oriented Cohn-Kanade database. We use this procedure to improve the discrimination between different pain intensity levels and the generalization with respect to the monitored persons, while testing on the UNBC McMaster Shoulder Pain database.

 

Pain Intensity Estimation by a Self–Taught Selection of Histograms of Topographical Features

 

Corneliu Florea corneliu.florea@upb.ro

Image Processing and Analysis Laboratory
University “Politehnica” of Bucharest, Romania, Splaiul Independenţei 313

Laura Florea laura.florea@upb.ro

Image Processing and Analysis Laboratory
University “Politehnica” of Bucharest, Romania, Splaiul Independenţei 313

Raluca Boia rboia@imag.pub.ro

Image Processing and Analysis Laboratory
University “Politehnica” of Bucharest, Romania, Splaiul Independenţei 313

Alessandra Bandrabur abandrabur@imag.pub.ro

Image Processing and Analysis Laboratory
University “Politehnica” of Bucharest, Romania, Splaiul Independenţei 313

Constantin Vertan constantin.vertan@upb.ro

Image Processing and Analysis Laboratory
University “Politehnica” of Bucharest, Romania, Splaiul Independenţei 313


Introduction

In the past, the computer was a mere tool for easing computation. Rapid progress in computer science and in integrated micro-mechatronics has enabled the appearance of assistive technologies. These can improve the quality of life for the disabled, patients and the elderly, but also for healthy people. Assistive technologies include monitoring systems connected to an alarm system, helping caregivers manage the activities associated with vulnerable people. One such example is automatic, non-intrusive monitoring for pain assessment.

The International Association for the Study of Pain defines pain as "an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage" (J. Boyd et al., 2011). Assessment of pain was shown to be a critical factor for psychological comfort during the periods spent waiting at emergency units (Gawande, 2004). Typically, the assessment is based primarily on self–report, for which several procedures are at hand; details can be found in (Hugueta et al., 2010) and the references therein. Complementary to self-report, there are observational scales for pain assessment, reviewed in (von Baeyer & Spagrud, 2007). If both methods are available, self-report should be the preferred choice (Shavit et al., 2008).

Yet, there are several aspects that strongly motivate the necessity of the observational scales: (1) Adult patients typically self-assess pain intensity using a no-reference system, which leads to inconsistent properties across the scale, reactivity to suggestion, efforts at impressing unit personnel, etc. (Hadjistavropoulos & Craig, 2004); (2) Patients with difficulties in communication (e.g. newborns, patients with dementia, critically ill patients) cannot self–report, and assessment by specialized personnel is demanded (von Baeyer & Spagrud, 2007), (Haslam et al., 2011); (3) Pain assessment by nurses encounters several difficulties. The third aspect is detailed by Manias et al. (2002), who name four practical barriers that emerged from thorough field observations: (a) nurses encounter interruptions while responding to activities relating to pain; (b) nurses' attentiveness to patient cues of pain varies due to other activities related to the patients; (c) nurses' interpretations of pain vary, with incisional pain being the primary target of attention; and (d) nurses attempt to address competing demands of fellow nurses, doctors and patients. To respond to these aspects, automatic appraisal of pain by observational scales is urgently needed.

Among the multiple observational scales existing at the moment, the revised Adult Nonverbal Pain Scale (ANPS-R) and the Critical Care Pain Observation Tool (CPOT) have been consistently found reliable (Stites, 2013), (Topolovec-Vranic et al., 2013), (Chanques et al., 2014). Both scales include the evaluation of multiple factors, of which the first is the dynamics of the facial expression. Intense pain is marked by frequent grimacing, tearing, frowning and a wrinkled forehead (in ANPS-R) and, respectively, by frowning, brow lowering, orbit tightening, levator contraction and eyelids tightly closed (in CPOT).

The mentioned facial dynamics, in fact, overlap some of the action units (AU) as described by the seminal Facial Action Coding System (FACS) introduced by Ekman et al. (2002). A practical formula contributing to the overall pain intensity assessment from facial dynamics is the Prkachin - Solomon formula (Prkachin & P. Solomon, 2008). Here, the pain is quantized in 16 discrete levels (0 to 15), obtained from the intensities of the 6 contributing face AUs:

$Pain = AU4 + \max(AU6, AU7) + \max(AU9, AU10) + AU43 \qquad (1)$
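
To make the scoring concrete, a minimal sketch of eq. (1) follows; the function name and the intensity encoding (0-5 for the graded AUs, binary for AU 43) are our own illustrative conventions, consistent with the 16 levels mentioned above.

```python
def prkachin_solomon(au4, au6, au7, au9, au10, au43):
    """Eq. (1): AU4, AU6/AU7 and AU9/AU10 are coded on a 0-5 intensity
    scale, AU43 (eye closure) is binary, giving 16 levels (0..15)."""
    return au4 + max(au6, au7) + max(au9, au10) + (1 if au43 else 0)

# Brow lowering at 3, cheek raiser at 2, eyes closed: 3 + 2 + 0 + 1 = 6
print(prkachin_solomon(au4=3, au6=2, au7=0, au9=0, au10=0, au43=True))
```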

The Prkachin - Solomon formula has the cogent merit of permitting direct appraisal of the pain intensity from digital face image sequences acquired by regular video-cameras and image analysis. Thus, it clears the path for multiple applications in the assistive computer vision domain. For instance, in probably the most intuitive implementation (Ashraf et al., 2009), by means of digital recording, a patient is continuously monitored and when an expression of pain is detected, an alert signal triggers the nurse’s attention; he/she will further check the patient’s state and will consider measures for pain alleviation. Such a system may be employed in intensive care units, where its main purpose would be to reduce the workload and increase the efficiency of the nursing staff. Alternatively, it could be used for continuous monitoring of patients with communication disabilities (e.g. neonates) and reduce the cost for permanent caring.

Following further developments (i.e. reaching higher accuracy) in both computer vision and pain assessment and management, automatic systems that use the information extracted from video sequences could be applied to infer the pain intensity level and to automatically administer palliative care.

Another area of applicability is monitoring people performing physical exercises. For patients recovering from orthopedic procedures, such an application would permit near real-time identification of the movements causing pain, thus leading to more efficient adjustments of the recovery program. For athletes or for ordinary people training, such an application would contribute to the identification of the weaker muscle groups and to faster improvement of the training program.

In this paper we propose a system for face analysis and, more precisely, for pain intensity estimation, as measured by the Prkachin–Solomon formula, from video sequences. We claim the following contributions (this paper extends the work in (Florea et al., 2014) by improving the transfer method, by supplementary and more intensive testing, and by adding temporal filtering of the sequences, thus boosting the overall performance): (1) we introduce the Histograms of Topographical (HoT) features, which are able to address variability in face images; (2) in order to surmount the limited number of persons, a trait typical of medical–oriented image databases, we propose a semi-supervised, clustering–oriented, self–taught learning procedure; (3) we propose a machine learning based temporal filtering to reduce the influence of blinks and to increase the overall accuracy; (4) we propose a system for face dynamics analysis that, applied to pain intensity estimation, leads to qualitative results.

Related Work

Although other means of investigation (e.g. bio-medical signals) were discussed (Werner et al., 2013), recently significant efforts have been made to identify reliable and valid facial indicators of pain, in an effort to develop non-invasive systems. Mainly, these are correlated with the appearance of three databases: the Classification of Pain Expressions (COPE) database (Brahnam et al., 2007), which focuses on infant classification of pain expressions, the Bio-Heat-Vid (Werner et al., 2013) database, containing records of induced pain, and the UNBC McMaster Pain Database (Lucey et al., 2011), with adult subjects suffering from shoulder pain. As noted in the introduction, the majority of the face–based pain estimation methods exploit the Action Unit (AU) face description, previously used in emotion detection, with which pain is correlated. A detailed review of the emotion detection methods can be found in the work of Zeng et al. (2009) and, more recently, in the work of Cohn and De la Torre (2014).

On the COPE database, Brahnam et al. (2007) exploited the Discrete Cosine Transform (DCT) for image description, followed by Sequential Forward Selection for reducing the dimensionality and nearest neighbor classification for infant pain detection. On the same database, Gholami et al. (2010) relied on a relevance vector machine (RVM) applied directly to manually selected infant faces for improved binary pain detection, while Guo et al. (2012) used Local Binary Patterns (LBP) and their extensions for improved face description and accuracy. We note that the COPE database, containing 204 images of 26 neonates, is rather limited in extent and is marked with only binary annotations (i.e. pain and no-pain).

Werner et al. (2013) fused data acquired from multiple sources with information from a head pose estimator to detect the triggering level and the maximum level of pain supportability, while testing on the BioVid Heat Pain database. One of their contributions was to show that various persons have highly different pain triggering and supportability levels, thus arguing for pain assessment with multiple grades in order to accommodate personal pain profiles. At the moment of writing this paper, the database is not yet public.

Pain recognition from facial expressions was addressed by Littlewort et al. (2007), who applied a previously developed AU detector, complemented by Gabor filters, AdaBoost and Support Vector Machines (SVM), to separate fake from genuine cases of pain; their work is based on AUs, thus anticipating the more recent proposals built in conjunction with the UNBC McMaster Pain Database.

Thus, due to its size and to the fact that it was made public with expert annotations, the UNBC McMaster Pain Database is currently the de facto dataset for facial based pain estimation. In this direction, Lucey et al. (2012) used Active Appearance Models (AAM) to track and align the faces on manually labelled key-frames, further feeding them to an SVM for frame-level classification. A frame is labelled as "with pain" if any of the pain related AUs found earlier by Prkachin & Solomon (2008) to be relevant is present (i.e. a pain score higher than 0). Chen et al. (2013) transferred information from other patients to the current patient, within the UNBC database, in order to enhance the pain classification accuracy over Local Binary Pattern (LBP) features and the AAM landmarks used by Lucey et al. (2012). Chu et al. (2013) introduced an approach based on Kernel Mean Matching, named the Selective Transfer Machine (STM), trained for person-specific AU detection and further tested on pain detection. Zen et al. (2014) and Sangineto et al. (2014) trained person-specific classifiers augmented with transductive parameter transfer for expression detection with applicability to pain.

We note that all these methods focus on binary detection (i.e. pain/no pain), thus addressing only the first level of potential applications. Furthermore, pain (i.e. the true case) is declared if at least one of the AUs from eq. (1) is present, a situation which occurs in other expressions too. For instance, AU 9 and AU 10 are also associated with disgust (Lucey et al., 2010). Another corner case is related to the binary AU 43, which signals the blink; obviously not all blinks are related to pain, and the annotation of the UNBC database acknowledges this fact.

Multi-level pain intensity is estimated by the methods proposed in (Kaltwang et al., 2012) and (Rudovic et al., 2013). Kaltwang et al. (2012) jointly used LBP, the Discrete Cosine Transform (DCT) and AAM landmarks in order to estimate the pain intensity either via AUs or directly at sequence level. Rudovic et al. (2013) introduced a Conditional Random Field that is further particularized for the person and for the expression dynamics and timing, so as to obtain increased accuracy.

Given the mentioned possible confusion between pain and other expressions, the explicit findings of (Werner et al., 2013) regarding person dependent pain variability, and the implicit assumption of the pain scales, which use multiple degrees of pain intensity, our work focuses on pain intensity estimation. A byproduct will be pain detection.

We propose a method working in a typical pattern recognition framework. Given a face image and its facial landmarks, our method identifies the regions of interest, which are further described by Histograms of Topographical features. The important dimensions of the face description are selected by a self-taught learning process, followed by the actual pain assessment via a machine learning procedure. An overview of the proposed method is presented in figure 1.

Figure 1: The schematic of the proposed continuous pain estimation method.
Paper Organization

The remainder of the paper is structured as follows: first we present the used databases; next we review state of the art feature descriptors and introduce the proposed Histograms of Topographical features; the procedure chosen for transfer learning, together with a discussion of alternatives, follows. We then present the system for still, independent, image–based pain estimation, followed by the description of the temporal filtering of video sequences. Implementation details and results come next, and the paper ends with discussions and conclusions.

Databases

As mentioned, to the best of our knowledge there exist three databases with pain annotations. COPE (Brahnam et al., 2007) is rather small and has only binary pain annotations, while Bio-Heat-Vid (Werner et al., 2013) is yet to be made public. The UNBC-McMaster Pain Database provides pain intensity annotations for more than 48,000 images.

The UNBC-McMaster Shoulder Pain Database

We test the proposed system on the publicly available UNBC-McMaster Shoulder Pain Expression Archive Database (Lucey et al., 2011). This database contains face videos of patients suffering from shoulder pain as they perform motion tests of their arms. The movement is either voluntary, or the subject's arm is moved by the physiotherapist. Only one of the arms is affected by pain, but movements of the other arm are recorded as well, to form a control set. The database contains 200 sequences of 25 subjects, totalling 48,398 frames. One of the subjects lacks pain annotations and will thus be excluded from testing/training. Examples of pain faces illustrating the variability of expressions are shown in figure 2.

Figure 2: Face crops from UNBC-McMaster Shoulder Pain Expression Archive Database (Lucey et al., 2011). The top two rows illustrate the variability of pain faces while the bottom row illustrates non-pain cases. Note the similarity between the two situations.

The Prkachin - Solomon score for pain intensity is provided by the database creators, thus acting as ground truth for the estimation process. While in our work we do not focus on computing the AUs separately, eq. (1) explicitly confirms that databases built for AU recognition are relevant for pain intensity estimation.

The training-testing scheme is the same as in (Lucey et al., 2011) or (Kaltwang et al., 2012): leave-one-person-out cross validation; our choice is further motivated in the experimental setup section.

The Cohn-Kanade Database

Noting the limited number of persons available within the UNBC database (i.e. only 23 for the training phase), we extend the data utilized for learning with additional examples from a non-pain specific database, more precisely the Cohn-Kanade database (Kanade et al., 2000). It contains 486 sequences from 97 persons; each sequence begins with a neutral expression and proceeds to a peak expression. The peak expression of each sequence is coded in the FACS system, thus having the AUs annotated. Relevant neutral/expression pairs from the Cohn-Kanade database are shown in figure 3.

Figure 3: Face crops pairs (neutral - top row and respectively with expression bottom row) from the Cohn-Kanade database.
Histograms of Topographical Features

To extract the facial deformation due to expression, we introduce a novel local/global descriptor, namely the Histograms of Topographical (HoT) features. To properly place it in context, we start by reviewing the most important image descriptors.

Prior Image Descriptors

Many types of local image descriptors are used across the plethora of computer vision applications (Tuytelaars & Mikolajczyk, 2008). The majority of the solutions computed in the image support domain (here, alternatively to the image domain, we also consider the spectral domains where popular descriptors such as DCT or wavelet coefficients are defined) are approachable within the framework of the Taylor series expansion of the image function, namely with respect to the order of the derivative used.

Considering the zero-order coefficient of the Taylor series, i.e. the image values themselves, one of the most popular descriptors is the histogram of image values (or the values used directly), employed for instance in AAM (Cootes et al., 2001) to complement the landmark shape. Next, relying on the first derivative (i.e. the directional gradient), several histogram based descriptors such as HOG (Dalal & Triggs, 2005) or SIFT (Lowe, 2004) gained popularity.

The second-order image derivative (i.e. the Hessian matrix) is stable with respect to image intensity and scale and is part of the SIFT (Lowe, 2004) and SURF (Bay et al., 2008) image key-point detectors. Deng et al. (2007) used the dominant eigenvalue of the Hessian matrix to describe regions in terms of principal curvature, while Frangi et al. (1998) deployed a hard classification of the Hessian eigenvalues in each pixel (thus identifying the degree of local curviness) to describe tubular structures (e.g. blood vessels) in medical images.

Summarizing, we stress that all the mentioned state of the art systems rely on information gathered from a single Taylor coefficient, of order zero, one or two, in order to describe images globally or locally.

The approximation of the image in terms of the first two Taylor series coefficients is the foundation of the topographical primal sketch introduced by Haralick et al. (1983), which was inspired by the earlier Laplacian based sketch representation of Marr and Hildreth (1980). The primal sketch was further adopted for face description by Wang and Yin (2007). In the primal sketch, the description of the image is limited to a maximum of 12 (or 16) classes, which correspond to the basic topographical elements. A further extension lies in later work (2009) that applied the Hessian for locating key-points and described their vicinity with the histogram of color values (order zero) and with the histogram of oriented gradients (order one). Earlier work (2006) developed both first and second derivative blob measures for an approach derived from primal sketch features, in terms of scale-invariant edge and ridge features; yet it focuses only on interest points and uses different measures than our proposal.

In parallel to our work, Lindeberg (2014) proposed four strength measures for identifying interest points, extracted from the Hessian's eigenvalues by analogy with the second order moment based Harris and Shi-Tomasi operators (Shi & Tomasi, 1994).

We consider that all pixels from a region of interest carry important topographic information which can be gathered in orientation histograms or in normalized magnitude histograms. In certain cases, only a combination of these may prove to be informative enough for a complete description of images.

The Topographical Primal Sketch

In a seminal work, Haralick et al. (1983) introduced the so-called topographical primal sketch. The gray-scale image is considered as a function $I(\mathbf{x})$, with $\mathbf{x} = (x, y)$. Given such a function, its approximation around any location $\mathbf{x}_0$ is obtained using the second-order Taylor series expansion:

$I(\mathbf{x}_0 + \delta\mathbf{x}) \approx I(\mathbf{x}_0) + \delta\mathbf{x}^T \nabla I(\mathbf{x}_0) + \frac{1}{2}\, \delta\mathbf{x}^T H(\mathbf{x}_0)\, \delta\mathbf{x} \qquad (2)$

where $\nabla I$ is the two-dimensional gradient and $H$ is the Hessian matrix.

Eq. (2) states that a surface is composed of a continuous component and some local variation. A first order expansion uses only the gradient term $\nabla I$ (the inclination amplitude) to detail the "local variation", while the second order expansion (i.e. the Hessian $H$) complements it with information about the curvature of the local surface. Considering the gradient and the Hessian eigenvalues, a region can be classified into one of several primal topographical features. This implies a hard classification and carries a limitation burden, as it is not able to distinguish, for instance, between a deep and a shallow pit. We propose instead a smoother and more adaptive feature set, considering the normalized local histograms extracted from the magnitude of the Hessian eigenvalues, the eigenvector orientations and, respectively, the magnitude and the orientation of the gradient.

Frangi et al. (1998) employed the concepts of linear scale space theory (Iijima, 1962), (Florack et al., 1992), (Lindeberg, 1994) to elegantly compute the image derivatives. Here, the image $I(\mathbf{x})$ is replaced by its scale space representation $L(\mathbf{x}; t)$:

$L(\mathbf{x}; t) = G(\mathbf{x}; t) * I(\mathbf{x}) \qquad (3)$

where $*$ stands for convolution and $G(\mathbf{x}; t)$ is a rotationally symmetric Gaussian kernel with variance $t$ (the scale parameter):

$G(\mathbf{x}; t) = \frac{1}{2\pi t} \exp\left( -\frac{x^2 + y^2}{2t} \right) \qquad (4)$

The differentiation is computed by a convolution with the derivative of the Gaussian kernel:

$\frac{\partial}{\partial x} L(\mathbf{x}; t) = \left( \frac{\partial}{\partial x} G(\mathbf{x}; t) \right) * I(\mathbf{x}) \qquad (5)$

In the scale space, the Hessian matrix at location $\mathbf{x}$ and scale $t$ is defined as:

$H(\mathbf{x}; t) = \begin{bmatrix} L_{xx}(\mathbf{x}; t) & L_{xy}(\mathbf{x}; t) \\ L_{xy}(\mathbf{x}; t) & L_{yy}(\mathbf{x}; t) \end{bmatrix} \qquad (6)$

where $L_{xx}(\mathbf{x}; t)$ is the convolution of the Gaussian second order derivative $\frac{\partial^2}{\partial x^2} G(\mathbf{x}; t)$ with the image at location $\mathbf{x}$, and similarly for $L_{xy}$ and $L_{yy}$.

The switch from the initial image space to the scale space not only simplifies the calculus, but its implicit smoothing also reduces the influence of noise on the topographic representation, an influence signaled as a weak point from inception by Haralick et al. (1983).

The eigenvalue decomposition of the Hessian yields the principal directions in which the local second order structure of the image can be decomposed. The second order relates to the surface curvature and, thus, to the directions of the largest/smallest bending. We denote the two eigenvalues of the Hessian matrix by $\lambda_1$ and $\lambda_2$, with $|\lambda_1| \geq |\lambda_2|$. The eigenvector corresponding to the largest eigenvalue is oriented in the direction of the largest local curvature; this direction of the principal curvature is denoted by $\phi$. A visual example with gradient and curvature images of a face is shown in figure 4.
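
As an illustration of eqs. (3)-(6) and of the eigen decomposition above, the following sketch computes the scale-space Hessian with Gaussian derivative filters and extracts $\lambda_1$, $\lambda_2$ and $\phi$ per pixel; the scale value and the $|\lambda_1| \geq |\lambda_2|$ sorting convention are our assumptions, not prescriptions from the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigen(image, sigma=2.0):
    """Scale-space Hessian (eqs. (3)-(6)) and its eigen decomposition.
    Returns (lam1, lam2, phi) per pixel, with |lam1| >= |lam2|; sigma
    is an assumed scale."""
    img = image.astype(np.float64)
    # Gaussian derivative filters implement eq. (5); axis 0 is y.
    Lyy = gaussian_filter(img, sigma, order=(2, 0))
    Lxx = gaussian_filter(img, sigma, order=(0, 2))
    Lxy = gaussian_filter(img, sigma, order=(1, 1))
    # Closed-form eigenvalues of the symmetric 2x2 Hessian of eq. (6).
    half_tr = (Lxx + Lyy) / 2.0
    disc = np.sqrt(((Lxx - Lyy) / 2.0) ** 2 + Lxy ** 2)
    e1, e2 = half_tr + disc, half_tr - disc
    # Sort so that |lam1| >= |lam2| in every pixel.
    swap = np.abs(e2) > np.abs(e1)
    lam1 = np.where(swap, e2, e1)
    lam2 = np.where(swap, e1, e2)
    # Angle (from the x axis) of the eigenvector belonging to the
    # algebraically larger eigenvalue, i.e. the major principal axis.
    phi = 0.5 * np.arctan2(2.0 * Lxy, Lxx - Lyy)
    return lam1, lam2, phi
```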

Figure 4: Computing the HoT features for a face: (a) Original face image. (b) The image represented as a surface, (c) Gradient orientation image, (d) Gradient magnitude image, (e) Curvature orientation image and (f) Curvature strength image.
HoT Feature Definition

In the remainder of the work, for each region of interest $R$, the following HoT descriptors will be used:

  • Second order data (Hessian):

    • The histogram of hard voting of image surface curvature orientation, $h_1$. For each pixel in $R$, "1" is added to the bin of the ridge/valley orientation, extracted by computing the angle $\phi$ of the first Hessian eigenvector, if the second eigenvalue is larger than a threshold $T_\lambda$:

      $h_1(\theta) = C_1 \cdot \left|\{ \mathbf{x} \in R : \phi(\mathbf{x}) \in \theta, \ |\lambda_2(\mathbf{x})| > T_\lambda \}\right| \qquad (7)$
    • The histogram of soft voting ridge orientation, $h_2$, adds, instead of "1", the difference between the absolute values of the Hessian eigenvalues:

      $h_2(\theta) = C_2 \sum_{\mathbf{x} \in R, \ \phi(\mathbf{x}) \in \theta} \left( |\lambda_1(\mathbf{x})| - |\lambda_2(\mathbf{x})| \right) \qquad (8)$

      The $h_1$ and $h_2$ histograms each produce a vector of length equal to the number of orientation bins and describe the curvature strength in the image pixels.

    • The range–histogram of the smallest eigenvalue, $h_3$, given a predefined range interval divided into bins $v$:

      $h_3(v) = C_3 \cdot \left|\{ \mathbf{x} \in R : \lambda_2(\mathbf{x}) \in v \}\right| \qquad (9)$

      Inspired by the Shi-Tomasi operator (Shi & Tomasi, 1994), Lindeberg (2014) proposed to scan the smaller Hessian eigenvalues in a region and to select their maximum as an interest measure of that region. We differ by considering that not only the extremum of the minimum eigenvalues matters: we gather all the data in a histogram, so as to have a global representation of the region.

    • The range–histogram of the differences between the eigenvalues, $h_4$, given a predefined range interval of differences:

      $h_4(v) = C_4 \cdot \left|\{ \mathbf{x} \in R : |\lambda_1(\mathbf{x})| - |\lambda_2(\mathbf{x})| \in v \}\right| \qquad (10)$
  • First order data (gradient):

    • The histogram of gradient orientation, $h_5$ (Dalal & Triggs, 2005); each pixel having a gradient magnitude larger than a threshold $T_g$ casts one vote;

    • The histogram of gradient magnitude, $h_6$. The magnitudes are between 0 and a maximum value (100).

The constants $C_1, \dots, C_4$ ensure that each histogram is normalized. Experimentally chosen values are used for the thresholds $T_\lambda$ and $T_g$. Each of the histograms is computed on 8 bins.
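
A possible implementation of the six histograms $h_1, \dots, h_6$ for one region is sketched below; the thresholds and bin ranges are placeholders, since the text only fixes the number of bins (8). With 6 histograms of 8 bins over 5 regions of interest, the concatenation yields the 240 dimensions mentioned later.

```python
import numpy as np

def hot_region_descriptor(lam1, lam2, phi, gmag, gori, n_bins=8,
                          t_lambda=1.0, t_grad=1.0, max_mag=100.0):
    """Sketch of h1..h6 for one region of interest R. Inputs are the
    per-pixel arrays restricted to R. t_lambda, t_grad and the bin
    ranges are placeholders (set experimentally in the paper)."""
    def norm(h):                      # the C_i normalization constants
        s = h.sum()
        return h / s if s > 0 else h.astype(float)

    strength = np.abs(lam1) - np.abs(lam2)
    # (7) hard-voted curvature orientation: strong pixels vote "1".
    h1, _ = np.histogram(phi[np.abs(lam2) > t_lambda], bins=n_bins,
                         range=(-np.pi / 2, np.pi / 2))
    # (8) soft-voted orientation, weighted by |lam1| - |lam2|.
    h2, _ = np.histogram(phi, bins=n_bins, range=(-np.pi / 2, np.pi / 2),
                         weights=strength)
    # (9) range-histogram of the smallest eigenvalue.
    h3, _ = np.histogram(lam2, bins=n_bins, range=(-max_mag, max_mag))
    # (10) range-histogram of the eigenvalue differences.
    h4, _ = np.histogram(strength, bins=n_bins, range=(0, max_mag))
    # Gradient orientation (votes only above t_grad) and magnitude.
    h5, _ = np.histogram(gori[gmag > t_grad], bins=n_bins,
                         range=(-np.pi, np.pi))
    h6, _ = np.histogram(np.clip(gmag, 0, max_mag), bins=n_bins,
                         range=(0, max_mag))
    return np.concatenate([norm(h) for h in (h1, h2, h3, h4, h5, h6)])
```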

Transfer Learning

The target database of the proposed system, UNBC, is extensive in terms of number of frames, but rather limited with respect to the number of persons (only 25) and marked by high inter-person similarity. This is a typical trait of medical–oriented image databases, as there are not so many ill persons to be recorded. To increase the robustness of the proposed algorithm, a new mechanism for transfer learning is proposed.

Our work is inspired by the "self–taught learning" paradigm (Raina et al., 2007), which is conceptually similar to inductive transfer learning (Jialin-Pan & Yang, 2010). A source database, described by the unlabelled data $X_S = \{\mathbf{x}_S^{(1)}, \dots, \mathbf{x}_S^{(m)}\}$, is used to learn the underlying data structure, so as to enhance the classification over the labelled data of the target database, $\{(\mathbf{x}_T^{(i)}, y_T^{(i)})\}$, where $\mathbf{x}_T$ is the data and $y_T$ are the labels. According to (Raina et al., 2007), the data structure can be learned by solving the following optimization problem:

$\min_{\mathbf{b}, \mathbf{a}} \sum_i \left\| \mathbf{x}_S^{(i)} - \sum_j a_j^{(i)} \mathbf{b}_j \right\|_2^2 + \beta \left\| \mathbf{a}^{(i)} \right\|_1 \quad \text{s.t. } \|\mathbf{b}_j\|_2 \leq 1 \qquad (11)$

The minimization problem from eq. (11) may be interpreted as a generalization of the Principal Component Analysis concept (PCA is retrieved by solving $\min_{\mathbf{b}, \mathbf{a}} \sum_i \| \mathbf{x}^{(i)} - \sum_j a_j^{(i)} \mathbf{b}_j \|_2^2$ subject to the $\mathbf{b}_j$ being orthonormal), as it optimizes an overall representation with the purpose of identifying the best minimal set of linear projections. PCA aims to decompose the original data into low-rank data plus a small perturbation, in contrast with Robust PCA (Candes et al., 2011), which decomposes the data into a low-rank matrix plus a sparse matrix.

Taking into account that the interest is in classification/regression, we consider that: (1) the source database should be relevant to the classification task over the target database; (2) the original features should form relevant clusters, such that (3) the optimization over the source database preserves the local grouping. A modality to preserve the original data clustering is to compute the Locality Preserving Indexing with the similarity matrix $W$ based on the cosine distance:

$W_{ij} = \frac{\mathbf{x}_i^T \mathbf{x}_j}{\|\mathbf{x}_i\| \, \|\mathbf{x}_j\|} \qquad (12)$

We replaced the cosine distance used in (Florea et al., 2014) with the heat kernel, as in the case of Locality Preserving Projections (He & Niyogi, 2003), with a further adaptation to our problem:

$W_{ij} = \begin{cases} \exp\left( -\frac{\| \mathbf{x}_i - \mathbf{x}_j \|^2}{t} \right), & \text{if } \mathbf{x}_j \in N_k(\mathbf{x}_i) \\ 0, & \text{otherwise} \end{cases} \qquad (13)$

where $N_k(\mathbf{x}_i)$ contains the $k$ closest neighbors of $\mathbf{x}_i$; a neighbor is considered only if its annotation contains at least one of the action units from eq. (1). The optimization runs over the similarity matrix, such that we solved the following regularized least squares problem over the unlabelled source database:

$\mathbf{b}_j = \arg\min_{\mathbf{b}} \sum_i \left( \mathbf{b}^T \mathbf{x}_S^{(i)} - z_i^{(j)} \right)^2 + \alpha \| \mathbf{b} \|^2 \qquad (14)$

where $z_i^{(j)}$ is the $i$-th element of the $j$-th eigenvector of the symmetrical similarity matrix $W$. This process of extracting the data representation (eq. (13), without our problem-specific adaptation, together with eq. (14)) forms the so-called spectral regression, introduced by Cai et al. (2007). A similar transfer learning method was proposed in 2012, with two core differences: data similarity was computed using a hard assignment, as compared to the soft approach from eq. (13), and unsupervised clustering was performed on the target database.

Finally, the newly labelled data is obtained by classification of the projected vectors $\tilde{\mathbf{x}}_T$, determined as:

$\tilde{\mathbf{x}}_T = B^T \mathbf{x}_T \qquad (15)$

where $B = [\mathbf{b}_1, \dots, \mathbf{b}_d]$.

In our algorithm, the neutral images and, respectively, the images with the apex emotion from the Cohn-Kanade database formed the unlabelled data of the source database, while UNBC was the target, labelled, database. The transfer learning process and the projection equation (15) were applied independently on the Hessian based histograms ($h_1, \dots, h_4$) and, respectively, on the gradient based histograms ($h_5$, $h_6$).

The transfer learning also includes a dimensionality reduction (i.e. a feature selection procedure). The full HoT feature has 240 dimensions, while there are 7937 images with a Prkachin-Solomon score higher than 0. Taking into account that only a part of these is utilized for training, feature selection is required to prevent the classifier from falling into the curse of dimensionality. The Hessian based histograms and the gradient based ones are each reduced to a considerably smaller number of dimensions.
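
The transfer step of eqs. (13)-(15) can be sketched as below. This is a simplified rendition under stated assumptions: the eigenvectors are taken directly from the symmetric similarity matrix (rather than from a normalized graph eigenproblem), the AU-based adaptation of eq. (13) is omitted, and $k$, $t$ and the ridge penalty are illustrative values.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.neighbors import NearestNeighbors

def self_taught_projection(X_src, n_dims, k=5, t=1.0, alpha=0.01):
    """Learn the projection B of eqs. (13)-(15) on unlabelled source
    features X_src (n_samples x n_features); k, t, alpha are assumed."""
    n = X_src.shape[0]
    # Heat-kernel similarity over the k nearest neighbours (eq. (13)).
    dist, idx = NearestNeighbors(n_neighbors=k + 1).fit(X_src) \
                                                   .kneighbors(X_src)
    W = np.zeros((n, n))
    for i in range(n):
        for d, j in zip(dist[i, 1:], idx[i, 1:]):    # skip self-match
            W[i, j] = W[j, i] = np.exp(-d ** 2 / t)
    # The leading eigenvectors of W provide the regression targets z.
    eigvals, eigvecs = np.linalg.eigh(W)
    Z = eigvecs[:, np.argsort(eigvals)[::-1][:n_dims]]
    # Regularized least squares fit of each eigenvector (eq. (14)).
    B = np.column_stack(
        [Ridge(alpha=alpha, fit_intercept=False).fit(X_src, Z[:, j]).coef_
         for j in range(n_dims)])
    return B  # target features are projected as X_target @ B (eq. (15))
```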

Pain Intensity Estimation

System Overview

The schematic of the proposed system for pain intensity assessment in independent, still images is presented in figure 5 (b). The procedure for HoT features extraction is presented in figure 5 (a).

Regions of Interest and Pre-processing

The UNBC landmarks are accurate (Lucey et al., 2011), yet their information is insufficient for robust pain estimation. In this sense, Kaltwang et al. (2012) reported that using only the landmark points for direct pain intensity estimation, a mean square error of 2.592 and a correlation coefficient of 0.363 are achieved (as also shown in table 1).

Due to the specific nature of the AUs contributing to pain, and based on the 22 landmarks, we selected 5 areas of interest, shown in figure 5 (a), as carrying potentially useful data for pain intensity estimation.

Due to the variability of the encountered head poses, we started by roughly normalizing the images: we ensured that the eyes were horizontal and that the inter–ocular distance was always the same (i.e. 50 pixels). Out–of–plane rotation was not dealt with explicitly, but implicitly, through the use of histograms as features. Since the 8 histogram bins span 360 degrees, robustness to head rotation extends up to the bin width of 45 degrees.
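
The rough normalization can be implemented, for instance, with a similarity transform built from the two eye centers; the snippet below is one such sketch (how the eye centers are obtained from the provided landmarks is left to the caller).

```python
import cv2
import numpy as np

def normalize_face(image, left_eye, right_eye, iod=50.0):
    """Rotate and scale the face so the eyes are horizontal and the
    inter-ocular distance equals iod pixels; left_eye and right_eye
    are (x, y) coordinates derived from the landmarks."""
    dx, dy = np.subtract(right_eye, left_eye)
    angle = np.degrees(np.arctan2(dy, dx))   # in-plane eye-line tilt
    scale = iod / np.hypot(dx, dy)           # bring the IOD to iod px
    cx, cy = np.mean([left_eye, right_eye], axis=0)
    M = cv2.getRotationMatrix2D((float(cx), float(cy)), angle, scale)
    return cv2.warpAffine(image, M, image.shape[1::-1])
```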

Figure 5: (a) The feature extraction procedure. (b) The knowledge transfer system. The internal data representation is computed on unlabelled data from the Cohn-Kanade database to make use of its larger number of persons. The reduced data is fitted in order to predict pain intensity.
Temporal Filtering

In our previous work on the topic (Florea et al., 2014), and also in (Kaltwang et al., 2012), it was observed that, while counted by equation (1), the blink does not always signal pain. Unfortunately, the blink is sufficiently prominent that an automatic system may conclude it is pain. The main difference between the blink and the pain face is duration: blinks typically take less than 15 frames, while pain faces last longer. To further differentiate between the two, we consider three versions of temporal filtering of the sequences.

The first solution is a simple filtering aimed at reducing the noise. Here we start with a median filter over a temporal vicinity, followed by a linear regression (LR) over the same window to estimate the current value. The preferred window size was chosen empirically.
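
A minimal sketch of this first, median-plus-linear-regression filter follows; the window width is a placeholder, as the text selects it empirically.

```python
import numpy as np
from scipy.signal import medfilt

def temporal_lr(pain, w=15):
    """Median filter of width w followed by a per-window linear fit
    that re-estimates each frame; w must be odd and is a placeholder."""
    pain = np.asarray(pain, dtype=float)
    med = medfilt(pain, kernel_size=w)
    half = w // 2
    out = med.copy()
    t = np.arange(w)
    for i in range(half, len(med) - half):
        slope, intercept = np.polyfit(t, med[i - half:i + half + 1], 1)
        out[i] = slope * t[half] + intercept  # fit value at the centre
    return out
```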

The second and third solutions rely on machine learning approaches: given the data from the vicinity of the current frame, a classifier attempts to estimate a better value.

The difference between the two considered solutions lies in data description:

  • The feature vector is formed by the pain estimates for the frames in the vicinity of the current frame, taken from the sequence. One expects that, given a large enough window size, the classifier will learn to skip the blink. Here, the only classifier that produced good results was an MLP with two hidden layers (of 40 neurons each) and a single output. The MLP may be seen as a generalization of the linear regression, in the sense that different feature dimensions contribute with different weights.

  • The feature is obtained by considering statistical moments computed on increasing vicinities of the still image pain estimates (see the sketch after this list). With such a description, we expect that patterns that erroneously appear in the estimated data are learned and skipped in the testing. Here, the feature of frame $t$ is:

    $\mathbf{f}(t) = \left[ p(t),\ \sigma^2_{w_1}(t),\ \sigma^2_{w_2}(t),\ \dots,\ \sigma^2_{w_n}(t) \right] \qquad (16)$

    where $p(t)$ is the pain estimate for frame $t$, while $\sigma^2_w(t)$ is the variance of the pain estimates over a window centered in $t$ having the width $w$. This description is inspired from the total strict pixel ordering (Coltuc & Bolon, 1999), (Florea et al., 2007). Again, the best window sizes were found empirically.

    In this case an SVR leads to better correlation, while the 2-hidden-layer MLP shows a smaller mean square error.

The main idea behind these solutions is to gather data from vicinities larger than the blink duration and to allow the classifier to distinguish between blinks relevant to pain and those which are not. Furthermore, we determined that still pain estimation produces characteristic patterns of estimates at pain onset and offset, and we aim to improve the performance in such cases.
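
The strict-ordering feature of eq. (16), announced in the list above, may be assembled as follows; the set of window widths is an assumption, since the empirically chosen values are not listed here.

```python
import numpy as np

def strict_ordering_feature(pain, t, widths=(3, 7, 15, 31)):
    """Feature of frame t as in eq. (16): the still estimate p(t) plus
    variances over centred windows of increasing (assumed) widths."""
    pain = np.asarray(pain, dtype=float)
    feats = [pain[t]]
    for w in widths:
        lo, hi = max(0, t - w // 2), min(len(pain), t + w // 2 + 1)
        feats.append(np.var(pain[lo:hi]))
    return np.array(feats)
```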

Implementation Details and Results

Evaluation Metrics

To objectively evaluate the performance of the proposed approach for the task of continuous pain intensity estimation according to the Prkachin-Solomon formula, several metrics are at hand. The mean squared error (MSE) and the Pearson correlation coefficient ($\rho$) between the predicted and the ground truth pain intensity are used for continuous pain intensity accuracy appraisal. These measures were also used by (Kaltwang et al., 2012), thus direct comparison is straightforward.

For pain detection, all frames with a Prkachin-Solomon score higher than zero are considered as pain, and the adopted measure is the Area Under the ROC Curve (AUC). While we argued against the relevance of binary detection for pain estimation in assistive computer vision, the measure is relevant to evaluate the theoretical performance of a face analysis method. The AUC was also used by several other works (Lucey et al., 2012), (Chen et al., 2013), (Chu et al., 2013), (Zen et al., 2014), (Sangineto et al., 2014), and it facilitates direct comparison with state of the art solutions.
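
The three measures can be computed as below; the binarization rule (pain iff the ground-truth score is greater than 0) follows the text.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(y_true, y_pred):
    """MSE and Pearson correlation for the continuous intensity, plus
    AUC for the derived binary task (pain iff the ground truth > 0)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    mse = np.mean((y_true - y_pred) ** 2)
    rho = np.corrcoef(y_true, y_pred)[0, 1]
    auc = roc_auc_score(y_true > 0, y_pred)  # needs both classes present
    return mse, rho, auc
```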

Experimental Setup

The training-testing scheme used for both still and sequence based pain estimation is leave–one–person–out cross-validation. The same scenario is employed in previous works on the topic (Lucey et al., 2012), (Chen et al., 2013), (Chu et al., 2013), (Zen et al., 2014), (Sangineto et al., 2014), (Kaltwang et al., 2012): at a time, data from 23 persons is used for training and data from the remaining person for testing.

Furthermore, a scenario where the testing and training datasets are disjoint with respect to the person is motivated by use–cases such as emergency units and critically ill persons, where it is not possible to acquire neutral (i.e. without pain) images of the incoming patients. Thus, we consider that image oriented k-fold scenarios (e.g. in (Rudovic et al., 2013)) are more theoretically than practically oriented.

As the number of images with positive examples (with a specific AU or with a pain label) is much lower than the number containing negative data, the two sets were balanced for the actual training; the negative examples were randomly selected. To increase the robustness of the system, three classifiers were trained in parallel with independently drawn examples, and the system output was taken as the average of the classifiers.

For the actual discrimination of the pain intensity, we applied the same model as similar works (Lucey et al., 2012), (Kaltwang et al., 2012). We used two levels of classifiers (a late fusion scheme): first, each category of features was fed into a set of three Support Vector Regressors (SVR) (with radial basis function kernel, cost 4 and experimentally chosen kernel width). Landmarks were not spectrally regressed (i.e. not re-represented with eq. (15)) but passed directly to the SVRs. The results were fused together within a second level of a boosted ensemble of four SVRs. The implementation of the SVR is based on LibSVM (Chang & Lin, 2011).
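
A sketch of one leave-one-person-out fold with balanced sampling and the averaging of three independently trained SVRs is given below; it keeps the stated RBF kernel and cost 4, while the kernel width (not specified above) and the omission of the second fusion level are our assumptions and simplifications.

```python
import numpy as np
from sklearn.svm import SVR

def predict_fold(X, y, persons, test_person, n_models=3):
    """One leave-one-person-out fold: balance positives and negatives,
    train three SVRs on independently drawn samples, average them."""
    train = persons != test_person
    Xtr, ytr = X[train], y[train]
    pos, neg = np.where(ytr > 0)[0], np.where(ytr == 0)[0]
    preds = []
    for seed in range(n_models):
        rng = np.random.default_rng(seed)
        # Even sets: all positives plus an equal number of negatives.
        keep = np.concatenate([pos, rng.choice(neg, len(pos),
                                               replace=False)])
        model = SVR(kernel="rbf", C=4.0, gamma="scale")  # gamma assumed
        preds.append(model.fit(Xtr[keep], ytr[keep]).predict(X[~train]))
    return np.mean(preds, axis=0)
```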

Overall Results

The preferred implementation directly estimates the Prkachin - Solomon pain score. Alternatively, one may consider AU estimation as an intermediate step, followed by pain prediction using equation (1); yet previous research (Kaltwang et al., 2012), (Florea et al., 2014) showed that this method produces weaker results, since errors accumulate.

The best performing method for individual image based Prkachin-Solomon pain score estimation produces a correlation coefficient of 0.551 and a mean square error of 1.187. The best temporal filtering increased the correlation to 0.562 and decreased the mean square error to 0.885. The best AUC achieved was 80.9. The next subsections further detail these results and their implications.

Given a new UNBC image and the relevant landmark positions, the query to determine the pain intensity for that image takes approximately 0.15 seconds in a single thread Matlab implementation on an Intel Xeon at 3.3 GHz. Temporal filtering adds a delay due to the consideration of a temporal window around the current frame; this window is larger than a blink (which has a typical duration of 300-400 milliseconds) and adds a delay of 1 second.

Figure 6: Sum of absolute differences when comparing all images without pain and, respectively, with intense pain, to a chosen no-pain reference image. Ideally, we aim for large values in the left plot and zeros in the right one. $R_1$ refers to the first area of interest (i.e. around the left eye), $R_2$ to the second one (around the right eye), etc.
Detailed Evaluation

HoT Feature Analysis

First, we investigate the capabilities of the HoT features by considering the following example: we take the first frontal image without pain of each person and consider its HoT features as reference; next, we compute the HoT features of all the images with a pain intensity higher than 4 and of all the images without pain, for each person separately. We plot the sum of absolute differences between the reference set and the mentioned images with and without pain, respectively. The results are presented in figure 6. Ideally, large values are aimed for in the left plot and zeros in the right one. We note that, for this particular example, the largest contribution in discriminating between pain and no-pain cases was due to the Hessian based $h_1$ and $h_2$ histograms. Gradient based histograms lead to inconclusive differences in the case of intense pain, while $h_3$ and $h_4$ produced large values also for the no-pain case.

Furthermore, when considering the first 3 dimensions as selected by the transferred SR-M, the first 4000 no-pain cases and all the intense pain cases (i.e. score higher than 4) are clustered as shown in figure 7. The clusters in the Hessian based space are fairly visible, suggesting that: (1) HoT features are more powerful if they include Hessian based data when addressing the pain problem, and (2) identification of high pain is feasible. Yet, we did not plot the data corresponding to low levels of pain, which fills the intermediate space and, in fact, makes the discrimination difficult.

Figure 7: Data clustering for Hessian based histograms (left) and, respectively, gradient based histograms (right). In each case the first three axes are retained. Red marks frames with high pain intensity and blue the first 4000 no-pain images. As one can see, the data is fairly clustered for the Hessian based features and less so for the gradient based ones.
Contribution of the Histogram Types

To appraise the overall contribution of each histogram type to facial based pain intensity estimation, we present the results in table 1. To have a reference with respect to state of the art features, we include results achieved for the same problem by Kaltwang et al. (2012). As one may see, taken individually, the proposed histograms under-perform state of the art features. Yet the different categories complement each other well, and by combining them we obtain improved results.

Work                Proposed                                  (Kaltwang et al., 2012)
Feature             Hess     Grad     HoT      HoT+PTS        PTS      DCT      LBP
Mean Square Error   3.76     4.67     3.35     1.187          2.592    1.712    1.812
Correlation         0.252    0.341    0.417    0.551          0.363    0.528    0.483

Table 1: Accuracy of pain intensity estimation using the Prkachin - Solomon formula. We report the results achieved for various versions of the features used: only Hessian based histograms (Hess), only gradient based histograms (Grad), and both of them forming the Histograms of Topographical features (HoT = Grad + Hess); the complete version contains the landmarks (marked as PTS) and HoT. The relevant features were in each case selected with the modified version of Spectral Regression (SR-M) on the Cohn-Kanade database via self–taught learning. The Prkachin - Solomon score is estimated directly by the classifiers, which were trained accordingly.

To detail the contribution of each histogram type, as defined in the HoT feature definition section, we remove one type of histogram at a time and observe the overall effect on the pain score. In table 2 we report the relative accuracy obtained with only part of the histogram types. The decrease is larger for the more important types. Landmarks are skipped in this experiment. As one can see, all the histograms contribute positively.

Histogram removed   h1      h2      h3      h4      h5      h6      None (full HoT)
Correlation         0.331   0.368   0.355   0.358   0.351   0.192   0.417

Table 2: Contribution of each of the histogram types used. We report the Pearson correlation coefficient when the mentioned type of histogram is removed. The reference is the right-most result (all histograms used). Thus, the smaller the value (i.e. the larger the decrease), the higher the contribution of that type of histogram.
Transfer Learning Evaluation

In table 3 we present the overall performance when various alternatives for transfer learning are considered. The internal data representation may be perceived as unsupervised feature selection. In this sense, beyond the proposed modified Spectral Regression (SR-M), we tested the standard Spectral Regression (SR) (Cai et al., 2007) and the Locality Preserving Projection (LPP) (He & Niyogi, 2003), as it is the inspiration for SR. We also tested the standard Principal Component Analysis, as the foremost dimensionality reduction method, and its derivation through Expectation–Maximization, namely Probabilistic PCA (PPCA) (Tipping & Bishop, 1999); further, we included Factor Analysis (FA), as a generalization of PCA, and a more recent derivation of PCA, the Robust PCA based on a Mixture of Gaussians noise model (RPCA-MOG) (Zhao et al., 2014), which improves over the standard RPCA introduced by Candes et al. (2011), based on Principal Component Pursuit to find a unique solution.

Feature             SR-M     SR       LPP      PCA      PPCA     RPCA-MOG   FA
Mean Square Error   1.187    1.183    1.203    1.181    1.173    3.891      2.746
Correlation         0.551    0.545    0.544    0.541    0.545    0.522      0.540

Table 3: Accuracy of pain intensity estimation with self–taught learning (i.e. feature selection learned on the Cohn-Kanade database and used on UNBC) for each considered dimensionality reduction method. Details are in the text.
Database for learning   Cohn-Kanade          UNBC
Feature                 SR-M      PPCA       SR-M      PPCA
Mean Square Error       1.187     1.173      1.203     1.181
Correlation             0.551     0.545      0.532     0.532

Table 4: Comparison of the achieved accuracy of pain intensity estimation when feature selection is learned on the Cohn-Kanade database (i.e. self–taught learning) or directly on the UNBC database (i.e. no transfer).

Other considered alternatives are to perform no transfer at all, or to extract the inner data representation directly from the labelled UNBC database. The comparative results for these cases are presented in table 4. The results show that, by relying on the adapted similarity measure (SR-M) and by taking into account a larger number of persons, the discrimination capability increases.

A numerical comparison between our modified version of spectral regression and probabilistic PCA, in transfer, shows little difference. Yet we argue for the superiority of our method based on the analysis of the continuous pain intensity signals: while our method shows a bias towards blinks, it gives consistent results, whereas the PCA based reduction simply fails in some situations, without any discernible pattern relating them. A typical case is illustrated in figure 8.

Figure 8: A waveform of continuous pain intensity estimation for person 1 of the database. The red line is the pain estimation using Spectral Regression, while the blue one uses PPCA. The modes on the SR plot are much more visible.
Comparison with Other Transfer Learning Methods

To give a quantitative comparison of the performance of the proposed self–taught learning method, we note that multiple methods report transfer learning enhanced performance on the UNBC McMaster Pain database. All of them applied the same evaluation procedure.

Chen et al. (2013), Zen et al. (2014) and Sangineto et al. (2014) used histograms of LBP followed by PCA dimensionality reduction and various classification methods, either applied directly to the training data or relying on transductive transfer learning. Chen et al. (2013) report results for AdaBoost and for Transductive Transfer AdaBoost (TTA); Chu et al. (2013) for the Selective Transfer Machine (STM); Zen et al. (2014) report results for the Transductive Support Vector Machine (TSVM) and for Support Vector-based Transductive Parameter Transfer (SVTPT); Sangineto et al. (2014) for Transductive Parameter Transfer (TPT) with a Density Estimate Kernel. As one may note, all these methods are transductive transfer learning (i.e. the source and target tasks are the same, while the source and target domains are different), while our method belongs to the inductive transfer learning category (i.e. the target task is different from the source task, no matter whether the source and target domains are the same or not (Jialin-Pan & Yang, 2010)).

In table 5 we present the results reported by the mentioned works comparatively to the performance of the proposed method. As one can see, our method reaches the best accuracy.

Method                           AUC
AdaBoost (Chen et al., 2013)     76.9
TTA (Chen et al., 2013)          76.5
TSVM (Zen et al., 2014)          69.8
STM (Chu et al., 2013)           76.8
TPT (Sangineto et al., 2014)     76.7
SVTPT (Zen et al., 2014)         78.4
Proposed                         80.9

Table 5: Comparison with state of the art transfer learning methods using the achieved Area Under the Curve (AUC). The explanation of the acronyms is in the text.
Temporal Filtering Results

The results achieved with the three methods of temporal filtering, applied on top of the still image pain estimates, are presented in table 6.

Method              Still    Temporal–LR    Vicinity–MLP    Strict ordering–MLP    Strict ordering–SVR
Mean Square Error   1.187    0.885          1.137           1.200                  1.280
Correlation         0.551    0.562          0.535           0.529                  0.558

Table 6: Comparison of the achieved accuracy of pain intensity estimation when the three methods of temporal filtering are included: based on linear regression (LR), with the vicinity as the MLP feature, and with the strict ordering description.

Analyzing the results, all methods lead to an improved mean square error and area under curve. Regarding the correlation, from a quantitative point of view, the method based on linear regression (LR), whose main purpose is to remove the noise from the estimated values, performs best. Yet this method is only an incremental improvement over the still pain estimation.

The other filtering solutions produce, in fact, mixed results; while overall they indicate a decrease or only a small increase of the correlation, they boost the performance on half of the persons by more than 0.05 on average. The persons with an increase are the ones on which the methods performed better than average; here, the blinks were correctly removed and the temporally filtered signal comes much closer to the ground truth. However, on persons with below average initial results, the filtering de-correlates the estimated values even more with respect to the ground truth. These are persons that exhibit different pain faces, such as opening the mouth (e.g. the person in the last column of figure 2) or bowing the head to the lower left. Concluding, if either more data becomes available for learning or the system is further robustified with respect to the person, the machine learning based temporal filtering will become more useful; for now, it is the mere noise reduction that does the work.

Comparison with the State of the Art

As mentioned in the related work section, several methods report results on the UNBC Pain database. Yet only Kaltwang et al. (2012) and our previous work (Florea et al., 2014) tested on the entire database, with separation between persons when forming the testing/training folds, and reported continuous pain intensity. The work in (Kaltwang et al., 2012) tries several combinations of features coupled with a Relevance Vector Machine (RVM, which is the SVM reinterpreted in a Bayesian framework) and fused with a second layer of RVMs; we present all of them for a better comparison in the left-hand table of figure 9.

Method                                 MSE      Correlation
Proposed–Still                         1.187    0.551
Proposed–Temporal                      0.885    0.562
PTS+DCT (Kaltwang et al., 2012)        1.801    0.489
PTS+LBP (Kaltwang et al., 2012)        1.567    0.485
PTS+DCT+LBP (Kaltwang et al., 2012)    1.386    0.590
HoT+SR (Florea et al., 2014)           1.183    0.545

Figure 9: (a) Numerical comparison of the achieved accuracy of pain intensity estimation with various state of the art methods. (b) Pearson correlation coefficient vs. mean square error for the methods presented in the left-hand table. A perfect method would be placed in the top left corner.

Mainly, the lowest mean square error is obtained by the still image estimation followed by temporal filtering, outperforming the next competitor by nearly 0.3 pain levels. In this category, it is followed by our previous method (Florea et al., 2014) and by the here proposed still image estimation. Regarding the correlation coefficient, our set of methods ranks second, after the combination of DCT with LBP fused by an RVM. Surprisingly, the direct combination of landmarks with features reported by (Kaltwang et al., 2012) does not lead to very good results.

Taking into account that there are different winners in different categories, to get a better image of the relation between them we plotted the results from figure 9 (a) on MSE vs. correlation axes (see figure 9 (b)). In such a plot, a perfect method would have $MSE = 0$ and $\rho = 1$, placing it in the top left corner. As one can see, the proposed temporal method is the closest to that position.

Discussion and Conclusions

In this paper we introduced the Histograms of Topographical features to describe faces. The addition of Hessian based terms allowed the separation of various face movements and, thus, of pain intensity levels. The robustness of the system was further enhanced by a new transfer learning method, inspired by the self–taught learning paradigm, which relies on preserving the local similarity of feature vectors, as learned over a database more consistent in terms of persons, to ensure that relevant dimensions of the features are used in the subsequent classification process.

Regarding the actual features, while their individual contribution is rather small when compared with established features, they complement each other well, as shown by the increase in overall performance when all feature types are used. As shown in figure 9 (a), this is not the case for the features employed in previous solutions, which argues for the consideration of the complete topographical description.

The transfer learning from a database with a larger number of persons increased the system robustness. More precisely, the solution without the transfer procedure led to better results on some persons, at the cost of smaller accuracy on others, more different from the training faces. The transfer provided more consistent results overall, a fact proved by the increase of the entropy of the correlation coefficient from 9.10 to 9.37, enhancing the generalization with respect to person change. Such a property is desirable, taking into account the different characteristics of pain expressivity, a trait which impedes the temporal filter from having an overall beneficial effect beyond noise reduction. Furthermore, the proposed transfer learning method performs better when compared with similar attempts based on transductive transfer learning, as shown in table 5.

The system does exhibit some failures. While AU 43 (eye closure), according to eq. (1), contributes to pain intensity, not all blinks are pain-related; the system, as in the case of (Kaltwang et al., 2012), mistakenly associates the blinks of specific persons with pain. Other failures appear in cases where the person's way of expressing pain is rather different from most of the others; for instance, the second person widely opens the eyes instead of closing them, leading the system to produce false negatives. Other errors are related to the person speaking during the test; false positives are associated with persons bowing (AU 54) or jerking (AU 58) the head while feeling pain, yet this behavior is not general. Still, while the effort of the UNBC Pain Database creators was notable and laid the foundation for advances in non-invasive pain estimation from facial analysis, the database should be extended with more subjects in order to better illustrate the variability of pain faces.

Future Work

In the end, we consider that further research on the topic is beneficial and we would like to emphasize several aspects that, in our opinion, motivate such a necessity. First, the Prkachin - Solomon score was found to be only moderately strongly correlated with self–report (i.e. a Pearson correlation coefficient of 0.66 or higher) (Hammal & Cohn, 2012), (Prkachin & P. Solomon, 2008). Secondly, self-report was found to be the more accurate means of appraising pain intensity (Shavit et al., 2008). Thirdly, the observational scores found to be more reliable, such as the revised Adult Nonverbal Pain Scale (ANPS-R) and the Critical Care Pain Observation Tool (CPOT), contain additional indicators of pain, such as rigid and stiff positions or restless and excessive activity; these are gestures recognizable by a system for the analysis of body posture. Concluding, additional data, annotated so as to inter-correlate body posture estimation with facial pain assessment, would facilitate further contributions on the topic of automatic pain assessment and make possible a gradual evolution towards a fully developed, autonomous system of assistive computer vision.

Acknowledgements

The work has been partially funded by the Sectoral Operational Programme Human Resources Development 2007-2013 of the Ministry of European Funds through the Financial Agreements POSDRU/159/1.5/S/134398 and POSDRU/159/1.5/S/132395.

References

  • Ashraf et al. (2009) Ashraf, A. B., Lucey, S., Cohn, J. F., Chen, T., Ambadar, Z., Prkachin, K., and Solomon, P. The painful face – pain expression recognition using active appearance models. Image Vis Comput., 27(12):1788–1796, 2009.
  • Bay et al. (2008) Bay, H., Ess, A., Tuytelaars, T., and Van Gool, L. Speeded-up robust features (SURF). Comp. Vis. Image Und., 110(3):346–359, 2008.
  • Brahnam et al. (2007) Brahnam, S., Nanni, L., and Sexton, R. Introduction to neonatal facial pain detection using common and advanced face classification techniques. Stud. Comput. Intel., 48:225–253, 2007.
  • Cai et al. (2007) Cai, D., He, X., and Han, J. Spectral regression for efficient regularized subspace learning. In Int. Conf. on Computer Vision, pp. 1–8, 2007.
  • Candes et al. (2011) Candes, Emmanuel, Li, Xiaodong, Ma, Yi, and Wright, John. Robust principal component analysis? Journal of the ACM, 58(3):11, 2011.
  • Chang & Lin (2011) Chang, C.-C. and Lin, C.-J. LIBSVM : a library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2(3):27, 2011.
  • Chanques et al. (2014) Chanques, Gerald, Pohlman, Anne, Kress, John P, Molinari, Nicolas, de Jong, Audrey, Jaber, Samir, and Hall, Jesse B. Psychometric comparison of three behavioural scales for the assessment of pain in critically ill patients unable to self-report. Critical Care, 18(5):160, 2014.
  • Chen et al. (2013) Chen, J., Liu, X., Tu, P., and Aragones, Amy. Learning person-specific models for facial expression and action unit recognition. Pat. Recog. Letters, 34(15):1964–1970, 2013.
  • Chu et al. (2013) Chu, W.-S., Torre, F. De La, and Cohn, J. F. Selective transfer machine for personalized facial action unit detection. In Computer Vision and Pattern Recognition, pp. 886–893, 2013.
  • Cohn & De la Torre (2014) Cohn, Jeffrey F. and De la Torre, Fernando. The Oxford Handbook of Affective Computing, chapter Automated Face Analysis for Affective Computing, pp. 131–150. Oxford University Press, Oxford, 2014.
  • Coltuc & Bolon (1999) Coltuc, D. and Bolon, P. Strict ordering on discrete images and applications. In Int. Conf. on Image Processing, pp. 150–153, 1999.
  • Cootes et al. (2001) Cootes, T. F., Edwards, G. J., and Taylor, C. J. Active appearance models. IEEE Trans. Pattern Anal. Mach. Intell., 23(6):681–685, 2001.
  • Dalal & Triggs (2005) Dalal, N. and Triggs, B. Histograms of oriented gradients for human detection. In Computer Vision and Pattern Recognition, pp. 886–893, 2005.
  • Deng et al. (2007) Deng, H., Zhang, W., Mortensen, E., Dietterich, T., and Shapiro, L. Principal curvature-based region detector for object recognition. In Computer Vision and Pattern Recognition, pp. 2578–2585, 2007.
  • Ekman et al. (2002) Ekman, P., Friesen, W., and Hager, J.C. Facial action coding system (2nd ed.). Research Information; Salt Lake City, 2002.
  • Florack et al. (1992) Florack, L., Haar-Romeny, B. M., Koenderink, J., and Viergever, M. Scale and the differential structure of images. Image Vis. Comp., 10(6):376–388, 1992.
  • Florea et al. (2014) Florea, C., Florea, L., and Vertan, C. Learning pain from emotion: Transferred HoT data representation for pain intensity estimation. In European Conf. on Computer Vision: workshop on Assistive Computer Vision and Robotics, volume 8927-LNCS, pp. 778–790, 2014.
  • Florea et al. (2007) Florea, L., Vertan, C., Florea, C., and Oprea, A. Dynamic range enhancement of consumer digital camera acquired hip prosthesis xray images. In European Signal Processing Conference, pp. 1103–1106, 2007.
  • Frangi et al. (1998) Frangi, A.F., Niessen, W.J., Vincken, K.L., and Viergever, M.A. Multiscale vessel enhancement filtering. In Medical Image Computing and Computer Assisted Intervention, pp. 130–137, 1998.
  • Gawande (2004) Gawande, A. The Checklist Manifesto: How to Get Things Right. Metropolitan Books, 2004.
  • Gholami et al. (2010) Gholami, Behnood, Haddad, Wassim M., and Tannenbaum, Allen R. Relevance vector machine learning for neonate pain intensity assessment using digital imaging. IEEE Trans. on Biomedical Engineering, 57(6):1457–1466, 2010.
  • Guo et al. (2012) Guo, Y., Zhao, G., and Pietikäinen, M. Discriminative features for texture description. Pattern Recognition, 45(10):3834–3843, 2012.
  • Hadjistavropoulos & Craig (2004) Hadjistavropoulos, T. and Craig, K.D. Pain: Psychological perspectives, chapter Social influences and the communication of pain, pp. 87–112. Erlbaum; New York, 2004.
  • Hammal & Cohn (2012) Hammal, Z. and Cohn, J. Automatic detection of pain intensity. In ACM International Conference on Multimodal Interaction, pp. 47–52, 2012.
  • Haralick et al. (1983) Haralick, R., Watson, L., and Laffey, T. The topographic primal sketch. The Intl. J. of Robotics Research, 2(1):50–71, 1983.
  • Haslam et al. (2011) Haslam, L., Dale, C., Knechtel, L., and Rose, L. Pain descriptors for critically ill patients unable to self-report. Journal of Advanced Nursing, 68(5):329–336, 2011.
  • He & Niyogi (2003) He, Xiaofei and Niyogi, Partha. Locality preserving projections. In Advances in Neural Information Processing Systems, pp. 153–160, 2003.
  • Hugueta et al. (2010) Huguet, Anna, Stinson, Jennifer N., and McGrath, Patrick J. Measurement of self-reported pain intensity in children and adolescents. Journal of Psychosomatic Research, 68:329–336, 2010.
  • Iijima (1962) Iijima, T. Observation theory of two-dimensional visual patterns. Technical report, Papers of Technical Group on Automata and Automatic Control, IECE, Japan, 1962.
  • J. Boyd et al. (2011) J. Boyd et al. Classification of chronic pain. Descriptions of chronic pain syndromes and definitions of pain terms. International Association for Study of Pain, 2011.
  • Jialin-Pan & Yang (2010) Jialin-Pan, Sinno and Yang, Qiang. A survey on transfer learning. IEEE Trans. on Knowledge and Data Engineering, 22(10):1345–1359, 2010.
  • Jiang & Chung (2012) Jiang, W. and Chung, F.L. Transfer spectral clustering. In European Conf. on Machine Learning, Principles and Practice of Knowledge Discovery in Databases, pp. 789–803, 2012.
  • Kaltwang et al. (2012) Kaltwang, S., Rudovic, O., and Pantic, M. Continuous pain intensity estimation from facial expressions. In International Symposium on Visual Computing, pp. 368–377, 2012.
  • Kanade et al. (2000) Kanade, T., Cohn, J. F., and Tian, Y. Comprehensive database for facial expression analysis. In IEEE Face and Gesture, pp. 46–53, 2000.
  • Kokkinos et al. (2006) Kokkinos, I., Maragos, P., and Yuille, A. Bottom-up & top-down object detection using primal sketch features and graphical models. In Computer Vision and Pattern Recognition, pp. 1893–1900, 2006.
  • Lee & Chen (2009) Lee, Wei-Ting and Chen, Hwann-Tzong. Histogram-based interest point detectors. In Computer Vision and Pattern Recognition, pp. 1590–1596, 2009.
  • Lindeberg (1994) Lindeberg, T. Scale-space theory: a basic tool for analysing structures at different scales. Journal of Applied Statistics, 21(2):225–270, 1994.
  • Lindeberg (2014) Lindeberg, Tony. Image matching using generalized scale-space interest points. J. Math. Imaging Vis., 2014. DOI:10.1007/s10851-014-0541-0.
  • Littlewort et al. (2007) Littlewort, G., Bartlett, M., and Lee, K. Faces of pain: Automated measurement of spontaneous facial expressions of genuine and posed pain. In ACM Int. Conf. on Multimodal Interaction, pp. 15–21, 2007.
  • Lowe (2004) Lowe, D. G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis., 60(2):91–110, 2004.
  • Lucey et al. (2010) Lucey, P., Cohn, J. F., Kanade, T., Saragih, J., Ambadar, Z., and Matthews, I. The extended Cohn-Kanade dataset (CK+): A complete facial expression dataset for action unit and emotion-specified expression. In Computer Vision and Pattern Recognition workshop on Human Communicative Behavior Analysis, pp. 94–101, 2010.
  • Lucey et al. (2011) Lucey, P., Cohn, J., Prkachin, K., Solomon, P., and Matthews, I. Painful data: The UNBC McMaster shoulder pain expression archive database. In IEEE Face and Gesture, pp. 57–64, 2011.
  • Lucey et al. (2012) Lucey, P., Cohn, J.F., Prkachin, K.M., Solomon, P.E., Chew, S., and Matthews, I. Painful monitoring: Automatic pain monitoring using the UNBC-McMaster shoulder pain expression archive database. Image Vis. Comp., 30:197–205, 2012.
  • Manias et al. (2002) Manias, E., Botti, M., and Bucknall, T. Observation of pain assessment and management–the complexities of clinical practice. Journal of Clinical Nursing, 11(6):724–733, 2002.
  • Marr & Hildreth (1980) Marr, D. and Hildreth, E. Theory of edge detection. Proc. Royal Soc. Lond, 207:187–217, 1980.
  • Prkachin & P. Solomon (2008) Prkachin, K. and Solomon, P. The structure, reliability and validity of pain expression: Evidence from patients with shoulder pain. Pain, 139:267–274, 2008.
  • Raina et al. (2007) Raina, R., Battle, A., Lee, H., Packer, B., and Ng, A. Self-taught learning: Transfer learning from unlabeled data. In Int. Conf. on Machine Learning, pp. 759–766, 2007.
  • Rudovic et al. (2013) Rudovic, O., Pavlovic, V., and Pantic, M. Context-sensitive conditional ordinal random fields for facial action intensity estimation. In Int. Conf. on Computer Vision Workshops, pp. 492–499, 2013.
  • Sangineto et al. (2014) Sangineto, Enver, Zen, Gloria, Ricci, Elisa, and Sebe, Nicu. We are not all equal: Personalizing models for facial expression analysis with transductive parameter transfer. In ACM Multimedia, pp. 357–366, 2014.
  • Shavit et al. (2008) Shavit, I., Kofman, M., Leder, M., Hod, T., and Kozer, E. Observational pain assessment versus self-report in paediatric triage. Emergency Medicine Journal, 25:552–555, 2008.
  • Shi & Tomasi (1994) Shi, J. and Tomasi, C. Good features to track. In Computer Vision and Pattern Recognition, pp. 593–600, 1994.
  • Stites (2013) Stites, Mindy. Observational pain scales in critically ill adults. Critical Care Nurse, 33(3):68–78, 2013.
  • Tipping & Bishop (1999) Tipping, M. E. and Bishop, C. M. Probabilistic principal component analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 61(3):611–622, 1999.
  • Topolovec-Vranic et al. (2013) Topolovec-Vranic, J., Gelinas, C., Li, Y, Pollmann-Mudryj, M.A., Innis, J., McFarlan, A, and Canzian, S. Validation and evaluation of two observational pain assessment tools in a trauma and neurosurgical intensive care unit. Pain Research and Management, 18(6):107–114, 2013.
  • Tuytelaars & Mikolajczyk (2008) Tuytelaars, T. and Mikolajczyk, K. Local invariant feature detectors: A survey. Foundations and Trends in Computer Graphics and Vision, 3(3):177–280, 2008.
  • von Baeyer & Spagrud (2007) von Baeyer, Carl L. and Spagrud, Lara J. Systematic review of observational (behavioral) measures of pain for children and adolescents aged 3 to 18 years. Pain, 127:140–150, 2007.
  • Wang & Yin (2007) Wang, J. and Yin, L. Static topographic modeling for facial expression recognition and analysis. Comput. Vis. Image Und., 108(1-2):19–34, 2007.
  • Werner et al. (2013) Werner, P., Al-Hamadi, A., Niese, R., Walter, S., Gruss, S., and Traue, H. Towards pain monitoring: Facial expression, head pose, a new database, an automatic system and remaining challenges. In British Machine Vision Conference, pp. 1–11, 2013.
  • Zen et al. (2014) Zen, Gloria, Sangineto, Enver, Ricci, Elisa, and Sebe, Nicu. Unsupervised domain adaptation for personalized facial emotion recognition. In ACM Int. Conf. on Multimodal Interaction, pp. 128–135, 2014.
  • Zeng et al. (2009) Zeng, Z., Pantic, M., Roisman, G., and Huang, T. A survey of affect recognition methods: Audio, visual, and spontaneous expressions. IEEE Trans. Pattern Anal. Mach. Intell., 31(1):39–58, 2009.
  • Zhao et al. (2014) Zhao, Qian, Meng, Deyu, Xu, Zongben, Zuo, Wangmeng, and Zhang, Lei. Robust principal component analysis with complex noise. In Int. Conf. on Machine Learning, pp. 55–63, 2014.