Scene Graph Prediction with Limited Labels

Abstract

Visual knowledge bases such as Visual Genome power numerous applications in computer vision, including visual question answering and captioning, but suffer from sparse, incomplete relationships. All scene graph models to date are limited to training on a small set of visual relationships that have thousands of training labels each. Hiring human annotators is expensive, and textual knowledge base completion methods are incompatible with visual data. In this paper, we introduce a semi-supervised method that assigns probabilistic relationship labels to a large number of unlabeled images using only a few labeled examples. We analyze visual relationships to suggest two types of image-agnostic features that are used to generate noisy heuristics, whose outputs are aggregated using a factor graph-based generative model. With only a few labeled examples per relationship, the generative model creates enough training data to train any existing state-of-the-art scene graph model. We demonstrate that our method outperforms all baseline approaches on scene graph prediction by 5.16 recall@100 for PREDCLS. In our limited-label setting, we define a complexity metric for relationships that serves as an indicator of the conditions under which our method succeeds over transfer learning, the de-facto approach for training with limited labels.


1 Introduction

In an effort to formalize a structured representation for images, Visual Genome [27] defined scene graphs, a formalization similar to those widely used to represent knowledge bases [18, 13, 56]. Scene graphs encode objects (e.g., person, bike) as nodes connected via pairwise relationships (e.g., riding) as edges. This formalization has led to state-of-the-art models in image captioning [3], image retrieval [25, 42], visual question answering [24], relationship modeling [26] and image generation [23]. However, all existing scene graph models ignore the vast majority of relationship categories, which do not have sufficient labeled instances (see Figure 2), and instead focus on modeling the few relationships that have thousands of labels [49, 31, 54].

Hiring more human workers is an ineffective solution to labeling relationships because image annotation is so tedious that seemingly obvious labels are left unannotated. To complement human annotators, traditional text-based knowledge completion tasks have leveraged numerous semi-supervised or distant supervision approaches [7, 6, 17, 33]. These methods find syntactic or lexical patterns from a small labeled set to extract missing relationships from a large unlabeled set. In text, pattern-based methods are successful because relationships are usually document-agnostic (e.g. Tokyo - is capital of - Japan). Visual relationships, in contrast, are often incidental: they depend on the contents of the particular image they appear in. Therefore, methods that rely on external knowledge or on patterns over concepts (e.g. most instances of dog next to frisbee are playing with it) do not generalize well. The inability to reuse the progress in text-based methods necessitates specialized methods for visual knowledge.

Figure 1: Our semi-supervised method automatically generates probabilistic relationship labels to train any scene graph model.
Num. labeled instances (at most): 200 | 175 | 150 | 125 | 100 | 75 | 50 | 25 | 10 | 5
% of relationship categories: 99.09 | 99.00 | 98.87 | 98.74 | 98.52 | 98.15 | 97.57 | 96.09 | 92.26 | 87.28
Figure 2: Visual relationships have a long tail (left) of infrequent relationships. Current models [54, 49] focus only on the most frequent relationships (middle) in the Visual Genome dataset, which all have thousands of labeled instances, and thereby ignore the vast majority of relationships with few labeled instances (right, top/table). The table above reports the percentage of relationship categories with at most the given number of labeled instances.
Figure 3: Relationships such as fly, eat, and sit can be characterized effectively by their categorical features (s and o refer to subject and object, respectively) or spatial features. Some relationships, like fly, rely heavily on only a few features: kites are often seen high up in the sky.

In this paper, we automatically generate missing relationship labels using a small, labeled dataset and use these generated labels to train downstream scene graph models (see Figure 1). We begin by exploring how to define image-agnostic features for relationships so they follow patterns across images. For example, eat usually consists of one object consuming another object smaller than itself, whereas look often involves common objects: phone, laptop, or window (see Figure 3). These rules are not dependent on raw pixel values; they can be derived from image-agnostic features like object categories and relative spatial positions between objects in a relationship. Although such rules are simple, their capacity to provide supervision for unannotated relationships has been unexplored. While image-agnostic features can characterize some visual relationships very well, they might fail to capture complex relationships with high variance. To quantify the efficacy of our image-agnostic features, we define "subtypes" that measure spatial and categorical complexity (Section 3).

Based on our analysis, we propose a semi-supervised approach that leverages image-agnostic features to label missing relationships using only a few labeled instances of each relationship. We learn simple heuristics over these features and assign probabilistic labels to the unlabeled images using a generative model [39, 46]. We evaluate our method’s labeling efficacy on the completely-labeled VRD dataset [31] and find that it achieves an F1 score of 57.66, which is 11.84 points higher than other standard semi-supervised methods like label propagation [57]. To demonstrate the utility of our generated labels, we train a state-of-the-art scene graph model [54] (see Figure 6) and modify its loss function to support probabilistic labels. Our approach achieves 47.53 recall@100 (see footnote 1) for predicate classification on Visual Genome, improving over the same model trained using only the labeled instances by 40.97 points. For scene graph detection, our approach comes within 8.65 recall@100 of the same model trained on the original Visual Genome dataset, which contains far more labeled data. We end by comparing our approach to transfer learning, the de-facto choice for learning from limited labels. We find that our approach improves over transfer learning by 5.16 recall@100 for predicate classification, with especially large gains for relationships with high complexity, as it generalizes well to unlabeled subtypes.

Our contributions are three-fold. (1) We introduce the first method to complete visual knowledge bases by finding missing visual relationships (Section 5.1). (2) We show the utility of our generated labels in training existing scene graph prediction models (Section 5.2). (3) We introduce a metric to characterize the complexity of visual relationships and show that it is a strong indicator of our semi-supervised method’s improvements over transfer learning (Section 5.3).

2 Related work

Textual knowledge bases were originally hand-curated by experts to structure facts [5, 44, 4] (e.g. Tokyo - capital of - Japan). To scale dataset curation efforts, recent approaches mine knowledge from the web [9] or hire non-expert annotators to manually curate knowledge [47, 5]. In semi-supervised solutions, a small amount of labeled text is used to extract and exploit patterns in unlabeled sentences [37, 33, 34, 35, 21, 2]. Unfortunately, such approaches cannot be directly applied to visual relationships; textual relations can often be captured by external knowledge or patterns, while visual relationships are often local to an image.

Visual relationships have been studied as spatial priors [14, 16], co-occurrences [51], language statistics [53, 31, 28], and within entity contexts [29]. Scene graph prediction models have dealt with the difficulty of learning from incomplete knowledge, as recent methods utilize statistical motifs [54] or object-relationship dependencies [49, 30, 50, 55]. All these methods limit their inference to the most frequently occurring predicate categories and ignore those without enough labeled examples (Figure 2).

The de-facto solution for limited label problems is transfer learning [15, 52], which requires that the source domain used for pre-training follow a similar distribution as the target domain. In our setting, the source domain is a dataset of frequently-labeled relationships with thousands of examples [49, 30, 50, 55], and the target domain is a set of limited-label relationships. Despite similar objects in the source and target domains, we find that transfer learning has difficulty generalizing to new relationships. Our method does not rely on the availability of a larger labeled set of relationships; instead, we use a small labeled set to annotate the unlabeled set of images.

To address the issue of gathering enough training labels for machine learning models, data programming has emerged as a popular paradigm. This approach learns to model imperfect labeling sources in order to assign training labels to unlabeled data. Imperfect labeling sources can come from crowdsourcing [10], user-defined heuristics [8, 43], multi-instance learning [22, 40], and distant supervision [12, 32]. Often, these imperfect labeling sources take advantage of domain expertise from the user. In our case, imperfect labeling sources are automatically generated heuristics, which we aggregate to assign a final probabilistic label to every pair of object proposals.

3 Analyzing visual relationships

Figure 4: We define the number of subtypes of a relationship as a measure of its complexity. Subtypes can be categorical — one subtype of ride can be expressed as person - ride - bike while another is dog - ride - surfboard. Subtypes can also be spatial — carry has a subtype with a small object carried to the side and another with a large object carried overhead.
Figure 5: A subset of visual relationships with different levels of complexity as defined by spatial and categorical subtypes. In Section 5.3, we show how this measure is a good indicator of our semi-supervised method’s effectiveness compared to baselines like transfer learning.

We define the formal terminology used in the rest of the paper and introduce the image-agnostic features that our semi-supervised method relies on. Then, we seek quantitative insights into how visual relationships can be described by the properties between their objects. We ask: (1) what image-agnostic features can characterize visual relationships, and (2) given limited labels, how well do our chosen features capture the complexity of relationships? With these insights, we motivate a model design that generates heuristics which do not overfit to the small amount of labeled data and that assign accurate labels to the larger, unlabeled set.

3.1 Terminology

A scene graph is a multi-graph that consists of objects as nodes and relationships as edges. Each object o = (b, c) consists of a bounding box b and a category c drawn from the set C of all possible object categories (e.g. dog, frisbee). Relationships are denoted subject - predicate - object, or o - p - o', where p is a predicate such as ride or eat. We assume that we have a small labeled set D_L of annotated relationships for each predicate p; usually, these sets contain only a handful of examples per predicate. For our semi-supervised approach, we also assume that there exists a large set D_U of images without any labeled relationships.
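
To make the terminology concrete, the sketch below shows one possible in-memory representation of these structures; it is a minimal illustration, and the class and field names are ours rather than the paper's.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class SceneObject:
        box: tuple       # (x, y, w, h): top-left corner plus width and height
        category: str    # e.g. "dog", "frisbee"

    @dataclass
    class Relationship:
        subject: SceneObject
        predicate: str   # e.g. "ride", "eat"
        obj: SceneObject

    # A scene graph for one image is the set of objects plus the relationship edges
    # among them. The labeled set D_L stores a few such triples per predicate, while
    # D_U contains images with detected objects but no relationship edges.
    scene_graph: List[Relationship] = [
        Relationship(SceneObject((10, 40, 60, 120), "person"), "ride",
                     SceneObject((5, 100, 90, 60), "bike")),
    ]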

3.2 Defining image-agnostic features

It has become common in computer vision to utilize pretrained convolutional neural networks to extract features that represent objects and visual relationships [49, 31, 50]. Models trained with these features have proven robust in the presence of enough training labels but tend to overfit when presented with limited data (Section 5). Consequently, an open question arises: what other features can we utilize to label relationships with limited data? Previous literature has combined deep learning features with extra information extracted from categorical object labels and relative spatial object locations [25, 31]. We define categorical features as the concatenation of one-hot vectors for the subject category and the object category. We define spatial features as a vector of quantities computed from the two bounding boxes (x, y, w, h) and (x', y', w', h'), where (x, y) and (x', y') are the top-left bounding box coordinates and w, h, w', h' their widths and heights.

To explore how well spatial and categorical features can describe different visual relationships, we train a simple decision tree model for each relationship. We plot the importances of the top spatial and categorical features in Figure 3. Relationships like fly place high importance on the difference in y-coordinate between the subject and object, capturing a characteristic spatial pattern. look, on the other hand, depends on the categories of the objects (e.g. phone, laptop, window) and not on any spatial orientation.
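
As an illustration of what such image-agnostic features might look like in practice, the sketch below builds a categorical one-hot encoding and a few plausible spatial quantities from two bounding boxes, then fits a shallow scikit-learn decision tree to inspect feature importances. The exact spatial feature set and all names here are our own illustrative choices, not the paper's definition.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def categorical_features(subj_cat, obj_cat, num_categories):
        """One-hot subject category concatenated with one-hot object category."""
        f = np.zeros(2 * num_categories)
        f[subj_cat] = 1.0
        f[num_categories + obj_cat] = 1.0
        return f

    def spatial_features(box_s, box_o):
        """Illustrative scale-normalized spatial features for a subject/object box pair.

        Boxes are (x, y, w, h) with (x, y) the top-left corner; the quantities below
        (offsets and size ratios) are representative, not the paper's exact formula.
        """
        xs, ys, ws, hs = box_s
        xo, yo, wo, ho = box_o
        return np.array([
            (xs - xo) / wo,         # horizontal offset, normalized by object width
            (ys - yo) / ho,         # vertical offset, normalized by object height
            ws / wo,                # relative width
            hs / ho,                # relative height
            (ws * hs) / (wo * ho),  # relative area
        ])

    # Per-predicate feature importances, as in Figure 3: X stacks the concatenated
    # categorical + spatial features of positive and negative pairs for one predicate,
    # and y holds the binary labels (hypothetical arrays).
    # tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    # print(tree.feature_importances_)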

3.3 Complexity of relationships

To understand the efficacy of image-agnostic features, we’d like to measure how well they can characterize the complexity of particular visual relationships. As seen in Figure 4, a visual relationship can be defined by a number of image-agnostic features (e.g. a person can ride a bike, or a dog can ride a surfboard). To systematically define this notion of complexity, we identify subtypes for each visual relationship. Each subtype captures one way that a relationship manifests in the dataset. For example, in Figure 4, ride contains one categorical subtype with person - ride - bike and another with dog - ride - surfboard. Similarly, a person might carry an object in different relative spatial orientations (e.g. on her head, to her side). As shown in Figure 5, visual relationships might have significantly different degrees of spatial and categorical complexity, and therefore a different number of subtypes for each. To compute spatial subtypes, we perform mean shift clustering [11] over the spatial features extracted from all the relationships in Visual Genome. To compute the categorical subtypes, we count the number of unique object categories associated with a relationship.
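
A rough sketch of how these subtype counts could be computed with scikit-learn is shown below; the function and variable names are ours, and the exact clustering settings used in the paper may differ.

    import numpy as np
    from sklearn.cluster import MeanShift

    def count_subtypes(spatial_feats, subj_cats, obj_cats):
        """Estimate the spatial and categorical complexity of one predicate.

        spatial_feats: (N, d) spatial features for every labeled instance of the predicate.
        subj_cats, obj_cats: object category ids for the same N instances.
        """
        # Spatial subtypes: modes of the spatial-feature distribution found by mean shift.
        spatial_subtypes = len(np.unique(MeanShift().fit(spatial_feats).labels_))
        # Categorical subtypes: unique object categories that appear with this predicate.
        categorical_subtypes = len(np.unique(np.concatenate([subj_cats, obj_cats])))
        return spatial_subtypes, categorical_subtypes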

With access to only a few labeled instances for these visual relationships, it is impossible to capture all the subtypes of a given relationship and therefore difficult to learn a good representation for the relationship as a whole. Consequently, we turn to the rules extracted from image-agnostic features and use them to assign labels to the unlabeled data in order to capture a larger proportion of subtypes in each visual relationship. We posit that this will be advantageous over methods that only use the small labeled set to train a scene graph prediction model, especially for relationships with high complexity, i.e., a large number of subtypes. In Section 5.3, we find a correlation between our definition of complexity and the performance of our method.

4 Approach

Figure 6: For a relationship (e.g., carry), we use image-agnostic features to automatically create heuristics and then use a generative model to assign probabilistic labels to a large unlabeled set of images. These labels can then be used to train any scene graph prediction model.

We aim to automatically generate labels for missing visual relationships that can then be used to train any downstream scene graph prediction model. We assume that in the long tail of infrequent relationships, we have a small labeled set D_L of annotated relationships for each predicate (often only a handful of examples per predicate). As discussed in Section 3, we want to leverage image-agnostic features to learn rules that annotate unlabeled relationships.

1:   INPUT: D_L — A small dataset of object pairs with multi-class labels for predicates.
2:   INPUT: D_U — A large unlabeled dataset of images with objects but no relationship labels.
3:   INPUT: f(·) — A function that extracts image-agnostic features from a pair of objects.
4:   INPUT: DT — A decision tree.
5:   INPUT: GM — A generative model that assigns probabilistic labels given multiple heuristic labels for each datapoint.
6:   INPUT: SGM — A function used to train a scene graph detection model.
7:   Extract features and labels: (f(o, o'), y) for D_L and f(o, o') for D_U.
8:   Generate heuristics by fitting J decision trees {DT_j} over the features of D_L.
9:   Assign labels to D_U: Λ_ij = DT_j(f_i) for each of the J decision trees.
10:   Learn the generative model GM from Λ and assign probabilistic labels ỹ_i = GM(Λ_i).
11:   Train the scene graph model: SGM(D_L ∪ (D_U, ỹ)).
12:   OUTPUT: Trained scene graph model SGM(·).
Algorithm 1: Semi-supervised algorithm to label missing relationships

Our approach assigns probabilistic labels to a set of un-annotated images in three steps: (1) we extract image-agnostic features from the ground truth objects in the labeled set D_L and from the object proposals extracted by an existing object detector [19] on the unlabeled set D_U, (2) we generate heuristics over the image-agnostic features, and finally (3) we use a factor graph-based generative model to aggregate the heuristics' votes and assign probabilistic labels to the unlabeled object pairs in D_U. These probabilistic labels, along with D_L, are used to train any scene graph prediction model. We describe our approach in Algorithm 1 and show the end-to-end pipeline in Figure 6.

Feature extraction: Our approach uses the image-agnostic features defined in Section 3, which rely on object bounding boxes and category labels. The features are extracted from ground truth objects in D_L or from the outputs of existing object detection models [19] on D_U.

Heuristic generation: We fit decision trees over the labeled relationships' spatial and categorical features to capture image-agnostic rules that define a relationship. These rules are threshold-based conditions that are automatically defined by the decision tree. To limit the complexity of these heuristics and thereby prevent overfitting, we use shallow decision trees [38] with different restrictions on depth over each feature set, producing J different decision trees. We then predict labels for the unlabeled set using these heuristics, producing a matrix Λ of predictions for the unlabeled relationships.

Moreover, we only use these heuristics when they have high confidence in their label; we modify Λ by converting any predicted label whose confidence falls below an empirically chosen threshold to an abstain, i.e., no label assignment. An example of a heuristic is shown in Figure 6: if the subject is above the object, it assigns a positive label for the predicate carry.
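
The following sketch shows one way such heuristics could be generated and applied with scikit-learn: shallow decision trees of varying depth are fit separately on each feature set, and any prediction below a confidence threshold is converted to an abstain. The constants, names, and depth choices are illustrative assumptions, not the paper's exact configuration.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    ABSTAIN = -1

    def generate_heuristics(X_categ, X_spat, y, depths=(1, 2, 3)):
        """Fit shallow decision trees over each image-agnostic feature set of D_L."""
        heuristics = []
        for name, X in (("categ", X_categ), ("spat", X_spat)):
            for depth in depths:
                tree = DecisionTreeClassifier(max_depth=depth).fit(X, y)
                heuristics.append((tree, name))
        return heuristics

    def apply_heuristics(heuristics, X_categ_u, X_spat_u, threshold=0.5):
        """Label unlabeled pairs with every heuristic, abstaining on low confidence."""
        feats = {"categ": X_categ_u, "spat": X_spat_u}
        votes = []
        for tree, name in heuristics:
            proba = tree.predict_proba(feats[name])
            labels = tree.classes_[proba.argmax(axis=1)]
            labels = np.where(proba.max(axis=1) >= threshold, labels, ABSTAIN)
            votes.append(labels)
        return np.stack(votes, axis=1)  # Lambda: (num_unlabeled_pairs, num_heuristics)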

Generative model: These heuristics, individually, are noisy and may not assign labels to all object pairs in D_U. As a result, we aggregate the labels from all heuristics. To do so, we leverage a factor graph-based generative model popular in text-based weak supervision techniques [48, 39, 45, 41, 1]. This model learns the accuracy of each heuristic in order to combine their individual labels; the model's output is a probabilistic label for each object pair.

The generative model uses the following distribution family to relate the latent variable Y (the true classes) and the label matrix Λ produced by the J heuristics, where, following [39], Y_i ∈ {-1, 1} and Λ_ij ∈ {-1, 0, 1} with 0 denoting an abstain:

p_\theta(\Lambda, Y) = Z_\theta^{-1} \exp\Big( \sum_{i=1}^{|D_U|} \sum_{j=1}^{J} \theta_j \Lambda_{ij} Y_i \Big)

where Z_θ is a partition function that ensures p_θ is normalized. The parameter θ_j encodes the average accuracy of heuristic j and is estimated by maximizing the marginal likelihood of the observed heuristic labels Λ. The generative model then assigns probabilistic labels by computing p_θ(Y_i | Λ) for each object pair in D_U.
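
The paper's generative model is the factor graph from [39, 46]; as a simplified, self-contained stand-in, the sketch below treats the heuristics as conditionally independent and alternates between estimating a posterior over the latent class and re-estimating each heuristic's accuracy. It is only meant to convey the idea of weighting heuristics by learned accuracy, not to reproduce the paper's implementation.

    import numpy as np

    ABSTAIN = -1

    def probabilistic_labels(L, n_classes, n_iter=10):
        """Aggregate a heuristic label matrix L (N pairs x J heuristics) into soft labels.

        Entries of L are class ids in {0..n_classes-1} or ABSTAIN. Returns an
        (N, n_classes) matrix of probabilistic labels.
        """
        N, J = L.shape
        acc = np.full(J, 0.7)  # initial guess at each heuristic's accuracy
        probs = np.full((N, n_classes), 1.0 / n_classes)
        for _ in range(n_iter):
            # E-step: naive-Bayes posterior over the latent true class of every pair,
            # assuming a heuristic errs uniformly over the other classes.
            log_p = np.zeros((N, n_classes))
            for j in range(J):
                voted = L[:, j] != ABSTAIN
                for y in range(n_classes):
                    agree = L[voted, j] == y
                    log_p[voted, y] += np.where(
                        agree, np.log(acc[j]), np.log((1 - acc[j]) / (n_classes - 1)))
            log_p -= log_p.max(axis=1, keepdims=True)
            probs = np.exp(log_p)
            probs /= probs.sum(axis=1, keepdims=True)
            # M-step: a heuristic's accuracy is its expected agreement with the posterior.
            for j in range(J):
                voted = L[:, j] != ABSTAIN
                if voted.any():
                    acc[j] = np.clip(probs[voted, L[voted, j]].mean(), 0.05, 0.95)
        return probs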

Training the scene graph model: Finally, these probabilistic labels are used to train any scene graph prediction model. While scene graph models are usually trained using a cross-entropy loss [49, 31, 54], we modify this loss function to take into account errors in the training annotations. We adopt a noise-aware empirical risk minimizer, often seen in logistic regression, as our loss function:

\hat{w} = \arg\min_{w} \sum_{i} \mathbb{E}_{\tilde{y} \sim p_\theta(\cdot \mid \Lambda_i)} \Big[ \log\big(1 + \exp(-w^\top f_i \, \tilde{y})\big) \Big]

where w are the learned parameters, p_θ is the distribution learned by the generative model, ỹ is the (latent) true label, and f_i are the features extracted by any scene graph prediction model.
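
In practice, a scene graph model trained in a modern framework can use a soft-label cross-entropy to the same effect; the short PyTorch sketch below (our names, not the paper's code) computes the expected negative log-likelihood under the generative model's label distribution.

    import torch
    import torch.nn.functional as F

    def noise_aware_loss(logits, soft_targets):
        """Cross-entropy against probabilistic labels.

        logits:       (N, P) unnormalized predicate scores from the scene graph model.
        soft_targets: (N, P) probabilistic labels from the generative model (rows sum to 1).
        """
        log_probs = F.log_softmax(logits, dim=-1)
        # Expectation over the latent true label, weighted by the generative model's beliefs.
        return -(soft_targets * log_probs).sum(dim=-1).mean()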

Model                    Prec.   Recall   F1      Acc.
Random                   5.00    5.00     5.00    5.00
Decision Tree            46.79   35.32    40.25   36.92
Label Propagation        76.48   32.71    45.82   12.85
Ours (Majority Vote)     55.01   57.26    56.11   40.04
Ours (Categ. + Spat.)    54.83   60.79    57.66   50.31
Table 1: We validate our approach for labeling missing relationships with only a few labeled examples by evaluating our probabilistic labels on the fully-annotated VRD dataset [31] using macro-averaged metrics.

5 Experiments

Figure 7: (a) Heuristics based on spatial features help predict man - fly - kite. (b) Our model learns that look is highly correlated with phone. (c) We overfit to the importance of chair as a categorical feature for sit, and fail to identify hang as the correct relationship. (d) We overfit to the spatial positioning associated with ride, where objects are typically longer and directly underneath the subject. (e) Given our image-agnostic features, we produce a reasonable label for glass - cover - face. However, our model is incorrect, as two typically different predicates (sit and cover) share a semantic meaning in the context of glasses - ? - face.

To test our semi-supervised approach for completing visual knowledge bases by annotating missing relationships, we perform a series of experiments and evaluate our framework in several stages. We start by discussing the datasets, baselines, and evaluation metrics used. (1) Our first experiment tests our generative model's ability to find missing relationships in the completely-annotated VRD dataset [31]. (2) Our second experiment demonstrates the utility of our generated labels by using them to train a state-of-the-art scene graph model [54]. We compare our labels to those from the large Visual Genome dataset [27]. (3) Finally, to show how our semi-supervised method performs against strong baselines in limited-label settings, we compare extensively to transfer learning; we focus on a subset of relationships with limited labels, allow the transfer learning model to pretrain on frequent relationships, and demonstrate that our semi-supervised method outperforms transfer learning, which has seen more data. Furthermore, we quantify when our method outperforms transfer learning using our metric for measuring relationship complexity (Section 3.3).

Model                             | Scene Graph Detection | Scene Graph Classification | Predicate Classification
                                  | R@20  R@50  R@100     | R@20  R@50  R@100          | R@20  R@50  R@100
Baselines:
Baseline (labeled examples only)  | 0.00  0.00  0.00      | 0.04  0.04  0.04           | 3.17  5.30  6.61
Freq                              | 9.01  11.01 11.64     | 11.10 11.08 10.92          | 20.98 20.98 20.80
Freq+Overlap                      | 10.16 10.84 10.86     | 9.90  9.91  9.91           | 20.39 20.90 22.21
Transfer Learning                 | 11.99 14.40 16.48     | 17.10 17.91 18.16          | 39.69 41.65 42.37
Decision tree [38]                | 11.11 12.58 13.23     | 14.02 14.51 14.57          | 31.75 33.02 33.35
Label propagation [57]            | 6.48  6.74  6.83      | 9.67  9.91  9.97           | 24.28 25.17 25.41
Ablations:
Ours (Deep)                       | 2.97  3.20  3.33      | 10.44 10.77 10.84          | 23.16 23.93 24.17
Ours (Spat.)                      | 3.26  3.20  2.91      | 10.98 11.28 11.37          | 26.23 27.10 27.26
Ours (Categ.)                     | 7.57  7.92  8.04      | 20.83 21.44 21.57          | 43.49 44.93 45.50
Ours (Categ. + Spat. + Deep)      | 7.33  7.70  7.79      | 17.03 17.35 17.39          | 38.90 39.87 40.02
Ours (Categ. + Spat. + WordVec)   | 8.43  9.04  9.27      | 20.39 20.90 21.21          | 45.15 46.82 47.32
Ours (Majority Vote)              | 16.86 18.31 18.57     | 18.96 19.57 19.66          | 44.18 45.99 46.63
Ours (Categ. + Spat.)             | 17.67 18.69 19.28     | 20.91 21.34 21.44          | 45.49 47.04 47.53
Oracle (all of Visual Genome)     | 24.42 29.67 30.15     | 30.15 30.89 31.09          | 69.23 71.40 72.15
Table 2: Results for the scene graph prediction tasks with a limited number of labeled examples per predicate, reported as recall@K. A state-of-the-art scene graph model trained on labels from our method outperforms the same model trained with labels generated by the baselines, such as transfer learning.
Figure 8: A scene graph model [54] trained using our labels consistently outperforms the same model trained with Transfer Learning labels or with only the Baseline labeled examples, across scene graph classification and predicate classification and for different amounts of available labeled relationship instances. We also compare to Oracle, which is trained with far more labeled data.
Figure 9: Our method's improvement over transfer learning (in terms of R@100 for predicate classification) is correlated with the number of subtypes in the train set (left), the number of subtypes in the unlabeled set (middle), and the proportion of subtypes in the labeled set (right).

Eliminating synonyms and supersets. Past scene graph approaches have typically studied visual relationships using the most frequent predicates in Visual Genome. Unfortunately, these treat synonyms like laying on and lying on as separate classes. To make matters worse, some predicates can be considered a superset of others (e.g. above is a superset of riding). Our method, as well as the baselines, is unable to differentiate between synonyms and supersets. For the experiments in this section, we eliminate all supersets and merge all synonyms, resulting in a reduced set of unique predicates. In the Supplementary Material, we include a list of these predicates and report our method's performance on all predicates.

Dataset. We use two standard datasets, VRD [31] and Visual Genome [27], to evaluate on tasks related to visual relationships and scene graphs. Each scene graph contains objects localized as bounding boxes in the image along with pairwise relationships connecting them, categorized as action (e.g., carry), possessive (e.g., wear), spatial (e.g., above), or comparative (e.g., taller than) descriptors. Visual Genome is a large visual knowledge base containing over 100K images. Due to its scale, each scene graph is left with incomplete labels, making it difficult to measure the precision of our semi-supervised algorithm. VRD is a smaller but completely annotated dataset. To measure the quality of our semi-supervised method's generated labels, we evaluate them on the VRD dataset (Section 5.1). Later, we show that the training labels produced can be used to train a large-scale scene graph prediction model, evaluated on Visual Genome (Section 5.2).

Evaluation metrics. We measure precision and recall of our generated labels on the VRD dataset’s test set (Section 5.1). To evaluate a scene graph model trained on our labels, we use three standard evaluation modes for scene graph prediction [31]: (i) scene graph detection (SGDET) which expects input images and predicts bounding box locations, object categories, and predicate labels, (ii) scene graph classification (SGCLS) which expects ground truth boxes and predicts object categories and predicate labels, and (iii) predicate classification (PREDCLS), which expects ground truth bounding boxes and object categories to predict predicate labels. We refer the reader to the paper that introduced these tasks for more details [31]. Finally, we explore how relationship complexity, measured using our definition of subtypes, is correlated with our model’s performance relative to transfer learning (Section 5.3).
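
For concreteness, a minimal recall@K computation for the PREDCLS setting described above might look like the sketch below; the exact matching rules used in the benchmark are more involved, and the names here are ours.

    def predicate_recall_at_k(gt_triples, scored_triples, k=100):
        """Recall@K for one image in the PREDCLS setting.

        gt_triples:     set of ground-truth (subj_idx, predicate, obj_idx) triples.
        scored_triples: list of ((subj_idx, predicate, obj_idx), score) predictions made
                        over the ground-truth boxes and object categories.
        """
        top_k = {t for t, _ in sorted(scored_triples, key=lambda x: -x[1])[:k]}
        hits = sum(1 for t in gt_triples if t in top_k)
        return hits / max(len(gt_triples), 1)

    # The reported metric averages this per-image recall over all test images.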

Baselines. We compare to alternative methods for generating training labels that can then be used to train downstream scene graph models. Oracle is trained on all of Visual Genome, which amounts to many times the quantity of labeled relationships in D_L; this serves as the upper bound for how well we expect to perform. Decision tree [38] fits a single decision tree over the image-agnostic features, learns from the labeled examples in D_L, and assigns labels to D_U. Label propagation [57] employs a widely-used semi-supervised method and considers the distribution of image-agnostic features in D_U before propagating labels from D_L to D_U.

We also compare to two strong frequency baselines: Freq uses object counts as priors to make relationship predictions, and Freq+Overlap increments such counts only if the bounding boxes of the objects overlap. We include a Transfer Learning baseline, which is the de-facto choice for training models with limited data [15, 52]. However, unlike all other methods, transfer learning requires a source dataset for pretraining. We treat the source domain as the remaining frequent relationships in Visual Genome that do not overlap with our chosen relationships. We then fine-tune with the limited labeled examples for our chosen predicates. We note that Transfer Learning has an unfair advantage because there is overlap in objects between its source and target relationship sets. Our experiments will show that, even with this advantage, our method performs better.

Ablations. We perform several ablation studies for the image-agnostic features and the heuristic aggregation components of our model. Ours (Categ.) uses only categorical features, Ours (Spat.) uses only spatial features, Ours (Deep) uses only deep learning features extracted with ResNet50 [20] from the union of the object pair's bounding boxes, Ours (Categ. + Spat.) concatenates categorical and spatial features, Ours (Categ. + Spat. + Deep) combines all three, and Ours (Categ. + Spat. + WordVec) includes word vectors as richer representations of the categorical features. Ours (Majority Vote) uses the categorical and spatial features but replaces our generative model with a simple majority voting scheme to aggregate the heuristic function outputs.

5.1 Labeling missing relationships

We evaluate our performance in annotating missing relationships in D_U. Before we use these labels to train scene graph prediction models, we report results comparing our method to baselines in Table 1. On the fully annotated VRD dataset [31], Ours (Categ. + Spat.) achieves an F1 of 57.66 given only a few labeled examples, which is 11.84, 17.41, and 1.55 points better than Label Propagation, Decision Tree, and Majority Vote, respectively.

Qualitative error analysis. We visualize labels assigned by Ours in Figure 7 and find that they correspond to the image-agnostic rules explored in Figure 3. In Figure 7(a), Ours predicts fly because it learns that fly typically involves objects with a large difference in y-coordinate. In Figure 7(b), we correctly label look because phone is an important categorical feature. In some difficult cases, our semi-supervised model fails to generalize beyond the image-agnostic features. In Figure 7(c), we mislabel hang as sit by incorrectly relying on the categorical feature chair, which is one of sit's important features. In Figure 7(d), our model has learned that ride typically places the subject directly above a slightly larger object and therefore predicts book - ride - shelf instead of book - sitting on - shelf. In Figure 7(e), our model reasonably classifies glasses - cover - face. However, sit exhibits the same semantic meaning as cover in this context, and our model incorrectly classifies the example.

5.2 Training Scene graph prediction models

We compare our method's labels to those generated by the baselines described earlier by using them to train models for the three scene graph tasks, and report results in Table 2. We improve over all baselines, including our primary baseline, Transfer Learning, by 5.16 recall@100 for PREDCLS. We also close much of the gap to Oracle for SGDET despite using far less labeled data. We generate higher quality training labels than Decision Tree and Label Propagation, leading to a 14.18 and 22.12 recall@100 increase for PREDCLS, respectively.

Effect of labeled and unlabeled data. In Figure 8 (left two graphs), we visualize how SGCLS and PREDCLS performance varies as we reduce the number of available labeled examples per predicate. We observe greater advantages over Transfer Learning as the number of labeled examples decreases, with the largest increase in recall@100 for PREDCLS in the lowest-label regime. This result matches our observations from Section 3, because a larger set of labeled examples gives Transfer Learning information about a larger proportion of subtypes for each relationship. In Figure 8 (right two graphs), we visualize our performance as the number of unlabeled data points increases, finding that we approach Oracle performance with more unlabeled examples.

Ablations. Ours (Categ. + Spat. + Deep) hurts performance by up to 7.51 recall@100 for PREDCLS because it overfits to image features, while Ours (Categ. + Spat.) performs the best. We also show an improvement of 0.71 recall@100 for SGDET over Ours (Majority Vote), indicating that the generated heuristics indeed have different accuracies and should be weighted differently.

5.3 Transfer learning vs. semi-supervised learning

Inspired by recent work comparing transfer learning and semi-supervised learning [36], we characterize when our method is preferred over transfer learning. Using the relationship complexity metric based on the spatial and categorical subtypes of each predicate (Section 3), we show this trend in Figure 9. When a predicate has high complexity (as measured by a large number of subtypes), Ours (Categ. + Spat.) outperforms Transfer Learning (Figure 9, left), with a strong positive correlation. We also evaluate how the number of subtypes in the unlabeled set affects the performance of our model (Figure 9, center). We again find a strong correlation: our method can effectively assign labels to unlabeled relationships with a large number of subtypes. Finally, we compare the difference in performance to the proportion of subtypes captured in the labeled set (Figure 9, right). As we hypothesized earlier, Transfer Learning suffers when the labeled set only captures a small portion of a relationship's subtypes. This trend explains why Ours (Categ. + Spat.) performs better when given a small portion of labeled subtypes.

6 Conclusion

We introduce the first method that completes visual knowledge bases like Visual Genome by finding missing visual relationships. We define categorical and spatial image-agnostic features and introduce a factor graph-based generative model that uses these features to assign probabilistic labels to unlabeled images. Our method outperforms baselines in F1 score when finding missing relationships in the fully-annotated VRD dataset. Our labels can also be used to train scene graph prediction models with minor modifications to their loss function to accept probabilistic labels. We outperform transfer learning and other baselines and come close to the oracle performance of the same model trained on the full labeled dataset, despite using only a fraction of the labeled data. Finally, we introduce a metric to characterize the complexity of visual relationships and show that it is a strong indicator of how our semi-supervised method performs compared to such baselines.

Acknowledgements.

This work was partially funded by the Brown Institute of Media Innovation, the Toyota Research Institute (“TRI”), DARPA under Nos. FA87501720095 and FA86501827865, NIH under No. U54EB020405, NSF under Nos. CCF1763315 and CCF1563078, ONR under No. N000141712266, the Moore Foundation, NXP, Xilinx, LETI-CEA, Intel, Google, NEC, Toshiba, TSMC, ARM, Hitachi, BASF, Accenture, Ericsson, Qualcomm, Analog Devices, the Okawa Foundation, and American Family Insurance, Google Cloud, Swiss Re, NSF Graduate Research Fellowship under No. DGE-114747, Joseph W. and Hon Mai Goodman Stanford Graduate Fellowship, and members of Stanford DAWN: Intel, Microsoft, Teradata, Facebook, Google, Ant Financial, NEC, SAP, VMWare, and Infosys. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views, policies, or endorsements, either expressed or implied, of DARPA, NIH, ONR, or the U.S. Government.

Footnotes

  1. Recall@K is a standard measure for scene graph prediction [31].

References

  1. E. Alfonseca, K. Filippova, J. Delort and G. Garrido (2012) Pattern learning for relation extraction with a hierarchical topic model. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers-Volume 2, pp. 54–59. Cited by: §4.
  2. C. J. Anderson, S. Wasserman and K. Faust (1992) Building stochastic blockmodels. Social networks 14 (1-2), pp. 137–161. Cited by: §2.
  3. P. Anderson, B. Fernando, M. Johnson and S. Gould (2016) Spice: semantic propositional image caption evaluation. In European Conference on Computer Vision, pp. 382–398. Cited by: §1.
  4. S. Auer, C. Bizer, G. Kobilarov, J. Lehmann, R. Cyganiak and Z. Ives (2007) Dbpedia: a nucleus for a web of open data. In The semantic web, pp. 722–735. Cited by: §2.
  5. K. Bollacker, C. Evans, P. Paritosh, T. Sturge and J. Taylor (2008) Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pp. 1247–1250. Cited by: §2.
  6. A. Bordes, X. Glorot, J. Weston and Y. Bengio (2014) A semantic matching energy function for learning with multi-relational data. Machine Learning 94 (2), pp. 233–259. Cited by: §1.
  7. A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston and O. Yakhnenko (2013) Translating embeddings for modeling multi-relational data. In Advances in neural information processing systems, pp. 2787–2795. Cited by: §1.
  8. R. Bunescu and R. Mooney (2007) Learning to extract relations from the web using minimal supervision. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pp. 576–583. Cited by: §2.
  9. A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E. R. Hruschka Jr and T. M. Mitchell (2010) Toward an architecture for never-ending language learning.. In AAAI, Vol. 5, pp. 3. Cited by: §2.
  10. J. Cheng and M. S. Bernstein (2015) Flock: hybrid crowd-machine learning classifiers. In Proceedings of the 18th ACM conference on computer supported cooperative work & social computing, pp. 600–611. Cited by: §2.
  11. Y. Cheng (1995) Mean shift, mode seeking, and clustering. IEEE transactions on pattern analysis and machine intelligence 17 (8), pp. 790–799. Cited by: §3.3.
  12. M. Craven and J. Kumlien (1999) Constructing biological knowledge bases by extracting information from text sources.. In ISMB, Vol. 1999, pp. 77–86. Cited by: §2.
  13. A. Culotta and J. Sorensen (2004) Dependency tree kernels for relation extraction. In Proceedings of the 42nd annual meeting on association for computational linguistics, pp. 423. Cited by: §1.
  14. B. Dai, Y. Zhang and D. Lin (2017) Detecting visual relationships with deep relational networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3298–3308. Cited by: §2.
  15. J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng and T. Darrell (2014) Decaf: a deep convolutional activation feature for generic visual recognition. In International conference on machine learning, pp. 647–655. Cited by: §2, §5.
  16. C. Galleguillos, A. Rabinovich and S. Belongie (2008) Object categorization using co-occurrence, location and appearance. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pp. 1–8. Cited by: §2.
  17. M. Gardner, P. Talukdar, J. Krishnamurthy and T. Mitchell (2014) Incorporating vector space similarity in random walk inference over knowledge bases. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 397–406. Cited by: §1.
  18. Z. GuoDong, S. Jian, Z. Jie and Z. Min (2005) Exploring various knowledge in relation extraction. In Proceedings of the 43rd annual meeting on association for computational linguistics, pp. 427–434. Cited by: §1.
  19. K. He, G. Gkioxari, P. Dollár and R. Girshick (2017) Mask r-cnn. In Computer Vision (ICCV), 2017 IEEE International Conference on, pp. 2980–2988. Cited by: §4, §4.
  20. K. He, X. Zhang, S. Ren and J. Sun (2015) Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385. Cited by: §5.
  21. P. Hoff (2008) Modeling homophily and stochastic equivalence in symmetric relational data. In Advances in neural information processing systems, pp. 657–664. Cited by: §2.
  22. R. Hoffmann, C. Zhang, X. Ling, L. Zettlemoyer and D. S. Weld (2011) Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pp. 541–550. Cited by: §2.
  23. J. Johnson, A. Gupta and L. Fei-Fei (2018) Image generation from scene graphs. arXiv preprint arXiv:1804.01622. Cited by: §1.
  24. J. Johnson, B. Hariharan, L. van der Maaten, J. Hoffman, L. Fei-Fei, C. L. Zitnick and R. Girshick (2017) Inferring and executing programs for visual reasoning. arXiv preprint arXiv:1705.03633. Cited by: §1.
  25. J. Johnson, R. Krishna, M. Stark, L. Li, D. Shamma, M. Bernstein and L. Fei-Fei (2015) Image retrieval using scene graphs. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3668–3678. Cited by: §1, §3.2.
  26. R. Krishna, I. Chami, M. Bernstein and L. Fei-Fei (2018) Referring relationships. In IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §1.
  27. R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L. Li and D. A. Shamma (2017) Visual genome: connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision 123 (1), pp. 32–73. Cited by: §1, §5, §5.
  28. Y. Li, W. Ouyang, X. Wang and X. Tang (2017) Vip-cnn: visual phrase guided convolutional neural network. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pp. 7244–7253. Cited by: §2.
  29. Y. Li, W. Ouyang, B. Zhou, K. Wang and X. Wang (2017) Scene graph generation from objects, phrases and region captions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1261–1270. Cited by: §2.
  30. X. Liang, L. Lee and E. P. Xing (2017) Deep variation-structured reinforcement learning for visual relationship and attribute detection. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pp. 4408–4417. Cited by: §2, §2.
  31. C. Lu, R. Krishna, M. Bernstein and L. Fei-Fei (2016) Visual relationship detection with language priors. In European Conference on Computer Vision, pp. 852–869. Cited by: §1, §1, §2, §3.2, Table 1, §4, §5.1, §5, §5, §5, footnote 1.
  32. M. Mintz, S. Bills, R. Snow and D. Jurafsky (2009) Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pp. 1003–1011. Cited by: §2.
  33. M. Nickel, V. Tresp and H. Kriegel (2011) A three-way model for collective learning on multi-relational data.. In ICML, Vol. 11, pp. 809–816. Cited by: §1, §2.
  34. M. Nickel, V. Tresp and H. Kriegel (2012) Factorizing yago: scalable machine learning for linked data. In Proceedings of the 21st international conference on World Wide Web, pp. 271–280. Cited by: §2.
  35. M. Nickel (2013) Tensor factorization for relational learning. Ph.D. Thesis, lmu. Cited by: §2.
  36. A. Oliver, A. Odena, C. Raffel, E. D. Cubuk and I. J. Goodfellow (2018) Realistic evaluation of deep semi-supervised learning algorithms. arXiv preprint arXiv:1804.09170. Cited by: §5.3.
  37. P. Orbanz and D. M. Roy (2015) Bayesian models of graphs, arrays and other exchangeable random structures. IEEE transactions on pattern analysis and machine intelligence 37 (2), pp. 437–461. Cited by: §2.
  38. J. R. Quinlan (1986) Induction of decision trees. Machine learning 1 (1), pp. 81–106. Cited by: §4, Table 2, §5.
  39. A. J. Ratner, C. M. De Sa, S. Wu, D. Selsam and C. Ré (2016) Data programming: creating large training sets, quickly. In Advances in Neural Information Processing Systems 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon and R. Garnett (Eds.), pp. 3567–3575. Cited by: §1, §4.
  40. S. Riedel, L. Yao and A. McCallum (2010) Modeling relations and their mentions without labeled text. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 148–163. Cited by: §2.
  41. B. Roth and D. Klakow (2013) Combining generative and discriminative model scores for distant supervision. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 24–29. Cited by: §4.
  42. S. Schuster, R. Krishna, A. Chang, L. Fei-Fei and C. D. Manning (2015) Generating semantically precise scene graphs from textual descriptions for improved image retrieval. In Proceedings of the fourth workshop on vision and language, pp. 70–80. Cited by: §1.
  43. J. Shin, S. Wu, F. Wang, C. De Sa, C. Zhang and C. Ré (2015) Incremental knowledge base construction using deepdive. Proceedings of the VLDB Endowment 8 (11), pp. 1310–1321. Cited by: §2.
  44. F. M. Suchanek, G. Kasneci and G. Weikum (2007) Yago: a core of semantic knowledge. In Proceedings of the 16th international conference on World Wide Web, pp. 697–706. Cited by: §2.
  45. S. Takamatsu, I. Sato and H. Nakagawa (2012) Reducing wrong labels in distant supervision for relation extraction. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pp. 721–729. Cited by: §4.
  46. P. Varma, B. D. He, P. Bajaj, N. Khandwala, I. Banerjee, D. Rubin and C. Ré (2017) Inferring generative model structure with static analysis. In Advances in Neural Information Processing Systems, pp. 239–249. Cited by: §1.
  47. D. Vrandečić and M. Krötzsch (2014) Wikidata: a free collaborative knowledgebase. Communications of the ACM 57 (10), pp. 78–85. Cited by: §2.
  48. T. Xiao, T. Xia, Y. Yang, C. Huang and X. Wang (2015) Learning from massive noisy labeled data for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2691–2699. Cited by: §4.
  49. D. Xu, Y. Zhu, C. B. Choy and L. Fei-Fei (2017) Scene graph generation by iterative message passing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Vol. 2. Cited by: Figure 2, §1, §2, §2, §3.2, §4.
  50. J. Yang, J. Lu, S. Lee, D. Batra and D. Parikh (2018) Graph r-cnn for scene graph generation. arXiv preprint arXiv:1808.00191. Cited by: §2, §2, §3.2.
  51. B. Yao and L. Fei-Fei (2010) Modeling mutual context of object and human pose in human-object interaction activities. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pp. 17–24. Cited by: §2.
  52. J. Yosinski, J. Clune, Y. Bengio and H. Lipson (2014) How transferable are features in deep neural networks?. In Advances in neural information processing systems, pp. 3320–3328. Cited by: §2, §5.
  53. R. Yu, A. Li, V. I. Morariu and L. S. Davis (2017) Visual relationship detection with internal and external linguistic knowledge distillation. arXiv preprint arXiv:1707.09423. Cited by: §2.
  54. R. Zellers, M. Yatskar, S. Thomson and Y. Choi (2017) Neural motifs: scene graph parsing with global context. arXiv preprint arXiv:1711.06640. Cited by: Figure 2, §1, §1, §2, §4, Figure 8, §5.
  55. J. Zhang, Y. Kalantidis, M. Rohrbach, M. Paluri, A. Elgammal and M. Elhoseiny (2018) Large-scale visual relationship understanding. arXiv preprint arXiv:1804.10660. Cited by: §2, §2.
  56. G. Zhou, M. Zhang, D. Ji and Q. Zhu (2007) Tree kernel-based relation extraction with context-sensitive structured parse tree information. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), Cited by: §1.
  57. X. Zhu and Z. Ghahramani (2002) Learning from labeled and unlabeled data with label propagation. Technical Report. Cited by: §1, Table 2, §5.