HUSE: Hierarchical Universal Semantic Embeddings

Pradyumna Narayana, Aniket Pednekar, Abishek Krishnamoorthy, Kazoo Sone, Sugato Basu
Google, Georgia Institute of Technology
{pradyn,aniketvp,sone,sugato}@google.com, akrishna61@gatech.edu
This work was conducted at Google.
Abstract

There is a recent surge of interest in cross-modal representation learning corresponding to images and text. The main challenge lies in mapping images and text to a shared latent space where the embeddings corresponding to a similar semantic concept lie closer to each other than the embeddings corresponding to different semantic concepts, irrespective of the modality. Ranking losses are commonly used to create such a shared latent space — however, they do not impose any constraints on inter-class relationships, which can leave neighboring clusters completely unrelated. Works in the domain of visual semantic embeddings address this problem by first constructing a semantic embedding space based on some external knowledge and projecting image embeddings onto this fixed semantic embedding space. These works are confined to the image domain, and constraining the embeddings to a fixed space places an additional burden on learning. This paper proposes a novel method, HUSE, to learn cross-modal representation with semantic information. HUSE learns a shared latent space where the distance between any two universal embeddings is similar to the distance between their corresponding class embeddings in the semantic embedding space. HUSE also uses a classification objective with a shared classification layer to make sure that the image and text embeddings are in the same shared latent space. Experiments on UPMC Food-101 show our method outperforms the previous state-of-the-art on retrieval, hierarchical precision and classification results.

1 Introduction

The internet has been evolving from primarily being text-based to being multimodal, where text content is augmented with content from image or video modalities. With the rapid growth of different media types, learning discriminative feature representations of all these modalities is an essential yet challenging problem. One key useful feature is that when data from different modalities share the same semantics or have latent correlations (e.g., a picture of a baby and audio of a baby crying), we would like the resulting multimodal embeddings to share a common latent space. This space would allow the embeddings from multiple modalities to exploit the complementary information among themselves, while enriching the resulting common latent space. The resulting modality-agnostic (universal) embeddings would highly benefit the search, ranking, ads and e-commerce spaces. Moreover, such universal embeddings are highly relevant for cross-modal retrieval. However, learning universal embeddings is a challenging task due to the “media gap” [25], which means features from different modalities can be inconsistent.

There is significant recent work in the domain of cross-modal retrieval for creating a universal embedding space. The majority of these approaches create a latent space by aligning the embeddings from different modalities using ranking losses [40, 35, 17, 9, 4, 42, 30]. These losses encourage the embeddings corresponding to a semantic class to lie closer to each other than the embeddings corresponding to two different classes. However, they do not impose any constraints on inter-class relationships. So, the neighboring clusters may be completely unrelated, and the learned universal embedding space may not be semantically meaningful [3]. For example, the average embedding distance between cat and dog classes might be as large as the average distance between cat and bridge classes. This behavior is not desirable in retrieval problems, where it is more meaningful to get semantically similar results.

For example, consider the embedding space corresponding to four classes: cat, dog, bridge, tower in Figure 1. In this space, the embeddings corresponding to an instance (e.g., image and text corresponding to Golden Gate bridge) lie closest to each other. After that, the embeddings corresponding to a semantic class are closer to each other resulting in four clusters. Moreover, the embeddings corresponding to cat and dog classes are closer to each other as they are semantically similar. Similarly, the embeddings corresponding to bridge and tower are closer to each other. This hierarchical semantic universal embedding space would be very useful in cross modal retrieval. When the dog class is queried, dog results would be retrieved first followed by cat results.

Figure 1: An example universal embedding space. Cat, dog, bridge and tower classes are represented with red, green, yellow and blue colors respectively. Circles represent image embeddings and plus represent text embeddings.

The domain of visual semantic embeddings addresses this issue by incorporating semantic knowledge into image embeddings. The majority of these methods create an embedding space based on some external knowledge such as the WordNet hierarchy or pre-trained word embeddings and project image embeddings onto this fixed embedding space [11, 3]. Constraining the image embeddings in this manner might limit the performance of these embeddings. Moreover, to the best of our knowledge, there is no existing work on incorporating this semantic information into modalities beyond images. Although these methods may be extensible to other modalities, the issue of the media gap still needs to be addressed.

We propose a novel method, HUSE, that addresses the above issues to learn Hierarchical Universal Semantic Embeddings. Unlike the previous ranking loss based methods that learn universal embeddings, HUSE projects images and text into a shared latent space by using a shared classification layer for the image and text modalities. Unlike the visual semantic embedding methods that project image embeddings onto a fixed semantic embedding space, HUSE allows the shared latent space to be completely independent of the semantic embedding space, while still having a similar semantic structure to the semantic embedding space. HUSE incorporates semantic information by encouraging the distance between any two universal embeddings to match the distance between their class label embeddings in the semantic embedding space. HUSE achieves state-of-the-art retrieval, hierarchical precision, and classification results on the UPMC Food-101 dataset.

In summary, the main contributions of the paper are:

  1. A novel architecture that uses shared classification layer to learn universal embeddings.

  2. Incorporation of semantic information into a universal embedding space.

  3. A novel semantic embedding method that doesn’t involve projection onto a fixed semantic embedding space.

  4. State-of-the-art retrieval, hierarchical precision, and classification results on UPMC Food-101 dataset.

2 Related Work

This section briefly reviews the literature corresponding to universal embeddings, multimodal classification and semantic embeddings as our work is related to these areas.

2.1 Universal Embeddings

The previous works on universal embeddings map images and text into a shared embedding space by maximizing correlation between related instances in the embedding space [1, 12, 27, 38]. More recent works use ranking loss for universal embeddings [40, 35, 17, 9, 11, 4]. Additional auxiliary tasks such as categorization [14, 27, 29], adversarial losses [13, 34, 30] and multi-head self-attention  [32] are also used to align these embeddings better.

Unlike these works, we learn universal embeddings using a shared hidden layer and a classification objective. The image and text features extracted from pretrained networks are passed through their respective towers and the resulting embeddings are L2 normalized. These normalized embeddings are classified by a shared hidden layer, which helps in mapping images and text into a shared embedding space.

2.2 Multimodal Classification

Previous works on multimodal classification can be primarily divided into early fusion, intermediate fusion and late fusion methods. Early fusion methods fuse the features from multiple modalities before passing them into a deep neural network [37, 24]. Intermediate fusion methods fuse (by concatenation or cross modal attention) the features from the intermediate layers of deep networks processing different modalities, and the fused features are classified [16, 2]. Late fusion methods classify different modalities with different deep networks and fuse the classification scores by simple methods such as weighted averaging, or by training another network on top of the classification scores [36, 22, 19, 23, 20, 21].

We employ the late fusion approach for multi-modal classification where the classification scores from image and text modalities are fused. However, unlike other late fusion approaches that have a separate classification layer for each modality, we use a shared hidden layer for all modalities. In addition to doing multimodal classification, this shared hidden layer helps in building a universal embedding space by clustering the embeddings corresponding to a class together.

2.3 Semantic Embedding

In the computer vision literature, many prior works incorporated external knowledge to improve classification and image retrieval. As ImageNet classes are derived from WordNet, the WordNet ontology is exploited for classification [6, 33, 39, 41] and image retrieval [7, 18] tasks. Frome et al. incorporated prior world-knowledge by mapping pre-trained image embeddings onto word embeddings of class labels learned from text corpora [11]. Barz and Denzler learned hierarchy-based image embeddings by mapping images onto class embeddings whose pair-wise dot products correspond to a measure of semantic similarity between classes [3]. They computed class embeddings by a deterministic algorithm based on prior world-knowledge encoded in a hierarchy of classes such as WordNet. Juan et al. learned image semantic embeddings based on the co-click information of the images [15].

In contrast to the above feature level approaches that map image embeddings to word embeddings, our method learns a universal embedding space that is similar to the class level semantic embedding space in terms of distance. This gives the additional flexibility of having a different dimensionality than the class level embedding space.

3 Method

3.1 Problem Formulation

Given a labeled set D = {(I_n, T_n, y_n)}_{n=1}^{N} that contains image-text-label triples, where y_n is the class label corresponding to the image I_n and the text T_n, the objective is to learn a universal embedding space that achieves semantic understanding. Let x_n correspond to the modality agnostic representation that represents either the image I_n or the text T_n with class label y_n.

Let F_I represent a function that projects an image to a dense vector representing an embedding in the universal embedding space, and let F_T represent a function that projects a text to the universal embedding space. Let F correspond to a modality agnostic projection function that projects x_n to the universal embedding space; F(x_n) corresponds to F_I(x_n) if x_n is an image and to F_T(x_n) if x_n is text. Let d(·, ·) be the distance metric that measures the distance between two embeddings. Unless otherwise mentioned, we will use cosine distance in the rest of the paper.

The universal embedding space that achieves semantic understanding should have the following properties:

  1. Class level similarity: The distance between any two embeddings corresponding to the same class should be on average less than the distance between two embeddings corresponding to different classes. So,

    E_{y_i = y_j}[ d(F(x_i), F(x_j)) ] < E_{y_i ≠ y_j}[ d(F(x_i), F(x_j)) ]    (1)

    where E[·] indicates the average distance, computed over different (suitable) choices of point pairs.

  2. Semantic similarity: The embeddings corresponding to two different but semantically similar classes should be on average closer to each other than the embeddings corresponding to two semantically different classes. So,

    E_{y_i = a, y_j = b}[ d(F(x_i), F(x_j)) ] < E_{y_i = c, y_j = e}[ d(F(x_i), F(x_j)) ]    (2)

    if a and b are semantically more similar classes than c and e.

  3. Cross modal gap: The cross modal gap makes universal embedding learning challenging, as the embeddings from different modalities have different distributions [25]. Ideally, the distance between a paired image and text should be close to zero as they correspond to the same instance. So, we would like to have

    d(F_I(I_n), F_T(T_n)) ≈ 0    (3)

    for all paired instances n in the data.
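Concretely, the cosine distance d used throughout these properties can be sketched in a few lines of numpy (the helper name is ours):

```python
import numpy as np

def cosine_distance(u, v):
    """d(u, v) = 1 - cos(u, v); 0 for identical directions, 1 for orthogonal ones."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

d_same = cosine_distance(np.array([1.0, 0.0]), np.array([1.0, 0.0]))  # 0.0
d_orth = cosine_distance(np.array([1.0, 0.0]), np.array([0.0, 1.0]))  # 1.0
```

Since the universal embeddings are later L2 normalized, this distance reduces to one minus a dot product.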

3.2 Network Structure

HUSE consists of an image tower that returns universal embeddings corresponding to an image. The image tower consists of multiple fully connected layers with ReLU non-linearity and dropout. The last fully connected layer has D hidden units and the embedding from the last layer is L2 normalized to restrict the universal embedding space to a D-dimensional unit sphere. To make the training process simple, we extract image embeddings using a pretrained backbone image network and pass them through the image tower to project them into the universal embedding space, as shown in Figure 2. More formally, the backbone image network followed by the image tower corresponds to the image projection function F_I that projects an image to the D-dimensional universal embedding space.

Similarly, the HUSE model has a text tower consisting of multiple fully connected layers with ReLU non-linearity and dropout. Similar to the image tower, the last fully connected layer has D hidden units followed by L2 normalization. Similar to images, we extract text embeddings using a pretrained backbone text network and pass them through this text tower. More formally, the backbone text network followed by the text tower corresponds to the text projection function F_T that projects text to the D-dimensional universal embedding space.

The text tower and image tower can have different numbers of hidden layers and different hidden layer sizes. The only restriction is that the last layer of both towers should have D hidden units followed by L2 normalization.

The resulting embeddings from both image and text towers are passed through a shared fully connected layer of C hidden units. This layer is used to classify the embeddings into the C classes.
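A minimal forward-pass sketch of this architecture (layer counts, input feature sizes, weight initialization and variable names are our assumptions; the real towers use several layers with dropout):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def l2_normalize(x, eps=1e-12):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def tower(x, layers):
    """A stack of fully connected + ReLU layers, L2 normalized at the end."""
    for W, b in layers:
        x = relu(x @ W + b)
    return l2_normalize(x)

rng = np.random.default_rng(0)
D, C = 512, 101                                   # embedding dim and number of classes
img_layers = [(rng.normal(size=(64, D)) * 0.05, np.zeros(D))]    # image tower (1 layer for brevity)
txt_layers = [(rng.normal(size=(768, D)) * 0.05, np.zeros(D))]   # text tower (input dim is a placeholder)
W_shared, b_shared = rng.normal(size=(D, C)) * 0.05, np.zeros(C)  # shared classification layer

img_emb = tower(rng.normal(size=(4, 64)), img_layers)    # unit-norm universal image embeddings
txt_emb = tower(rng.normal(size=(4, 768)), txt_layers)   # unit-norm universal text embeddings
img_logits = img_emb @ W_shared + b_shared               # the same layer classifies both modalities
txt_logits = txt_emb @ W_shared + b_shared
```

The key point is that `W_shared` is applied to both modalities, which is what pulls image and text embeddings into the same latent space.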

Figure 2: Network architecture of HUSE. Image embeddings and text embeddings are passed through their respective towers to get universal embeddings. These embeddings are passed through a shared hidden layer. The semantic graph is used to regularize the universal embedding space.

3.3 Semantic Graph

We use the text embeddings of class names extracted from a pretrained text model to construct a semantic graph. The distance between the text embeddings of two words provides an effective measure of the semantic similarity of the corresponding words [26, 5]. As the text embedding space is semantically meaningful, previous works have tried to project image embeddings into the word embedding space [11, 18].

Each class name is treated as a “vertex” of the semantic graph and every pair of class names is connected by an “edge”. The cosine distance between two class name embeddings is treated as the “edge weight”, so that two semantically similar classes have a lower edge weight than two semantically different classes. This semantic graph is used to regularize the universal embedding space.

More formally, we define the semantic graph as G = (V, E), where V represents the set of C classes and E represents the edges between any two classes. Let Φ represent the function that extracts the embedding of a class name. The adjacency matrix A of graph G contains the non-negative weights associated with each edge, such that:

A_{ij} = d(Φ(c_i), Φ(c_j))    (4)

where d is the cosine distance as specified in Section 3.1.
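The adjacency matrix construction can be sketched as follows (the class-name embeddings here are random stand-ins for pretrained text-model vectors):

```python
import numpy as np

def build_adjacency(class_embeddings):
    """A[i, j] = cosine distance between class-name embeddings i and j (Eq. 4)."""
    E = class_embeddings / np.linalg.norm(class_embeddings, axis=1, keepdims=True)
    return 1.0 - E @ E.T   # cosine distance on unit-norm rows

rng = np.random.default_rng(1)
A = build_adjacency(rng.normal(size=(101, 512)))  # e.g., one embedding per food class
```

By construction the matrix is symmetric with a zero diagonal, so each class is at distance zero from itself.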

3.4 Learning Algorithms

To learn HUSE, the network architecture presented in Section 3.2 is trained using the following loss:

L = α L_class + β L_graph + γ L_gap    (5)

where L_class, L_graph and L_gap correspond to losses that make the learned embedding space satisfy the three properties discussed in Section 3.1, and α, β and γ correspond to weights that control the influence of these three losses. These three losses are discussed in more detail below.

3.4.1 Class Level Similarity

HUSE passes the embeddings from the image tower and text tower through a shared fully connected layer and the model is trained using a softmax cross entropy loss. This discriminative learning objective can cluster the embeddings corresponding to a class together and is easier to train than the usual metric learning objectives. Moreover, the network can also simultaneously do multimodal classification.

More formally, we denote the shared fully connected layer as f_s, which calculates the unnormalized log probabilities (logits) of each class. For a training example x_n, these logits l_n are calculated as:

l_n = f_s(F(x_n))    (6)

From the logits, the probability of each label j for a training example x_n is calculated as

p_{n,j} = exp(l_{n,j}) / Σ_{k=1}^{C} exp(l_{n,k})    (7)

where l_{n,j} is the unnormalized log probability of the j-th class for example x_n. Let y_n correspond to the ground truth label of example x_n. The softmax cross entropy loss is calculated as

L_class = −(1/N) Σ_{n=1}^{N} Σ_{j=1}^{C} 1[y_n = j] log p_{n,j}    (8)

where 1[·] is the indicator function.
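A numpy sketch of the shared-layer softmax cross entropy (the stability shift by the row maximum is a standard implementation detail, not from the paper):

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    """Softmax over shared-layer logits, then mean cross entropy (Eqs. 7-8)."""
    z = logits - logits.max(axis=1, keepdims=True)          # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)    # class probabilities
    rows = np.arange(len(labels))
    return p, -np.mean(np.log(p[rows, labels] + 1e-12))     # pick p_{n, y_n}

rng = np.random.default_rng(2)
probs, loss = softmax_cross_entropy(rng.normal(size=(8, 101)),
                                    rng.integers(0, 101, size=8))
```

Because both modalities share the same layer, the same function scores image and text logits alike.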

3.4.2 Semantic Similarity

To make the learned universal embedding space semantically meaningful, where embeddings corresponding to two semantically similar classes are closer than the embeddings corresponding to two semantically different classes, we regularize the embedding space using the semantic graph discussed in Section 3.3. This semantic graph regularization enforces the additional constraint that the distance between any two embeddings equals the edge weight of their corresponding classes in the semantic graph. As semantically similar classes have a smaller edge weight than semantically different classes, the regularization forces semantically similar classes to be closer to each other.

We use the following graph regularization loss to learn semantic similarity.

L_graph = (1/N²) Σ_{i=1}^{N} Σ_{j=1}^{N} ( d(F(x_i), F(x_j)) − A_{y_i, y_j} )²    (9)

However, it is hard to satisfy the constraint that all pairs of embeddings must adhere to the graph G. So, we relax this constraint by adding a margin m: the regularization is enforced only on pairs whose classes are closer than the margin, and all other embedding pairs are only pushed to be at least the margin apart.

σ(x_i, x_j) = d(F(x_i), F(x_j)) − A_{y_i, y_j}    if A_{y_i, y_j} < m
σ(x_i, x_j) = max(0, m − d(F(x_i), F(x_j)))       otherwise    (10)

After relaxing the constraint, the resulting loss is

L_graph = (1/N²) Σ_{i=1}^{N} Σ_{j=1}^{N} σ(x_i, x_j)²    (11)
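The margin-relaxed graph regularizer can be sketched as below (the margin value and the squared residual are our assumptions about the exact form):

```python
import numpy as np

def graph_loss(U, labels, A, margin=0.6):
    """Margin-relaxed semantic graph regularizer: pairs whose class edge
    weight is below the margin are pulled toward that weight; all other
    pairs are only pushed to be at least `margin` apart."""
    U = U / np.linalg.norm(U, axis=1, keepdims=True)
    D = 1.0 - U @ U.T                      # pairwise cosine distances
    target = A[np.ix_(labels, labels)]     # class edge weight for each pair
    near = target < margin
    residual = np.where(near, D - target, np.maximum(0.0, margin - D))
    return np.mean(residual ** 2)
```

When every pairwise distance already matches its class edge weight the loss vanishes, which is the fixed point the regularizer aims for.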

3.4.3 Cross Modal Gap

To reduce the cross modal gap, the distance between image and text embeddings corresponding to the same instance should be minimized. The following loss function is used to achieve instance level similarity.

L_gap = (1/N) Σ_{n=1}^{N} d(F_I(I_n), F_T(T_n))    (12)
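The cross modal gap loss is then simply the mean cosine distance over aligned image-text pairs; a minimal sketch:

```python
import numpy as np

def cross_modal_gap_loss(img_emb, txt_emb):
    """Mean cosine distance between paired image/text embeddings (Eq. 12)."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    return np.mean(1.0 - np.sum(img * txt, axis=1))
```

Note that, unlike a contrastive loss, this term only pulls matching pairs together; it never pushes non-matching pairs apart.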

4 Experiments

We test our methodology on the UPMC Food-101 dataset. As HUSE incorporates semantic information derived from class labels when learning universal embeddings, we need a large multimodal classification dataset. The UPMC Food-101 dataset [36] is a very large multimodal classification dataset containing 101 food categories. The dataset has about 100,000 textual recipes and their associated images. Apple pie, baby back ribs, strawberry shortcake and tuna tartare are some examples of food categories in this dataset.

4.1 Implementation Details

This section discusses the visual and textual features used by HUSE, followed by the methodology used to construct semantic graph from class labels. Finally, we discuss the network parameters and training process.

4.1.1 Feature Representation

  • Visual: We extract pretrained Graph-Regularized Image Semantic Embeddings (Graph-RISE) of size 64 from individual images [15]. Graph-RISE embeddings are trained on 260 million images to discriminate 40 million ultra-fine-grained semantic labels using a large-scale neural graph learning framework.

  • Textual: We use BERT embeddings (https://github.com/google-research/bert) [8] to obtain a representation of the text. Similar to Devlin et al. [8], we concatenate the embeddings from the last four layers for each token and then average all token embeddings for a document. In addition to the BERT encoding, we also calculate TF-IDF features for the text to provide a notion of importance for each word in relation to the whole corpus, following pre-processing steps similar to [36].

    As the number of tokens in each instance of UPMC Food-101 is far greater than the maximum number of tokens (512) that pre-trained BERT supports, we extract the 512 most salient tokens from the text. To extract the salient tokens for a single example, we treat every sentence in the text as a separate document to build a corpus. We then use this corpus as the input to a TF-IDF model [28] and extract the 512 most important tokens.
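The salient-token selection can be sketched with a small pure-Python TF-IDF over per-sentence documents (the tokenizer and exact scoring here are our assumptions; the paper follows [28]):

```python
import math
from collections import Counter

def salient_tokens(text, k=512):
    """Pick the k highest-TF-IDF tokens of one example, treating each
    sentence as its own document (a sketch of the selection step)."""
    sentences = [s.split() for s in text.split('.') if s.strip()]
    n_docs = len(sentences)
    df = Counter(tok for s in sentences for tok in set(s))   # document frequency
    scores = {}
    for s in sentences:
        tf = Counter(s)
        for tok, f in tf.items():
            idf = math.log(n_docs / df[tok]) + 1.0
            scores[tok] = max(scores.get(tok, 0.0), (f / len(s)) * idf)
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]
```

Tokens that appear in every sentence (stop-word-like terms) get a low IDF and are crowded out of the top-k list.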

4.1.2 Semantic Graph

We construct a semantic graph based on embeddings extracted from the class names, as discussed in Section 3.3. As class names often contain more than a single word (e.g., apple pie), we use the Universal Sentence Encoder [5], which provides sentence-level embeddings, to extract embeddings of class names. To build the semantic graph, each class name is treated as a vertex and the cosine distance between the Universal Sentence Encoder embeddings of two class names is treated as the edge weight. This semantic graph is used in the graph loss stated by Equation 9.

4.1.3 Training Process

The image tower consists of 5 hidden layers of 512 hidden units each, and the text tower consists of 2 hidden layers of 512 hidden units each. A dropout of 0.15 is used between all hidden layers of both towers. The network is trained using the RMSProp optimizer with a learning rate of 1.6192e-05 and momentum set to 0.9, with random batches of 1024 for 250,000 steps. These hyperparameters are chosen to maximize the image and text classification accuracies on the validation set of the UPMC Food-101 dataset.

4.2 Baselines

To test the effectiveness of HUSE, we compare it to the following state-of-the-art methods for cross-modal retrieval and visual semantic embeddings. As visual semantic methods are modeled only for image embeddings, we extend them to handle textual embeddings as well. The input feature representation, number of hidden layers, hidden layer size and other training parameters of the baselines are set to match the HUSE model.

  1. Triplet: This baseline uses the triplet loss with semi-hard online mining to decrease the distance between embeddings corresponding to similar classes, while increasing the distance between embeddings corresponding to different classes [31].

  2. CME: This method learns Cross-Modal Embeddings by maximizing the cosine similarity between positive image-text pairs, and minimizing it between all non-matching image-text pairs. CME uses additional classification loss for semantic regularization [29].

  3. AdaMine: AdaMine uses a double-triplet scheme to align instance-level and semantic-level embeddings [4].

  4. DeViSE* (the asterisk indicates that the original model is extended to support text embeddings): DeViSE maps pre-trained image embeddings onto word embeddings of class labels learned from text corpora using a skip-gram language model [11]. We extend DeViSE to support text by mapping pre-trained text embeddings onto word embeddings of class labels. For a fair comparison, we use the class label embeddings used to construct the semantic graph instead of learning word embeddings of class labels with a skip-gram language model as in the original paper.

  5. HIE*: Hierarchy-based Image Embeddings maps images to class label embeddings and uses an additional classification loss [3]. We extend this model to support text embeddings by mapping text to class centroids and using an additional classification loss on text. Although the original paper deterministically calculates the class embeddings based on the hierarchy, we use the class label embeddings used to construct the semantic graph for a fair comparison.

  6. HUSE-P: This baseline has the same architecture as HUSE and is meant to disentangle the architectural choices of HUSE from the graph loss. HUSE-P maps the universal embedding space to the class label embedding space by projecting image and text embeddings onto class label embeddings, similar to DeViSE* and HIE*. HUSE-P uses the following projection loss instead of the graph loss in Equation 11.

    L_proj = (1/N) Σ_{n=1}^{N} d(F(x_n), Φ(y_n))    (13)

We also compare the classification accuracy of HUSE to previously published results by Wang et al. [36] and Kiela et al. [16] on the UPMC Food-101 dataset. As the CME, HIE* and HUSE-P baselines discussed above also have a classification layer, we compare their classification accuracies to HUSE as well. In addition, we train separate classification models on the image and text modalities of UPMC Food-101 as another baseline for the classification task.

4.3 Retrieval Task

The universal embedding space learned by HUSE can be used for both in-modal and cross-modal retrieval. This section quantifies the performance of HUSE on two in-modal retrieval tasks (Image2Image and Text2Text) and two cross-modal retrieval tasks (Image2Text and Text2Image) by comparing it to the baselines discussed in Section 4.2. Given a query (image or text), the retrieval task is to retrieve the corresponding image or text from the same class. For evaluation, we consider each item in the test set as a query (for instance, an image), and we rank the other candidates according to the cosine distance between the query embedding and the candidate embeddings. The results are evaluated using the recall percentage at top K (R@K), over all queries in the test set. The R@K corresponds to the percentage of queries for which the matching item is ranked among the top K closest results.
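The R@K protocol described above can be sketched as follows (names are ours):

```python
import numpy as np

def recall_at_k(query_emb, cand_emb, query_labels, cand_labels, k):
    """Fraction of queries with at least one same-class candidate among
    the top-k candidates ranked by cosine distance."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    c = cand_emb / np.linalg.norm(cand_emb, axis=1, keepdims=True)
    dist = 1.0 - q @ c.T                          # query-to-candidate distances
    topk = np.argsort(dist, axis=1)[:, :k]        # k closest candidates per query
    hits = [(cand_labels[idx] == lab).any() for idx, lab in zip(topk, query_labels)]
    return float(np.mean(hits))
```

For in-modal retrieval the query itself would be excluded from the candidate set; this sketch omits that detail.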

Image2Image Image2Text Text2Image Text2Text
R@1 R@5 R@10 R@1 R@5 R@10 R@1 R@5 R@10 R@1 R@5 R@10
Triplet 0.469 0.685 0.746 0.282 0.541 0.632 0.171 0.410 0.539 0.484 0.705 0.783
CME 0.660 0.790 0.821 0.430 0.552 0.592 0.090 0.313 0.458 0.802 0.880 0.901
AdaMine 0.191 0.387 0.499 0.042 0.160 0.266 0.038 0.134 0.221 0.215 0.350 0.447

------------------------------------------------------------------------

DeViSE* 0.656 0.793 0.830 0.537 0.698 0.748 0.220 0.438 0.539 0.543 0.710 0.775
HIE* 0.649 0.792 0.832 0.590 0.712 0.751 0.430 0.721 0.800 0.786 0.870 0.897
HUSE-P 0.668 0.794 0.828 0.328 0.579 0.666 0.689 0.826 0.853 0.830 0.894 0.913
HUSE 0.685 0.806 0.835 0.702 0.763 0.778 0.803 0.889 0.903 0.875 0.917 0.927
Table 1: Retrieval performance of various methods on four retrieval tasks of UPMC Food-101 dataset. The entries above the dashed line correspond to the methods without semantic information and the entries below it correspond to methods that incorporate semantic information. The best value per column is set in bold font.

Table 1 shows the R@1, R@5 and R@10 results on the 4 retrieval tasks. The results show that HUSE outperforms all the baselines across all retrieval tasks in all measures on the UPMC Food-101 dataset. The larger gains on cross-modal retrieval tasks compared to in-modal retrieval show that HUSE addresses the media gap more effectively than the other methods. Moreover, the significant gains on the R@1 metric compared to the R@5 and R@10 metrics show that HUSE learns instance level semantics better than the other baselines. The first three entries in Table 1 correspond to methods that don’t include semantic information and the other entries correspond to semantic embedding methods. In general, the methods that use semantic information perform better than the methods that do not. This shows that semantic information is important even when class level retrieval tasks are considered.

HUSE greatly outperforms all the baselines that don’t use semantic information in all measures. The gains are even more significant on cross-modal retrieval tasks, which can be attributed to the shared classification layer and instance loss used by HUSE that force the embeddings from both modalities to be better aligned. CME is the best performing model among the baselines that don’t use semantic information, and its architecture is comparatively more similar to HUSE, as both models have a classification loss and an instance loss. However, CME has separate classification layers for image and text, and its instance level loss minimizes the distance between embeddings corresponding to an instance while maximizing the distance between embeddings of different instances, even if they belong to the same class. This differs from the instance loss used by HUSE, which only minimizes the distance between embeddings corresponding to an instance. AdaMine is the worst performing baseline, which can be attributed to the double triplet loss the method uses. The instance level triplet loss pushes the embeddings corresponding to different instances apart even though they belong to the same class, whereas the class level triplet loss tries to bring them closer. The tension between these two losses makes AdaMine hard to optimize.

The results also show that HUSE significantly outperforms the baselines that use semantic information, although the performance difference is smaller than for the methods that don’t use semantic information. HUSE still outshines these methods on cross-modal retrieval tasks. The semantic embedding baselines use a fixed class label embedding space and learn mappings that project image and text into that space. HUSE, on the other hand, has the flexibility of learning an embedding space that is completely different from the class label embedding space, but has the same semantic distances as the class label embedding space. This flexibility allows HUSE to learn a better universal embedding space, resulting in better retrieval performance.

4.4 Semantic Quality

As HUSE incorporates semantic information into the embedding space, this section evaluates the quality of the semantic embedding space learned by HUSE. To measure this, we employ a hierarchical precision@K (HP@K) metric similar to [11] that measures the accuracy of model predictions with respect to a given semantic hierarchy. We create a taxonomy for the UPMC Food-101 classes based on the WordNet ontology [10] and use it to calculate HP@K by generating a set of classes from the semantic hierarchy for each k and computing the fraction of the model’s k predictions that overlap with the class set. The HP@K values of all the examples in the test set are averaged and reported in Table 2.
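For a single query, the HP@K computation reduces to a set overlap; a toy sketch with hypothetical class names:

```python
def hierarchical_precision_at_k(retrieved_classes, relevant_set, k):
    """Fraction of the top-k retrieved items whose class falls inside the
    hierarchy-derived relevant class set for the query."""
    return sum(c in relevant_set for c in retrieved_classes[:k]) / k

hp = hierarchical_precision_at_k(
    ['apple_pie', 'cheesecake', 'sushi'],      # classes of the top-3 results (toy)
    {'apple_pie', 'cheesecake', 'cupcakes'},   # classes near the query in the taxonomy
    k=3)
```

Unlike plain R@K, a semantically close but wrong-class result (cheesecake for an apple pie query) still counts toward HP@K.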

Image2Image Image2Text Text2Image Text2Text
HP@2 HP@5 HP@10 HP@2 HP@5 HP@10 HP@2 HP@5 HP@10 HP@2 HP@5 HP@10
Triplet 0.530 0.592 0.674 0.383 0.493 0.604 0.261 0.386 0.524 0.520 0.581 0.669
CME 0.031 0.086 0.167 0.033 0.077 0.162 0.034 0.079 0.166 0.031 0.085 0.167
AdaMine 0.164 0.252 0.376 0.096 0.214 0.350 0.084 0.179 0.309 0.235 0.296 0.397

------------------------------------------------------------------------

DeViSE* 0.667 0.709 0.759 0.683 0.742 0.799 0.157 0.233 0.339 0.350 0.412 0.503
HIE* 0.682 0.718 0.768 0.664 0.721 0.783 0.522 0.607 0.698 0.800 0.820 0.854
HUSE-P 0.689 0.722 0.768 0.656 0.729 0.776 0.573 0.632 0.763 0.825 0.836 0.871
HUSE 0.705 0.740 0.787 0.741 0.784 0.831 0.824 0.848 0.878 0.884 0.897 0.916
Table 2: Hierarchical Precision of various methods on four retrieval tasks of UPMC Food-101.

HUSE outperforms all baselines by a significant margin on the HP@K metric, showing that HUSE learns a better semantic embedding space than the other baselines. Even when the best method in each retrieval task is compared to HUSE, the performance improvements show the superiority of HUSE in integrating semantic information into the embedding space. Moreover, these gains are even more prominent in cross-modal retrieval tasks (0.055 to 0.251) than in-modal retrieval tasks (0.016 to 0.061). These performance improvements can be attributed to the fact that HUSE doesn’t confine the universal embedding space to the class label embedding space. Among the three baselines that use semantic information, HUSE-P incorporates more semantic information than the other two methods as measured by HP@K. The only difference between HUSE and HUSE-P is that HUSE-P uses a projection loss similar to DeViSE* and HIE*. As all three of these baselines map image and text embeddings to the class label space, the architectural choices of using a shared classification layer and an instance loss are effectively reducing the media gap, resulting in better performance.

4.5 Classification Task

As HUSE is trained with a classification objective, it is natural to apply the model to the classification task. For an (image, text) pair, HUSE returns separate classification scores for the image and the text. These softmax scores are fused together using simple weighted averaging. Table 3 reports the image, text and fusion classification accuracies of HUSE and the other baselines, along with previously reported results on the UPMC Food-101 dataset.
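The weighted-averaging fusion described above is straightforward; the following is a minimal sketch. The weight `w` is a hyperparameter whose exact value the paper does not publish, and the class scores below are made up for illustration.

```python
import numpy as np

def fuse(image_probs, text_probs, w=0.5):
    """Fuse per-channel softmax scores by a convex weighted average.

    w is a tunable hyperparameter (assumed value; the paper does not
    report the weight it uses).
    """
    image_probs = np.asarray(image_probs, dtype=float)
    text_probs = np.asarray(text_probs, dtype=float)
    return w * image_probs + (1.0 - w) * text_probs

# The image channel is unsure between classes 0 and 1; the text
# channel favors class 1, so the fused prediction settles on class 1.
img = [0.45, 0.40, 0.15]
txt = [0.10, 0.80, 0.10]
fused = fuse(img, txt, w=0.5)
pred = int(np.argmax(fused))  # -> 1
```

Because the fusion is a convex combination of two probability distributions, the result is itself a valid probability distribution, so no re-normalization is needed.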

HUSE achieves an accuracy of 92.3% on the UPMC Food-101 dataset, outperforming the previous state of the art by 1.5%. Moreover, the previous state-of-the-art model used a complex gated attention method to fuse the image and text channels [16]; we, on the other hand, use simple weighted averaging of the softmax scores from the two channels. At the individual channel level, HUSE's image accuracy outperforms the previous best by a large margin (a 17.1% gain) while performing slightly worse (a 0.7% loss) on the text channel.

Method | Image | Text | Fusion
Wang et al. [36] | 40.2 | 82.0 | 85.1
Kiela et al. [16] | 56.7 | 88.0 | 90.8
Separate Models | 72.4 | 87.2 | 91.9
CME | 72.4 | 78.8 | 88.1
HIE* | 73.5 | 80.2 | 88.5
HUSE-P | 73.1 | 83.9 | 89.6
HUSE | 73.8 | 87.3 | 92.3
Table 3: Classification accuracy (%) on the UPMC Food-101 dataset.

However, the state-of-the-art classification results on this dataset cannot be attributed entirely to HUSE, because we use different image (Graph-RISE [15]) and text (TF-IDF + BERT [8]) embeddings than the previous state of the art. To disentangle the contributions of the embeddings and of the HUSE architecture, we trained "separate" image and text classification models using the same hyperparameters as HUSE. These are plain classification models without any semantic regularization, with the same number of hidden layers and hidden dimensions as the image and text towers of HUSE. The separate models achieve a fusion accuracy of 91.9%, already outperforming the previous best by 1.1%; the HUSE architecture improves this score by another 0.4%. The majority of HUSE's gains on the image channel can be attributed to the better image embeddings, yet HUSE improves on the separate image model by another 1.4%. On the text channel, HUSE and the separate classification model perform similarly. Unlike the separate models, HUSE imposes additional constraints on the image and text channels by making them share a single classification layer, and on the embedding space by including semantic information. These constraints did not regress HUSE's classification accuracies, but improved them.

Table 3 also reports the classification accuracy of HUSE-P, in which we replace the graph regularization loss with a projection loss that maps the image and text embeddings onto the class label embeddings. The performance of HUSE-P is inferior to that of HUSE. More interestingly, we see a regression in text and fusion accuracies compared to the separate-models baseline. These results show that mapping the universal embedding space onto the class embedding space degrades the resulting embeddings, whereas constraining the universal embedding space to preserve the semantic distances of the class embedding space improves performance.

Based on these results, we draw three conclusions: constraining the embedding space for cross-modal retrieval by projecting onto class label embeddings decreases classification performance relative to unconstrained classification models; adding semantic information to the embedding space boosts classification performance; and, instead of mapping the universal embedding space onto the class label embedding space, allowing it to share the same semantic distances as the class label embedding space improves classification performance significantly beyond even the unconstrained classification models.
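The distance-preserving idea contrasted above can be made concrete with a small sketch. This is a simplified stand-in for HUSE's graph regularization loss, written under our own assumptions (cosine distance, a plain mean-squared penalty, no thresholding); the paper's exact formulation may differ.

```python
import numpy as np

def pairwise_cosine_dist(X):
    """Cosine distance matrix between rows of X."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return 1.0 - Xn @ Xn.T

def semantic_graph_loss(universal_emb, class_emb, labels):
    """Penalize mismatch between pairwise distances of the universal
    embeddings and pairwise distances of their classes' label
    embeddings. Unlike a projection loss, this never forces an
    embedding onto a class vector; it only asks the two distance
    structures to agree."""
    du = pairwise_cosine_dist(universal_emb)
    dc = pairwise_cosine_dist(class_emb)[np.ix_(labels, labels)]
    mask = ~np.eye(len(labels), dtype=bool)  # ignore self-distances
    return float(np.mean((du - dc)[mask] ** 2))

rng = np.random.default_rng(0)
class_emb = rng.normal(size=(5, 16))   # hypothetical class label embeddings
labels = np.array([0, 2, 2, 4])        # class indices for a batch
batch = rng.normal(size=(4, 16))       # batch of universal embeddings
loss = semantic_graph_loss(batch, class_emb, labels)
```

Note that any rotation or reflection of the universal embedding space leaves this loss unchanged, which is precisely the extra freedom that a projection loss removes.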

5 Conclusion

We proposed a novel architecture, HUSE, to learn a universal embedding space that incorporates semantic information. Unlike previous methods that map image and text embeddings to a fixed class label embedding space, HUSE learns a new universal embedding space that preserves the semantic distances of the class label embedding space. These less constrained universal embeddings outperform several baselines on multiple retrieval tasks. Moreover, the embedding space learned by HUSE carries more semantic information than those of the other baselines, as measured by the HP@K metric. The shared classification layer that HUSE uses for both image and text embeddings, together with the instance loss, reduces the media gap and results in superior cross-modal performance. HUSE also achieves a state-of-the-art classification accuracy of 92.3% on the UPMC Food-101 dataset, outperforming the previous best by 1.5%.

References

  • [1] G. Andrew, R. Arora, J. Bilmes, and K. Livescu (2013) Deep canonical correlation analysis. In International Conference on Machine Learning, pp. 1247–1255.
  • [2] J. Arevalo, T. Solorio, M. Montes-y-Gómez, and F. A. González (2017) Gated multimodal units for information fusion. arXiv preprint arXiv:1702.01992.
  • [3] B. Barz and J. Denzler (2019) Hierarchy-based image embeddings for semantic image retrieval. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 638–647.
  • [4] M. Carvalho, R. Cadène, D. Picard, L. Soulier, N. Thome, and M. Cord (2018) Cross-modal retrieval in the cooking context: learning semantic text-image embeddings. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pp. 35–44.
  • [5] D. Cer, Y. Yang, S. Kong, N. Hua, N. Limtiaco, R. S. John, N. Constant, M. Guajardo-Cespedes, S. Yuan, C. Tar, et al. (2018) Universal sentence encoder. arXiv preprint arXiv:1803.11175.
  • [6] J. Y. Chang and K. M. Lee (2015) Large margin learning of hierarchical semantic similarity for image classification. Computer Vision and Image Understanding 132, pp. 3–11.
  • [7] J. Deng, A. C. Berg, and L. Fei-Fei (2011) Hierarchical semantic indexing for large scale image retrieval. In CVPR 2011, pp. 785–792.
  • [8] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
  • [9] F. Faghri, D. J. Fleet, J. R. Kiros, and S. Fidler (2017) VSE++: improved visual-semantic embeddings. arXiv preprint arXiv:1707.05612 2 (7), pp. 8.
  • [10] C. Fellbaum (1998) WordNet. Wiley Online Library, The Encyclopedia of Applied Linguistics.
  • [11] A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, M. Ranzato, and T. Mikolov (2013) DeViSE: a deep visual-semantic embedding model. In Advances in Neural Information Processing Systems, pp. 2121–2129.
  • [12] Y. Gong, Q. Ke, M. Isard, and S. Lazebnik (2014) A multi-view embedding space for modeling internet images, tags, and their semantics. International Journal of Computer Vision 106 (2), pp. 210–233.
  • [13] J. Gu, J. Cai, S. R. Joty, L. Niu, and G. Wang (2018) Look, imagine and match: improving textual-visual cross-modal retrieval with generative models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7181–7189.
  • [14] Y. Huang, Q. Wu, C. Song, and L. Wang (2018) Learning semantic concepts and order for image and sentence matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6163–6171.
  • [15] D. Juan, C. Lu, Z. Li, F. Peng, A. Timofeev, Y. Chen, Y. Gao, T. Duerig, A. Tomkins, and S. Ravi (2019) Graph-RISE: graph-regularized image semantic embedding.
  • [16] D. Kiela, E. Grave, A. Joulin, and T. Mikolov (2018) Efficient large-scale multi-modal classification. In Thirty-Second AAAI Conference on Artificial Intelligence.
  • [17] R. Kiros, R. Salakhutdinov, and R. S. Zemel (2014) Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539.
  • [18] D. Li, H. Lee, J. Huang, S. Wang, and M. Yang (2017) Learning structured semantic embeddings for visual recognition. arXiv preprint arXiv:1706.01237.
  • [19] P. Narayana, J. R. Beveridge, and B. A. Draper (2018) Interacting hidden Markov models for video understanding. International Journal of Pattern Recognition and Artificial Intelligence 32 (11), pp. 1855020.
  • [20] P. Narayana, J. R. Beveridge, and B. A. Draper (2019) Analyzing multi-channel networks for gesture recognition. In 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1–8.
  • [21] P. Narayana, J. R. Beveridge, and B. A. Draper (2019) Continuous gesture recognition through selective temporal fusion. In 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1–8.
  • [22] P. Narayana, R. Beveridge, and B. A. Draper (2018) Gesture recognition: focus on the hands. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5235–5244.
  • [23] P. Narayana (2018) Improving gesture recognition through spatial focus of attention. Ph.D. Thesis, Colorado State University.
  • [24] S. Nawaz, A. Calefati, M. K. Janjua, M. U. Anwaar, and I. Gallo (2018) Learning fused representations for large-scale multimodal classification. IEEE Sensors Letters 3 (1), pp. 1–4.
  • [25] Y. Peng, X. Huang, and Y. Zhao (2017) An overview of cross-media retrieval: concepts, methodologies, benchmarks, and challenges. IEEE Transactions on Circuits and Systems for Video Technology 28 (9), pp. 2372–2385.
  • [26] J. Pennington, R. Socher, and C. Manning (2014) GloVe: global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543.
  • [27] N. Rasiwasia, J. Costa Pereira, E. Coviello, G. Doyle, G. R. Lanckriet, R. Levy, and N. Vasconcelos (2010) A new approach to cross-modal multimedia retrieval. In Proceedings of the 18th ACM International Conference on Multimedia, pp. 251–260.
  • [28] R. Řehůřek and P. Sojka (2010) Software framework for topic modelling with large corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, Valletta, Malta, pp. 45–50. http://is.muni.cz/publication/884893/en
  • [29] A. Salvador, N. Hynes, Y. Aytar, J. Marin, F. Ofli, I. Weber, and A. Torralba (2017) Learning cross-modal embeddings for cooking recipes and food images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3020–3028.
  • [30] N. Sarafianos, X. Xu, and I. A. Kakadiaris (2019) Adversarial representation learning for text-to-image matching. arXiv preprint arXiv:1908.10534.
  • [31] F. Schroff, D. Kalenichenko, and J. Philbin (2015) FaceNet: a unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 815–823.
  • [32] Y. Song and M. Soleymani (2019) Polysemous visual-semantic embedding for cross-modal retrieval. arXiv preprint arXiv:1906.04402.
  • [33] N. Verma, D. Mahajan, S. Sellamanickam, and V. Nair (2012) Learning hierarchical similarity metrics. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2280–2287.
  • [34] B. Wang, Y. Yang, X. Xu, A. Hanjalic, and H. T. Shen (2017) Adversarial cross-modal retrieval. In Proceedings of the 25th ACM International Conference on Multimedia, pp. 154–162.
  • [35] L. Wang, Y. Li, and S. Lazebnik (2016) Learning deep structure-preserving image-text embeddings. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5005–5013.
  • [36] X. Wang, D. Kumar, N. Thome, M. Cord, and F. Precioso (2015) Recipe recognition with large multimodal food dataset. In 2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), pp. 1–6.
  • [37] J. Williams, R. Comanescu, O. Radu, and L. Tian (2018) DNN multimodal fusion techniques for predicting video sentiment. In Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML), pp. 64–72.
  • [38] F. Yan and K. Mikolajczyk (2015) Deep correlation for matching images and text. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3441–3450.
  • [39] Z. Yan, H. Zhang, R. Piramuthu, V. Jagadeesh, D. DeCoste, W. Di, and Y. Yu (2015) HD-CNN: hierarchical deep convolutional neural networks for large scale visual recognition. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2740–2748.
  • [40] K. Ye and A. Kovashka (2018) ADVISE: symbolism and external knowledge for decoding advertisements. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 837–855.
  • [41] B. Zhao, F. Li, and E. P. Xing (2011) Large-scale category structure aware image categorization. In Advances in Neural Information Processing Systems, pp. 1251–1259.
  • [42] L. Zhen, P. Hu, X. Wang, and D. Peng (2019) Deep supervised cross-modal retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10394–10403.