Interactive Text2Pickup Network for Natural Language based Human-Robot Collaboration

Abstract

In this paper, we propose the Interactive Text2Pickup (IT2P) network for human-robot collaboration, which enables effective interaction with a human user despite ambiguity in the user's commands. We focus on the task where a robot is expected to pick up an object instructed by a human, and to interact with the human when the given instruction is vague. The proposed network first understands the command from the human user and estimates the position of the desired object. To handle the inherent ambiguity in human language commands, a suitable question which can resolve the ambiguity is generated. The user's answer to the question is combined with the initial command and given back to the network, resulting in a more accurate estimation. Experimental results show that, given unambiguous commands, the proposed method can estimate the position of the requested object with an accuracy of 98.49% on our test dataset. Given ambiguous language commands, we show that the accuracy of the pick-up task increases by a factor of 1.94 after incorporating the information obtained from the interaction.

I Introduction

Language is one of the most effective means of human communication. However, “language is the source of misunderstandings,” as Antoine de Saint-Exupéry said, since a person may utter ambiguous sentences that can be interpreted in more than one way [1]. In this case, we interact with the speaker by asking additional questions to reduce the vagueness in the sentence. The same situation can occur in human-robot collaboration, where a human may give an ambiguous language command to a robot, and the robot needs to find an appropriate question to understand the human's intention. Inspired by this observation, we present a novel method which enables a robot to handle the ambiguity inherent in human language commands. Without any preprocessing procedures such as language parsing or object detection, the proposed neural network based method can handle the given collaboration task more accurately.

In this paper, we focus on the task where a robot is expected to pick up a specific object instructed by a human. There have been several studies on making robots pick up objects given a human language command [2, 3, 4]. The methods in [2, 3] use probabilistic graphical models to make a robot understand human language commands and recognize where the object is. However, these studies do not consider the ambiguity in human language, whereas our goal is to achieve successful human-robot collaboration by alleviating such vagueness through interaction. [4] suggests a model which makes a robot fetch a requested object for a human and ask a question if the given language command is ambiguous. It also focuses on mitigating the ambiguity in the language command by interacting with humans, but its experiments are conducted in a simple environment where six objects of three types are arranged in a single line.

Fig. 1: An overview of the proposed Interactive Text2Pickup (IT2P) network. An image capturing the environment and a language command from a human user are provided as the input to the Text2Pickup network, which generates a position heatmap and an uncertainty heatmap. The question generation network receives the image, the language command, and the two generated heatmaps as input and generates an appropriate question to clarify the human intention. The human response to the question is appended to the initial language command and given back to the Text2Pickup network, producing better estimation results.

In this paper, we propose the Interactive Text2Pickup (IT2P) network, which consists of a Text2Pickup network and a question generation network. Given the image of the environment and the language command from a human user, the Text2Pickup network generates a position heatmap, a two-dimensional distribution whose values indicate the confidence in the position of the desired object, as well as an uncertainty heatmap, a two-dimensional distribution modeling the uncertainty in the generated position heatmap. By training the Text2Pickup network in an end-to-end manner, we remove the need for preprocessing steps such as language parsing and object detection. The trained Text2Pickup network can handle the task flexibly when an input command is complicated or multiple objects are placed in various ways.

If the given initial language command is ambiguous, the question generation network decides which question to ask. We assume that the possible questions are predetermined by the color and the position of the object (e.g., Red one?, Rightmost one?). However, even with a set of predefined questions, choosing an appropriate one is challenging, since the question must be relevant to the given situation and must not overlap with the information already provided in the initial language command. To this end, the proposed question generation network generates a suitable question which can efficiently elicit more information about the requested object. Once the human responds to the generated question, the answer is appended to the initial language command and given back to the Text2Pickup network, producing a better prediction result.

We show that the Text2Pickup network can locate the requested object better than a simple baseline network. If a given language command is ambiguous, the proposed Interactive Text2Pickup (IT2P) network, which incorporates additional information obtained from the interaction, outperforms a single Text2Pickup network in terms of accuracy. In a real-world experiment, we have applied the proposed network to a Baxter robot. For training and testing the network, we have collected 477 images taken with the camera on the Baxter arm, and 27,468 language commands ordering the robot to pick up different objects.

The proposed method may appear similar to the Visual Question Answering (VQA) task [5], whose goal is to construct a model that can generate an appropriate answer to an image-related question. However, our goal is to build a model that can generate a question to alleviate the ambiguity in a given language command, enabling successful human-robot collaboration. Note that answering a given image-related question and generating a question to mitigate the ambiguity of a given image-related language command are different tasks.

The remainder of the paper is structured as follows. The proposed Interactive Text2Pickup (IT2P) network is described in Section II. Section III describes the Text2Pickup network and the question generation network. Section IV shows the results when the proposed IT2P network is applied to a test dataset. We present quantitative results showing that the Text2Pickup network infers the object position better than a baseline network, and that the IT2P network outperforms a single Text2Pickup network when commands are ambiguous. A demonstration of the proposed method using a Baxter robot is also provided.

II Interactive Text2Pickup Network

Let $I \in \mathcal{I}$ be an image of the environment shared between a robot and a human user, where $W \times H$ is the size of the image and $\mathcal{I}$ is the set of possible images from the environment. The language command from a human user is represented as $L = (w_1, \ldots, w_N) \in \mathcal{L}$, where $w_i \in \{0, 1\}^{V}$ denotes the one-hot vector representation of the $i$-th word, $V$ is the size of the vocabulary, $N$ is the number of words in the command, and $\mathcal{L}$ is the set of possible commands from humans.

As shown in Figure 1, the proposed Interactive Text2Pickup (IT2P) network consists of a Text2Pickup network $f_{T2P}$ and a question generation network $f_{QG}$. The Text2Pickup network takes an image $I$ and a language command $L$ as input and generates two heatmaps as follows:

$(H_p, H_u) = f_{T2P}(I, L),$

where $H_p$ is a position heatmap which estimates the location of the requested object, and $H_u$ is an uncertainty heatmap which models the uncertainty in $H_p$.

In this paper, we assume that the $K$ types of questions that a robot can ask are predetermined, such that questions are related to the color or the position of an object, or inquire whether the current estimation is correct or not. Let $Q = \{q_1, \ldots, q_K\}$ be the set of predefined questions. The question generation network $f_{QG}$ determines which question to ask. As depicted in Figure 1, based on the image $I$, the language command $L$, and the generated heatmaps $H_p$ and $H_u$, $f_{QG}$ generates a weight vector $\alpha \in \mathbb{R}^{K}$, which can be denoted as follows:

$\alpha = f_{QG}(I, L, H_p, H_u).$

The predefined questions are sorted by the weight values in $\alpha$, and the question with the highest weight value is chosen and asked to the human. Let $A = (a_1, \ldots, a_M)$ be the answer from the human user to the question, where $a_j$ is the $j$-th word of the answer and $M$ is the number of words in the answer. The answer is appended to the initial language command, producing the augmented command $L' = (w_1, \ldots, w_N, a_1, \ldots, a_M)$. Given $L'$, $f_{T2P}$ generates a better estimation result, as shown in Figure 1.
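To make the flow of Figure 1 concrete, the following sketch summarizes the interaction loop of this section. It is a minimal illustration under stated assumptions, not the actual implementation: text2pickup, question_gen, and ask_user are hypothetical stand-ins for $f_{T2P}$, $f_{QG}$, and the human user, and the uncertainty threshold used to decide whether a question is needed is an assumption, since this section does not specify how ambiguity is detected.

```python
import numpy as np

def interactive_pickup(image, command_words, text2pickup, question_gen,
                       questions, ask_user, uncertainty_threshold=0.5):
    """Minimal sketch of the IT2P interaction loop (hypothetical interfaces)."""
    # First pass: estimate position and uncertainty heatmaps from (I, L).
    H_p, H_u = text2pickup(image, command_words)

    # If the estimate is uncertain, select and ask the highest-weighted question.
    # (Thresholding the uncertainty heatmap is an assumption for this sketch.)
    if H_u.max() > uncertainty_threshold:
        alpha = question_gen(image, command_words, H_p, H_u)  # weights over Q
        question = questions[int(np.argmax(alpha))]
        answer_words = ask_user(question)                     # e.g. ["red", "one"]

        # Append the answer A to the initial command L and run the network again.
        augmented_command = list(command_words) + list(answer_words)
        H_p, H_u = text2pickup(image, augmented_command)

    # The pick-up target is the location with the highest position-heatmap value.
    target_yx = np.unravel_index(int(np.argmax(H_p)), H_p.shape)
    return target_yx, H_p, H_u
```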

III Network Structure

III-A Text2Pickup Network

Fig. 2: The structure of the Text2Pickup network, which is composed of a single Hourglass network [6] combined with an RNN [7]. When the Hourglass network upsamples a set of encoded image features, the last hidden state vector of the RNN is delivered prior to each upsampling step. After that, the position heatmap $H_p$ and the uncertainty heatmap $H_u$ are generated (see equations (3)-(5)).

The Text2Pickup network $f_{T2P}$ takes an image $I$ and a language command $L$ and generates a position heatmap $H_p$ and an uncertainty heatmap $H_u$. We carefully design the architecture of $f_{T2P}$ to better model the relationship between $I$ and $L$. Specifically, the proposed $f_{T2P}$ adopts an Hourglass network [6] combined with a recurrent neural network (RNN) [7], as shown in Figure 2.

The Hourglass network first processes the input image so that the resulting feature map has the same spatial size as the output heatmap. On this feature map, the Hourglass network repeatedly applies a residual module [8] followed by max-pooling to encode image features from high resolution to low resolution. After repeating this bottom-up process several times, a top-down process which upsamples the encoded image features is executed the same number of times, so that the generated heatmap has the same size as the initial feature map. The generated heatmap is used after being resized to the input image size. After every upsampling step, the image features encoded in the bottom-up process are passed to the corresponding top-down result through a skip connection, indicated by green dotted lines in Figure 2. Interested readers are encouraged to refer to [6].

However, since the Hourglass network can only process images, a human language command which contains the location information of an object cannot be incorporated. Therefore, as shown in Figure 2, we combine the Hourglass network with an RNN, which can learn features of the sequential input $L$. Before encoding a feature from $L$, we encode all words in $L$ into a set of word embedding vectors $X = (x_1, \ldots, x_N)$ based on the word2vec model [9]. Here, $x_i$ is the word embedding representation of $w_i$, such that $x_i = W_e w_i$, where $W_e$ is a word embedding matrix. We use the pretrained word2vec model provided in [10], whose vector dimension is 300.

The word embedding representation $X$ of $L$ is encoded into the hidden states of a long short-term memory (LSTM) cell of the RNN [7]. Let $\{h_1, \ldots, h_N\}$ be the set of hidden state vectors of the RNN, where

$h_t = f_{LSTM}(x_t, h_{t-1}), \quad t = 1, \ldots, N.$    (1)

Here, $f_{LSTM}$ is a nonlinear function operating in an LSTM cell. For more details of how $f_{LSTM}$ operates, we encourage readers to refer to the original paper [7]. As shown in Figure 2, the last hidden state $h_N$ is delivered to the Hourglass network. The delivered $h_N$ is reshaped and concatenated to the image features before the Hourglass network executes each upsampling step.
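As a concrete illustration of this conditioning step, the sketch below shows one way the last hidden state $h_N$ can be tiled over the spatial dimensions and concatenated to an encoded image feature map before an upsampling step. The tensor shapes and the NumPy formulation are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def condition_on_language(image_feature, h_last):
    """Concatenate the last LSTM hidden state to an encoded image feature map.

    image_feature : (H, W, C) feature map from the Hourglass bottom-up path.
    h_last        : (D,) last hidden state h_N of the RNN encoding the command.
    Returns an (H, W, C + D) tensor fed to the next upsampling step.
    """
    H, W, _ = image_feature.shape
    # Reshape h_last to (1, 1, D) and tile it over every spatial location,
    # so the language feature is available at each position of the feature map.
    tiled = np.tile(h_last.reshape(1, 1, -1), (H, W, 1))
    return np.concatenate([image_feature, tiled], axis=-1)

# Example: a 16x16 feature map with 256 channels and a 256-dim hidden state.
feat = np.random.randn(16, 16, 256)
h_N = np.random.randn(256)
print(condition_on_language(feat, h_N).shape)  # (16, 16, 512)
```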

Based on the following loss function, $f_{T2P}$ is trained to generate a prediction $\hat{H}_p$ of the position heatmap:

$\mathcal{L}_{T2P} = \lVert \hat{H}_p - H_p^{gt} \rVert_2^2,$    (2)

where $H_p^{gt}$ denotes the ground truth position heatmap in the training dataset.

From the trained $f_{T2P}$, $H_p$ and $H_u$ are generated based on the method presented in [11]. To be specific, a set of predictions $\{\hat{H}_p^{(1)}, \ldots, \hat{H}_p^{(T)}\}$ of the position heatmap is sampled from the trained $f_{T2P}$ with dropout applied at a fixed rate. Based on these samples, the position heatmap $H_p$, the uncertainty heatmap $H_u$, and the standard deviation heatmap $H_\sigma$ are obtained as follows:

$H_p = \frac{1}{T} \sum_{t=1}^{T} \hat{H}_p^{(t)}$    (3)
$H_u = \frac{1}{T} \sum_{t=1}^{T} \hat{H}_p^{(t)} \odot \hat{H}_p^{(t)} - H_p \odot H_p$    (4)
$H_\sigma = \sqrt{H_u}$    (5)

Here, $\odot$ represents the element-wise product of matrices, the square root in (5) is taken element-wise, and $H_u$ denotes the predictive variance of the model based on the method presented in [11]. Each element of $H_\sigma$ is the square root of the corresponding element of $H_u$ and represents the standard deviation of the corresponding element of $H_p$.
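The sampling procedure of equations (3)-(5) can be written as a short Monte Carlo routine. The sketch below assumes a hypothetical predict_with_dropout callable that performs one forward pass of the trained network with dropout kept active; the default number of samples is an arbitrary placeholder.

```python
import numpy as np

def heatmaps_with_uncertainty(predict_with_dropout, image, command, T=20):
    """Monte Carlo dropout estimate of the position and uncertainty heatmaps.

    predict_with_dropout(image, command) returns one sampled position heatmap,
    with dropout active at test time following Gal and Ghahramani [11].
    """
    samples = np.stack([predict_with_dropout(image, command) for _ in range(T)])

    H_p = samples.mean(axis=0)                           # eq. (3): predictive mean
    H_u = (samples * samples).mean(axis=0) - H_p * H_p   # eq. (4): predictive variance
    H_sigma = np.sqrt(np.maximum(H_u, 0.0))              # eq. (5): element-wise std
    return H_p, H_u, H_sigma
```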

III-B Question Generation Network

We assume that there are $K$ questions that can be asked to humans. To choose which question to ask, the question generation network $f_{QG}$ first multiplies $H_\sigma$ by a coefficient $\lambda$ and adds it to $H_p$ as follows:

$H_{ucb} = H_p + \lambda H_\sigma.$    (6)

Note that each element of $H_\sigma$ represents the standard deviation of the corresponding element of $H_p$. Building $H_{ucb}$ by equation (6) is inspired by the Gaussian process upper confidence bound (GP-UCB) [12] method, which resolves the exploration-exploitation tradeoff by considering both the estimated mean and its uncertainty. Therefore, we claim that each element of $H_{ucb}$ represents the upper confidence bound of the corresponding element of $H_p$.

The input image $I$ is resized to the resolution of the heatmaps and concatenated with $H_{ucb}$ to generate the input tensor shown in Figure 3. The question generation network $f_{QG}$ is a convolutional neural network (CNN) [13] combined with an RNN. As shown in Figure 3, the CNN module, composed of three convolutional and max-pooling layers, encodes the information in this tensor into an image feature vector $v_I$ (the blue rectangular box in Figure 3). The RNN module in $f_{QG}$ encodes the information in $L$ in the same manner as equation (1), in order to prevent asking for information that has already been provided by $L$. The last hidden state of this RNN module (the orange rectangular box in Figure 3) is used as the language feature $v_L$.

To generate the output weight vector $\alpha$, which determines the question to ask, $f_{QG}$ properly combines the image and language features. To be specific, a vector (the green rectangular box in Figure 3) is generated by passing $v_I$ through a fully connected layer, and it is concatenated with $v_L$. The concatenated vector passes through another fully connected layer, producing a feature vector (the purple rectangular box in Figure 3) which has the same size as $v_L$. For a more efficient combination of the information in $I$ and $L$, an element-wise product between this feature vector and $v_L$ is performed. The result is passed to a final fully connected layer, and the output weight vector $\alpha$ is generated. This element-wise product based method is inspired by [14], and it allows $f_{QG}$ to better model the relationship between the image and the language, resulting in a better estimation of $\alpha$.
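The fusion path described above can be sketched as follows. The layer sizes, the tanh activations, and the variable names are illustrative assumptions; the sketch only demonstrates the element-wise product fusion and the final fully connected layer that outputs the question weights.

```python
import numpy as np

def fuse_and_score(v_I, v_L, W_img, W_fuse, W_out):
    """Sketch of the f_QG fusion: an FC layer on the image feature, concatenation
    with the language feature, an FC layer back to the language-feature size,
    an element-wise product [14], and a final FC layer producing the K question
    weights. The weight matrices W_* are illustrative (learned in practice)."""
    v = np.tanh(W_img @ v_I)              # image feature -> fully connected layer
    concat = np.concatenate([v, v_L])     # concatenate with the language feature
    fused = np.tanh(W_fuse @ concat)      # FC layer to the size of v_L
    combined = fused * v_L                # element-wise product with v_L
    alpha = W_out @ combined              # FC layer -> weight vector over K questions
    return alpha

# Example: a 512-dim CNN feature, a 256-dim language feature, and K = 15 questions.
rng = np.random.default_rng(0)
alpha = fuse_and_score(rng.standard_normal(512), rng.standard_normal(256),
                       rng.standard_normal((256, 512)),
                       rng.standard_normal((256, 512)),
                       rng.standard_normal((15, 256)))
print(np.argmax(alpha))  # index of the question with the highest weight
```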

Fig. 3: The structure of the question generation network $f_{QG}$, which is a CNN [13] combined with an RNN [7]. The structure of the network is designed to efficiently model the relationship between the image and the language for better estimating the weight vector $\alpha$.

III-C Implementation Details

The size of the input image is 256×256, and the generated heatmaps are produced at a lower resolution and resized to the input image size when used. For the Hourglass network in the Text2Pickup network $f_{T2P}$, the bottom-up and top-down processes are each repeated four times, and 256 features are used in the residual module. For the RNN in $f_{T2P}$, the dimension of the hidden state vector is 256. In order to train $f_{T2P}$, we set the number of epochs to 300 with a batch size of eight. The Adam optimizer [15] is used to minimize the loss function (2) with an empirically chosen learning rate. When computing $H_p$, $H_u$, and $H_\sigma$ (see equations (3)-(5)), the dropout rate and the number of samples $T$ are fixed values chosen empirically.

For the question generation network $f_{QG}$, the coefficient $\lambda$ in (6), the learning rate, and the number of training epochs are chosen empirically. For its CNN module, the first, second, and third convolutional layers have 16, 32, and 64 filters, respectively, and all filters are 3 by 3. The max-pooling layers use a filter size of 2 and a stride of 2. For the RNN of $f_{QG}$, the dimension of the hidden state vector is 256. In order to train $f_{QG}$, we use a batch size of eight, and the sparse softmax cross entropy loss provided by [16] is used as the loss function. All of these parameter values are chosen empirically.1
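For reference, the sparse softmax cross entropy loss of [16] reduces, for a single training example, to the negative log-softmax probability of the ground-truth question index. A minimal NumPy equivalent is sketched below; the actual training uses the TensorFlow implementation referenced in [16].

```python
import numpy as np

def sparse_softmax_cross_entropy(logits, label_index):
    """Cross entropy between the softmax of the K question logits and an integer
    class label (the index of the ground-truth question), computed in a
    numerically stable way."""
    shifted = logits - logits.max()       # subtract the max logit for stability
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[label_index]

# Example: 15 predefined questions, ground-truth question index 3.
logits = np.random.randn(15)
print(sparse_softmax_cross_entropy(logits, 3))
```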

IV Experiment

IV-A Dataset

For training and testing the Interactive Text2Pickup (IT2P) network, we have collected a dataset of 477 images capturing the environment observed by a robot. Each image contains three to six blocks of five colors, with up to two blocks of the same color. The images, which show blocks placed in various arrangements, are split into a training set and a test set. The images were obtained from the camera on the arm of a Baxter robot so that the proposed network works well in the real environment.

For the Text2Pickup network, we collected unambiguous language commands which clearly specify the desired block. These language commands are composed of combinations of expressions related to the position (e.g., rightmost, upper, middle), color (e.g., red, blue, yellow), and relative position (e.g., between two purple blocks) of the block. As shown in Figure 4, each unambiguous language command is paired with an unambiguous heatmap indicating the position of the target block. Each unambiguous heatmap is generated from a two-dimensional multivariate Gaussian distribution whose mean is at the center of the target block, with a variance of one. With the images, the unambiguous language commands, and the corresponding unambiguous heatmaps, the Text2Pickup network can be trained.
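A minimal sketch of how such a ground-truth heatmap can be generated is given below. The heatmap resolution, the peak normalization, and the coordinate convention are assumptions made for illustration; only the two-dimensional Gaussian centered at the target block with a variance of one comes from the text above.

```python
import numpy as np

def gaussian_heatmap(center_xy, height, width, variance=1.0):
    """Unnormalized 2D Gaussian heatmap centered at the target block position.

    center_xy : (x, y) center of the target block in heatmap coordinates.
    variance  : isotropic variance of the Gaussian (one, as stated in the text).
    """
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    cx, cy = center_xy
    sq_dist = (xs - cx) ** 2 + (ys - cy) ** 2
    return np.exp(-sq_dist / (2.0 * variance))

# Example: a 64x64 heatmap (assumed resolution) with the block centered at (30, 40).
H_gt = gaussian_heatmap((30, 40), height=64, width=64)
print(H_gt.shape, H_gt.max())  # (64, 64) 1.0
```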

For the question generation network, the input dataset needs to contain images, language commands, and the heatmaps $H_p$ and $H_u$ generated from the trained Text2Pickup network. To this end, we collected ambiguous language commands and sampled unambiguous language commands from the Text2Pickup dataset. With the collected language commands, $H_p$ and $H_u$ are generated from the trained Text2Pickup network and used as the input dataset for the question generation network.

For the output dataset of the question generation network, we have collected possible questions that a robot can ask. The number of predefined questions is set to $K = 15$: five questions about the color of the block (e.g., Blue one? Yellow one?), nine questions about the position of the block (e.g., Lower one? Upper one?), and one question to confirm whether the predicted block is correct (e.g., This one?). Each question is encoded as a one-hot vector and used as the ground truth value of the output weight vector $\alpha$.

Fig. 4: The dataset for training the proposed Interactive Text2Pickup (IT2P) network. For training the Text2Pickup network, an image, an unambiguous language command, and an unambiguous heatmap indicating the target block position are used as data. For the input training data of the question generation network, we have additionally collected ambiguous language commands and generated heatmaps $H_p$ and $H_u$ from the trained Text2Pickup network. For the output training data of the question generation network, we have collected possible questions that a robot can ask.

IV-B Results from the Text2Pickup Network

We first examine how the position and uncertainty heatmaps are generated from the trained Text2Pickup network. To this end, four pairs of images and unambiguous language commands from the test dataset described in Section IV-A are given to the network, and Figure 5 shows the results. In this figure, the sentence at the top center of each rectangle represents the given language command, where the word written in blue is a new word not included in the training dataset. Even with language commands that include unseen words, the generated position heatmaps accurately predict the requested block position. This is because the pretrained word2vec model from [10] helps the network understand a variety of input words.

The generated uncertainty heatmap has a shape similar to the position heatmap, showing that the uncertainty heatmap has a meaningful correlation with the position heatmap when the language command is unambiguous. However, when the language command is ‘Pick up the left block whose color is green’, the uncertainty value is also high around the position of the blue block. This results from the confusion of the trained Text2Pickup network when distinguishing between green and blue, due to their similarity in RGB space. In addition, when the language command is ‘Pick up the green object on the upper side’, the uncertainty value is also high around the position of the other green block, since the trained network considers both green blocks.

Fig. 5: Results from the Text2Pickup network when four pairs of images and the unambiguous language commands from the test dataset are given. The results in each rectangle are obtained when the language command (the sentence at the top center of each rectangle) and the image are given to the trained Text2Pickup network. In the language command, the word written in blue is a new word which is not included in the training dataset.

IV-C Results from Interaction Scenarios

Fig. 6: Results of the Interactive Text2Pickup (IT2P) network in interaction scenarios where vague language commands are given. The heatmaps in the green rectangle are obtained when images and ambiguous language commands are given as inputs to the Text2Pickup network. The white square in the image indicates the block that the human user actually wanted. The list in the yellow rectangle shows the five questions with the highest weight values in $\alpha$, generated by the question generation network. The question with the highest weight value is selected and asked to the human user. The heatmaps in the blue rectangle are generated when the human answer is appended to the initial language command and given back to the Text2Pickup network. The estimation results improve in accuracy after obtaining further information from the human.

In this section, we present how the proposed Interactive Text2Pickup (IT2P) network works in interaction scenarios where vague language commands are given. Figure 6 shows the results of three interaction scenarios which were not included in the training of the IT2P network.

The heatmaps in the green rectangle are obtained when images and ambiguous language commands are given to the Text2Pickup network. The white square in the image indicates the block that the human actually wanted. Based on these two heatmaps, the image, and the language command, the question generation network generates a weight vector $\alpha$ which determines the question to ask. In Figure 6, the list in the yellow rectangle shows the five questions with the highest values in $\alpha$. It shows that questions with high weight values do not ask for information already provided in the initial language command, and are capable of alleviating the uncertainty in the given command.

The question with the highest weight value is selected and asked to the human user. The human answer is appended to the initial language command and given back to the Text2Pickup network. If the answer is ‘Yes’, the question itself is appended to the initial language command instead of the answer. The results in the blue rectangle in Figure 6 show the heatmaps generated after receiving the human answer. The position heatmaps after the interaction better estimate the location of the block that the human user originally wanted. In addition, the values of the uncertainty heatmap are lower overall, indicating that the resulting position heatmap is more reliable than before.

IV-D Comparison between the Text2Pickup Network and a Baseline Network

In this section, we validate the performance of the Text2Pickup network by comparing it with the baseline network shown in Figure 7. The baseline network predicts the target block position without using an Hourglass network [6] or an RNN [7]. As shown in Figure 7, it consists of a CNN module composed of three convolutional and max-pooling layers, and uses the sum of the word embedding vectors as its language feature.

We compare the results when the 994 unambiguous language commands and 265 ambiguous language commands in the test dataset described in Section IV-A are given as inputs to each network. For the Text2Pickup network, the final prediction of the target block position is the location where the value of the position heatmap is the highest; for this purpose, the generated position heatmap is resized to the size of the input image. For the baseline network, which regresses a normalized target position, the final prediction is obtained by multiplying the regressed output by the image size.

The experiment is defined as successful when the distance between the predicted block position and the ground truth position is less than 20 pixels, which is half of the block size at an image size of 256. Figure 7 shows the comparison result. The baseline network's accuracy drops significantly when the input language command is ambiguous compared with when an unambiguous (or certain) command is given. On the other hand, the Text2Pickup network achieves an accuracy of 98.49% when an unambiguous language command is given and degrades far less when the input command is vague. This result shows that the Text2Pickup network is more robust to ambiguous language commands than the baseline network. We claim that the Text2Pickup network, which takes advantage of an Hourglass network and an RNN, is superior to a neural network which locates the target block by direct regression.
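For clarity, the success criterion used in this comparison can be written compactly as follows. The sketch assumes the predicted position heatmap has already been resized to the 256 x 256 input image.

```python
import numpy as np

def is_success(position_heatmap, ground_truth_xy, threshold_px=20):
    """A trial succeeds if the argmax of the (resized) position heatmap lies
    within threshold_px pixels of the ground-truth block position: 20 px,
    i.e. half the block size at an image size of 256."""
    pred_y, pred_x = np.unravel_index(int(np.argmax(position_heatmap)),
                                      position_heatmap.shape)
    gt_x, gt_y = ground_truth_xy
    return np.hypot(pred_x - gt_x, pred_y - gt_y) < threshold_px
```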

Fig. 7: (a) The structure of the baseline network which is compared with the Text2Pickup network. It is composed of a CNN module with three convolutional and max-pooling layers and uses the sum of the word embedding vectors in the language command as its language feature. (b) The comparison between the baseline network and the Text2Pickup network. The result shows that the Text2Pickup network is more robust than the baseline network when the commands are ambiguous.

IV-E Comparison between the Interactive Text2Pickup and the Text2Pickup Network

In this section, we compare the accuracy of predicting the target block before and after the interaction, assuming that only ambiguous language commands are given. To this end, the performances of the Interactive Text2Pickup (IT2P) network and a single Text2Pickup network are compared.

For the IT2P network, we implement a simulator that answers questions from the network in place of real human users. This simulator provides the color of the object when the network asks a color-related question, and the position of the object when the network asks a position-related question. If the network asks about an attribute that the desired block actually has, the simulator answers ‘Yes’. If the network asks ‘This one?’ and indicates the predicted target block, the simulator responds ‘Yes’ if the indicated block is the desired one; if the indicated block is not the desired one, the experiment is considered a failure.

We add further conditions to make the experiment more realistic and rigorous, as summarized in the sketch below. If the network asks again for information already provided in the language command (e.g., asking ‘Red one?’ after ‘Pick up the red block’), the experiment is considered unsuccessful. In addition, if the network asks a question about a block that cannot be indicated by the ambiguous language command, the experiment is considered unsuccessful. For example, in Figure 4, when the language command is ‘Pick up the purple block’, the experiment is considered a failure if the network asks ‘Lower left one?’. The experiment is considered successful when the distance between the predicted block position and the ground truth position is less than 20 pixels at an image size of 256.
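The behavior of the answer simulator and the failure conditions above can be summarized in the following sketch. The attribute-based encoding of questions and blocks is a simplifying assumption made for illustration.

```python
def simulate_answer(question, target, command_attrs, candidate_attrs):
    """Simplified sketch of the answer simulator and failure rules of Section IV-E.

    question        : dict with 'type' ('color', 'position', or 'confirm') and 'attribute'.
    target          : dict with the desired block's 'color' and 'position', plus
                      'is_predicted' indicating whether it is the currently predicted block.
    command_attrs   : attributes already stated in the initial (ambiguous) command.
    candidate_attrs : attributes of the blocks the ambiguous command could refer to.
    Returns the simulated answer string, or None when the trial counts as a failure.
    """
    attr = question.get("attribute")

    # Failure: re-asking information already provided in the command, or asking
    # about a block that the ambiguous command could not have indicated.
    if attr in command_attrs:
        return None
    if question["type"] != "confirm" and attr not in candidate_attrs:
        return None

    if question["type"] == "confirm":
        # 'This one?' succeeds only when the currently predicted block is the target.
        return "Yes" if target.get("is_predicted", False) else None

    # Color / position questions: answer 'Yes' if the asked attribute matches the
    # desired block, otherwise state the desired block's correct attribute.
    if attr == target.get(question["type"]):
        return "Yes"
    return target.get(question["type"])
```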

Figure 8 shows the comparison before and after the interaction. For the total of 265 ambiguous language commands in the test dataset described in Section IV-A, the accuracy of the proposed Interactive Text2Pickup (IT2P) network is 1.94 times that of a single Text2Pickup network, showing that incorporating the information gathered from the interaction improves the prediction. This high accuracy can partly be attributed to the simulator, which always provides correct information; nevertheless, the result, obtained under the rigorous experimental conditions described above, shows that the proposed network can interact with humans effectively by asking a question appropriate to the given situation.

Fig. 8: Comparison between the proposed Interactive Text2Pickup (IT2P) network and a single Text2Pickup network when ambiguous language commands are given. The result shows that the proposed method can predict the object location more accurately by mitigating the ambiguity in the language command through the interaction.
Fig. 9: Experimental results of the Interactive Text2Pickup (IT2P) network using a Baxter robot. The subtitles in the purple rectangle show how the robot interacted with the human. Heatmaps in the yellow rectangle were generated before the interaction, and heatmaps in the green rectangle were generated after the interaction. By obtaining more information from the human user, the robot succeeds in picking up the object that the human user requested.

IV-F Experiments Using a Baxter Robot

Figure 9 shows the result of applying the proposed Interactive Text2Pickup (IT2P) network to a Baxter robot. The subtitles in the purple rectangle show how the robot interacts with the human user. When a human user gives an ambiguous language command (“Pick up the yellow block”) to the robot, the heatmaps in the left yellow rectangle are generated. The generated position heatmap points to the yellow blocks on the lower left and the upper right. A high uncertainty value is obtained around the red block because the trained network has difficulty distinguishing between yellow and red, due to their similarity in RGB space.

By asking a question (“Upper right one?”), the robot obtains the additional information that the requested object is on the left, and the heatmaps generated after the interaction are shown in the green rectangle in Figure 9. After the interaction, the position heatmap indicates the yellow block on the left, and the values of the uncertainty heatmap are reduced overall. Based on this interaction, the robot succeeds in picking up the requested object.

V Conclusion

In this paper, we have proposed the Interactive Text2Pickup (IT2P) network for picking up a requested object when a human language command is given. The IT2P network interacts with a human user when an ambiguous language command is provided, in order to resolve the ambiguity. By understanding the given language command, the proposed network can successfully predict the position of the desired object and the uncertainty associated with the predicted target position. In order to mitigate the ambiguity in the language command, the network generates a suitable question to ask the human user. We have shown that the proposed IT2P network can efficiently interact with humans by asking a question appropriate to the given situation. The proposed network is applied to a Baxter robot, and collaboration between a real robot and a human user has been demonstrated. We believe that the proposed method, which can efficiently interact with humans by asking questions based on the estimation and its uncertainty, will enable more natural collaboration between a human and a robot.

Footnotes

  1. https://github.com/hiddenmaze/InteractivePickup

References

  1. S. Löbner, Understanding semantics.   Routledge, 2013.
  2. R. Paul, J. Arkin, N. Roy, and T. M. Howard, “Efficient grounding of abstract spatial concepts for natural language interaction with robot manipulators.” in Robotics: Science and Systems, 2016.
  3. R. Paul, A. Barbu, S. Felshin, B. Katz, and N. Roy, “Temporal grounding graphs for language understanding with accrued visual-linguistic context,” in Proc. of the 26th International Joint Conference on Artificial Intelligence.   AAAI Press, 2017, pp. 4506–4514.
  4. D. Whitney, E. Rosen, J. MacGlashan, L. L. Wong, and S. Tellex, “Reducing errors in object-fetching interactions through social feedback,” in Proc. of the International Conference on Robotics and Automation.   IEEE, 2017, pp. 1006–1013.
  5. S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. Lawrence Zitnick, and D. Parikh, “VQA: Visual question answering,” in Proc. of the IEEE International Conference on Computer Vision, 2015, pp. 2425–2433.
  6. A. Newell, K. Yang, and J. Deng, “Stacked hourglass networks for human pose estimation,” in European Conference on Computer Vision.   Springer, 2016, pp. 483–499.
  7. S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
  8. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. of the IEEE conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
  9. T. Mikolov, K. Chen, G. Corrado, and J. Dean, “Efficient estimation of word representations in vector space,” in Proc. of Workshop at International Conference on Learning Representations, 2013.
  10. “Google Code Archive,” https://code.google.com/archive/p/word2vec, accessed: 2018-02-12.
  11. Y. Gal and Z. Ghahramani, “Dropout as a Bayesian approximation: Representing model uncertainty in deep learning,” in Proc. of the International Conference on Machine Learning, 2016, pp. 1050–1059.
  12. N. Srinivas, A. Krause, S. M. Kakade, and M. Seeger, “Gaussian process optimization in the bandit setting: No regret and experimental design,” in Proc. of the International Conference on Machine Learning, 2009.
  13. Y. LeCun, Y. Bengio, et al., “Convolutional networks for images, speech, and time series,” The Handbook of Brain Theory and Neural Networks, vol. 3361, no. 10, p. 1995, 1995.
  14. T. Miyato and M. Koyama, “cGANs with projection discriminator,” arXiv preprint arXiv:1802.05637, 2018.
  15. D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  16. “tf.nn.sparse_softmax_cross_entropy_with_logits — TensorFlow,” https://goo.gl/9b7JbB, accessed: 2018-02-13.