Online Semi-Supervised Learning with Bandit Feedback

We formulate a new problem at the intersection of semi-supervised learning and contextual bandits, motivated by several applications including clinical trials and ad recommendations. We demonstrate how the Graph Convolutional Network (GCN), a semi-supervised learning approach, can be adjusted to the new problem formulation. We also propose a variant of the linear contextual bandit with semi-supervised imputation of missing rewards. We then take the best of both approaches to develop a multi-GCN embedded contextual bandit. Our algorithms are verified on several real-world datasets.

1 Introduction

We formulate the problem of Online Partially Rewarded (OPR) learning. Our problem is a synthesis of the challenges often considered in the semi-supervised and contextual bandit literature. Despite a broad range of practical cases, we are not aware of any prior work addressing all of these components jointly. Next we justify each of the keywords and give motivating examples.

  • Online: data is often naturally collected over time and systems are required to make predictions (take an action) before they are allowed to observe any response from the environment.

  • Partially: oftentimes there is no response available, e.g. a missing label or the environment not responding to the system’s action.

  • Rewarded: in the context of online (multiclass) supervised learning we assume that the environment will provide the true label; however, in many practical systems we can only hope to observe feedback indicating whether our prediction is good or bad (1 or 0 reward), the latter case obscuring the true label for learning.

Practical scenarios that fall under the umbrella of OPR range from clinical trials to dialog orchestration. In clinical trials, the reward is partial, as patients may not return for a follow-up evaluation. When patients do return, if feedback on their treatment is negative, the best treatment, or true label, remains unknown and the only available information is a reward of 0 for the treatment administered. In dialog systems, a user’s query is often directed to a number of domain specific agents and the best response is returned. If the user provides negative feedback to the returned response, the best available response remains uncertain and, moreover, users can choose not to provide feedback at all.

In many applications, obtaining labeled data requires a human expert or expensive experimentation, while unlabeled data may be cheaply collected in abundance. Learning from unlabeled observations to improve prediction performance is the key challenge of semi-supervised learning [17]. One of the possible approaches is the continuity assumption, i.e. points closer to each other in the feature space are more likely to share a label [30]. When the data has a graph structure, another approach is to perform node classification using graph Laplacian regularization, i.e. penalizing differences in the outputs of connected nodes [35]. The latter approach can also be applied without a given graph under the continuity assumption by building a similarity-based graph. We note that the problem of online semi-supervised learning is rarely considered, with few exceptions [34, 33, 14]. In our setting, the problem is further complicated by the bandit-like feedback in place of labels, rendering the existing semi-supervised approaches inapplicable. We will however demonstrate how one of the recent approaches, Graph Convolutional Networks (GCN) [23], can be extended to our setting.

The multi-armed bandit problem provides a solution to the exploration versus exploitation trade-off, informing a player how to pick within a finite set of decisions while maximizing cumulative reward in an online learning setting. Optimal solutions have been developed for a variety of problem formulations [3, 4, 2, 9, 5, 10, 27, 12, 28, 11]. These approaches do not take into account the relationship between context and reward, potentially inhibiting overall performance. In Linear Upper Confidence Bound (LINUCB)  [26, 18, 13, 8, 15] and in Contextual Thompson Sampling (CTS) [1], the authors assume a linear dependency between the expected reward of an action and its context; the representation space is modeled using a set of linear predictors. These algorithms assume that the bandit can observe the reward at each iteration, which is not the case in our setting. Several authors have considered variations of partial/corrupted rewards [6, 20], however the case of entirely missing rewards has not been studied to the best of our knowledge.

In this paper we focus on the problem of online semi-supervised learning with bandit feedback. We first review some existing methods in the respective domains and propose extensions to each of them to accommodate our problem setup. Then we combine the strengths of both approaches to arrive at an algorithm well suited for Online Partially Rewarded learning, as demonstrated with experiments on several real datasets.

2 Preliminaries

In this section we review two approaches coming from the respective domains of semi-supervised learning and contextual bandits, emphasizing their relevance and shortcomings in solving the OPR problem.

2.1 Graph Convolutional Networks

Neural networks have proven to be powerful feature learners when classical linear approaches fail. The classical neural network, the Multi-Layer Perceptron (MLP), is dramatically overparametrized and requires copious amounts of labeled data to train. On the other hand, Convolutional Neural Networks are more effective in the image domain [24], partially due to parameter sharing exploiting relationships between pixels. Image structure can be viewed as a grid graph where neighboring pixels are connected nodes. This perspective and the success of CNNs inspired the development of graph convolutional neural networks [21, 19, 16], based on the concept of graph convolutions known in the signal processing community [32]. Though all these works are in the realm of classical supervised learning, the idea of convolving a signal over graph nodes is also widely applied in semi-supervised (node classification) learning [7], where the graph describes relationships among observations (cf. the grid graph of features (pixels) in CNNs). [23] proposed the Graph Convolutional Network (GCN), an elegant synthesis of convolution-on-graphs ideas and neural network feature learning capability, which significantly outperformed prior semi-supervised learning approaches on several citation network and knowledge graph datasets.

To understand the GCN method, let $X \in \mathbb{R}^{N \times D}$ denote a data matrix with $N$ observations and $D$ features and let $A$ denote a positive, sparse, and symmetric adjacency matrix of size $N \times N$. The GCN embedding of the data with one hidden layer of size $h$ is $H = \mathrm{ReLU}(\hat{A} X W_1)$, where $\hat{A} = \tilde{D}^{-1}(A + I)$ is the degree normalized adjacency with self connections and $\tilde{D}$ is the diagonal matrix of node degrees. $W_1 \in \mathbb{R}^{D \times h}$ is a trainable weight matrix. The resulting embedded data goes into the softmax layer with weights $W_2 \in \mathbb{R}^{h \times K}$ and the loss for backpropagation is computed only on the labeled observations. The product $\hat{A} X$ gives the one-hop convolution: the signal from a node is summed with the signals from all of its neighbours, achieving smooth transitions of the embeddings over the data graph. Although a powerful semi-supervised approach, GCN is not suitable for the Online and Rewarded components of OPR. It additionally requires a graph as an input, which may not be available in some cases.
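To make the propagation rule concrete, below is a minimal NumPy sketch of a single graph-convolutional layer under the row-normalization convention above; the function and variable names are illustrative, not the authors' implementation.

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph-convolutional layer: ReLU(A_hat @ X @ W), where A_hat is the
    degree normalized adjacency with self connections described above."""
    A_tilde = A + np.eye(A.shape[0])          # add self connections
    deg = A_tilde.sum(axis=1)                 # node degrees
    A_hat = A_tilde / deg[:, None]            # row normalization: D^{-1}(A + I)
    return np.maximum(A_hat @ X @ W, 0.0)     # one-hop convolution + ReLU

# toy usage: 4 nodes on a path graph, 3 features, hidden layer of size 2
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W1 = rng.normal(size=(3, 2))
H = gcn_layer(X, A, W1)                       # embedding fed to the softmax layer
```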

2.2 Contextual Bandit

Following [25], the contextual bandit problem is defined as follows. At each time $t$, a player is presented with a context vector $x_t \in \mathbb{R}^D$, and must choose an arm $k \in \{1, \dots, K\}$. Let $r_t(k)$ denote the reward of arm $k$ at time $t$, and let $\mathbf{r}_t = (r_t(1), \dots, r_t(K))$ denote the vector of rewards for all arms at time $t$. We operate under the linear realizability assumption, i.e., there exist unknown weight vectors $\theta_k^* \in \mathbb{R}^D$ with $\|\theta_k^*\| \le 1$ for $k = 1, \dots, K$ so that

$$\mathbb{E}[r_t(k) \mid x_t] = (\theta_k^*)^\top x_t \quad \text{for all } k \text{ and } t.$$

Hence, the rewards $r_t(k)$ are independent random variables with expectation $(\theta_k^*)^\top x_t$.

One solution to the contextual bandit problem is the LINUCB algorithm [26], where the key idea is to apply online ridge regression to the incoming data to obtain an estimate of the coefficients $\theta_k$ for each arm $k$. At each step $t$, the LINUCB policy selects the arm with the highest upper confidence bound of the reward, $k_t = \arg\max_k \big( \theta_k^\top x_t + \alpha \sqrt{x_t^\top C_k^{-1} x_t} \big)$, where $\theta_k^\top x_t$ is the expected reward for arm $k$, $\sqrt{x_t^\top C_k^{-1} x_t}$ is the standard deviation of the corresponding reward scaled by the exploration-exploitation trade-off parameter $\alpha$ (chosen a priori) and $C_k$ is the covariance of the $k$-th arm context. LINUCB requires the reward for the chosen arm, $r_t(k_t)$, to be observed to perform its updates. In our setting the reward may not be available at every step $t$, hence we need to adjust the LINUCB algorithm to learn from data with missing rewards.
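The selection and update steps of LINUCB fit in a few lines; here is an illustrative sketch of the standard algorithm, with variable names (C, b, alpha) matching the notation above rather than any particular library.

```python
import numpy as np

class LinUCB:
    """Minimal LINUCB sketch with per-arm ridge-regression estimates."""
    def __init__(self, n_arms, dim, alpha=0.25):
        self.alpha = alpha
        self.C = [np.eye(dim) for _ in range(n_arms)]    # context covariance per arm
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # reward-weighted context sums

    def select(self, x):
        """Pick the arm with the highest upper confidence bound for context x."""
        ucb = []
        for C_k, b_k in zip(self.C, self.b):
            C_inv = np.linalg.inv(C_k)
            theta_k = C_inv @ b_k                        # ridge estimate of arm weights
            ucb.append(theta_k @ x + self.alpha * np.sqrt(x @ C_inv @ x))
        return int(np.argmax(ucb))

    def update(self, k, x, r):
        """Standard update; only possible when the reward r was observed."""
        self.C[k] += np.outer(x, x)
        self.b[k] += r * x
```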

3 Proposed algorithms

In this section we formally define the Online Partially Rewarded (OPR) problem and present a series of algorithms, starting with natural modifications of GCN and LINUCB to suit the OPR setting, and concluding with an algorithm that builds on the strengths of both GCN and LINUCB.

3.1 Problem setting

We now formally define each of the OPR keywords:

  • Online: at each step $t = 1, 2, \dots$ we observe an observation $x_t$ and seek to predict its label $\hat{y}_t$ using $x_t$ and possibly any information we had obtained prior to step $t$.

  • Partially: after we make the prediction $\hat{y}_t$, the environment may not provide any feedback (we will use -1 to encode absence of feedback) and we have to proceed to step $t+1$ without knowledge of the true $y_t$.

  • Rewarded: suppose there are $K$ possible labels, $y_t \in \{1, \dots, K\}$. The environment at step $t$, if it responds to our prediction $\hat{y}_t$, will not provide the true $y_t$, but instead a response $h_t \in \{-1, 0, 1\}$, where $h_t = 1$ indicates $\hat{y}_t = y_t$ and $h_t = 0$ indicates $\hat{y}_t \neq y_t$ ($h_t = -1$ indicates a missing response).

Note on absence of environment response.

We assume that whether the environment responds or not does not depend on $x_t$. This is a common setting in semi-supervised learning [17]: we have access to limited samples from the joint distribution of data and labels and a larger amount of samples from the data marginal, with the goal of inferring the label-conditional distribution using both. This assumption is justified in some applications of interest, e.g. whether a user will provide feedback to the dialog agent is independent of what the user asked.
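To make the feedback model concrete, a toy environment respecting this assumption can be simulated as follows; the function is hypothetical and exists only to illustrate the $\{-1, 0, 1\}$ response encoding.

```python
import numpy as np

rng = np.random.default_rng(0)

def environment_response(y_true, y_pred, p_respond=0.5):
    """Toy OPR feedback: with probability 1 - p_respond the environment stays
    silent (-1), independently of the context as assumed above; otherwise it
    returns 1 if the prediction was correct and 0 if it was not."""
    if rng.random() > p_respond:
        return -1                  # no response; the true label stays hidden
    return int(y_pred == y_true)   # 1 = correct prediction, 0 = wrong prediction
```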

3.2 Rewarded Online GCN

There are three challenges to be addressed to formulate Rewarded Online GCN (ROGCN): (i) online learning; (ii) the environment only responds with 0 or 1 to our predictions; and (iii) the potential absence of graph information. As we shall see, there is a natural path from GCN to ROGCN. Suppose there is a small portion of data and labels available from the start, $X_0 \in \mathbb{R}^{N_0 \times D}$ and $y_0 \in \{-1, 1, \dots, K\}^{N_0}$, where $D$ is the number of features, $K$ is the number of classes and $N_0$ is the size of the initially available data. When there is no graph available we can construct a $k$-NN graph ($k$ is a parameter chosen a priori) based on similarities between observations - this approach is common in convolutional neural networks on feature graphs [21, 19] and we adopt it here for graph construction between observations to obtain the graph adjacency $A$. We provide details in Section 4. Now that we have $X_0$, $y_0$ and $A$, we can train a GCN with $h$ hidden units (a parameter chosen a priori) to obtain initial estimates of the hidden layer weights $W_1$ and softmax weights $W_2$. Next we start to observe the stream of data - as a new observation $x_t$ arrives, we add it to the graph and data matrix, and append -1 (missing label) to $y$. Then we run additional training steps of GCN and output a prediction $\hat{y}_t$ to obtain the environment response $h_t \in \{-1, 0, 1\}$. Here 1 indicates a correct prediction, hence we include it in the set of available labels for future predictions; 0 indicates a wrong prediction and -1 an absence of a response; in the latter two cases we continue to treat the label of $x_t$ as missing. This procedure is summarized in Algorithm 1.

1:  Input: $X_0$, $y_0$, $A_0$, $k$, $h$
2:  Set $X \leftarrow X_0$, $y \leftarrow y_0$, $A \leftarrow A_0$ and train GCN to initialize $W_1$, $W_2$
3:  for $t = 1, 2, \dots$ do
4:      Append $x_t$ to $X$, -1 to $y$
5:      Update $A$ with new edges if graph information is available or build a $k$-NN similarity graph from $X$ to obtain $A$
6:      Update $W_1$ and $W_2$ through GCN backpropagation with inputs $(X, y, A)$
7:      Retrieve GCN prediction $\hat{y}_t$ and observe environment response $h_t$ for $x_t$
8:      if $h_t = 1$ then
9:          Replace last entry of $y$ with $\hat{y}_t$
10:      end if
11:  end for
Algorithm 1 ROGCN
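A compact sketch of the Algorithm 1 loop is given below; `knn_graph`, `gcn_fit_steps` and `gcn_predict` are hypothetical helpers standing in for graph construction and GCN training/prediction, and `environment_response` is the toy feedback function from Section 3.1.

```python
import numpy as np

def rogcn_loop(X0, y0, stream, k=5, steps=5):
    """Sketch of Algorithm 1 (illustrative, not the authors' code)."""
    X, y = list(X0), list(y0)
    for x_t, y_true in stream:                  # y_true is concealed from the learner
        X.append(x_t)
        y.append(-1)                            # new point starts with a missing label
        A = knn_graph(np.array(X), k=k)         # or update a given graph with new edges
        W1, W2 = gcn_fit_steps(np.array(X), np.array(y), A, steps=steps)
        y_hat = gcn_predict(np.array(X), A, W1, W2)[-1]
        h = environment_response(y_true, y_hat)
        if h == 1:                              # correct: the label is now known
            y[-1] = y_hat                       # h in {0, -1}: keep it missing
    return W1, W2
```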

3.3 Bounded Imputation LINUCB

Contextual multi-armed bandits offer a powerful approach to online learning when true labels are not available and the environment's response to a prediction is observed instead. However, in our OPR problem setting, the environment may not respond to the agent for every observation. A classic bandit approach such as the Linear Upper Confidence Bound (LINUCB) [26] may be directly applied to OPR, however it would not be able to learn from observations without an environment response. We propose to combine LINUCB with a user defined imputation mechanism for the reward when the environment response is missing. In order to be robust to variations in the imputation quality, we only allow the imputed reward to vary within the agent's beliefs. To make use of the context in the absence of the reward, we consider a user defined imputation mechanism $g: \mathbb{R}^D \to [0, 1]^K$, which is expected to produce class probabilities for an input context to impute the missing reward. Typically any imputation mechanism will have an error of its own, hence we constrain the imputed reward for the chosen arm $k_t$ to be within one standard deviation of its expected reward:

$$\tilde{r}_t(k_t) = \min\Big( \max\big( g(x_t)_{k_t},\ \theta_{k_t}^\top x_t - \sqrt{x_t^\top C_{k_t}^{-1} x_t} \big),\ \theta_{k_t}^\top x_t + \sqrt{x_t^\top C_{k_t}^{-1} x_t} \Big). \tag{1}$$
As in ROGCN, we can take advantage of a small portion of data to initialize the bandit parameters $C_k$ and $b_k$ for $k = 1, \dots, K$. Bounded Imputation LINUCB (BILINUCB) is summarized in Algorithm 2.

1:  Input: $X_0$, $y_0$, $A_0$, $\alpha$, imputation mechanism $g$
2:  Initialize $C_k$ and $b_k$ for $k = 1, \dots, K$ using $X_0$, $y_0$
3:  for $t = 1, 2, \dots$ do
4:      Update $g$ with $x_t$ and retrieve imputed class probabilities $g(x_t)$
5:      for all $k \in \{1, \dots, K\}$ do
6:          $\theta_k \leftarrow C_k^{-1} b_k$
7:          $p_{t,k} \leftarrow \theta_k^\top x_t + \alpha \sqrt{x_t^\top C_k^{-1} x_t}$
8:      end for
9:      Predict $\hat{y}_t = \arg\max_k p_{t,k}$ and observe environment response $h_t$
10:      if $h_t = 1$ then
11:          $r_t \leftarrow 1$
12:          Update $g$ with label $y_t = \hat{y}_t$
13:      else if $h_t = 0$ then
14:          $r_t \leftarrow 0$
15:      else if $h_t = -1$ then
16:          $r_t \leftarrow \tilde{r}_t(\hat{y}_t)$ (see Eq. (1))
17:      end if
18:      $C_{\hat{y}_t} \leftarrow C_{\hat{y}_t} + x_t x_t^\top$, $b_{\hat{y}_t} \leftarrow b_{\hat{y}_t} + r_t x_t$
19:  end for
Algorithm 2 BILINUCB
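The bounding step of Eq. (1) amounts to clipping the imputed reward into the one-standard-deviation band around the arm's estimated reward. A minimal sketch, assuming `g_x_k` is the imputation output for the chosen arm:

```python
import numpy as np

def bounded_imputed_reward(g_x_k, theta_k, C_k_inv, x):
    """Clip an imputed reward into [mean - std, mean + std] of the bandit's
    current reward estimate for the chosen arm, per Eq. (1)."""
    mean = theta_k @ x                      # expected reward of the chosen arm
    std = np.sqrt(x @ C_k_inv @ x)          # its standard deviation
    return float(np.clip(g_x_k, mean - std, mean + std))
```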

3.4 Multi-GCN embedded UCB

We have presented two algorithms for OPR learning, however both approaches have limitations: ROGCN is unable to learn from misclassified observations and has to treat them as missing labels, while BILINUCB assumes a linear relationship between data features and labels and, even with perfect imputation, is limited by the performance of the best linear classifier. Note that the bandit perspective allows one to learn from misclassified observations, i.e. when the environment response $h_t = 0$, and the neural network perspective facilitates learning better features such that a linear classifier is sufficient. This observation motivates us to develop a more sophisticated synthesis of the GCN and LINUCB approaches, where we can combine the advantages of both perspectives.

To begin, we note that if $K = 2$, an environment response $h_t \in \{0, 1\}$ identifies the correct class, hence OPR reduces to online semi-supervised learning, for which GCN can be trivially adjusted using ideas from ROGCN. To take advantage of this for $K > 2$ we can consider a suite of binary GCNs, one for each of the $K$ classes, which then necessitates a procedure to decide which of the GCNs to use for prediction at each step. We propose to use a suite of class specific GCNs, where the prediction is made using a contextual bandit with the context of the $k$-th arm coming from the hidden layer representation of the $k$-th class GCN and, when missing, the reward is imputed from the corresponding GCN.

We now describe the multi-GCN embedded Upper Confidence Bound (GCNUCB) bandit in more detail. Let $H^{(k)}$ denote the $k$-th GCN data embedding and let $h_t^{(k)}$ denote the embedding of observation $x_t$. We will use this embedding (additionally normalized to unit norm) as the context for the corresponding arm of the contextual bandit. The advantage of this embedding is its graph convolutional nature coupled with the expressive power of neural networks. We note that as we add a new observation $x_t$ to the graph and update the weights of the GCNs, the embedding of the previously observed data evolves. Therefore instead of dynamically updating the bandit parameters $C_k$ and $\theta_k$ as it was done in BILINUCB, we maintain a set of indices $T_k = \{t : \hat{y}_t = k\}$ for each of the arms $k = 1, \dots, K$. At any step we can compute the corresponding bandit context covariance and weight estimate using the current embedding:

$$C_k = I + \sum_{t \in T_k} h_t^{(k)} \big(h_t^{(k)}\big)^\top, \tag{2}$$

$$\theta_k = C_k^{-1} \sum_{t \in T_k} r_t h_t^{(k)}, \tag{3}$$

where $r_t$ is the reward that was observed or imputed at step $t$ for arm $k$ (recall that we are imputing using the prediction of the binary GCN corresponding to the arm chosen by the bandit). Now we can compute the expected value and standard deviation of the reward on each arm. The prediction is made based on the upper confidence bounds of the rewards of the arms:

$$\hat{y}_t = \arg\max_{k = 1, \dots, K} \Big( \theta_k^\top h_t^{(k)} + \alpha \sqrt{\big(h_t^{(k)}\big)^\top C_k^{-1} h_t^{(k)}} \Big). \tag{4}$$
Then we observe the environment response $h_t$. Unlike ROGCN, GCNUCB is able to learn from mistakes, i.e. when $h_t = 0$ - although as before we don't know the true class, we can be sure that it was not $\hat{y}_t$, hence we can use this information to improve the GCN corresponding to class $\hat{y}_t$. We summarize GCNUCB in Algorithm 3. Similarly to ROGCN and BILINUCB we can use a small amount of data $X_0$ and labels $y_0$ converted to binary labels $y_0^{(k)}$ (as before -1 encodes a missing label) for each class $k$ to initialize the GCN weights $W_1^{(k)}$, $W_2^{(k)}$ for $k = 1, \dots, K$ and the index sets $T_k$ for each of the arms. The adjacency matrix $A$, if not given, is obtained as in ROGCN.

1:  Input: $X_0$, $y_0^{(1)}, \dots, y_0^{(K)}$, $A_0$, $k$, $h$, $\alpha$
2:  Set $X \leftarrow X_0$, $A \leftarrow A_0$ and train the binary GCNs to initialize $W_1^{(k)}$, $W_2^{(k)}$ and $T_k$ for $k = 1, \dots, K$
3:  for $t = 1, 2, \dots$ do
4:      Append $x_t$ to $X$, -1 to each of $y^{(1)}, \dots, y^{(K)}$
5:      Update $A$ with new edges if graph information is available or build a $k$-NN similarity graph from $X$ to obtain $A$
6:      Update $W_1^{(k)}$ and $W_2^{(k)}$ through GCN backpropagation with inputs $(X, y^{(k)}, A)$ for $k = 1, \dots, K$
7:      Retrieve embeddings $h_t^{(1)}, \dots, h_t^{(K)}$
8:      Compute $C_k$ (Eq. (2)) and $\theta_k$ (Eq. (3))
9:      Make prediction $\hat{y}_t$ using Eq. (4) and observe environment response $h_t$
10:      if $h_t = 1$ then
11:          For each $k$ replace last entry of $y^{(k)}$ with 1 if $k = \hat{y}_t$ and 0 otherwise
12:          Append $t$ to each $T_k$, with reward 1 if $k = \hat{y}_t$ and 0 otherwise
13:      else if $h_t = 0$ (learning from mistakes) then
14:          Replace last entry of $y^{(\hat{y}_t)}$ with 0
15:          Append $t$ to $T_{\hat{y}_t}$ with reward 0
16:      else if $h_t = -1$ (imputing) then
17:          Append $t$ to $T_{\hat{y}_t}$, with the output of the $\hat{y}_t$-th GCN as imputed reward
18:      end if
19:  end for
Algorithm 3 GCNUCB
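Because the embeddings drift as the GCNs continue to train, GCNUCB rebuilds the bandit statistics of Eqs. (2)-(3) from the stored index sets at every step before applying Eq. (4). A sketch of that recomputation, with illustrative data structures:

```python
import numpy as np

def gcnucb_select(embeddings, index_sets, rewards, alpha=0.25):
    """Recompute C_k and theta_k (Eqs. (2)-(3)) from each arm's index set using
    the *current* embeddings, then pick the arm with the highest UCB (Eq. (4)).

    embeddings: per arm, an array of that GCN's embeddings (last row = new point)
    index_sets: per arm, the list of past step indices in T_k
    rewards:    per arm, observed or imputed rewards aligned with index_sets[k]
    """
    ucb = []
    for H_k, T_k, r_k in zip(embeddings, index_sets, rewards):
        h_new = H_k[-1]                       # context of the incoming observation
        C_k = np.eye(H_k.shape[1])            # Eq. (2): identity plus ...
        b_k = np.zeros(H_k.shape[1])
        for s, r in zip(T_k, r_k):
            C_k += np.outer(H_k[s], H_k[s])   # ... outer products of past contexts
            b_k += r * H_k[s]
        C_inv = np.linalg.inv(C_k)
        theta_k = C_inv @ b_k                 # Eq. (3)
        ucb.append(theta_k @ h_new + alpha * np.sqrt(h_new @ C_inv @ h_new))
    return int(np.argmax(ucb))                # Eq. (4)
```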

4 Experiments

In this section we compare the baseline method LINUCB, which ignores the data with missing rewards, to our proposed ROGCN, BILINUCB and GCNUCB algorithms. We consider four different datasets: CNAE-9 and Internet Advertisements from the UCI Machine Learning Repository, Cora, and Warfarin [31]. Cora is naturally graph structured data, which can be utilized by ROGCN, BILINUCB with ROGCN based imputation, and GCNUCB. For the other datasets we use a 5-NN graph built online from the available data as follows.

Suppose at step $t$ we have observed data points $x_1, \dots, x_t$. The weights of the similarity graph are computed as follows:

$$A_{ij} = \exp\left( -\frac{\|x_i - x_j\|_2^2}{\sigma^2} \right).$$

As it was done by [19] we set $\sigma = \frac{1}{t} \sum_{i=1}^{t} d(x_i, x_{i_k})$, where $d(x_i, x_{i_k})$ denotes the distance between observation $x_i$ and its $k$-th nearest neighbour, indexed $i_k$. The $k$-NN adjacency is obtained by setting all but the $k$ (excluding $x_i$ itself) closest entries of each row of $A$ to 0 and symmetrizing. Then, as in [23], we add self connections and row normalize: $\hat{A} = \tilde{D}^{-1}(A + I)$, where $\tilde{D}$ is the diagonal matrix of node degrees.
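A sketch of this construction (one reading of the procedure from [19] and [23]; not the authors' exact code):

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_graph(X, k=5):
    """Gaussian-weighted k-NN graph with self connections and row normalization."""
    D = cdist(X, X)                                      # pairwise Euclidean distances
    idx = np.argsort(D, axis=1)                          # idx[:, 0] is the point itself
    sigma = np.mean(D[np.arange(len(X)), idx[:, k]])     # mean k-th NN distance
    A = np.exp(-D**2 / sigma**2)                         # Gaussian similarity weights
    mask = np.zeros_like(A, dtype=bool)
    rows = np.repeat(np.arange(len(X)), k)
    mask[rows, idx[:, 1:k + 1].ravel()] = True           # keep k nearest, excluding self
    A = np.where(mask | mask.T, A, 0.0)                  # zero the rest and symmetrize
    A_tilde = A + np.eye(len(X))                         # add self connections
    return A_tilde / A_tilde.sum(axis=1, keepdims=True)  # row normalize: D^{-1}(A + I)
```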

For pre-processing we discarded features with large magnitudes (3 features in Internet Advertisements and 2 features in Warfarin) and row normalized all observations to have unit norm.

For all of our algorithms that use GCN we use the default parameters of the GCN and the Adam optimizer [22]: 16 hidden units, a learning rate of 0.01, weight decay of 0.0005, and dropout of 0.5.

25% Missing labels
                   CNAE-9         Internet Ads   Warfarin        Cora
LINUCB             67.57 ± 2.90   90.08 ± 0.64   53.70 ± 0.77    38.06 ± 3.45
ROGCN              64.73 ± 2.67   88.22 ± 1.73   47.72 ± 9.40    48.57 ± 7.75
BILINUCB-GCN       67.27 ± 2.79   89.91 ± 0.73   53.70 ± 0.77    37.66 ± 3.92
BILINUCB-KMeans    67.69 ± 4.30   90.37 ± 0.63   52.53 ± 4.83    39.11 ± 2.68
GCNUCB             77.10 ± 1.89   93.14 ± 0.39   55.19 ± 3.40    66.01 ± 1.35

50% Missing labels
                   CNAE-9         Internet Ads   Warfarin        Cora
LINUCB             64.25 ± 3.55   88.62 ± 0.67   51.87 ± 5.12    38.85 ± 2.74
ROGCN              65.96 ± 3.69   88.38 ± 1.93   49.37 ± 8.29    47.71 ± 9.25
BILINUCB-GCN       63.52 ± 3.31   88.40 ± 0.73   51.75 ± 5.32    38.08 ± 2.97
BILINUCB-KMeans    67.37 ± 5.18   89.95 ± 0.66   54.20 ± 0.30    39.20 ± 1.76
GCNUCB             74.55 ± 1.82   92.62 ± 0.37   56.51 ± 3.43    63.47 ± 2.26

75% Missing labels
                   CNAE-9         Internet Ads   Warfarin        Cora
LINUCB             61.67 ± 3.16   86.66 ± 0.99   52.99 ± 2.61    33.92 ± 0.04
ROGCN              65.67 ± 5.28   88.31 ± 1.81   47.48 ± 5.41    49.63 ± 5.06
BILINUCB-GCN       61.36 ± 3.79   86.68 ± 1.04   50.04 ± 11.44   32.21 ± 5.99
BILINUCB-KMeans    57.16 ± 3.57   88.21 ± 0.99   51.21 ± 7.12    32.51 ± 4.98
GCNUCB             70.82 ± 2.33   91.45 ± 0.89   53.31 ± 2.98    58.29 ± 2.80

Table 1: Total average accuracy (mean ± standard deviation over 10 resamplings)

To emulate the OPR setting we randomly permute the order of the observations in a dataset and remove the labels for some portion (we experiment with three settings: 25%, 50% and 75% missing labels) of the observations chosen at random. For all methods we take the initial data $X_0$ and $y_0$ to consist of a single observation per class chosen randomly ($N_0 = K$). At step $t$ each algorithm is given a feature vector $x_t$ and must make a prediction $\hat{y}_t$. The environment response $h_t$ is then observed and the algorithm moves on to step $t+1$. To compare the performance of the different algorithms, at each step we compare $\hat{y}_t$ to the true label $y_t$ available from the dataset (but concealed from the algorithms themselves) to evaluate running accuracy. Defined as such, accuracy is inversely related to regret.
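Running accuracy is accumulated online against the concealed labels; a small sketch of the evaluation loop:

```python
def running_accuracy(predictions, true_labels):
    """Fraction of correct predictions after each step, measured against the
    true labels that were concealed from the algorithms."""
    correct, curve = 0, []
    for t, (p, y) in enumerate(zip(predictions, true_labels), start=1):
        correct += int(p == y)
        curve.append(correct / t)
    return curve
```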

Imputation Methods.

We test two different imputation functions for BILINUCB - ROGCN and simple k-means clustering with 10 clusters. Henceforth, we denote these two approaches as BILINUCB-GCN and BILINUCB-KMeans. In BILINUCB-GCN we update a ROGCN with the incoming observations and use its softmax class prediction to impute the missing reward when needed. In BILINUCB-KMeans, we use the mini-batch k-means algorithm to cluster incoming observations online and impute a missing reward with the average non-missing reward of all observations in the corresponding cluster.
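A sketch of the k-means imputer is given below; for brevity it replaces mini-batch k-means with a sequential (MacQueen-style) centroid update, the class and attribute names are illustrative, and the neutral 0.5 default for clusters with no observed rewards is an assumption.

```python
import numpy as np

class KMeansImputer:
    """Cluster incoming contexts online and impute a missing reward with the
    average non-missing reward seen so far in the new point's cluster."""
    def __init__(self, dim, n_clusters=10, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = rng.normal(size=(n_clusters, dim))
        self.n = np.zeros(n_clusters)        # points assigned per cluster
        self.sums = np.zeros(n_clusters)     # per-cluster total of observed rewards
        self.counts = np.zeros(n_clusters)   # per-cluster count of observed rewards

    def _assign(self, x):
        return int(np.argmin(np.linalg.norm(self.centers - x, axis=1)))

    def observe(self, x, reward=None):
        c = self._assign(x)
        self.n[c] += 1
        self.centers[c] += (x - self.centers[c]) / self.n[c]  # online mean update
        if reward is not None:               # only non-missing rewards are recorded
            self.sums[c] += reward
            self.counts[c] += 1

    def impute(self, x):
        c = self._assign(x)
        return 0.5 if self.counts[c] == 0 else self.sums[c] / self.counts[c]
```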

Running accuracy results.

We noticed that BILINUCB with both imputation approaches and GCNUCB are more robust to data ordering when we use the baseline LINUCB for the first 300 steps and then proceed with the corresponding algorithm (see Figure 1(a), where the aforementioned algorithms and LINUCB perform the same until step 300 and then have individual running accuracies). For all LINUCB based approaches we used the same exploration-exploitation trade-off parameter $\alpha$. Results are summarized in Table 1. Since the ordering of the data can affect the problem difficulty, we performed 10 data resamplings for each setting to obtain error bars.

GCNUCB outperforms the LINUCB baseline and our other proposed methods in all of the experiments, validating our intuition that a method synthesizing the exploration capabilities of bandits with the effective feature representation power of neural networks is the best solution to the OPR problem. We see the greatest increase in accuracy between GCNUCB and the alternative approaches on the Cora dataset, which has a natural adjacency matrix. This suggests that GCNUCB has a particular edge in OPR applications with graph structure. Such problems are ubiquitous. Consider our motivating example of dialog systems: for dialog systems deployed in social network or workplace environments, there exists graph structure between users, and user information can be considered alongside queries for personalization of responses.

Role of bounding the imputed reward.

Notice that on average, a BILINUCB method outperforms LINUCB and ROGCN. To understand the role of the imputation bounds, we analyze the effects of random imputation. We denote bounded random imputation as BILINUCB-Random and its unbounded counterpart (the same imputation LINUCB without the bound of Eq. (1)) as ILINUCB-Random. We define BILINUCB-KMeans and ILINUCB-KMeans similarly. We summarize the overall accuracy of each method on CNAE-9 in Table 2.

As the purpose of these bounds is to correct for errors in the imputation method, we expect to see their impact the most when the imputation is inaccurate. This is exactly what we see in Table 2 and Figure 1(a). When we use a reasonable imputation method, k-means, the imputation bounds do not improve, or only slightly improve, the results. The improvement gain is much more evident with random imputation, and across both imputation methods the bounds have a larger impact when more rewards are missing.

Figure 1: (a) Accuracy on CNAE-9 with 50% missing labels; (b) t-SNE embeddings of context and bandit weight vectors for LINUCB; (c) t-SNE embeddings of context and bandit weight vectors for GCNUCB.

Visualizing GCNUCB context space.

Recall that the context for each arm of GCNUCB is provided by the hidden layer of the corresponding binary GCN. The motivation for using binary GCNs to provide the context to LINUCB is the ability of GCN to construct more powerful features using graph convolutions and the expressiveness of neural networks. To see how this procedure improves upon the baseline LINUCB utilizing input features as context, we project the contexts and the corresponding bandit weight vectors $\theta_k$ for both LINUCB and GCNUCB to a 2-dimensional space using t-SNE [29]. In this experiment we analyzed the CNAE-9 dataset with 25% missing labels. Recall that the bandit makes a prediction based on the upper confidence bound of the reward, $\theta_k^\top c_{t,k} + \alpha \sqrt{c_{t,k}^\top C_k^{-1} c_{t,k}}$, where the context $c_{t,k} = x_t$ for LINUCB and $c_{t,k} = h_t^{(k)}$ for GCNUCB. To better visualize the quality of the learned weight vectors, for this experiment we set $\alpha = 0$, resulting in a greedy bandit that always selects the arm maximizing the expected reward $\theta_k^\top c_{t,k}$. In this case, a good combination of contexts and weight vectors is one where observations belonging to the same class are well clustered and the corresponding bandit weight vector is directed at this cluster. For LINUCB (Figure 1(b), 68% accuracy) the bandit weight vectors mostly point in the direction of their respective context clusters, however the clusters themselves are scattered, thereby inhibiting the ability of LINUCB to effectively distinguish between different arms given the context. In the case of GCNUCB (Figure 1(c), 77% accuracy) the context learned by each GCN is tightly clustered into two distinct regions - one with the contexts of a label's binary GCN when that label is the correct one (points with bolded colors), and the other with the contexts when a different label is correct (points with faded colors). The tighter clustered contexts allow GCNUCB to effectively distinguish between the arms by assigning higher expected reward to contexts from the correct binary GCN, resulting in better performance than the other methods.
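The projection itself is straightforward; a sketch using scikit-learn's t-SNE, jointly embedding the per-arm contexts and the weight vectors $\theta_k$ so that both land in the same 2-D space:

```python
import numpy as np
from sklearn.manifold import TSNE

def project_contexts_and_weights(contexts, thetas):
    """Jointly embed contexts and bandit weight vectors in 2-D with t-SNE so
    their relative positions can be compared (inputs assumed unit-normalized)."""
    Z = np.vstack([contexts, thetas])
    Z2 = TSNE(n_components=2, random_state=0).fit_transform(Z)
    return Z2[:len(contexts)], Z2[len(contexts):]  # 2-D contexts, 2-D weight vectors
```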

% Reward Missing   ILINUCB-Random   BILINUCB-Random
25                 67.29 ± 4.15     67.65 ± 4.30
50                 65.67 ± 5.20     67.19 ± 5.37
75                 49.77 ± 4.68     56.36 ± 3.71

% Reward Missing   ILINUCB-KMeans   BILINUCB-KMeans
25                 67.92 ± 3.98     67.69 ± 4.30
50                 67.14 ± 4.84     67.37 ± 5.18
75                 56.62 ± 4.40     57.16 ± 3.57

Table 2: CNAE-9 total average accuracy (mean ± standard deviation)

5 Conclusion and Discussion

We have defined and studied the problem of Online Partially Rewarded (OPR) learning, which combines challenges from semi-supervised learning and multi-armed contextual bandits. We have developed ROGCN and BILINUCB - extensions of popular algorithms in the corresponding domains - to solve the OPR problem. Our main contribution, the GCNUCB algorithm, is an efficient synthesis of the strengths of the two approaches. Our experiments show that GCNUCB, which combines the feature extraction capability of graph convolutional neural networks with the natural ability of contextual bandits to handle online learning with rewards (instead of labels), is the best approach for OPR, outperforming the LINUCB baseline and the other algorithms we proposed.




  1. S. Agrawal and N. Goyal (2013) Thompson sampling for contextual bandits with linear payoffs. In ICML (3), pp. 127–135. Cited by: §1.
  2. R. Allesiardo, R. Féraud and D. Bouneffouf (2014) A neural networks committee for the contextual bandit problem. In International Conference on Neural Information Processing, pp. 374–381. Cited by: §1.
  3. P. Auer, N. Cesa-Bianchi and P. Fischer (2002) Finite-time analysis of the multiarmed bandit problem. Machine Learning 47 (2-3), pp. 235–256. Cited by: §1.
  4. P. Auer, N. Cesa-Bianchi, Y. Freund and R. E. Schapire (2002) The nonstochastic multiarmed bandit problem. SIAM J. Comput. 32 (1), pp. 48–77. Cited by: §1.
  5. A. Balakrishnan, D. Bouneffouf, N. Mattei and F. Rossi (2018) Using contextual bandits with behavioral constraints for constrained online movie recommendation. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden., pp. 5802–5804. External Links: Link, Document Cited by: §1.
  6. G. Bartók, D. P. Foster, D. Pál, A. Rakhlin and C. Szepesvári (2014) Partial monitoring—classification, regret bounds, and algorithms. Mathematics of Operations Research 39 (4), pp. 967–997. Cited by: §1.
  7. M. Belkin, P. Niyogi and V. Sindhwani (2006) Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research 7 (Nov), pp. 2399–2434. Cited by: §2.1.
  8. D. Bouneffouf, R. Féraud, S. Upadhyay, Y. Khazaeni and I. Rish (2020) Double-linear thompson sampling for context-attentive bandits. External Links: 2010.09473 Cited by: §1.
  9. D. Bouneffouf and R. Féraud (2016) Multi-armed bandit problem with known trend. Neurocomputing 205, pp. 16–21. External Links: Link, Document Cited by: §1.
  10. D. Bouneffouf, S. Parthasarathy, H. Samulowitz and M. Wistuba (2019) Optimal exploitation of clustering and history information in multi-armed bandit. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, S. Kraus (Ed.), pp. 2016–2022. External Links: Link, Document Cited by: §1.
  11. D. Bouneffouf, I. Rish, G. A. Cecchi and R. Féraud (2017) Context attentive bandits: contextual bandit with restricted context. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, C. Sierra (Ed.), pp. 1468–1475. External Links: Link, Document Cited by: §1.
  12. D. Bouneffouf, I. Rish and G. A. Cecchi (2017) Bandit models of human behavior: reward processing in mental disorders. In Artificial General Intelligence - 10th International Conference, AGI 2017, Melbourne, VIC, Australia, August 15-18, 2017, Proceedings, T. Everitt, B. Goertzel and A. Potapov (Eds.), Lecture Notes in Computer Science, Vol. 10414, pp. 237–248. External Links: Link, Document Cited by: §1.
  13. D. Bouneffouf and I. Rish (2019) A survey on practical applications of multi-armed and contextual bandits. CoRR abs/1904.10040. External Links: Link, 1904.10040 Cited by: §1.
  14. D. Bouneffouf, S. Upadhyay and Y. Khazaeni (2020) Contextual bandit with missing rewards. CoRR abs/2007.06368. External Links: Link, 2007.06368 Cited by: §1.
  15. D. Bouneffouf (2020) Online learning with corrupted context: corrupted contextual bandits. CoRR abs/2006.15194. External Links: Link, 2006.15194 Cited by: §1.
  16. M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam and P. Vandergheynst (2017) Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine 34 (4), pp. 18–42. Cited by: §2.1.
  17. O. Chapelle, B. Scholkopf and A. Zien (2009) Semi-supervised learning. IEEE Transactions on Neural Networks 20 (3), pp. 542–542. Cited by: §1, §3.1.
  18. W. Chu, L. Li, L. Reyzin and R. E. Schapire (2011) Contextual bandits with linear payoff functions.. In AISTATS, G. J. Gordon, D. B. Dunson and M. Dudik (Eds.), JMLR Proceedings, Vol. 15, pp. 208–214. External Links: Link Cited by: §1.
  19. M. Defferrard, X. Bresson and P. Vandergheynst (2016) Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems, pp. 3844–3852. Cited by: §2.1, §3.2, §4.
  20. P. Gajane, T. Urvoy and E. Kaufmann (2016) Corrupt bandits. EWRL. Cited by: §1.
  21. M. Henaff, J. Bruna and Y. LeCun (2015) Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163. Cited by: §2.1, §3.2.
  22. D. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §4.
  23. T. N. Kipf and M. Welling (2016) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Cited by: §1, §2.1, §4.
  24. A. Krizhevsky, I. Sutskever and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105. Cited by: §2.1.
  25. J. Langford and T. Zhang (2008) The epoch-greedy algorithm for multi-armed bandits with side information. In Advances in neural information processing systems, pp. 817–824. Cited by: §2.2.
  26. L. Li, W. Chu, J. Langford and R. E. Schapire (2010) A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th international conference on World wide web, WWW ’10, USA, pp. 661–670. Cited by: §1, §2.2, §3.3.
  27. B. Lin, D. Bouneffouf, G. A. Cecchi and I. Rish (2018) Contextual bandit with adaptive feature extraction. In 2018 IEEE International Conference on Data Mining Workshops, ICDM Workshops, Singapore, Singapore, November 17-20, 2018, H. Tong, Z. J. Li, F. Zhu and J. Yu (Eds.), pp. 937–944. External Links: Link, Document Cited by: §1.
  28. B. Lin, G. Cecchi, D. Bouneffouf, J. Reinen and I. Rish (2020) Unified models of human behavioral agents in bandits, contextual bandits and rl. arXiv preprint arXiv:2005.04544. Cited by: §1.
  29. L. v. d. Maaten and G. Hinton (2008) Visualizing data using t-sne. Journal of machine learning research 9 (Nov), pp. 2579–2605. Cited by: §4.
  30. M. Seeger (2000) Learning with labeled and unlabeled data. Technical report Cited by: §1.
  31. A. Sharabiani, A. Bress, E. Douzali and H. Darabi (2015) Revisiting warfarin dosing using machine learning techniques. Computational and mathematical methods in medicine 2015. Cited by: §4.
  32. D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega and P. Vandergheynst (2013) The emerging field of signal processing on graphs: extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine 30 (3), pp. 83–98. Cited by: §2.1.
  33. M. Valko, B. Kveton, L. Huang and D. Ting (2012) Online semi-supervised learning on quantized graphs. arXiv preprint arXiv:1203.3522. Cited by: §1.
  34. B. Yver (2009-10) Online semi-supervised learning: application to dynamic learning from radar data. In 2009 International Radar Conference ”Surveillance for a Safer World” (RADAR 2009), pp. 1–6. External Links: ISSN 1097-5764 Cited by: §1.
  35. X. Zhu, Z. Ghahramani and J. D. Lafferty (2003) Semi-supervised learning using gaussian fields and harmonic functions. In Proceedings of the 20th International conference on Machine learning (ICML-03), pp. 912–919. Cited by: §1.