Learning Model Bias

Jonathan Baxter
Department of Computer Science
Royal Holloway College, University of London
jon@dcs.rhbnc.ac.uk
Abstract

In this paper the problem of learning appropriate domain-specific bias is addressed. It is shown that this can be achieved by learning many related tasks from the same domain, and a theorem is given bounding the number of tasks that must be learnt. A corollary of the theorem is that if the tasks are known to possess a common internal representation or preprocessing then the number of examples required per task for good generalisation when learning $n$ tasks simultaneously scales like $O\left(a + \frac{b}{n}\right)$, where $O(a)$ is a bound on the minimum number of examples required to learn a single task, and $O(a + b)$ is a bound on the number of examples required to learn each task independently. An experiment providing strong qualitative support for the theoretical results is reported.

1 Introduction

It has been argued (see [6]) that the main problem in machine learning is the biasing of a learner’s hypothesis space sufficiently well to ensure good generalisation from a small number of examples. Once suitable biases have been found the actual learning task is relatively trivial. Existing methods of bias generally require the input of a human expert in the form of heuristics, hints [1], domain knowledge, etc. Such methods are clearly limited by the accuracy and reliability of the expert’s knowledge and also by the extent to which that knowledge can be transferred to the learner. Here I attempt to solve some of these problems by introducing a method for automatically learning the bias.

The central idea is that in many learning problems the learner is typically embedded within an environment or domain of related learning tasks and that the bias appropriate for a single task is likely to be appropriate for other tasks within the same environment. A simple example is the problem of handwritten character recognition. A preprocessing stage that identifies and removes any (small) rotations, dilations and translations of an image of a character will be advantageous for recognising all characters. If the set of all individual character recognition problems is viewed as an environment of learning tasks, this preprocessor represents a bias that is appropriate to all tasks in the environment. It is likely that there are many other currently unknown biases that are also appropriate for this environment. We would like to be able to learn these automatically.

Bias that is appropriate for all tasks must be learnt by sampling from many tasks. If only a single task is learnt then the bias extracted is likely to be specific to that task. For example, if a network is constructed as in figure 1 and the output nodes are simultaneously trained on many similar problems, then the hidden layers are more likely to be useful in learning a novel problem of the same type than if only a single problem is learnt. In the rest of this paper I develop a general theory of bias learning based upon the idea of learning multiple related tasks. The theory shows that a learner’s generalisation performance can be greatly improved by learning related tasks and that if sufficiently many tasks are learnt the learner’s bias can be extracted and used to learn novel tasks.

Other authors who have empirically investigated the idea of learning multiple related tasks include [5] and [8].

2 Learning Bias

For the sake of argument I consider learning problems that amount to minimizing the mean squared error of a function $h$ over some training set $z$. A more general formulation based on statistical decision theory is given in [3]. Thus, it is assumed that the learner receives a training set $z = \{(x_1, y_1), \ldots, (x_m, y_m)\}$ of (possibly noisy) input-output pairs, drawn according to a probability distribution $P$ on $X \times Y$ ($X$ being the input space and $Y$ being the output space), and searches through its hypothesis space $\mathcal{H}$ for a function $h \colon X \to Y$ minimizing the empirical error,

$$\hat{E}(h, z) := \frac{1}{m} \sum_{i=1}^{m} \left( h(x_i) - y_i \right)^2. \qquad (1)$$

The true error, or generalisation error, of $h$ is the expected error under $P$:

$$E(h) := \int_{X \times Y} \left( h(x) - y \right)^2 \, dP(x, y). \qquad (2)$$

The hope of course is that an $h$ with a small empirical error on a large enough training set will also have a small true error, i.e. that it will generalise well.
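To make the two error measures concrete, here is a minimal Python sketch (the task and all names are illustrative, not from the paper): the empirical error (1) is an average over the training set, while the true error (2) is approximated on a large fresh sample from $P$.

```python
import numpy as np

def empirical_error(h, z):
    """Equation (1): mean squared error of hypothesis h on z = [(x, y), ...]."""
    return np.mean([(h(x) - y) ** 2 for x, y in z])

# Hypothetical task: y = 2x + noise, with x drawn uniformly from [-1, 1].
rng = np.random.default_rng(0)
def sample(m):
    x = rng.uniform(-1, 1, size=m)
    return list(zip(x, 2 * x + rng.normal(scale=0.1, size=m)))

h = lambda x: 2 * x
print(empirical_error(h, sample(50)))      # empirical error on a small training set
print(empirical_error(h, sample(100000)))  # large-sample estimate of the true error (2)
```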

I model the environment of the learner as a pair $(\mathcal{P}, Q)$, where $\mathcal{P}$ is a set of learning tasks and $Q$ is a probability measure on $\mathcal{P}$. The learner is now supplied not with a single hypothesis space $\mathcal{H}$ but with a hypothesis space family $\mathbb{H} = \{\mathcal{H}\}$. Each $\mathcal{H} \in \mathbb{H}$ represents a different bias the learner has about the environment. For example, one $\mathcal{H}$ may contain functions that are very smooth, whereas another might contain more wiggly functions. Which hypothesis space is best will depend on the kinds of functions in the environment. To determine the best $\mathcal{H}$ for $(\mathcal{P}, Q)$, we provide the learner not with a single training set but with $n$ such training sets $z_1, \ldots, z_n$. Each $z_i$ is generated by first sampling from $\mathcal{P}$ according to $Q$ to give a task $P_i$, and then sampling $m$ times from $X \times Y$ according to $P_i$ to give $z_i = \{(x_{i1}, y_{i1}), \ldots, (x_{im}, y_{im})\}$. The learner searches for the hypothesis space $\mathcal{H} \in \mathbb{H}$ with minimal empirical error on $z_1, \ldots, z_n$, where this is defined by

$$\hat{E}^*(\mathcal{H}, \mathbf{z}) := \frac{1}{n} \sum_{i=1}^{n} \inf_{h \in \mathcal{H}} \hat{E}(h, z_i), \qquad (3)$$

where $\mathbf{z} = (z_1, \ldots, z_n)$.

The hypothesis space $\mathcal{H} \in \mathbb{H}$ with smallest empirical error is the one that is best able to learn the $n$ data sets on average.
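As a sketch of this selection criterion, consider the simple case where every hypothesis space in the family is a finite list of functions, so the infimum in (3) becomes a minimum (all names here are illustrative):

```python
import numpy as np

def empirical_error(h, z):
    """Equation (1): mean squared error of h on z = [(x, y), ...]."""
    return np.mean([(h(x) - y) ** 2 for x, y in z])

def family_empirical_error(H, training_sets):
    """Equation (3): average over the n tasks of the best error achievable in H."""
    return np.mean([min(empirical_error(h, z) for h in H) for z in training_sets])

def choose_hypothesis_space(family, training_sets):
    """Pick the H in the family minimising (3)."""
    return min(family, key=lambda H: family_empirical_error(H, training_sets))
```

For continuous hypothesis spaces the inner minimisation is replaced by an actual training procedure, as in the network of figure 1.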

There are two ways of measuring the true error of a bias learner. The first is how well it generalises on the $n$ tasks $P_1, \ldots, P_n$ used to generate the training sets. Assuming that in the process of minimising (3) the learner generates $n$ functions $h_1, \ldots, h_n$ with minimal empirical error on their respective training sets (this assumes the infimum in (3) is attained), the learner’s true error is measured by:

$$E^*(h_1, \ldots, h_n) := \frac{1}{n} \sum_{i=1}^{n} \int_{X \times Y} \left( h_i(x) - y \right)^2 \, dP_i(x, y). \qquad (4)$$

Note that in this case the learner’s empirical error is given by $\frac{1}{n} \sum_{i=1}^{n} \hat{E}(h_i, z_i)$. The second way of measuring the generalisation error of a bias learner is to determine how good $\mathcal{H}$ is for learning novel tasks drawn from the environment $(\mathcal{P}, Q)$:

$$E^*(\mathcal{H}) := \int_{\mathcal{P}} \inf_{h \in \mathcal{H}} E(h, P) \, dQ(P), \qquad (5)$$

where $E(h, P) := \int_{X \times Y} (h(x) - y)^2 \, dP(x, y)$.

A learner that has found an $\mathcal{H}$ with a small value of (5) can be said to have learnt to learn the tasks in $\mathcal{P}$ in general. To state the bounds ensuring these two types of generalisation, a few more definitions must be introduced.
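When the environment can be simulated, both measures (4) and (5) can be estimated by Monte Carlo. The following sketch assumes finite hypothesis spaces; `sample_example` and `sample_task` are hypothetical stand-ins for sampling from a task $P$ and from $Q$ respectively.

```python
import numpy as np

def true_error(h, sample_example, trials=10000):
    """Monte Carlo estimate of the true error of h on one task,
    where sample_example() draws a single (x, y) pair from that task."""
    return np.mean([(h(x) - y) ** 2
                    for x, y in (sample_example() for _ in range(trials))])

def multitask_true_error(hs, tasks):
    """Equation (4): average true error of h_1..h_n, each on its own task."""
    return np.mean([true_error(h, t) for h, t in zip(hs, tasks)])

def novel_task_error(H, sample_task, task_trials=100):
    """Equation (5): expected best-achievable error in H on a fresh task,
    where sample_task() draws a task (an example sampler) from Q."""
    return np.mean([min(true_error(h, t) for h in H)
                    for t in (sample_task() for _ in range(task_trials))])
```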

Figure 1: Net for learning multiple tasks. Input $x$ from training set $z_i$ is propagated forwards through the internal representation $f$ and then only through the output network $g_i$. The error is similarly backpropagated only through the output network $g_i$ and then through $f$. Weight updates are performed after all $n$ training sets have been presented.
Definition 1.

Let $\mathbb{H} = \{\mathcal{H}\}$ be a hypothesis space family. For any $h \colon X \to Y$, define a map $h \colon X \times Y \to \mathbb{R}$ by $h(x, y) := (h(x) - y)^2$. Note the abuse of notation: $h$ stands for two different functions depending on its argument. Given a sequence of $n$ functions $\mathbf{h} = (h_1, \ldots, h_n)$, let $\mathbf{h}$ also denote the function $(x_1, y_1, \ldots, x_n, y_n) \mapsto \frac{1}{n} \sum_{i=1}^{n} h_i(x_i, y_i)$. Let $\mathcal{H}^n$ be the set of all such functions where the $h_i$ are all chosen from $\mathcal{H}$. Let $\mathbb{H}^n := \bigcup_{\mathcal{H} \in \mathbb{H}} \mathcal{H}^n$. For each $\mathcal{H}$ define $\mathcal{H}^* \colon \mathcal{P} \to \mathbb{R}$ by $\mathcal{H}^*(P) := \inf_{h \in \mathcal{H}} E(h, P)$ and let $\mathbb{H}^* := \{\mathcal{H}^* \colon \mathcal{H} \in \mathbb{H}\}$.

Definition 2.

Given a set $\mathcal{F}$ of functions mapping any space $Z$ into $\mathbb{R}$, and any probability measure $P$ on $Z$, define the pseudo-metric $d_P$ on $\mathcal{F}$ by

$$d_P(f, g) := \int_Z |f - g| \, dP.$$

Denote the size of the smallest $\varepsilon$-cover of $(\mathcal{F}, d_P)$ by $\mathcal{N}(\varepsilon, \mathcal{F}, d_P)$. Define the $\varepsilon$-capacity of $\mathcal{F}$ by

$$\mathcal{C}(\varepsilon, \mathcal{F}) := \sup_{P} \mathcal{N}(\varepsilon, \mathcal{F}, d_P),$$

where the supremum is over all discrete probability measures $P$ on $Z$.

Definition 2 will be used to measure the $\varepsilon$-capacity of the spaces $\mathbb{H}^n$ and $\mathbb{H}^*$ from definition 1, where the latter is viewed as a set of functions mapping $\mathcal{P}$ into $\mathbb{R}$.
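As a rough illustration of Definition 2 (not the construction used in the proofs), the covering number under the empirical measure on a fixed sample can be upper-bounded by a greedy $\varepsilon$-cover; the supremum over all discrete measures is not attempted here, and the class is illustrative:

```python
import numpy as np

def greedy_cover_size(values, eps):
    """values[i, j] = f_i(z_j): function i evaluated at sample point j.
    Uses d_P(f, g) = mean |f - g| under the empirical measure on the sample;
    the greedy cover upper-bounds the smallest eps-cover of Definition 2."""
    cover = []
    for f in values:
        if all(np.mean(np.abs(f - c)) > eps for c in cover):
            cover.append(f)
    return len(cover)

# Hypothetical class: threshold functions x -> 1[x > t] on 200 sample points.
rng = np.random.default_rng(1)
xs = rng.uniform(0, 1, size=200)
values = np.stack([(xs > t).astype(float) for t in np.linspace(0, 1, 100)])
print(greedy_cover_size(values, eps=0.1))  # roughly 1/eps thresholds survive
```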

The following theorem bounds the number of tasks and examples per task required to ensure that the hypothesis space learnt by a bias learner will, with high probability, contain good solutions to novel tasks in the same environment. (The bounds in theorem 1 can be improved from $O(1/\varepsilon^2)$ to $O(1/\varepsilon)$ if all $\mathcal{H} \in \mathbb{H}$ are convex and the error is the squared loss [7].)

Theorem 1.

Let the $n$ training sets $z_1, \ldots, z_n$ be generated by sampling $n$ times from the environment $\mathcal{P}$ according to $Q$ to give $P_1, \ldots, P_n$, and then sampling $m$ times from each $P_i$ to generate $z_i$. Let $\mathbb{H} = \{\mathcal{H}\}$ be a hypothesis space family and suppose a learner chooses $\mathcal{H} \in \mathbb{H}$ minimizing (3) on $\mathbf{z} = (z_1, \ldots, z_n)$. For all $\varepsilon > 0$ and $0 < \delta < 1$, if (up to constants; see [3] for the exact values)

$$n \ge \frac{1}{\varepsilon^2} \ln \frac{\mathcal{C}(\varepsilon, \mathbb{H}^*)}{\delta} \qquad \text{and} \qquad m \ge \frac{1}{n \varepsilon^2} \ln \frac{\mathcal{C}(\varepsilon, \mathbb{H}^n)}{\delta},$$

then with probability at least $1 - \delta$ the learner’s hypothesis space will satisfy

$$E^*(\mathcal{H}) \le \hat{E}^*(\mathcal{H}, \mathbf{z}) + \varepsilon.$$

The bound on $m$ in theorem 1 is also the number of examples required per task to ensure generalisation of the first kind mentioned above. That is, it is the number of examples required in each data set $z_i$ to ensure good generalisation on average across all $n$ tasks when using the hypothesis space family $\mathbb{H}$. If we let $m(n)$ be the number of examples required per task to ensure that $E^*(\mathcal{H}) \le \hat{E}^*(\mathcal{H}, \mathbf{z}) + \varepsilon$ with probability at least $1 - \delta$, for some fixed $\varepsilon$ and $\delta$, then

$$G_n(\mathbb{H}) := \frac{m(1)}{m(n)}$$

represents the advantage in learning $n$ tasks as opposed to one task (the ordinary learning scenario). Call $G_n(\mathbb{H})$ the $n$-task gain of $\mathbb{H}$. Using the fact [3] that

$$\mathcal{C}(\varepsilon, \mathbb{H}^n) \le \left[ \mathcal{C}(\varepsilon, \mathbb{H}^1) \right]^n$$

and the formula for $m$ from theorem 1, we have

$$1 \le G_n(\mathbb{H}) \le n.$$

Thus, at least in the worst-case analysis here, learning $n$ tasks in the same environment can result in anything from no gain at all to an $n$-fold reduction in the number of examples required per task. In the next section a very intuitive analysis of the conditions leading to the extreme values of $G_n(\mathbb{H})$ is given for the situation where an internal representation is being learnt for the environment. I will also say more about the bound on the number of tasks ($n$) in theorem 1.

3 Learning Internal Representations with Neural Networks

In figure 1, $n$ tasks are being learnt using a common representation $f$. In this case $\mathbb{H}^n$ is the set of all possible networks formed by choosing the weights in the representation and output networks; $\mathbb{H}^1$ is the same space with a single output node. If the $n$ tasks were learnt independently (i.e. without a common representation) then each task would use its own copy of $f$, i.e. we wouldn’t be forcing the tasks to all use the same representation.
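A minimal sketch of this architecture, assuming PyTorch (sizes and names are illustrative, not from the paper): the $n$ output networks share one representation network and, as described in figure 1, the weights are updated only after the error has been accumulated over all $n$ training sets.

```python
import torch
import torch.nn as nn

n_tasks, d_in, d_rep = 4, 10, 8

f = nn.Sequential(nn.Linear(d_in, 16), nn.Sigmoid(),
                  nn.Linear(16, d_rep), nn.Sigmoid())           # shared representation f
g = nn.ModuleList(nn.Linear(d_rep, 1) for _ in range(n_tasks))  # output networks g_1..g_n
opt = torch.optim.SGD([*f.parameters(), *g.parameters()], lr=0.1)

def train_step(training_sets):
    """training_sets[i] = (x, y) tensors for task i; one update of all weights."""
    opt.zero_grad()
    loss = sum(((g_i(f(x)) - y) ** 2).mean()
               for g_i, (x, y) in zip(g, training_sets)) / n_tasks  # quantity minimised in (3)
    loss.backward()   # error flows back through each g_i and then through f
    opt.step()
    return loss.item()
```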

Let $W$ be the total number of weights in the representation network and $w$ be the number of weights in an individual output network. Suppose also that all the nodes in each network are Lipschitz bounded (a node is Lipschitz bounded if there exists a constant $K$ such that $|\sigma(u) - \sigma(u')| \le K |u - u'|$ for all $u, u'$, where $\sigma$ is the node’s transfer function; note that this rules out threshold nodes, but sigmoid squashing functions are okay as long as the weights are bounded). Then it can be shown [3] that $\ln \mathcal{C}(\varepsilon, \mathbb{H}^n) = O\left( (W + nw) \ln \frac{1}{\varepsilon} \right)$ and $\ln \mathcal{C}(\varepsilon, \mathbb{H}^*) = O\left( W \ln \frac{1}{\varepsilon} \right)$. Substituting these bounds into theorem 1 shows that to generalise well on average on $n$ tasks using a common representation requires $m = O\left( \frac{1}{\varepsilon^2} \left( w + \frac{W}{n} \right) \ln \frac{1}{\varepsilon} \right)$ examples of each task. In addition, if $n = O\left( \frac{W}{\varepsilon^2} \ln \frac{1}{\varepsilon} \right)$ then with high probability the resulting representation will be good for learning novel tasks from the same environment. Note that this bound is very large. However it results from a worst-case analysis and so is highly likely to be beaten in practice. This is certainly borne out by the experiment in the next section.

The learning gain satisfies (approximately) $G_n(\mathbb{H}) \approx \frac{w + W}{w + W/n}$. Thus, if $W \ll w$, $G_n(\mathbb{H}) \approx 1$, while if $W \gg w$ then $G_n(\mathbb{H}) \approx n$. This is perfectly intuitive: when $W \ll w$ the representation network is hardly doing any work, most of the power of the network is in the output networks and hence the tasks are effectively being learnt independently. However, if $W \gg w$ then the representation network dominates; there is very little extra learning to be done for the individual tasks once the representation is known, and so each example from every task is providing full information to the representation network. Hence the gain of $n$.
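For concreteness, the approximate gain formula above can be tabulated for some made-up weight counts; the gain tracks $n$ only while $W/n$ still dominates $w$:

```python
W, w = 1000, 10   # illustrative weight counts: large representation, small output nets
for n in (1, 5, 25, 125):
    print(n, round((w + W) / (w + W / n), 1))
# 1 -> 1.0, 5 -> 4.8, 25 -> 20.2, 125 -> 56.1: close to n until n ~ W/w
```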

Note that once a representation has been learnt, the sampling burden for learning a novel task will be reduced to $O(w)$ because only the output network has to be learnt. If this theory applies to human learning then the fact that we are able to learn words, faces, characters, etc. with relatively few examples (a single example in the case of faces) indicates that our “output networks” are very small. Given our large ignorance concerning an appropriate representation, the representation network for learning in these domains would have to be large, so we would expect to see an $n$-task gain of nearly $n$ for learning within these domains.
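A sketch of that reduced problem, again assuming PyTorch (here $f$ stands in for a representation learnt earlier): freezing $f$ leaves only the $O(w)$ output-network weights to be estimated from the novel task’s examples.

```python
import torch
import torch.nn as nn

d_in, d_rep = 10, 8
f = nn.Sequential(nn.Linear(d_in, 16), nn.Sigmoid(),
                  nn.Linear(16, d_rep), nn.Sigmoid())   # stands in for a learnt representation
for p in f.parameters():
    p.requires_grad_(False)                             # representation is fixed

g_new = nn.Linear(d_rep, 1)                             # only these w weights are learnt
opt = torch.optim.SGD(g_new.parameters(), lr=0.1)

def novel_task_step(x, y):
    opt.zero_grad()
    loss = ((g_new(f(x)) - y) ** 2).mean()
    loss.backward()                                     # gradient stops at the frozen f
    opt.step()
    return loss.item()
```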

4 Experiment: Learning Symmetric Boolean Functions

In this section the results of an experiment are reported in which a neural network was trained to learn symmetric Boolean functions (a symmetric Boolean function is one that is invariant under interchange of its inputs, or equivalently, one that depends only on the number of “1”s in its input, e.g. parity). The network was the same as the one in figure 1 except that the output networks had no hidden layers. The input space was restricted to include only those inputs with between one and four ones. The functions in the environment of the network consisted of all possible symmetric Boolean functions over the input space, except the trivial “constant 0” and “constant 1” functions. Training sets were generated by first choosing $n$ functions (with replacement) uniformly from the fourteen possible, and then choosing $m$ input vectors for each function, each generated by choosing a random number between 1 and 4 and placing that many 1’s at random in the input vector. The $n$ training sets were learnt by minimising the empirical error (3) using the backpropagation algorithm as outlined in figure 1. Separate simulations were performed with $n$ and $m$ each ranging over a grid of values, with $n$ increasing in steps of four. Further details of the experimental procedure may be found in [3], chapter 4.
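The environment and sampling scheme can be reproduced in a few lines (a sketch; the 10-dimensional input space is an assumption for illustration, as the input dimension is not restated here, while the fourteen functions and the one-to-four-ones sampling are as described above):

```python
import itertools, random

d = 10  # assumed input dimension (illustrative)

# A symmetric Boolean function on the restricted input space is a truth
# table over the categories 1..4 ("number of ones"); dropping the two
# constant tables leaves the 14 functions of the environment.
tables = [t for t in itertools.product((0, 1), repeat=4) if len(set(t)) > 1]
assert len(tables) == 14

def sample_input():
    k = random.randint(1, 4)              # number of 1's, uniform on 1..4
    x = [0] * d
    for i in random.sample(range(d), k):  # place k ones at random positions
        x[i] = 1
    return x

def sample_training_sets(n, m):
    """n tasks drawn with replacement, m examples each."""
    fs = [random.choice(tables) for _ in range(n)]
    return [[(x, f[sum(x) - 1]) for x in (sample_input() for _ in range(m))]
            for f in fs]
```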

Once the network had successfully learnt the $n$ training sets, its generalisation ability was tested on all $n$ functions used to generate the training sets. In this case the generalisation error (equation (4)) could be computed exactly, by calculating the network’s output (for all $n$ functions) on each of the input vectors in the restricted input space. The generalisation error as a function of $n$ and $m$ is plotted in figure 2 for two independent sets of simulations. Both simulations support the theoretical result that the number of examples per task required for good generalisation decreases with increasing $n$ (cf. theorem 1).

Figure 2: Learning surfaces for two independent simulations.

For training sets that led to a sufficiently small generalisation error, the representation network was extracted and tested for its true error, where this is defined as in equation (5) (the hypothesis space $\mathcal{H}$ is the set of all networks formed by attaching any output network to the fixed representation network). Although there is insufficient space to show the representation error here (see [3] for the details), it was found that the representation error monotonically decreased with the number of tasks learnt, verifying the theoretical conclusions.

The representation’s output for all inputs is shown in figure 3 for three training sets of increasing size. All outputs corresponding to inputs from the same category (i.e. the same number of ones) are labelled with the same symbol. The network in the single-task case ($n = 1$) generalised perfectly, but the resulting representation does not capture the symmetry in the environment and also does not distinguish the inputs with 2, 3 and 4 “1”s (because the function learnt didn’t), showing that learning a single function is not sufficient to learn an appropriate representation. For the intermediate sample the representation’s behaviour has improved (the inputs with differing numbers of 1’s are now well separated, but they are still spread around a lot), and for the largest sample it is perfect.

Figure 3: Plots of the output of a representation generated from the indicated sample.

As well as reducing the sampling burden for the tasks in the training set, a representation learnt on sufficiently many tasks should be good for learning novel tasks, greatly reducing the number of examples required for each new task. This too was verified experimentally, although there is insufficient space to present the results here (see [3]).

5 Conclusion

I have introduced a formal model of bias learning and shown that (under mild restrictions) a learner can sample sufficiently many times from sufficiently many tasks to learn bias that is appropriate for the entire environment. In addition, for appropriate environments the number of examples required per task when learning $n$ tasks simultaneously was shown to be upper bounded by $O\left(a + \frac{b}{n}\right)$, where $O(a + b)$ bounds the number required to learn each task independently. See [2] for an analysis of bias learning within an information-theoretic framework, which leads to an exact $\left(a + \frac{b}{n}\right)$-type bound.

References

  • [1] Y. S. Abu-Mostafa. Learning from Hints in Neural Networks. Journal of Complexity, 6:192–198, 1989.
  • [2] J. Baxter. A Bayesian Model of Bias Learning. Submitted to COLT 1996, 1995.
  • [3] J. Baxter. Learning Internal Representations. PhD thesis, Department of Mathematics and Statistics, The Flinders University of South Australia, 1995. Draft copy in Neuroprose Archive under “/pub/neuroprose/Thesis/baxter.thesis.ps.Z”.
  • [4] J. Baxter. Learning Internal Representations. In Proceedings of the Eighth International Conference on Computational Learning Theory, Santa Cruz, California, 1995. ACM Press.
  • [5] R. Caruana. Learning Many Related Tasks at the Same Time with Backpropagation. In Advances in Neural Information Processing Systems 5, 1993.
  • [6] S. Geman, E. Bienenstock, and R. Doursat. Neural networks and the bias/variance dilemma. Neural Comput., 4:1–58, 1992.
  • [7] W. S. Lee, P. L. Bartlett, and R. C. Williamson. Sample Complexity of Agnostic Learning with Squared Loss. In preparation, 1995.
  • [8] T. M. Mitchell and S. Thrun. Learning One More Thing. Technical Report CMU-CS-94-184, CMU, 1994.