Descriptions of Objectives and Processes of Mechanical Learning

(Footnote: Great thanks for the wholehearted support of my wife. Thanks to the Internet and to the contributors of research content on the Internet.)

Chuyu Xiong
Independent researcher, New York, USA
Email: chuyux99@gmail.com
July 22, 2019
Abstract

In [1], we introduced mechanical learning and proposed 2 approaches to mechanical learning. Here, we follow one such approach to describe the objects and the processes of learning. We discuss 2 kinds of patterns: objective and subjective patterns. Subjective patterns are crucial for a learning machine. We prove that for any objective pattern we can find a proper subjective pattern, based upon the fewest base patterns, that expresses the objective pattern well. An X-form is an algebraic expression for a subjective pattern. The collection of X-forms forms the internal representation space, which is the center of a learning machine. We discuss learning by teaching and learning without teaching. We define data sufficiency by X-forms. We then discuss some learning strategies. We show that, in each strategy, with sufficient data and with certain capabilities, a learning machine can indeed learn any pattern (i.e. it is a universal learning machine). In the appendix, with the knowledge of learning machines, we try to view deep learning from a different angle, i.e. through its internal representation space and its learning dynamics.

Keywords: Mechanical learning, learning machine, objective and subjective patterns, X-form, universal learning, learning by teaching, internal representation space, data sufficiency, learning strategy, squeeze to higher, embed to parameter space

If you want to know the taste of a pear, you must change the pear by eating it yourself. ……
All genuine knowledge originates in direct experience.
       — Mao Zedong

But, though all our knowledge begins with experience, it by no means follows that
all arises out of experience.
       — Immanuel Kant

Our problem, …… is to explain how the transition is made from a lower level of knowledge
to a level that is judged to be higher.
       — Jean Piaget

1 Introduction

Mechanical learning is a computing system that is based on a simple set of fixed rules (so-called mechanical), and that can modify itself according to incoming data (so-called learning). A learning machine is a system that realizes mechanical learning.

In [1], we introduced mechanical learning and discussed some basic aspects of it. Here, we continue the discussion of mechanical learning. As we proposed in [1], there are naturally 2 ways to go: to directly realize one learning machine, or to describe well what mechanical learning is really doing. Here, we do not try to design a specific learning machine; instead, we focus on describing mechanical learning, specifically, the objects and the processes of learning, and related properties. Again, the most important assumption is mechanical, i.e. the system must follow a set of simple and fixed rules. By imposing such a requirement on learning, we can go deeper and reveal more interesting properties of learning.

In section 2, we discuss learning machines in more detail. We show one useful simplification: an N-M learning machine can be reduced to M independent N-1 learning machines. This simplification helps us a lot. We also define the level 1 learning machine in section 2. This concept clarifies a lot of confusion.

The driving force of a learning machine is its incoming data, and the incoming data forms patterns. Thus, we need to understand patterns first. In section 3, we discuss patterns and examples. In the process of understanding patterns, the question of what is objective and what is subjective naturally arises. In fact, these issues are crucial to a learning machine. Objective patterns and their basic operators are straightforward. In order to understand subjective patterns, we discuss how a learning machine perceives and processes patterns. Such discussions lead us to subjective patterns and basic operators on them. We introduce the X-form for subjective expressions, which will play a central role in our later discussions. We prove that for any objective pattern we can find a proper X-form, based upon the fewest base patterns, that expresses the objective pattern well.

Learning by teaching, i.e. learning driven by a well-designed teaching sequence (a special kind of data sequence), is a much simpler and more effective form of learning. Though learning by teaching is only available in very rare cases, it is very instructive to discuss it first. This is what we do in section 4. We show that if a learning machine has certain capabilities, we can construct a teaching sequence so that, driven by such a teaching sequence, the machine learns effectively. So, with these capabilities, we have a universal learning machine.

From learning by teaching, we get the insight that the most crucial part of learning is abstraction from lower to higher. We try to apply this insight to learning without teaching. In section 5, we first define mechanical learning without teaching. Then we introduce the internal representation space, which is the center of a learning machine and is best expressed by X-forms. The internal representation space is actually where learning happens. We write down the formulation of learning dynamics, which gives a clear picture of how data drives learning. However, one big issue is how much data is enough to drive the learning to reach the target. With the help of X-forms and sub-forms, we define data sufficiency: sufficient to support an X-form, and sufficient to bound an X-form. Such sufficiency gives a framework for understanding the data used to drive learning. We then show that with a proper learning strategy, with sufficient data, and with certain learning capabilities, a learning machine indeed can learn. We demonstrate 3 learning strategies: embedding into a parameter space, squeezing to higher abstraction from inside, and squeezing to higher abstraction from inside and outside. We show that the first learning strategy is actually what deep learning is using (see the Appendix for details). And, we show that with the other 2 learning strategies and certain learning capabilities, a learning machine can learn any pattern, i.e. it is a universal learning machine. Squeezing to higher abstraction and more generalization is a strategy that we invent here. We believe that this strategy will work well for many learning tasks. We need to do more work in this direction.

In section 6, we offer more thoughts about learning machines. We will continue to work in these directions. In section 7, we briefly discuss some issues in designing a learning machine.

In the Appendix, we view deep learning (restricted to stacks of RBMs) from our point of view, i.e. the internal representation space. We start the discussion from the simplest case, i.e. the 2-1 RBM, then the 3-1 RBM, the N-1 RBM, the N-M RBM, stacks of RBMs, and deep learning. In this way, it becomes clear that deep learning is using the learning strategy of embedding a group of X-forms into a parameter space, which we discuss in section 5.

As in [1], and for the same reason, here we restrict ourselves to spatial learning and do not consider temporal learning.

2 Learning Machine

IPU – Information Processing Unit
We have discussed mechanical learning in [1]. A learning machine is a concrete realization of mechanical learning. We briefly recall it here. See the illustration of an IPU (Information Processing Unit):


Fig. 1. Illustration of an N-M IPU (Information Processing Unit)

One N-M IPU has an input space (N bits) and an output space (M bits), and it processes input to output. If the processing adapts according to the input and to feedback on the output, and such adapting is governed by a set of simple and fixed rules, we call such adapting mechanical learning, and such an IPU a learning machine. Notice the phrase "a set of simple and fixed rules". This is a strong restriction. Mostly, we use this phrase to rule out human intervention. And, we point this out: since the set of adapting rules is fixed, we can reasonably think of the adapting rules as built inside the learning machine at setup.

We will try to describe learning machines well. First, we put down one simple observation.

Theorem 2.1

One N-M IPU is equivalent to M N-1 IPUs.

Proof: The output space of the N-M IPU is M-dimensional, so we can write its processing as (m_1, m_2, ..., m_M), where m_k is obtained by projecting the output to its k-th component; each m_k is an N-1 IPU. This tells us that, if we have one N-M IPU, we can get M N-1 IPUs m_1, ..., m_M, so that the N-M IPU equals (m_1, m_2, ..., m_M).

On the other side, if we have M N-1 IPUs m_1, ..., m_M, we can use them to form an N-M IPU in this way: feed the same N-bit input to each m_k, and let the k-th output bit be the output of m_k, i.e. the N-M IPU is (m_1, m_2, ..., m_M).
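The decomposition and reassembly in this proof can be sketched in a few lines of Python (an illustration only; the helper names project and assemble are our own, not from [1]):

from typing import Callable, Tuple

Bits = Tuple[int, ...]

def project(ipu_nm: Callable[[Bits], Bits], k: int) -> Callable[[Bits], int]:
    # Project an N-M IPU to its k-th output bit, giving an N-1 IPU.
    return lambda x: ipu_nm(x)[k]

def assemble(ipus_n1) -> Callable[[Bits], Bits]:
    # Put M independent N-1 IPUs side by side to form an N-M IPU.
    return lambda x: tuple(m(x) for m in ipus_n1)

# Example: a 2-2 IPU whose two output bits are AND and OR of the two input bits.
ipu = lambda x: (x[0] & x[1], x[0] | x[1])
parts = [project(ipu, k) for k in range(2)]      # two 2-1 IPUs
ipu_again = assemble(parts)                      # reassembled 2-2 IPU
assert all(ipu(x) == ipu_again(x) for x in [(0, 0), (0, 1), (1, 0), (1, 1)])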

Though this theorem is very simple, it makes our discussion much simpler. Most of the time, we need only consider N-1 IPUs, which are much easier to discuss. However, this only concerns the IPU, i.e. the ability to process information. For learning, we need to consider more. See Theorem 2.4.

The purpose or target of a learning machine:
One learning machine is one IPU, i.e. it does information processing for each input and generates an output; it maps one input (an N-dim binary vector) to an M-dim binary vector. This is what a CPU does as well (more abstractly, since we do not restrict the sizes of N and M, any software without temporal effect can be thought of as one IPU).

However, a learning machine and a CPU have very different goals. A CPU is designed to distinguish an input from any other, even if there is only one bit of difference, i.e. it works bit-wise. Yet, an IPU and a learning machine are not designed for such a purpose. An IPU and a learning machine are designed to distinguish patterns. A learning machine should generate different outputs for different patterns, but the same output for different inputs of the same pattern. That is to say, the target of a learning machine is to learn to distinguish a group of base patterns and how to process them. Thus, we need to understand patterns. Actually, understanding patterns is the most essential job, and it is done in the next section.

Data
The purpose of a learning machine is to learn, i.e. to modify its information processing. However, we would emphasize that for mechanical learning, learning is driven by the data fed into it.

Definition 2.2 (Data Sequence)

If we have a sequence (p_1, o_1), (p_2, o_2), ..., (p_J, o_J), where each p_i is a base pattern and each o_i is either ∅ (empty) or a binary vector in the output space, we call this sequence a data sequence.

Note, o_i could be empty or a vector in the output space. If it is non-empty, it means that at that moment this vector should be the value of the output. If it is empty, it means there is no data for the output to match up with. A learning machine should be able to learn even if o_i is empty. Of course, with values of output, the learning is often easier and faster.
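As a small illustration of this data structure (a sketch only; the tuple layout is our own choice), a data sequence can be held as a list of pairs, with None standing for empty output feedback:

from typing import List, Optional, Tuple

Bits = Tuple[int, ...]
# One item of a data sequence: a base pattern plus either an output vector or None (empty feedback).
DataSequence = List[Tuple[Bits, Optional[Bits]]]

data: DataSequence = [
    ((1, 0, 1, 0), (1,)),   # base pattern with output feedback
    ((0, 1, 0, 1), None),   # base pattern with empty output feedback
]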

We can easily see that the data sequence is the only information source for a learning machine to modify itself. Without the information from the data sequence, the learning machine simply has no information about what to modify. A learning machine adapts itself based only on information from the data sequence.

There are 2 kinds of data sequences. One is a very well designed data sequence, i.e. we know the consequence of this data, and we can expect the outcome of learning. This is called a teaching sequence. The other kind of data sequence is not a teaching sequence. Such data sequences are just outside data that drive the learning machine (they could be random data from outside), and we do not have much knowledge about them. Clearly, in order to learn a certain target, a teaching sequence, if available, is much more efficient. However, in most cases, we just do not have a teaching sequence.

Universal Learning Machine
Naturally, we will ask: what can a learning machine learn? Can it learn anything? To address this, we need some careful definitions. Suppose we have a learning machine M. At the beginning, M has the processing f, i.e. f is one mapping from the input space (N-dim) to the output space (M-dim). As learning goes on, the processing changes to g, which is also one mapping from the input space to the output space, though a different one. This is exactly what a learning machine does: its processing adapts. We then have the following definition.

Definition 2.3 (Universal Learning Machine)

For a learning machine M, suppose its current processing is f, and g is another processing. If we have one data sequence T (which may depend on f and g), so that when we apply T to M, at the end, the processing of M becomes g, we say M can learn g starting from f. If for any given processings f and g, M can learn g starting from f, we say M is a universal learning machine.

Simply put, a universal learning machine can learn anything starting from anything. A universal learning machine is desirable. But, clearly, not all learning machines are universal. So, we will discuss what properties can make a learning machine universal.

In Theorem 2.1, we gave the relationship between N-M IPUs and N-1 IPUs. In order to discuss the relationship between N-M learning machines and N-1 learning machines, we need to introduce one property: standing for zero input. We say a learning machine M has the property of standing for zero input if M does nothing for learning, i.e. does nothing to modify its internal status, when the input is the zero vector (i.e. 0 = (0, 0, ..., 0)) and the output side value is empty. Such a property should be very reasonable and very common for a learning machine. After all, zero input means no stimulation from outside, and it is very reasonable to require that a learning machine do nothing for such input.

Theorem 2.4

If we have one N-1 universal learning machine m with the property of standing for zero input, we can use M independent copies of m to construct an N-M universal learning machine.

Proof: For simplicity and without loss of generality, we only consider the case M = 2. Now, m is an N-1 universal learning machine; let m_1 and m_2 be 2 independent copies of m. As in Theorem 2.1, we can construct an N-2 IPU in this way: (m_1, m_2).

(m_1, m_2) is surely an N-2 learning machine. We only need to show that it is a universal learning machine. That is to say, for any given processings f = (f_1, f_2) and g = (g_1, g_2), there is one data sequence such that, driven by this data sequence, (m_1, m_2) can learn g from f.

Actually, we can design a data sequence as follows: T'_1 followed by T'_2, where T'_1 is formed from T_1 together with zero inputs and empty output feedback for the second component, and T'_2 is formed from T_2 in the same way for the first component. Here T_1 is the data sequence that drives m_1 to learn g_1 from f_1, T_2 is the data sequence that drives m_2 to learn g_2 from f_2, and the zero inputs are the inputs for which, by the property of standing for zero input, a machine does nothing. Since m_1 and m_2 are universal learning machines, T_1 and T_2 indeed exist. The data sequence T'_1 followed by T'_2 is indeed the data sequence we want.

Of course, the data sequence (T'_1 followed by T'_2) is far from optimal, and not desired in practice. But, here we just show the existence of such a data sequence.

From Theorems 2.1 and 2.4, we can see that, without loss of generality, in many cases we can focus on N-1 learning machines. From now on, we will mostly discuss N-1 learning machines.

Different Levels of Learning
A learning machine modifies its processing according to the data sequence. Obviously, there is some mechanism inside the learning machine to do the learning. More specifically, this learning mechanism catches information embedded inside the data sequence and uses this information to modify the processing. But, we need to be very careful to distinguish 2 things: 1) the learning mechanism only modifies the processing, and the learning mechanism itself is not modified; 2) the learning mechanism itself is also modified. But, how do we describe these things well?

If M is a universal learning machine, then for any given 2 processings f and g, we have one data sequence T so that, starting from f, and by applying T to M, its processing becomes g. This is clear. But, consider this: somehow, we apply some other data sequence so that the processing becomes f again. Since M is universal, this is allowed. But, we ask: what if we apply the data sequence T again? What would happen? Will the processing become g again? There is no guarantee of this. Actually, for many learning machines, this is not the case. However, if it is true, it indicates this: the learning mechanism does not change as the processing changes. This would be one important property. We use the next definition to capture this property.

Definition 2.5 (Level 1 Learning Machine)

M is a universal learning machine; for any given pair of processings f and g, by definition, there is at least one data sequence T, so that, starting from f, and by applying T to M, the processing becomes g. If such a data sequence T depends only on f and g, and does not depend on any history of the processing of M, we call M a level 1 universal learning machine.

Note, following this line of thought, we can also define a level 0 learning machine, which is an IPU whose processing cannot be changed. And we can also define a level 2 learning machine, which is a learning machine whose processing can change and whose learning mechanism can change as well, but whose learning mechanism of the learning mechanism cannot be changed. We can actually follow this line to define level k learning machines, k = 3, 4, .... But we do not go in this direction. We will mostly consider level 1 learning machines.

Some Examples
Example 2.1 [Perceptron]
Perhaps the simplest learning machine is the perceptron. A perceptron is a 2-1 IPU, and it is a learning machine. However, it is not universal. As is well known, a perceptron cannot realize the XOR gate (it can only realize linearly separable processings). That is to say, no matter what, it could not learn this processing.

Example 2.2 [RBM is a learning machine] See [4] for RBMs. An N-1 RBM is one N-1 IPU. It is a learning machine as well. There could be many ways to make it learn. The most common way is the so-called Gibbs sampling method. We can see this clearly: Gibbs sampling is a simple set of rules, and the processing is modified as data is fed in. However, as we can see in the Appendix, an N-1 RBM is not universal.

Putting M independent N-1 RBMs together in the way of Theorem 2.1, we get an N-M RBM. So, an N-M RBM is one learning machine, but it is not universal.

Example 2.3 [Deep learning might be a learning machine] Deep learning is normally a stack of RBMs, see [4]. It is often formed in this way: first use data to train an RBM at each layer, then stack the different layers together, then use data to do further training. In the restricted sense, the whole deep learning procedure is not mechanical learning, since it involves a lot of human intervention. But, if we just look at the stage after the different layers are stacked together, and exclude any further human intervention, it is mechanical learning. So, in this sense, deep learning is a learning machine.

Example 2.4 [Deep learning might not be a learning machine] But, these days, deep learning is much more than stacking RBMs together and then training without human intervention. There is a lot of pruning, structure changing, and adjusting done by humans. Such learning is surely not mechanical learning. However, many properties can still be studied from the point of view of mechanical learning.

Generally, we can say that, for software to do learning, it often needs people to establish its very complicated structure and initial parameters. This establishment is not simple and fixed. But, once the software is established and is running without human intervention, such software is a learning machine.

3 Pattern, Examples, Objective and Subjective

Incoming data drives learning. But, an IPU and a learning machine do not treat data bit-wise. They treat data as patterns. So, patterns are very important to a learning machine. Everything in a learning machine revolves around patterns. Yet, pattern is also a quite confusing notion. We can view patterns from different angles and get quite different results. We can view patterns objectively, i.e. totally independently of the learning machine and its learning, and we can view patterns subjectively, i.e. quite dependently on the learning machine and its view of the pattern. It is very important that we clarify the concepts here.

Examples of Patterns  Before going to more rigorous discussions, we discuss some examples of patterns here, which could help us clarify our thoughts. The simplest patterns are 2-dim patterns.

Example 3.1 [All 2-dim Base Patterns] 2-dim patterns are so simple that we can list all the base patterns explicitly below:

PS_2 = { (0,0), (0,1), (1,0), (1,1) }

All base patterns are here: 4 base patterns in total. For example, (0,1) is a base pattern. But, besides base patterns, there are more patterns. How about this statement: "the incoming pattern is (0,0) or (0,1)"? Very clearly, what this statement describes is not in PS_2. However, equally clearly, this statement is valid, and it specifies an incoming pattern. We have solid reason to believe that the statement represents a new pattern that is not in the base pattern space. So, patterns should be able to include "combinations of patterns". We can introduce one way to express this:

p = (0,0) + (0,1) = { one pattern such that either (0,0) or (0,1) appears }

In the above equation, the symbol + is called OR (see the similar usage of this symbol in [6]). This combination operator makes a new pattern out of 2 base patterns. Clearly, this new pattern is not in the base pattern space. An additional important point: we should note that the new pattern above is independent of the learning machine.

Example 3.2 [2x2 Black-White Images] We can consider slightly more complicated base patterns: 2x2 black-white images. See the illustration below.


Fig. 2. One base pattern in the base pattern space of 2x2 black-white images

Although in the above illustration the pattern is in 2-dim form, it is easy to see that all these patterns can be represented well in linear vector form (for example, the base pattern in Fig. 2 is (1,1,0,1)). The space is simple enough that we can list all its base patterns:

PS_4 = { (0,0,0,0), (0,0,0,1), ..., (1,1,1,1) }   (16 base patterns in total)

One pattern could be shown as a vector or as a 2x2 image. For example, (1,0,1,0) is in vector form; the equivalent image is a vertical line. Let's see some examples of combination operators. We can view (1,1,0,0) as one horizontal line, and (0,1,0,1) as one vertical line. Consider this statement: "one pattern that has this horizontal line and also this vertical line". Clearly, this is one new pattern. We try to capture it as below:

p = (1,1,0,0) · (0,1,0,1) = { one pattern in which both (1,1,0,0) and (0,1,0,1) appear together }

The symbol · is called AND (see the similar usage of this symbol in [6]). But, what is the new pattern p? The first impression is that it is the base pattern (1,1,0,1) (see it in Fig. 2). It is. This is a new base pattern coming out of 2 base patterns. How come? Yet, it could be even more complicated. We will address this later.

Now, we should note that the new pattern above surely depends on the learning machine and how it views patterns. Without a learning machine and its way of viewing patterns, we could not even talk about "appears together".

We will see another example of a pattern that is not a base pattern. (1,1,0,0) is a base pattern. How about this statement: "one pattern in which (1,1,0,0) does not appear"? This is one new pattern as well. We would have:

p = ¬(1,1,0,0) = { one pattern in which (1,1,0,0) does not appear }

The symbol ¬ is called NOT (see the similar usage of this symbol in [6]). However, what is the new pattern? Is it a group of base patterns: {(0,0,1,1), (0,0,0,1), …}? As with the last question, this will be addressed later.

Besides the above situations, we can actually see more interesting things (which could not be seen in PS_2).

Example 3.4 [Abstraction and Concretization] Let’s see this pattern:

p_h = { common feature of (1,1,0,0) and (0,0,1,1) }

Clearly, this common feature is not in PS_4. But, this common feature is one very important pattern: it represents a horizontal line. Actually, we can say this pattern is the horizontal line. Similarly, we have:

p_v = { common feature of (1,0,1,0) and (0,1,0,1) }

This time, p_v is the vertical line. Further, we can see:

p_l = { common feature of p_h and p_v }

This time, p_l is line, vertical or horizontal. From the examples above, we can see clearly that abstracting a common feature out of a group of patterns is one very important operation. Without it, we simply could not see some very crucial patterns (such as line). Thus, we need to develop symbols for such operations. For example:

p_h = Ab((1,1,0,0), (0,0,1,1))

Here, Ab is one operation that abstracts some common features out of the patterns (1,1,0,0) and (0,0,1,1). Note, Ab is not one operator, but one operation. That means that for the same set of patterns there could be more than one such operation, each abstracting a different feature from the set of patterns. As we meet more complicated patterns later, this property will become much clearer.

Very clearly, the operation Ab is highly dependent on the learning machine and on what the learning machine has learned previously.

Conversely to the abstraction operation Ab, we can also have a concretization operation Co. See the example below:

Co(p_h, (0,0,0,1)) = { the concrete horizontal line related to the pattern (0,0,0,1) } = (0,0,1,1)

Co is one operation that concretizes a pattern (which is an abstract pattern) by relating it to some pattern. Any concretization of a pattern is a pattern. As above, concretizing the horizontal line gives a real horizontal line. And, since it is related to (0,0,0,1), this horizontal line should be (0,0,1,1).

Very clearly, the operations Ab and Co are highly dependent on the learning machine (such as: what the learning machine has learned previously, how it views patterns, etc).
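To make the abstraction and concretization operations a bit more concrete, here is one possible rendering in Python for 2x2 black-white images; the predicate and helper below are illustrative choices of ours, not definitions from the text (recall that abstraction is an operation, not a unique operator):

# A 2x2 image is a 4-bit tuple (row-major): (b1, b2, b3, b4).
HORIZONTAL_LINES = {(1, 1, 0, 0), (0, 0, 1, 1)}   # top row, bottom row

def is_horizontal_line(p):
    # One possible abstraction: the common feature of the two horizontal lines,
    # expressed as a predicate over base patterns.
    return p in HORIZONTAL_LINES

def concretize_horizontal(anchor):
    # One possible concretization: return the concrete horizontal line that is
    # related to (shares a lit bit with) the given anchor pattern.
    for line in HORIZONTAL_LINES:
        if any(a == 1 and l == 1 for a, l in zip(anchor, line)):
            return line
    return None

# Concretizing 'horizontal line' relative to (0,0,0,1) gives the bottom row (0,0,1,1).
assert concretize_horizontal((0, 0, 0, 1)) == (0, 0, 1, 1)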

From the above examples, we can see that patterns are much more than base patterns. We can have patterns of patterns (see horizontal lines, vertical lines). We can have patterns of patterns of patterns (see line). We can have operations on patterns. We have operators on patterns. All results are still patterns. So, patterns are not just of one type; they have many types. Or, we can say patterns are typeless. Base patterns are just the simplest patterns and the fundamental building blocks.

Example 3.3 [4x4 Black-White Images] We now consider even more complicated patterns: 4x4 black-white images. See the illustration below.


Fig. 3. A base pattern in the base pattern space of 4x4 black-white images

The binary vector space PS_16 has 2^16 = 65536 elements. This is a large number. While in theory we can still list all base patterns, it would be very hard to work with such a list. One base pattern (written row by row) is:

p = (1,1,1,1, 0,0,0,0, 0,0,0,0, 0,0,0,0)

Since the dimension is larger, more phenomena appear. We can see some of them here. Clearly, the binary vector shown in the above equation is one horizontal line. So, we can still have:

p_h = Ab((1,1,1,1, 0,0,0,0, 0,0,0,0, 0,0,0,0), (0,0,0,0, 1,1,1,1, 0,0,0,0, 0,0,0,0))   (first 2 horizontal lines)

Clearly, this pattern is not in PS_16. But, it represents the first 2 horizontal lines. Can this pattern p_h, which abstracts the first 2 horizontal lines, represent all 4 horizontal lines? This is one very important question. At this moment, we cannot answer it.

Similarly, we have:

p_v = { common feature of the 4 vertical lines }   (all vertical lines)

And, we can have:

p_l = { common feature of p_h and p_v }   (line, horizontal or vertical)

But, again, since we are dealing with a more complicated pattern space now, we can see something that Example 3.2 could not show. How about:

p_33 = { a point at coordinate (3,3) },   p_00 = { a point at coordinate (0,0) }

q = Co(p_v, p_00)

This is the concretization of a vertical line related to the point (0,0).

And, more:

r = q · p_33

This is one pattern with one vertical line and a point at (3,3). The pattern is the AND of 2 different types of patterns. This is one example of why we have to make all operations and operators on patterns typeless.

Let's try to put the above equations together; we then have:

r = Co({ common feature of the 4 vertical lines }, { a point at coordinate (0,0) }) · { a point at coordinate (3,3) }

It might be easier to just state: a vertical line passing through (0,0), and a point at (3,3). But, as we can see, the above equation describes the pattern much more precisely and mechanically (i.e. avoiding language, either natural language or programming language, and using only our simple and mechanical terms: +, ·, ¬, Ab, Co).

We examined some simple examples above. Though simple, they are very revealing. From these examples, we can see some important properties of patterns. First, patterns are more than base patterns, much more. Second, some patterns together can generate a new pattern, and there are many ways to generate new patterns, such as OR, AND, NOT, abstraction, concretization, and more. Third, very crucially, we realize that some patterns are independent of the learning machine, while some depend on the learning machine heavily. In other words, for a learning machine, some patterns are objective, while some are subjective.

Pattern, Objectively
First, we want to discuss patterns that are objective to a learning machine. Base patterns are the foundation for all patterns. We defined them before, but we repeat the definition here for ease of citation.

Definition 3.1 (Base Pattern Space)

The N-dim base pattern space, denoted as PS_N, is the N-dim binary vector space, i.e.

PS_N = {0,1}^N = { (b_1, b_2, ..., b_N) | b_i = 0 or 1 }

Each element of PS_N is a base pattern. There are 2^N base patterns in PS_N in total. When N is not very small, PS_N is a huge set. Actually, this hugeness is the source of the richness of the world and the fundamental reason for the difficulty of learning.

The base pattern space is just the starting point of our discussion. From the above examples, we know that many patterns are not base patterns. But, if a pattern is not a base pattern, what is it? We can see it from this angle: no matter what a pattern is, what is presented to the input space of a learning machine is a base pattern. So, naturally, we have the definition below.

Definition 3.2 (Pattern as Set of Base Patterns)

An N-dim pattern p is a set of base patterns:

p = { p_i ∈ PS_N | i ∈ I }

We can denote this set as p_b, and call it the base set of p (b stands for base). While we use p as the notation of a pattern, we understand it is a set of base patterns. If we want to emphasize that it is a set of base patterns, we use the notation p_b. We can also write p = p_b. Any base pattern in the base set is called a base face of p (or simply a face). For example, in the above, p_i is one face of p. Specially, any base pattern is itself one pattern, and it is the (only) base face of itself.

According to this definition, a pattern is independent of the learning machine: it is just a group of base patterns, no matter what the learning machine is. If we want to view a pattern objectively, the only way is to define the pattern as a group of base patterns. So, objectively, a pattern is a set of base patterns.
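A minimal sketch of these definitions in Python (the representation is our own choice): the base pattern space PS_N enumerated as N-bit tuples, and an objective pattern held as a set of base patterns:

from itertools import product

def base_pattern_space(n):
    # PS_N: all N-dim binary vectors (2**n base patterns).
    return set(product((0, 1), repeat=n))

PS4 = base_pattern_space(4)
assert len(PS4) == 16

# An objective pattern is just a set of base patterns (its base set).
horizontal_lines = {(1, 1, 0, 0), (0, 0, 1, 1)}
assert horizontal_lines <= PS4   # every face is a base pattern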

What are the objective operators on objective patterns? Since patterns are sets of base patterns, we naturally first examine the basic set operations: union, intersection, and complement.

Definition 3.3 (Operator OR (set union))

Based on any 2 patterns p_1 and p_2, we have a new pattern p:

p = p_1 OR p_2 = p_1b ∪ p_2b

Here, ∪ is set union. That is to say, the new pattern p is such a pattern whose base set is the union of the base sets of the 2 old patterns. Or, we can say, p is such a pattern whose face is either a face of p_1 or a face of p_2.

Definition 3.4 (Operator AND (set intersection))

For any 2 patterns p_1 and p_2, we define a new pattern:

p = p_1 AND p_2 = p_1b ∩ p_2b

Here, ∩ is set intersection. Or we can say, p is such a pattern that its face is a face of both p_1 and p_2. In this sense, we say p is both p_1 and p_2.

Definition 3.5 (Operator NOT (set complement) )

For any pattern p_1, we define a new pattern:

p = NOT p_1 = PS_N ∖ p_1b

Here, PS_N ∖ p_1b is the complement of the base set of p_1. That is to say, p is such a pattern that its face is not a face of p_1.

Very clearly, the above 3 operators do not depend on the learning machine. So, they are all objective. Consequently, if we apply these 3 operators consecutively any number of times, we still generate a new pattern that is objective.
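Continuing the sketch above, the 3 objective operators are plain set operations on base sets; for example, over PS_4 (helper names are ours):

from itertools import product

PS4 = set(product((0, 1), repeat=4))   # the 4-dim base pattern space

def p_or(p1, p2):   # objective OR: union of base sets
    return p1 | p2

def p_and(p1, p2):  # objective AND: intersection of base sets
    return p1 & p2

def p_not(p):       # objective NOT: complement of the base set inside PS_4
    return PS4 - p

lines = p_or({(1, 1, 0, 0), (0, 0, 1, 1)}, {(1, 0, 1, 0), (0, 1, 0, 1)})
assert p_and(lines, {(1, 1, 0, 0), (1, 1, 1, 1)}) == {(1, 1, 0, 0)}
assert (1, 1, 0, 0) not in p_not({(1, 1, 0, 0)})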

Pattern, Subjectively
Now we turn our attention to subjective patterns, i.e. patterns as viewed by a particular learning machine.

We need to go back for a while and consider the basics. When we say there is an incoming pattern p to a learning machine, what do we mean? If we see this objectively, the meaning is clear: at the input space, a binary vector is presented, which is a face of the incoming pattern p. This does not depend on the learning machine at all. And, this is very clear, with no ambiguity.

However, as our examples demonstrated, we have to consider patterns subjectively. We need to go slowly, since there is a lot of potential confusion here. We have to consider things that are not valid at all objectively.

Pattern, 1-significant or 0-significant
First, when we discuss patterns subjectively, we need to know: is 1 significant, or is 0 significant, or are both equally significant?

Does this sound wrong? By definition, a base pattern is a binary vector, so, of course, both 0 and 1 would be equally significant. Why consider 1-significant, or 0-significant? Let's consider one simple example. For 4-dim patterns, (1,1,0,0) is one base pattern, and it could be viewed as one horizontal line (see Example 3.2 and Fig. 2). (0,1,0,1) is also one base pattern, and it could be viewed as one vertical line. When we talk about (1,1,0,0) and (0,1,0,1) appearing together (or happening together), do we mean this pattern: (1,1,0,1), or (0,1,0,0)? The former is 1-significant, and the latter is 0-significant. So, if we want to use terms such as "2 patterns happen together", it is necessary to distinguish 1-significant from 0-significant.

So, distinguishing 1-significant patterns from 0-significant patterns indeed makes sense, and is necessary. When we consider a pattern as 1-significant, we often look at its 1 components and do not pay much attention to its 0 components, just as we did in the example: "(1,1,0,1) equals (1,1,0,0) and (0,1,0,1) appearing together". In contrast, we do not think: "(0,1,0,0) equals (1,1,0,0) and (0,1,0,1) appearing together", since we do not consider 0-significant.

Perhaps 1-significance is actually already in our sub-conscious. Just see which sentence is more appealing to us: "(1,1,0,1) equals (1,1,0,0) and (0,1,0,1) appearing together", or "(0,1,0,0) equals (1,1,0,0) and (0,1,0,1) appearing together".

In addition to the above consideration, most patterns that people consider in applications are sparse patterns, i.e. only a few bits in the pattern are 1, and most are zero. For sparse patterns, 1-significance is a very natural choice. Just see this example:

(1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1) =

(1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) and (0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1) appear together.

We would accept this statement easily. From now on, unless we state otherwise explicitly, we will use 1-significance.
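Under the 1-significant convention, "p_1 and p_2 appear together" can be read as the componentwise OR of the two vectors, and "p_1 appears in q" as componentwise dominance. A small sketch (our own encoding, for illustration):

def together(p1, p2):
    # 1-significant 'appear together': a bit is lit iff it is lit in p1 or in p2.
    return tuple(a | b for a, b in zip(p1, p2))

def appears_in(p, q):
    # 1-significant containment: every lit bit of p is also lit in q.
    return all(a <= b for a, b in zip(p, q))

assert together((1, 1, 0, 0), (0, 1, 0, 1)) == (1, 1, 0, 1)   # the 1-significant reading
assert appears_in((1, 1, 0, 0), (1, 1, 0, 1))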

Patterns and Learning Machine
From the examples, we know that one pattern could be perceived very differently by different learning machines. This makes us consider the following question carefully: from the view of a learning machine, what really is a pattern? We have not really addressed this crucial question yet; we have just let our intuition play in the background. In Example 3.2, when we talked about the AND operator and gave an equation using it, we did not really tell what the resulting pattern is. Now we address this more carefully.

Take a look at this: { one pattern in which both (1,1,0,0) and (1,0,1,0) appear together }. To our intuition, this is a right thought. However, if we see things objectively, this is simply wrong: the base patterns (1,1,0,0) and (1,0,1,0) cannot appear together. They are different base patterns. At one time, only one of them can appear. In this sense, "together" cannot happen.

To address this question, we have to go deeper and see what a pattern really is. When we talk about base patterns, i.e. binary vectors in PS_N, there is no ambiguity. Everything is very clear. However, base patterns alone are not good enough. With only base patterns, we simply cannot handle most of the things that we want to work with.

At this point, we should be able to realize that a pattern is not only associated with what is presented at the input space (surely a base pattern), but also with how a learning machine perceives the incoming pattern. For example, when the base pattern (1,1,1,0) is at the input, the learning machine could perceive it as just one base pattern, but it could also perceive it as two base patterns, (1,1,0,0) and (1,0,1,0), appearing together, or it could perceive something much more complicated.

So, naturally, a question arises: can we define pattern without introducing the perception of the learning machine? Yes, this can be done. No matter what a pattern is, when the pattern is sent to the learning machine, it is one base pattern at the input space. In this way, we can surely define a pattern to be a set of base patterns. So, no matter what the learning machine is and how it perceives, a pattern is a set of base patterns. This is just the objective pattern. For example, we can forcefully define { one pattern in which both (1,1,0,0) and (1,0,1,0) appear together } as the set of base patterns { (1,1,1,0) }. This is what we did in the above section.

It seems this way resolves the ambiguity. However, as all the examples indicated, the objective way cannot go far, and we need to understand patterns subjectively. Patterns cannot be separated from how a learning machine perceives them. A pattern defined as a set of base patterns is precise, but how a learning machine perceives patterns is much more important. Without the learning machine perceiving, actually, no matter how precise a pattern is, it is not very useful.

Here, it is worth stopping to review our thoughts. The major point is: the learning machine plays an active role, and it must have its own way to see its outside world. More precisely, a learning machine must have the ability to tell what is outside of itself, what is inside of itself, and what its view of the outside is. Whether it has such ability is very critical. Only with this ability can the learning machine go further and can our later discussions be conducted. It is very important that we realize this. Without such ability, a learning machine is reduced to an ordinary computer program, which is very hard to make learn. From now on, our learning machine will have such ability, and we will make this ability clearer. So, patterns will mainly be subjective to a learning machine.

Thus, we have to address this critical issue: how does a learning machine perceive patterns? We need to see this by considering relationships among patterns. We need to think about these issues as well: 1) how to form new patterns from old patterns? 2) how to associate new patterns with previously learned patterns? 3) how to organize learned patterns? 4) how to re-organize learned patterns? In order to do these, we have to see how the machine perceives.

How Learning Machine Perceives Patterns
How a learning machine perceives patterns is closely related to how it processes information. So we go back to the IPU for a while. Consider an N-1 IPU M, and suppose its processing is f. We define its black set:

Definition 3.6 (Black Set of an N-1 IPU)

For an N-1 IPU M, if its processing is f, the black set of M is:

B_f = { p ∈ PS_N | f(p) = 1 }

Equivalently, we also call B_f the black set of the processing f.
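As a sketch, for small N the black set of a processing f can be computed by brute force (our own helper, for illustration only):

from itertools import product

def black_set(f, n):
    # All base patterns in PS_N that the processing f maps to 1.
    return {p for p in product((0, 1), repeat=n) if f(p) == 1}

# Example: a 4-1 processing that lights up exactly when the top row of a 2x2 image is on.
f = lambda p: 1 if p[0] == 1 and p[1] == 1 else 0
assert (1, 1, 0, 1) in black_set(f, 4)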

For the IPU M, suppose B is its black set. This means: if we put one base pattern p into the input space, M will process it to 1 if p ∈ B, and to 0 if p ∉ B. This reveals one important fact to us: inside M, there must exist a bit b with this property: if the input p ∈ B, then b = 1; if p ∉ B, then b = 0.

We do not know exactly what is inside M, and we do not know how exactly the processing is done. However, we do know such a bit must be there. We do not know where this bit is or in what form it exists, but we know it exists. Otherwise, how could M be able to distinguish an input from B from an input not from B? Such a bit reflects how M processes input to output. We can imagine that M could have more such bits. So, we have the following definition.

Definition 3.7 (Processing Bits)

For an IPU M, if it has some internal bit b with this property: there is a set B_b ⊂ PS_N so that for any p ∈ B_b, b = 1 (light), and for any p ∉ B_b, b = 0 (dark), we call the bit b one processing bit. If M has more than one such bit, say b_1, b_2, ..., b_K are all processing bits, we call the set {b_1, b_2, ..., b_K} the set of processing bits of M, or simply, the processing bits.

Theorem 3.8

For an IPU M whose processing is not one of the 2 extreme cases, the set of processing bits exists and is not empty.

Proof: We exclude the 2 extreme cases, i.e. f maps all inputs to 0 and f maps all inputs to 1. After excluding these 2 extreme cases, we can say the black set of f is a non-empty proper subset of PS_N, and so is its complement. Thus, as we argued above, there must exist a bit b inside M with the property: for p ∈ B_f, b = 1, and for p ∉ B_f, b = 0. So, the set of processing bits indeed exists and is not empty.

In the proof, we showed that the set of processing bits is not empty: there is at least one bit in it. Such a case indeed exists: there are IPUs whose set of processing bits has only one element. But in most cases, the set of processing bits has more than one element. In fact, K, the number of processing bits, can reflect the complexity of the IPU. The processing bits reflect how the processing of the IPU is conducted.

Since a learning machine is also an IPU, it has processing bits as well. But, as we discussed before, how a learning machine perceives patterns is closely related to how it processes input. So, for a learning machine, we will call these bits perception bits, instead of processing bits. When one base pattern is put into the input, each perception bit takes its value. All these values together are the perception values. The perception values reflect how a learning machine perceives this particular base pattern. If a learning machine is learning, its perception bits could change, the number of perception bits could increase or decrease, and their behavior could change. Even if the array of perception bits does not change, the behavior could change.

Armed with perception bits, we can describe how M perceives patterns. When a base pattern p is put into the input space, the perception bits act: some are light and some are dark. These bits reflect how p is perceived, i.e. as the perception bits take their values, we get a binary vector (v_1, ..., v_K), where v_i is the value (1 or 0) that the perception bit b_i takes. We call these the perception values. Note, the perception values depend on the particular base pattern. The perception values tell how M perceives the base pattern p.

If p_1 and p_2 are 2 different base patterns, i.e. they differ bit-wise, but they have the same perception values, we know that these 2 base patterns are perceived as the same by M, since M has no way to tell any difference between p_1 and p_2. If 2 base patterns are possibly perceived as different by M, their perception values must be different (at least one perception bit must behave differently).

However, the reverse is not true. It is possible that 2 base patterns p_1 and p_2 have different perception values, but M could still perceive p_1 and p_2 as the same subjectively. That is to say, M can perceive 2 different base patterns as the same even when their perception values are different. So we have the definition below.

Definition 3.9 (Base patterns perceived same by a learning machine subjectively)

Suppose M is a learning machine and b_1, ..., b_K are its perception bits. If for 2 base patterns p_1 and p_2, their perception values are (v_1, ..., v_K) and (w_1, ..., w_K), and for at least one i, v_i = w_i = 1, we say that, at the perception bit b_i, M could subjectively perceive p_1 and p_2 as the same.

That is to say, for 2 base patterns, if there is no perception bit at which both of their perception values are 1, the learning machine cannot possibly perceive them as the same. But, if at least at one perception bit both perception values are 1, M could possibly perceive them as the same subjectively. Of course, M could also perceive them as different subjectively. Note, the perception value should be 1, not 0. This is related to 1-significance.

Definition 3.10 (Pattern perceived by a learning machine subjectively)

Suppose M is a learning machine and b_1, ..., b_K are its perception bits. Suppose p is a group of base patterns, and at a perception bit b_i the perception value of every base pattern in p equals 1; then M could perceive all base patterns of p as the same, and if so, we say M perceives p as one pattern subjectively at b_i, and p forms a subjective pattern.

Note, the definition only requires that all base patterns in p behave the same at one perception bit. This is the minimal requirement. Of course, this requirement could be strengthened, for example, by requiring the same behavior at all perception bits. But, all such requirements are subjective.

Here we put down the major points about subjective patterns and how a learning machine perceives them.

  1. There are perception bits in a learning machine (excluding only the 2 extreme cases). Any system that satisfies the definition of a learning machine must have perception bits. How perception bits are formed and how exactly they are realized inside a learning machine could differ greatly. But we emphasize that perception bits indeed exist.

  2. These bits are very crucial for a learning machine. They reflect how the learning machine perceives and processes patterns. When a base pattern is put into the input space of the learning machine, the perception bits act, and the learning machine uses their values to perceive the pattern subjectively, and to process the pattern accordingly.

  3. For a learning machine, its perception bits change with learning. Moreover, even if the number of perception bits does not change, the behavior of the perception bits could change (and so does the perception of the learning machine).

  4. Armed with perception bits, we can understand subjective patterns well. If 2 base patterns behave the same at one perception bit, then the 2 base patterns can be perceived as the same at this perception bit subjectively. This can be extended to more than 2 base patterns: for a group of base patterns p, if all base patterns behave the same at one perception bit, then p can be perceived as one pattern at this perception bit subjectively. This is the way to connect the objective and the subjective.

To consider patterns objectively, we only need set operations; there is no need to modify the learning machine itself. But, to consider patterns subjectively, set operations could still be used, but, more importantly, perception bits are needed. And, quite often, it is necessary to modify the perception bits. For the subjective operators on subjective patterns, we need to base our discussion on perception bits.
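One way to make this discussion concrete is to model each perception bit as a boolean function on the input space. The sketch below is our own modeling choice (the helper names are hypothetical) and follows Definitions 3.9 and 3.10:

def perception_values(bits, p):
    # Values taken by the perception bits b_1..b_K on a base pattern p.
    return tuple(b(p) for b in bits)

def same_at_some_bit(bits, p1, p2):
    # Definition 3.9: at least one perception bit is 1 on both p1 and p2.
    return any(b(p1) == 1 and b(p2) == 1 for b in bits)

def group_as_one_pattern(bits, group):
    # Definition 3.10: some perception bit is 1 on every base pattern of the group.
    return any(all(b(p) == 1 for p in group) for b in bits)

# Two toy perception bits over 4-bit inputs: 'top row lit' and 'bottom row lit'.
bits = [lambda p: int(p[0] == 1 and p[1] == 1),
        lambda p: int(p[2] == 1 and p[3] == 1)]
assert same_at_some_bit(bits, (1, 1, 0, 0), (1, 1, 0, 1))
assert group_as_one_pattern(bits, {(1, 1, 0, 0), (1, 1, 1, 0), (1, 1, 0, 1)})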

Pattern, Subjective Operators
Just as with operators for objective patterns, it is natural to consider subjective operators for subjective patterns. There are 3 very basic operators: NOT, OR, AND. First, consider NOT.

Definition 3.11 (Operator NOT for a Pattern, Subjectively)

Suppose M is a learning machine and b_1, ..., b_K are its perception bits. For a subjective pattern p perceived at b_i by M, ¬p is another pattern perceived at b_i by M in this way: the base patterns of ¬p are all base patterns that are perceived by M and whose perception value at b_i is 0.

We can denote this pattern as q = NOT p, or q = ¬p. This notation follows [5]. We can also say, the pattern ¬p means that the pattern p does not appear.

Note, this operation NOT is subjective: ¬p consists only of base patterns that are perceived by M. So, it is quite different from the objective operator NOT (set complement). Another important point is: in order to apply this operator, there is no need to modify the perception bits of M; only the perception value is different.

Now we turn our attention to another operator, OR. Consider that we have a subjective pattern p_1, whose perception values are (v_1, ..., v_K), and a subjective pattern p_2, whose perception values are (w_1, ..., w_K). Since p_1 and p_2 are different patterns, their perception values must be different at some bits. Now, we want to put them together to form a new pattern, i.e. p_1 + p_2, which means either p_1 or p_2. This action of course changes the perception of M, and it must change the perception: if the perception is not changed, there is no way to have OR. So, when we introduce the OR operator, we in fact change M. This is what subjective really means: the learning machine changes its perception so that p_1 and p_2 are treated as the same, even though p_1 and p_2 indeed have differences, and the differences are ignored.

Definition 3.12 (Operator OR for 2 Patterns, Subjectively)

Suppose M is a learning machine and b_1, ..., b_K are its perception bits. For any 2 subjective patterns p_1, perceived at b_i by M, and p_2, perceived at b_j by M, p_1 + p_2 is another subjective pattern, perceived by M in this way: first M will modify its perception bits if necessary, and then perceive any base pattern from either p_1 or p_2 as the same at another perception bit b_l. That is to say, if b_l does not exist, M will generate this perception bit first.

We can also say, the new pattern is: either p_1 or p_2 appears. We can denote this new pattern as p = p_1 + p_2. This notation follows [3].

Note, if we want to do the operation OR, we might need to modify the perception bits of M. This is often done by adding a new perception bit. This is totally different from the objective OR (set union). On the surface, p_1 + p_2 is indeed a union (set union) of p_1 and p_2. But, without modification of the perception bits, there is no way to do this union.

Then consider the subjective operator AND. This operator is crucially important. Actually, we spent a lot of effort above arguing about this operator, i.e. "appears together".

Definition 3.13 (AND Operator for 2 Base Patterns, Subjectively)

Suppose M is a learning machine and b_1, ..., b_K are its perception bits. If p_1 is one subjective pattern perceived at b_i, and p_2 is one subjective pattern perceived at b_j, then all base patterns that M perceives at both b_i and b_j at the same time form another subjective pattern p_1 · p_2, and p_1 · p_2 is perceived by M at some perception bit b_l. That is to say, if b_l does not exist, M will generate this perception bit first.

We can also say, the new pattern is: both p_1 and p_2 appear together. We can denote this pattern as p = p_1 · p_2. This notation follows [3].

Note, if we want to do the AND operator, we have to modify the perception bits of M. This is totally different from the objective AND (set intersection).

X-Form
We have set up 3 subjective operators for subjective patterns. If we apply the 3 operators consecutively, we will have one algebraic expression. Of course, in order for this algebraic expression to make sense, the learning machine needs to modify its perception bits. But, what can we construct from such algebraic expressions? First, we see some examples.

Example 3.4 [One Simple X-form] Suppose p_1, p_2, p_3 are 3 different base patterns. Then,

E = p_1 + p_2 · p_3

is one subjective pattern. We can say, this pattern is: either p_1, or p_2 and p_3 happening together. However, the expression has more aspects. Since E is one algebraic expression, we can substitute base patterns into it and get one value. This is actually what an algebraic expression is for. That is to say, E is one mapping from PS_N to {0, 1}, and it behaves like this: for any p ∈ PS_N, if p_1 appears in p, or p_2 and p_3 appear together in p, then E(p) = 1, otherwise E(p) = 0. This matches our intuition well.

Example 3.5 [More X-forms] If p_1, p_2, ..., p_k is a group of base patterns, and we form some algebraic expressions over them, we get more subjective patterns based on p_1, ..., p_k. For example, expressions such as

E_1 = (p_1 + p_2) · p_3,   E_2 = ¬p_1 · p_2 + p_3 · p_4

are subjective patterns. But, these expressions can also be used to define a mapping from PS_N to {0, 1}, just as above.

Example 3.6 [Prohibition] If E_1 and E_2 are expressions, we want to find an expression for this situation: E_1 prohibits E_2, i.e. if E_1 is light, the output has to be dark; otherwise, the output equals E_2. This expression is:

E = ¬E_1 · E_2

E is a subjective pattern.

Above, each expression has 2 faces: first, it is one algebraic expression; second, it is one subjective pattern perceived by M. In order for these expressions to make sense, M has to modify its perception bits accordingly. This is crucial. Thus, we have the following definition.

Definition 3.14 (X-Form for patterns)

If E is one algebraic expression of the 3 subjective operators upon a group of base patterns p_1, ..., p_k, then we call the expression E(p_1, ..., p_k) an X-form upon p_1, ..., p_k, or simply an X-form. We note, in order for this expression to make sense, quite often the learning machine needs to modify its perception bits accordingly. And, if the expression makes sense, we then have a subjective pattern.

The name X-form is chosen for a reason: these expressions are forms, and we do not know them well, and X means unknown. In [3], there is a similar form called conjunctive normal form (CNF). Though, our expressions are quite different from the CNF of Valiant: the CNF of Valiant is basically objective, while X-forms are subjective.

One important aspect of an X-form is: it is one algebraic expression, so we can substitute variables into it and calculate to get an output value, 0 or 1 (see the examples above). In this sense, one X-form is a mapping from PS_N to {0, 1}. The calculation of this expression is actually the same as the learning machine doing the processing inside itself. This is one wonderful property. This is exactly the reason why we introduce the construction of X-forms. In this way, one X-form can be thought of as one processing. Thus, we can also think that one X-form has a black set, which exactly equals the subjective pattern of this X-form.
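To see an X-form acting as a mapping from PS_N to {0, 1}, here is a small evaluator sketch (our own encoding: an expression is a nested tuple over the operators '+', '*' (for ·) and '~' (for ¬), and a base pattern "appears" in an input when every lit bit of it is lit in the input):

def appears_in(p, x):
    return all(a <= b for a, b in zip(p, x))

def evaluate(expr, x):
    # Evaluate an X-form expression on an input base pattern x, returning 0 or 1.
    if isinstance(expr, tuple) and expr and expr[0] in ('+', '*', '~'):
        op, *args = expr
        if op == '+':   # OR
            return int(any(evaluate(a, x) for a in args))
        if op == '*':   # AND
            return int(all(evaluate(a, x) for a in args))
        return int(not evaluate(args[0], x))   # NOT
    return int(appears_in(expr, x))            # a base pattern leaf

# E = p1 + p2 * p3, as in the simple X-form example above (base patterns chosen for illustration).
p1, p2, p3 = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)
E = ('+', p1, ('*', p2, p3))
assert evaluate(E, (1, 0, 0, 1)) == 1   # p1 appears
assert evaluate(E, (0, 1, 1, 0)) == 1   # p2 and p3 appear together
assert evaluate(E, (0, 1, 0, 0)) == 0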

In order to connect objective patterns, subjective patterns, and X-forms, we have the following theorem.

Theorem 3.15

Suppose M is an N-1 learning machine. For any objective pattern p_o (i.e. a set of base patterns in PS_N), we can find some algebraic expression E upon some group of base patterns p_1, ..., p_k so that p_o = E(p_1, ..., p_k). If so, we say the X-form E expresses p_o. In most cases, there are many X-forms that express p_o. However, among those X-forms, we can find at least one that is based upon no more than N base patterns, i.e. k ≤ N in E(p_1, ..., p_k).

Proof: Suppose p_o is one objective pattern. It is easy to see that there is one algebraic expression that can express p_o. Since p_o is a set of base patterns, surely we can write p_o as:

p_o = { p_1, p_2, ..., p_J }

where each p_j is a base pattern. The algebraic expression

E = p_1 + p_2 + ... + p_J

can express p_o, since we can easily see p_o = E(p_1, ..., p_J). If J is not bigger than N, we have already found such a group of base patterns and such an algebraic expression, and the proof is done.

If J is bigger than N, we can go further. For one base pattern p_j, we can find some other base patterns q_1, q_2, ..., q_l, and express p_j in this way: p_j = q_1 · q_2 · ... · q_l. Such base patterns can surely be found. For example, if p_j = (1,0,1,0), we can find q_1 = (1,0,0,0) and q_2 = (0,0,1,0), and then p_j = q_1 · q_2.

For a group of base patterns, we can do the same. That is to say, for p_1, ..., p_J, we can find at most N base patterns q_1, ..., q_N, so that for each p_j we can find some of them, q_{j_1}, ..., q_{j_l}, with p_j = q_{j_1} · ... · q_{j_l}. We know such a group of base patterns indeed exists. For example, the elementary base patterns e_1 = (1,0,...,0), ..., e_N = (0,...,0,1) are such a group.

Now, we can continue:

E = p_1 + p_2 + ... + p_J = (q_{1_1} · ... · q_{1_{l_1}}) + (q_{2_1} · ... · q_{2_{l_2}}) + ... + (q_{J_1} · ... · q_{J_{l_J}})   (1)

This algebraic expression and the group of base patterns q_1, ..., q_k, k ≤ N, are what we are looking for. We should note, the expression E = p_1 + ... + p_J is a "level 1" expression, while (1) is a "level 2" expression. We can go on to higher level expressions.

Of course, the expression in the proof is just used for the existence proof. It is not the best expression. This expression is very "shallow". We can push the expression to a higher level. But, here we do not discuss how to do so.
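The shallow construction in the proof can be written down directly: each base pattern of p_o becomes an AND of the elementary base patterns at its lit positions, and the whole pattern is the OR of these terms. A sketch (reusing the nested-tuple encoding of the evaluator above; assumptions are noted in the comments):

def elementary(n, i):
    # The i-th elementary base pattern e_i of PS_N (only bit i is lit).
    return tuple(1 if j == i else 0 for j in range(n))

def level2_xform(objective_pattern, n):
    # Build the 'level 2' expression of the proof: an OR ('+') over the base patterns,
    # each written as an AND ('*') of the elementary base patterns at its lit bits.
    # Assumes every base pattern has at least one lit bit (1-significance).
    terms = []
    for p in objective_pattern:
        factors = [elementary(n, i) for i, bit in enumerate(p) if bit == 1]
        terms.append(('*', *factors) if len(factors) > 1 else factors[0])
    return ('+', *terms) if len(terms) > 1 else terms[0]

# The objective pattern {(1,0,1,0), (0,1,0,1)} becomes (e_1 * e_3) + (e_2 * e_4), up to ordering.
E = level2_xform({(1, 0, 1, 0), (0, 1, 0, 1)}, 4)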

Theorem 3.15 tells us the relationship between objective patterns and subjective patterns. For any objective pattern p_o, we can find a good group of base patterns (the size of this group is as small as possible, at worst not greater than N), and a good algebraic expression, to express this objective pattern as one subjective pattern.

Here is the major point. One objective pattern p_o is a set of base patterns. However, when p_o is perceived by a learning machine, the learning machine generates a subjective pattern. The major question is: will the subjective one match the objective one? Theorem 3.15 confirms that, yes, for any objective pattern p_o, we can always find an X-form to express p_o.

Naturally, next we would ask: how good is such an expression? For "how good", we need some criteria. There could be many such criteria. However, this criterion is very important: use as few base patterns as possible, i.e. k in E(p_1, ..., p_k) is as small as possible. There could be other important properties of X-forms; by satisfying them, we can get a better X-form.

Of course, the next question is how to really find or construct such an X-form. That is what we do next.

Sub-Form
Several X-forms could form a new X-form. And, some part of an X-form is also an X-form. Such parts could be quite useful. So, we discuss sub-forms here.

Definition 3.16 (Sub-Form of a X-form)

Suppose E is an X-form, so it is one algebraic expression (of the 3 subjective operators) upon a set of base patterns p_1, ..., p_k. A sub-form of E is one algebraic expression E' upon a subset of {p_1, ..., p_k}, such that E' is itself an X-form, and the objective pattern expressed by E' is a proper subset of the objective pattern expressed by E.

So, by definition, a sub-form is also a X-form.

Example 3.7 [Sub-Form] 1. E = p_1 + p_2 is one X-form. Both p_1 and p_2 are sub-forms of E.
2. E = p_1 · p_2 + p_3 is one X-form. Both p_1 · p_2 and p_3 are sub-forms of E. But, p_1 (or p_2) is not a sub-form of E.
3. E = (p_1 + p_2) · p_3 is one X-form. We can see that the black set of E equals that of p_1 · p_3 + p_2 · p_3. So, p_1 · p_3 is a sub-form of E, but p_1, p_2 and p_3 themselves are not.

One X-form could have more than one sub-form, or it could have no sub-form. A sub-form, since it is itself an X-form, could have sub-forms of its own. So, we can have sub-forms of sub-forms, and so on. It is easy to see that any sub-form of a sub-form is still a sub-form. So, an X-form could have many sub-forms. We denote the collection of all sub-forms of E as sub(E). These sub-forms play important roles. They are actually the fabric of processing.
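For small N, the black-set condition in Definition 3.16 can be checked by brute force. The sketch below uses our own helpers and checks only the black-set part of the definition, not the requirement that the sub-form be built on a subset of the base patterns:

from itertools import product

def black_set_of(expr, n):
    # Objective pattern expressed by an X-form, as a set of base patterns.
    # Here expr is any function mapping a base pattern to 0 or 1 (e.g. the evaluator above, wrapped).
    return {x for x in product((0, 1), repeat=n) if expr(x) == 1}

def black_set_is_sub(sub_expr, expr, n):
    # The black-set half of Definition 3.16: the pattern expressed by sub_expr
    # must be a proper subset of the pattern expressed by expr.
    return black_set_of(sub_expr, n) < black_set_of(expr, n)

# p_1 and p_2 are sub-forms of p_1 + p_2 (with 'appears-in' semantics).
appears = lambda p: (lambda x: int(all(a <= b for a, b in zip(p, x))))
p1, p2 = appears((1, 0, 0, 0)), appears((0, 1, 0, 0))
p1_or_p2 = lambda x: int(p1(x) or p2(x))
assert black_set_is_sub(p1, p1_or_p2, 4) and black_set_is_sub(p2, p1_or_p2, 4)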

4 Learning by Teaching

We now turn our attention to learning. We emphasize again that a learning machine is based on patterns, not bits, and the purpose of a learning machine is to process patterns and to learn how to process them.

Theorems 2.1 and 2.4 tell us that, for simplicity and without loss of generality, we can just consider N-1 learning machines. For an N-1 learning machine, its processing is actually equivalent to its black set. We can also consider an objective pattern p_o, which is a set of base patterns; thus, p_o can be thought of as the black set of one processing, and vice versa. This tells us, for an N-1 learning machine, its processing is equivalent to an objective pattern, called its black pattern. Obviously, black set and black pattern are equivalent, and we can switch between the 2 terms freely. With this understanding, we can define the universal learning machine equivalently below.

Definition 4.1 (Universal Learning Machine (by Black Set))

For an N-1 learning machine M, if its current black set is p_1, and p_2 is a given objective pattern, and M can start from p_1 and learn so that at the end of learning its black set becomes p_2, we say M can learn from p_1 to p_2. If for any p_1 and p_2, M can learn from p_1 to p_2, we call M a universal N-1 learning machine.

For an N-1 learning machine, it is easy to see that Definition 4.1 and Definition 2.3 are equivalent.

Now, we turn our attention to how to make a learning machine learn from one black set to a target objective pattern. It is easy to imagine that there are many possible ways to learn. Here, we discuss learning by teaching, that is to say, we design a special data sequence and apply it to the learning machine, and the machine then learns effectively, driven by this sequence. We call such a sequence a teaching sequence; it is a specially designed data sequence.

It is easy to imagine that if we know the teaching sequence, learning by teaching is easy: just feed the teaching sequence into the machine, and learning is done. It is quite close to programming. But learning by teaching can reveal interesting properties to us and can guide our further discussions.

Consider a teaching sequence. Here, the output feedback could be empty, i.e. there is simply no output feedback; a learning machine could still learn without output feedback. Of course, with output feedback, the learning will be more effective and efficient. The teaching sequence is the only information source for the machine: the learning machine will not get any other outside information besides the teaching sequence. This is essential.

The fundamental question is: what properties make a learning machine universal? We will reduce this question to certain capabilities of a learning machine; with these capabilities, the machine is universal.

Note one special case: when the black set of the machine is the empty set, we call it the empty state. This is a very useful case. There are some quite unique base patterns, namely those base patterns with exactly one component equal to 1 and the rest equal to 0. We call such base patterns elementary base patterns.
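
For illustration, a short Python sketch (the function name is ours) that lists the elementary base patterns of dimension N:

from typing import List, Tuple

def elementary_base_patterns(n: int) -> List[Tuple[int, ...]]:
    """The n base patterns with exactly one component equal to 1."""
    return [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]

print(elementary_base_patterns(4))
# [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]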

Definition 4.2 (Learning by Teaching - Capability 1)

For a learning machine, capability 1 is: for any elementary base pattern, there is one teaching sequence such that, starting from the empty state and driven by this sequence, the black pattern of the machine becomes exactly that elementary base pattern.

Capability 1 means: the machine can learn any elementary base pattern from the empty state.

Definition 4.3 (Learning by Teaching - Capability 2)

For a learning machine, capability 2 is: for any black pattern, there is at least one teaching sequence such that, starting from that black pattern and driven by the sequence, the black set becomes empty.

Capability 2 means: the machine can forget its current black pattern and go back to the empty state.

Definition 4.4 (Learning by Teaching - Capability 3)

For a learning machine, capability 3 is: for any 2 objective patterns, and for each of the 3 subjective operations, there is at least one teaching sequence such that, starting from the current black pattern and driven by that sequence, the black pattern becomes the result of applying that operation (the binary operations applied to the 2 patterns, and the unary operation to a single pattern).

Simply said, capability 3 means: for any 2 objective patterns, the learning machine is capable of learning the subjective pattern obtained by applying the binary subjective operators (including "+") to the two patterns, and the remaining operator to a single pattern. This is the most crucial capability.

If a learning machine has all 3 capabilities, we expect it to be a strong learning machine. In fact, we have the following major theorem.

Theorem 4.5

If an N-1 learning machine has the above 3 capabilities, it is a universal learning machine.

Proof: Since we have capability 2, we only need to consider the case of starting from the empty state. That is to say, we only need to prove the following: for any objective pattern, we can find a teaching sequence such that, starting from the empty state and driven by this sequence, the black pattern becomes that objective pattern.

According to Theorem 4, for any objective pattern, we can find an X-form, i.e. one algebraic expression upon a group of elementary base patterns, such that the objective pattern equals this X-form.

From this X-form, we can construct a teaching sequence in the following way:
1) First, we use a teaching sequence so that the machine goes to the empty state. This uses capability 2.
2) Then, we use teaching sequences so that the machine's black pattern becomes each needed elementary base pattern. This uses capability 1.
3) Since the X-form is formed by finitely many applications of the subjective operations, starting from the elementary base patterns, we can use capability 3 consecutively to construct a teaching sequence for each operator in the expression. Eventually, we obtain a teaching sequence covering all operators in the expression.
Such a teaching sequence will drive the machine from the empty state to the target objective pattern.
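
The construction in this proof can be sketched in Python as follows. Everything here is illustrative: the classes Elementary and Op stand for the parts of the X-form, and the recorded step strings stand for the teaching sub-sequences whose existence the 3 capabilities guarantee.

from dataclasses import dataclass
from typing import List, Tuple, Union

@dataclass(frozen=True)
class Elementary:                # an elementary base pattern, identified by its index
    index: int

@dataclass(frozen=True)
class Op:                        # one subjective operation applied to sub-expressions
    name: str
    args: Tuple["XForm", ...]

XForm = Union[Elementary, Op]

def teaching_sequence(form: XForm) -> List[str]:
    """Assemble a teaching sequence as in the proof: capability 2 empties the
    machine, capability 1 teaches each elementary base pattern, and capability 3
    realizes each operator of the expression."""
    steps = ["capability 2: go to the empty state"]

    def build(node: XForm) -> None:
        if isinstance(node, Elementary):
            steps.append(f"capability 1: learn elementary base pattern e_{node.index}")
        else:
            for arg in node.args:
                build(arg)
            steps.append(f"capability 3: apply operator '{node.name}'")

    build(form)
    return steps

# Usage: an X-form combining two elementary base patterns with one operator.
for step in teaching_sequence(Op("+", (Elementary(1), Elementary(2)))):
    print(step)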

Note: The expression depends on several things: the complexity of the objective pattern, and how we find an X-form for it. In Theorem 4, we demonstrated 2-level X-forms. We actually expect to have a much better X-form. The worst case is when the pattern is so complicated that there is no way to find an X-form of a higher level, so the only way is to simply list all base patterns.

Corollary 4.5.1

If we have an N-1 learning machine with the above 3 capabilities, we can use it to build a universal N-M learning machine.

This follows from Theorem 4.5 and Theorem 2. From Theorem 4.5 and this corollary, we reduce the task of finding a universal learning machine to finding an N-1 learning machine with the 3 capabilities. Once we can find a way to construct an N-1 learning machine with those 3 capabilities, we have a universal learning machine.

Also, it is easy to see that a universal learning machine surely has the 3 capabilities, since each capability only asks the machine to reach a particular black pattern, which a universal machine can do by definition. Thus, the 3 capabilities are necessary and sufficient conditions for a learning machine to be universal.

But do we have a learning machine with those 3 capabilities? It is up to us to design a concrete learning machine with the 3 capabilities; we will do this elsewhere. In any case, the 3 capabilities give us a clear guide for the design of an effective learning machine: the most essential capability of a learning machine is to find a way to move patterns to more highly organized patterns. See the quotation at the front; the most important step is "from a lower level to …… higher". This indeed guides us well.

5 Learning without Teaching Sequence

Learning by teaching is a very special way to drive learning. From the discussions in the last section, we can see clearly that only when we have full knowledge of the learning machine and the desired pattern can we possibly design a teaching sequence. In this sense, learning by teaching is quite similar to programming: we inject the ability into the machine, rather than the machine learning by itself. Of course, learning by teaching is still a step beyond programming, and it gives us much more power to handle machines than programming alone.

We focus on N-1 learning machines.

Typical Mechanical Learning
From examples of mechanical learning, a typical mechanical learning process is as below:

  1. For an N-1 learning machine, the learning target is often given as an objective pattern; the machine is expected to learn it, and the learning result is that the black set of the machine becomes this target pattern.

  2. To drive the mechanical learning, a data sequence is fed into the machine. In learning by teaching, the data sequence is a specially designed teaching sequence. In learning without teaching, the data fed in are typically chosen from the target objective pattern and from its complement. In other words, it is sampling of the target pattern.

  3. The feed-in data drive learning, i.e. the black set of the machine is changing. Hopefully, at some later moment, the black set becomes the target pattern, or at least approximates it well.

We put the above observations into a formal definition.

Definition 5.1 (Typical Mechanical Learning)

Let there be an N-1 learning machine; the action of typical mechanical learning is:

  1. to set one target objective pattern;

  2. to choose one sampling set inside the target pattern; normally, this sampling set is much smaller than the target pattern, but in the extreme case, it could be the whole target pattern;

  3. to choose another sampling set outside the target pattern, i.e. no member of it is in the target pattern; normally this set is much smaller than the complement of the target pattern, but in the extreme case, it could be the whole complement;

  4. to use the two sampling sets to form a data sequence. In the data sequence, each datum is a base pattern together with a feedback value: if the base pattern is from the inside sampling set, the feedback is 1 or empty; if it is from the outside sampling set, the feedback is 0 or empty;

  5. to feed the data sequence into the machine consecutively; we do not restrict how to feed, how long to feed, how often to feed, how to repeat feeding, which part to feed, etc.

The actions above drive the machine to learn. As the result of learning, its processing (equivalently, its black set) changes.

Remark: the outside sampling set could be empty, i.e. no sampling outside the target pattern, but it is often not empty. However, if one of the sampling sets is empty, the other should not be empty. We will discuss this more under Data Sufficiency.
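
A minimal Python sketch of this typical mechanical learning action, assuming a placeholder machine_step function that performs one learning step of the machine (all names are ours, not from the paper):

import random
from itertools import product
from typing import Callable, FrozenSet, Optional, Tuple

BasePattern = Tuple[int, ...]
Datum = Tuple[BasePattern, Optional[int]]     # (input pattern, feedback 1 / 0 / empty)

def typical_learning(
    machine_step: Callable[[Datum], None],    # hypothetical: one learning step
    target: FrozenSet[BasePattern],           # the target objective pattern
    n: int,                                   # input dimension
    n_in: int = 8,                            # size of sampling set inside the target
    n_out: int = 8,                           # size of sampling set outside the target
    repeats: int = 10,                        # how often the samples are re-fed
) -> None:
    """Sample inside and outside the target pattern, attach feedback 1/0,
    and feed the resulting data sequence into the machine (Definition 5.1)."""
    universe = list(product((0, 1), repeat=n))
    inside = [b for b in universe if b in target]
    outside = [b for b in universe if b not in target]
    data = [(b, 1) for b in random.sample(inside, min(n_in, len(inside)))]
    data += [(b, 0) for b in random.sample(outside, min(n_out, len(outside)))]
    for _ in range(repeats):                  # no restriction on how or how long to feed
        random.shuffle(data)
        for datum in data:
            machine_step(datum)

# Usage with a trivial stand-in "machine" that only records what it was fed.
seen = []
typical_learning(seen.append, frozenset({(1, 0), (1, 1)}), n=2)
print(len(seen))                              # number of learning steps that were driven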

For such typical mechanical learning, what is happening in the learning process? To address this, we first want to examine the learning machine more closely.

Internal Representation Space
A learning machine has an input space (N-dim binary arrays), an output space (M-dim binary arrays, but here M = 1), and something between the input space and the output space. This something in between is the major body of the learning machine. What is it? We have not discussed it yet. We need to carefully describe it and its essential properties.

At any point of learning, if we stop learning, then the machine is an IPU, i.e. it has a processing at that moment. So we can say that, at this moment, this processing uniquely defines what lies between input and output; thus, at this moment, we can think of what lies between the input space and the output space as this processing. It is therefore quite reasonable to define the major body as the collection of all processings of the machine, and we give it a better name: internal representation space.

Definition 5.2 (Internal Representation Space)

For an N-1 learning machine, the major body that lies between the input space and the output space is called the internal representation space of the machine. At any moment, the processing of the machine is one member of this internal representation space. So, the internal representation space is the collection of all possible processings of the machine.

Remark: The number of all possible processings of an N-1 learning machine is 2^(2^N), since a processing corresponds to a subset of the 2^N base patterns; this is an extremely huge number even for moderately small N. But for a particular learning machine, its internal representation space might be limited, not the full collection.

For an N-1 learning machine, any processing is equivalent to its black set. By Theorem 4, there is at least one X-form (one algebraic expression upon some base patterns) that equals this black set. We say that this X-form expresses the processing. Thus, naturally, the collection of all X-forms can be used to express the internal representation space. We have the following definition.

Definition 5.3 (Internal Representation Space (X-form))

For an N-1 learning machine, the major body that lies between the input space and the output space is called the internal representation space of the machine. At any moment, one X-form expresses the processing of the machine; this X-form is one member of the internal representation space. So, the internal representation space is the collection of all possible X-forms.

Remark: for one processing (which is equivalent to one black set), there is at least one X-form to express it, and quite often there are many. So, the size of the space of X-forms is not less than the size of the space of processings; in fact, it is much larger. Learning, surely, is to get a correct processing. However, to seek a good X-form that expresses the processing is more important. Thus, using definition 5.3 (all X-forms as the internal representation space) is much better than using definition 5.2. From now on, we will use definition 5.3 and simply speak of the internal representation space.

Now we can clearly say: learning is a dynamics on this space, moving from one X-form to another X-form. Or, we can say, learning is a flow on the internal representation space.

One important note: no matter what a learning machine really is, if it satisfies the definition of a learning machine, it must have an internal representation space as defined above. If we concretely design a learning machine, the internal representation space is designed by us explicitly; we know it well and can view its inside directly. If the learning machine is formed in a different way, such as from an RBM (see Appendix), we cannot view the inside directly. But, in theory, the internal representation space indeed exists, and this space, equivalently, consists of a collection of X-forms. Such a space might be limited, not all X-forms but only a part of the collection of all possible X-forms. This is not good, but unfortunately many learning machines are just so. However, when we discuss learning machines theoretically, the internal representation space is as in definition 5.3.

Learning Methods
For a learning machine, besides the input space, the output space, and the internal representation space, clearly it must also have a learning mechanism, or learning methods. So, we need to describe learning methods.

Now we know that learning is a dynamics on the internal representation space, moving from one X-form to another. But how, exactly?

Let's set up some notation. We have a learning machine, its input space, its output space, its internal representation space, and a learning method. As in definition 5.1, we also have a target pattern and a data sequence. Also assume the initial internal representation (one X-form) is given.

Now, we start learning. First, one base pattern is fed into the input space, and its feedback value is also fed into the output space (the feedback could be empty; in that case, there is just no feed-in to the output space). Driven by this datum, the learning method moves the internal representation from the initial X-form to the next one.

Here, the learning method appears as a function. Note that since the learning is mechanical, it is legitimate to write it in function form (if it were not mechanical, it might not be justifiable to write it in such a function form). This is just the first step of learning; the second step is similar. The process continues: we feed data into the input space consecutively, obtaining a general recurrence, equation (5).
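
With hypothetical notation (not necessarily the paper's own symbols) — X_i for the X-form after the i-th step, L for the learning method, b_i for the i-th fed-in base pattern, and o_i for its possibly empty feedback — the first step and the general recurrence can be written as:

X_1 = L\bigl(X_0, (b_1, o_1)\bigr), \qquad X_{i+1} = L\bigl(X_i, (b_{i+1}, o_{i+1})\bigr), \quad i = 1, 2, \ldots \qquad (5)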

Note, the feed-in data could repeat, i.e. the same base pattern could appear at different positions in the data sequence.

This equation (5) is actually the mathematical formulation of definition 5.1 – typical mechanical learning.

With this process, as the steps increase, the X-form continues to change, and we hope that at some point it is good enough for us. What is good enough? There may be more than one criterion. For example: "the X-form expresses the target pattern", i.e. the black set of the X-form equals the target pattern. It could also be: "the X-form expresses a good approximation to the target pattern". Or, in addition to expressing the target pattern, some further goals are imposed, such as the X-form being based upon fewer base patterns, etc.

Yet, how do we know the learning process will make this hope come true? Several questions immediately pop up:

  1. What is the mechanism of the learning method that makes the X-form approach the target?

  2. Is the data sequence good enough? How do we know whether the data sequence is good enough?

We first discuss sufficiency of data, then further discuss the learning mechanism.

Data Sufficiency
A learning machine needs data, and data drive learning: more data, more driving. But data are expensive. It would be nice to use less data to do more, if possible. More importantly, we need to understand which data are used for which purpose.

As we already know, learning is actually about getting one good X-form. But an X-form is normally quite complicated and quite hard to get. How can a mechanical learning method get it? Mechanical learning is not as smart as a human; it only follows certain simple and fixed rules. In order for a mechanical learning method to get a complicated X-form, sufficient data are necessary. But what are sufficient data? The good thing is that the X-form itself gives a good description of such data.

We already know that an X-form and all its sub-forms give perception bits. This tells us that an X-form and all its sub-forms describe the structure of the black set. To tell one X-form apart, the least data necessary are 2: one datum in the black set, and another not in the black set. Of course, just 2 data are not sufficient to describe an X-form. However, what if for each sub-form we can find such a pair of data, one in and one out? It turns out that all such pairs together form a very good description of the X-form. This is why we have the following definitions.

Definition 5.4 (Data Sufficient to Support an X-form)

Suppose we have an X-form and all its sub-forms. For a set of base patterns, if for any sub-form there is at least one base pattern in the set that is in the black set of the X-form but not in the black set of that sub-form, we say the data set is sufficient to support the X-form. That is to say, for each sub-form, we have a datum that is in the black set of the X-form but not in the black set of the sub-form.

When we do sampling as in definition 5.1, if the sampling includes data sufficient to support an X-form, then the data sequence has this property: for each sub-form, there is at least one datum in the sequence that is in the black set of the X-form but not in the black set of that sub-form. For such a data sequence, we say the data sequence is sufficient to support the X-form.

Data sufficient to support means: for each sub-form of an X-form, there is at least one datum to tell the learning machine, "this is only a sub-form; it is good, but not good enough". With such information, the learning method can carry learning further mechanically.
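
A minimal Python sketch of this condition, stated directly at the level of black sets (function and variable names are ours):

from typing import FrozenSet, Iterable, Tuple

BasePattern = Tuple[int, ...]

def sufficient_to_support(
    form_black: FrozenSet[BasePattern],
    subform_blacks: Iterable[FrozenSet[BasePattern]],
    data: FrozenSet[BasePattern],
) -> bool:
    """Definition 5.4 on black sets: for every sub-form, the data must contain
    a base pattern that is in the black set of the X-form but not in the black
    set of that sub-form."""
    return all(
        any(d in form_black and d not in sub_black for d in data)
        for sub_black in subform_blacks
    )

# Usage: a form whose black set has 3 base patterns and which has two sub-forms.
form = frozenset({(1, 0), (0, 1), (1, 1)})
subs = [frozenset({(1, 0)}), frozenset({(0, 1), (1, 1)})]
print(sufficient_to_support(form, subs, frozenset({(1, 0), (0, 1)})))  # True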

Data sufficient to support an X-form provide information from inside the X-form. But we also need information from outside the X-form. To do so, we define data sufficient to bound an X-form. To state this more easily, we introduce some terms first. For 2 X-forms, if every base pattern in the black set of the first is also in the black set of the second, we say the second is over the first (this is equivalent to saying that the black set of the second contains the black set of the first). For 2 X-forms, if there is a base pattern in the black set of the first that is not in the black set of the second, we say the first is out of the boundary of the second (this is equivalent to saying that the black set of the first is not a subset of the black set of the second).

Definition 5.5 (Data Sufficient to Bound an X-form)

Suppose we have an X-form and all its sub-forms. If, for any sub-form and for any X-form that is both over that sub-form and out of the boundary of the original X-form, there is at least one base pattern in the data set that is in the black set of that X-form but not in the black set of the original X-form, we call the data set sufficient to bound the original X-form.

When we do sampling as in definition 5.1, if the sampling includes data sufficient to bound an X-form, then the data sequence has this property: for each X-form that is both over some sub-form and out of the boundary of the original X-form, there is at least one datum in the sequence that is in the black set of that X-form but not in the black set of the original X-form. For such a data sequence, we say it is sufficient to bound the X-form.

Data sufficient to bound means: for any X-form that is over a sub-form and out of the boundary of the original X-form, there is at least one datum to tell the learning machine, "this X-form is not good, it is out of the boundary". With such information, the learning method can carry learning further mechanically.

Examples of Data Sufficient to Support an X-form:
1. Consider one X-form; it has exactly two sub-forms, so the listed data set is sufficient to support it.
2. Consider one X-form with no sub-form; each of the listed data sets is sufficient to support it.
3. Consider one X-form with exactly two sub-forms; the first listed data set is sufficient to support it, and so is the second.

Learning Strategies and Learning Methods
Again, learning is a dynamics of X-forms, from one X-form to another. X-forms are complicated. How can such a dynamics reach the desired X-form? The dynamics is determined by learning methods and learning strategies. We discussed learning methods above, which are described well by equation (5): a learning method is a set of rules on how to move from one X-form to another. A learning strategy is at a higher level than a learning method. It governs aspects such as: which X-forms to consider? what is the general approach to forming X-forms? pre-set some X-forms, or build everything from scratch? etc. So, strategy governs method. Also, different strategies work for different kinds of data, and different strategies need different learning capabilities.

We should emphasize here: learning is a complicated thing, and one strategy and one method cannot fit all situations. There must be many strategies and even more methods. We are going to discuss some strategies and methods. Still, there should be some common rules across these strategies and methods.

One very important property of X-forms is: one processing (equivalently, one black set) can be expressed by more than one X-form (normally, many). This property plays a very important role in learning. Let's first see one simple example: consider a set of base patterns.

This set has a certain total number of base patterns. What X-form could express it? The easiest one is the X-form that simply lists all of these base patterns.

Surely that is one X-form expressing the set. Now, suppose we can write these base patterns as subjective expressions of just two base patterns.

Then we can further form a second X-form built upon just those two base patterns.

We can see that the two X-forms express the same black set. But the 2 X-forms are very different. In fact, the second is more complicated than the first and has a higher structure; at the same time, the second is built upon far fewer base patterns, just the two, while the first is built upon all the base patterns of the set.

This is very crucial: to learn , we might have to use all base patterns , while to learn , in principle, we might only use 2 base patterns (just might, might need more, depends on learning method). And, not only that, it is much more. is just a collection of some base patterns, and no relationship between these base patterns are found and used, while is built on many the relationship between base patterns (of course subjectively). In this sense, comparing to ,