Matrix Interpolation Problem

Dharm Prakash Singh (prakashdharm@gmail.com) and Amit Ujlayan (iitk.amit@mail.com)
Department of Applied Mathematics, School of Vocational Studies and Applied Sciences,
Gautam Buddha University, Gr. Noida - 201312, India.
Abstract

In this paper, a new class of two-variate interpolation problems, namely, the matrix interpolation problem (MIP) with respect to a set of pairwise distinct interpolation points (or positions) corresponding to a given real matrix of the order , is proposed, where . The MIP cannot be poised in the -dimensional space of two-variable polynomials, of degree at most , for almost every . Therefore, an approach to two-variate polynomial interpolation is introduced and the existence of two new -dimensional subspaces of two-variable polynomials is established in which the MIP can always be poised. Some formulae are presented to construct the respective polynomials, which provide the associated transformations from the space of real matrices to the established subspaces. Further, it is proved that these polynomial maps are isomorphisms. Some examples are included to demonstrate and verify the results. The theoretical findings provide two new finite-dimensional polynomial subspaces of two variables and two isomorphisms.

Keywords: interpolation, polynomial interpolation in two variables, matrix interpolation problem
Mathematics Subject Classification (2010): 41A05, 41A63, 65D05, 65F99

1 Introduction

Let denote the space of -variate polynomials and be the -dimensional subspace of , of degree at most , with real coefficients, where , carl:1; Yuan:1; Gasca:1. If is a finite set of pairwise distinct points in , where , then for the given subspace and some real constants , the problem of finding a polynomial with respect to , such that

(1)

holds, is called the Lagrange interpolation problem. The interpolation points are also known as nodes or interpolation sites. The space is called the interpolation space, Gasca:1; Thomas:1.

The problem (1) is said to be poised with respect to the set in if there exists a unique polynomial for which condition (1) holds, carl:1; Yuan:1; Gasca:1. More precisely, the problem (1) is poised (or correct) in with respect to the set if and only if and the sample matrix Olver:1; Kamron:1 is non-singular for every choice of the basis of . For the given subspace, an interpolation problem is said to be singular if the determinant of the associated sample matrix is zero for every choice of the set , regular if the determinant of the associated sample matrix is non-zero for any choice of the set , and almost regular if the determinant of the associated sample matrix is zero only on a subset of of measure zero, Gasca:1; Thomas:1; Kamron:1; Olver:1; Mehaute:1.

1.1 Motivation and formulation of the problem

Let denote the space of all matrices of the order over the field and let denote the matrix , where , axler15; Alfio.

Suppose is a matrix whose elements are restricted by the relation for all , . If we consider the problem of constructing the corresponding matrix , then . Generally, for any given two-variate polynomial (or relation), a matrix of a given finite order can be constructed in a similar manner. But can we find a general two-variate expression for every element of a given matrix of some finite order?

Matrices are also used on a sufficiently large scale to solve problems in scientific and engineering computing. These computational problems are classical and constitute an active area of research, as their solutions build a major section of scientific and engineering computations on modern computers. Numerical algorithms have been developed for matrix computations, and intense progress has been made in the design of efficient algorithms for polynomial computations in recent decades. Matrices and polynomials are very useful and broadly applicable in algebraic computations, symbolic computations, and numerical computing, particularly in image processing, signal processing, control theory, algebraic computing, coding, and partial differential equations, imp:2; imp:1; imp:3.

In image processing, a matrix is the most common data structure to represent the complete information of an image, independently of the contents of the image data. Digital image functions, used for computerized image processing, are usually represented by matrices whose coordinates are natural numbers. An image captured by the human retina or a TV camera can be sampled by a continuous image function , where the domain of the image function is the region such that , are the maximal image coordinates in the plane. Generally, the domain and range of the image function are limited (or finite), and it is usually assumed that the image function is ‘well-behaved’, image:1; image:2.

One of the important levels of data representation is the geometric representation of the data, which holds the information about two-dimensional and three-dimensional shapes. It is needed for the transition between natural raster images and the data used in computer graphics. It is also useful in general and complex simulations of the influence of illumination and the motion of objects. Usually, the quantification of a geometrical shape is much harder but very important, image:1. Moreover, surface interpolation plays a very important role in computer-assisted medical diagnosis and surgical planning through the construction (or reconstruction) of three-dimensional smooth interpolating surfaces from the given (or constructed) data of the images or pictures, surface:1.

In addition, the matrix is a data structure used to store information in many models of signal processing image:1, computer graphics image:2, thermal engineering thermal:1, and computer-aided design in control systems CAD:1; CAD:2, to name a few. The geometrical interpretation of such information using three-dimensional smooth interpolating surfaces is very useful in the different stages of fundamental and applied research. Polynomials are widely used to approximate continuous functions and complicated curves, as they are continuously differentiable and integrable and generate smooth interpolating surfaces (with respect to the sampled data set). Therefore, the interest is in generating a unique two-variate polynomial solution of the problem defined as follows:

Definition 1.1

Let be a given matrix. For a given subspace of and the set of pairwise distinct interpolation points (or nodes), the problem of finding a polynomial , such that

(*)

holds, is called the Matrix Interpolation Problem (MIP).

For the given matrix , the MIP (*1.1) cannot be poised in for almost every . This can be demonstrated as follows:

Illustration 1.1

Let be any matrix. Then, for the given three data points , , and in , the MIP can be given as

(2)

Since , the given set of data points may define an interpolating polynomial . Let us consider that,

(3)

for some coefficients and . If the polynomial (3) satisfies the MIP (2), then the coefficients must satisfy the system of equations

(4)

The system of equations (4) is equivalent to

(5)

Here, the determinant of the coefficient matrix (or the sample matrix) is zero. Thus, there are two possible cases:

Case-1: If , the system of equations (4) has infinitely many solutions, i.e., , where is an arbitrary real constant.

Case-2: If , the system of equations (4) has no solution.

Therefore, the MIP (2) cannot be poised in for any given matrix in .
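The degeneracy in this illustration can be checked numerically. The sketch below assumes, purely as an illustration, that the three nodes are the grid positions (1, 1), (1, 2), (1, 3) of a 1×3 matrix and that the 3-dimensional interpolation space is spanned by the monomials {1, x, y}; these concrete choices are ours, not taken from the paper's notation. Because the nodes are collinear, two columns of the sample matrix coincide and the determinant vanishes.

```python
# Minimal numerical sketch, assuming the nodes are the grid points of a
# hypothetical 1x3 matrix and the basis of the 3-dimensional space is
# {1, x, y}; both choices are illustrative assumptions.
import numpy as np

nodes = [(1, 1), (1, 2), (1, 3)]                  # collinear grid positions
sample = np.array([[1.0, x, y] for (x, y) in nodes])
print(np.linalg.det(sample))                      # vanishes: the 1-column and x-column coincide
```

The same computation with three non-collinear nodes gives a non-zero determinant, which is exactly why poisedness depends on the geometry of the nodes and not merely on their number.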

Illustration 1.2

Let be any matrix. Then, for the given six data points , , , , , and in , the respective MIP can be given as

(6)

Since , the given set of data points may define an interpolating polynomial . Let us consider that,

(7)

for some coefficients . If the polynomial (7) satisfies the MIP (6), then the coefficients must satisfy

(8)

The system of equations (8) reduces to

(9)

Again, the determinant of the coefficient matrix (or the sample matrix) is equal to zero and there are two possible cases:
Case-1: If , then the system of equations (9) has infinitely many solutions.
Case-2: If , then the system of equations (9) has no solution.

Therefore, the MIP (6) with respect to the nodes , and cannot be poised in for any given matrix in .
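This second illustration can be checked under the same kind of illustrative assumptions: the six nodes are the grid points of a 2×3 matrix and the 6-dimensional space of polynomials of total degree at most 2 has basis {1, x, y, x², xy, y²}. All six nodes satisfy (x − 1)(x − 2) = 0, so the columns of the sample matrix are linearly dependent.

```python
# Sketch under assumed node/basis choices: the six nodes are the grid
# points of a 2x3 matrix; the basis of the degree-<=2 space is
# {1, x, y, x^2, xy, y^2}.
import numpy as np

nodes = [(i, j) for i in (1, 2) for j in (1, 2, 3)]
sample = np.array([[1.0, x, y, x * x, x * y, y * y] for (x, y) in nodes])
# Every node satisfies x^2 - 3x + 2 = 0, so the columns x^2, x, 1 are
# linearly dependent at the nodes and the sample matrix is singular.
print(np.linalg.matrix_rank(sample))   # 5 < 6
```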

Remark 1.1

The analysis of the sample matrix of the order corresponding to the MIP (*1.1) such that is analytically intractable for large . However, using a suitable computer program, it has been verified that the determinant of such sample matrices is zero for (higher dimensions require more execution time). We conjecture that this is true for all .
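The computer verification mentioned in this remark can be reproduced along the following lines. The script assumes, as in the illustrations above, that the nodes form the integer grid of an m×n matrix and that the candidate space is the full space of polynomials of total degree at most d, taken at sizes where its dimension (d+1)(d+2)/2 equals mn; these choices are our reading of the setting and may differ from the authors' program.

```python
# Hedged reconstruction of the singularity check: for each grid size
# m x n with m*n == (d+1)*(d+2)//2, the sample matrix of the
# total-degree-d monomial basis at the grid nodes (i, j) is rank
# deficient, since the product (x-1)...(x-m) has degree m <= d and
# vanishes at every node.
import numpy as np

def sample_matrix(m, n, d):
    nodes = [(i, j) for i in range(1, m + 1) for j in range(1, n + 1)]
    basis = [(a, s - a) for s in range(d + 1) for a in range(s + 1)]
    assert len(basis) == len(nodes)          # square sample matrix
    return np.array([[x ** a * y ** b for (a, b) in basis]
                     for (x, y) in nodes], dtype=float)

for (m, n, d) in [(1, 3, 1), (2, 3, 2), (2, 5, 3), (3, 5, 4)]:
    S = sample_matrix(m, n, d)
    print((m, n), np.linalg.matrix_rank(S) < S.shape[0])   # True: singular
```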

1.2 Brief review of literature

The Lagrange polynomial interpolation in one variable is a classical topic with a well-established theory. The univariate Lagrange and Newton interpolation formulae, Alfio; Atkinson:1; Jain:1, categorically provide the solution to univariate interpolation problems. Polynomial interpolation in more than one variable is a more difficult problem, and no comparably simple and effective theory is available for it, carl:1; Yuan:1; Gasca:1; Olver:1.

In the case of the one-variable polynomial interpolation problem, the position of the data points does not matter. For , the problem (1) with respect to distinct nodes is always poised in , where . The polynomials in one variable (up to a certain degree) form a Haar space, and plenty of Haar spaces exist for one-variable polynomial interpolation problems. However, on moving from one variable to several variables, the basic nature and structure of polynomial interpolation change thoroughly, and there does not exist any Haar space of dimension higher than one. In the multivariate case, for a given finite-dimensional polynomial space of a certain degree, the poisedness of an interpolation problem depends not only on the number of data points but also, significantly, on the actual position and configuration (or geometry) of the data points, carl:1; Yuan:1; Gasca:1; Thomas:1.

For the given set of nodes, if the problem (1) is poised in some space then the construction of the respective polynomial in can be completed using Lagrange interpolation formulae (or polynomials), Yuan:1; Kamron:1.

For a given space , the problem (1) may not be poised with respect to a particular set of pairwise distinct nodes in . However, there exists at least one (possibly infinitely many) set of pairwise distinct nodes in for which the problem (1) is always poised in , Yuan:1; Gasca:1; Thomas:1. Also, due to the Kergin interpolation kergin:1, for any given set of pairwise distinct nodes in , there exists at least one subspace of in which the problem (1) can always be poised, Yuan:1; Gasca:1. Therefore, the problem (1) can never be singular.

In the multivariate case, there are some essential difficulties in solving Lagrange interpolation problems by polynomials. There are infinitely many linearly independent polynomials of equal total degree, so the main problem is to choose the right polynomial subspace, Yuan:1. For a non-trivial problem, the construction of new interpolation points with respect to some given polynomial space, or the identification (or construction) of the correct polynomial space with respect to some given set of pairwise distinct nodes, is a challenging and difficult problem, Thomas:1; Gasca:1.

1.3 Objectives

In the case of the MIP (*1.1), the set of interpolation points is fixed. Therefore, due to Kergin, there must exist at least one subspace of in which the MIP (*1.1) can always be poised for all . Establishing the existence of at least one subspace in which the MIP (*1.1) can always be poised for all is the main objective of this paper. The other objectives are to present some formulae to construct the respective polynomials in the established subspaces and to investigate some algebraic properties of the obtained polynomial maps (from to the established subspaces).

Remark 1.2

All interpolation methods assume certain properties of the sought structure, such as smoothness, the nonexistence of singularities in a sufficiently large vicinity, and so on. If these assumptions are violated, the interpolation error may become drastic, mft1; mft2; mft3. We assume smoothness and the non-existence of singularities of the sought structure in the given domain. The present work is carried out in the algebraic direction; the approximation or computation of error is not part of this article.

The paper is organized as follows. Section 1 covers some basic concepts of multivariate interpolation problems and discusses the motivation and formulation of the MIP (*1.1) for a given matrix , which does not possess a unique solution in . In section 2, a class of -dimensional subspaces of , of degree at most , with two parameters is introduced; for the given matrix , the constructive existence of two particular subspaces and the poisedness of the proposed MIP (*1.1) in these subspaces are proved, respectively. In section 3, some algebraic properties of the obtained polynomial transformations are discussed. In section 4, three examples are presented to demonstrate the results geometrically and to verify some linear algebra results. Finally, section 5 concludes the paper and section 6 discusses future perspectives.

2 Existence of the polynomial subspaces of two variables and poisedness of the MIP

In this section, an -dimensional space of two-variable polynomials, of degree at most , with real coefficients is defined. It is proved that the defined space is a collection of -dimensional subspaces of . The existence of two particular subspaces in this class is established, and it is proved that the MIP is always poised in them for all .

Definition 2.1

For some scalars and , let denote the space of all two-variable polynomials in and , of degree at most , with real coefficients, such that if , then

(10)

Here, dim .

Theorem 2.1

The space forms an -dimensional subspace of .

Proof: The space is a -dimensional non-empty subset of the vector space . Suppose and are two polynomials such that

and let be a scalar. Then,

i.e., .

i.e., . This completes the proof.

Corollary 2.1

The spaces and are isomorphic vector spaces.

Remark 2.1

The standard basis of the vector space is given as

Theorem 2.2

Let be a given matrix. Then there exists a unique polynomial which satisfies the MIP (*1.1).

Proof: The proof of the theorem consists of two parts, existence and uniqueness. We discuss each separately, step by step, as follows:

Existence: Let be a given matrix and be a sequence of length , . Therefore, the set represents the set of pairwise distinct interpolation points (or nodes) in . Again, let be a bijective linear map defined as

(11)

Thus, the set of pairwise distinct nodes can be transformed into the set

(12)

and the corresponding MIP (*1.1) is transformed into

(13)

The consecutive nodes in (12) are equidistant with step size 1, by (11). On applying the Newton-Gregory forward interpolation formula axler15; Alfio; Atkinson:1, there exists a polynomial of degree at most with respect to the set of nodes (12) which satisfies the MIP (13) and can be given as

(14)

where is the th forward difference of with respect to the set of nodes (12). Using (11), equation (14) becomes

(15)

The right-hand side of equation (15) is a polynomial in the two variables and of degree at most , i.e., . This completes the proof of the existence part.

Uniqueness: From the proof of existence, there exists a polynomial with respect to the set , say

(16)

This polynomial satisfies the MIP (*1.1). Thus, we can write

(17)

where , are some coefficients. Therefore, the coefficients must satisfy the system of equations

(18)

On comparing the system of equations (18) with the standard form , we get

(19)

using the standard Vandermonde determinant Vondermonde:1. Hence, the system of equations (18) has a unique solution. This completes the proof.

Corollary 2.2

For all , there exists a unique such that for all , , and is given by (15), where is the th forward difference of for Table 1.

Table 1: Forward row-wise arrangements of the elements of the matrix .
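The row-wise construction of Theorem 2.2 can be sketched numerically. The bijection sending the node (i, j) of an m×n matrix to the integer k = n(i−1) + j is our assumed concrete form of the map (11); the matrix entries are then listed row by row as in Table 1, and the Newton-Gregory forward formula in k is read as a polynomial in the two variables via k = n(x−1) + y. At the grid nodes the binomial coefficients can be evaluated with integer arguments.

```python
# Sketch of Theorem 2.2 (assumed bijection k = n*(x-1) + y): take forward
# differences of the row-wise listing and evaluate the Newton-Gregory
# forward polynomial at the grid nodes.
from math import comb
import numpy as np

def row_wise_interpolant(A):
    m, n = A.shape
    values = A.flatten(order="C").astype(float)   # a_11, a_12, ..., a_mn
    diffs, cur = [values[0]], values.copy()
    for _ in range(m * n - 1):                    # Delta^r of the first entry
        cur = np.diff(cur)
        diffs.append(cur[0])
    def p(x, y):                                  # evaluation at grid nodes
        k = n * (x - 1) + y
        return sum(d * comb(k - 1, r) for r, d in enumerate(diffs))
    return p

A = np.array([[1, 4, 9], [16, 25, 36]])           # a_ij = (3*(i-1)+j)^2
p = row_wise_interpolant(A)
print(all(p(i, j) == A[i - 1, j - 1]
          for i in range(1, 3) for j in range(1, 4)))   # True
```

For this example matrix the forward differences of order three and higher vanish, and the construction recovers p = k² = (3(x−1)+y)², well within the degree bound mn − 1.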
Theorem 2.3

Let be a given matrix. Then there exists a unique polynomial which satisfies the MIP (*1.1).

Proof: The proof is similar to that of Theorem 2.2. Using the bijective linear map

(20)

in place of (11) completes the proof.

Corollary 2.3

For all , there exists a unique such that for all , , given as

(21)

where is the th forward difference of for Table 2.

Table 2: Forward column-wise arrangements of the elements of the matrix .
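The only change in the column-wise case of Theorem 2.3 is the bijection; we assume it lists the entries column by column, as in Table 2, before the same forward-difference construction is applied. The contrast between the two assumed orderings is easy to see on a small example:

```python
# Illustrative contrast of the two orderings assumed for Theorems 2.2 and
# 2.3: row-wise (Table 1) versus column-wise (Table 2) listings of a
# 2 x 3 matrix.
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
print(list(A.flatten(order="C")))   # row-wise:    [1, 2, 3, 4, 5, 6]
print(list(A.flatten(order="F")))   # column-wise: [1, 4, 2, 5, 3, 6]
```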
Remark 2.2

Let be a given matrix. Theorem 2.2 ensures the existence of the unique polynomial in that satisfies the MIP (*1.1). Suppose is the required polynomial of the form (10). The conditions , where , lead to a non-homogeneous linear system of equations of the form , where , and is the coefficient matrix such that . Using a suitable method for solving non-homogeneous linear systems, the coefficients , can be determined uniquely. In a similar manner, for the given matrix , the unique polynomial satisfying the MIP (*1.1) can be constructed.
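The direct construction described in this remark can be sketched as a linear solve. We assume, consistently with the uniqueness proof of Theorem 2.2, that the sought polynomial is written in powers of the transformed variable t = n(x−1) + y, so that the interpolation conditions become a Vandermonde system on the nodes t = 1, …, mn, which is non-singular and determines the coefficients uniquely.

```python
# Hedged sketch of the remark: write the polynomial as
# c_0 + c_1 t + ... + c_{mn-1} t^{mn-1} with t = n*(x-1) + y (assumed
# basis) and solve the resulting Vandermonde system V c = b.
import numpy as np

A = np.array([[2.0, 0.0, 5.0],
              [1.0, 3.0, 4.0]])                  # an arbitrary 2 x 3 example
m, n = A.shape
b = A.flatten(order="C")                         # row-wise right-hand side
t = np.arange(1, m * n + 1)
V = np.vander(t, increasing=True).astype(float)  # V[k, r] = t_k ** r
c = np.linalg.solve(V, b)                        # unique: det(V) != 0
print(np.allclose(V @ c, b))                     # interpolation conditions hold
```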

Remark 2.3

For the given matrix , the MIP (*1.1) is always poised in and , i.e., the MIP (*1.1) can never be singular.

3 Some properties of the obtained polynomial maps

By definition, two vector spaces are isomorphic if and only if there exists an isomorphism from one vector space onto the other; an isomorphism is an invertible linear map, axler15. By virtue of Theorems 2.2 and 2.3, there exist unique polynomials in and , respectively, which satisfy the corresponding MIPs for all . Using the construction formulae (15) and (21), we define the polynomial maps from to and from to , respectively. The linearity and invertibility of these maps are also discussed.

For the given matrix , let and represent the unique polynomials and which satisfy and , respectively, for all , . These notations, with the described meaning, will be used in the rest of the paper.

Theorem 3.1

Let and be some scalar. Then the following properties hold:

  1. in .

  2. in .

Proof: Let be two matrices given as and . Then and for some scalar . Again, there exist unique polynomials , , , and with respect to the set of nodes which satisfy the MIPs

(22)
(23)
(24)
(25)

respectively. Since,

The poisedness of the MIPs (24) and (25) with respect to the set completes the proof.
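The linearity claimed in Theorem 3.1 can be checked numerically. With the Vandermonde construction sketched earlier (coefficients in powers of t = n(x−1) + y, an assumed basis), the coefficient vector depends linearly on the matrix, so the interpolant of aA + B is a times the interpolant of A plus that of B.

```python
# Hedged numerical check of linearity: under the assumed row-wise
# Vandermonde construction, coefficients depend linearly on the matrix.
import numpy as np

def coeffs(M):
    m, n = M.shape
    t = np.arange(1, m * n + 1)                       # transformed nodes
    V = np.vander(t, increasing=True).astype(float)
    return np.linalg.solve(V, M.flatten(order="C").astype(float))

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
a = 2.5
print(np.allclose(coeffs(a * A + B), a * coeffs(A) + coeffs(B)))   # True
```

By the uniqueness of the interpolant, equality of the coefficient vectors is exactly the statement that the polynomial map is linear.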

Theorem 3.2

For all , there exists a unique matrix such that for all , .

Theorems 3.1 and 3.2 prove the linearity and the invertibility (or bijectivity) of the polynomial map from to , respectively. On combining Theorems 2.2, 3.1, and 3.2, an isomorphism from the space to the subspace can be defined as follows:

Theorem 3.3

The polynomial map defined by

(26)

is an isomorphism.

Definition 3.1

The inverse linear map is given by

(27)
Remark 3.1

In a similar manner, it can be proved that the polynomial map defined by for all is an isomorphism.

4 Numerical verification

In this section, three examples are included for geometrical and numerical verification of the results.

Example 4.1

Let , be two matrices, and . Then, there exist unique , , , and given as

(28)
(29)
(30)
and (31)

respectively. Surface diagrams of the polynomials (28), (29), (30), and (31) in the respective subspaces are given as follows:

Figure 1: Surface diagrams of the interpolating polynomials (28), (29), (30), and (31) in the respective subspaces, indicating the data points.
Note 4.1

The MIP corresponding to the matrices has no solution in . However, the MIP corresponding to the matrix does not satisfy the necessary condition on the nodes in or .

Example 4.2

(a) Let us consider the map defined as

Suppose and are two matrices in and is a non-zero real constant. Then,

i.e., the given map is linear.

Again, the standard bases of the vector spaces and are and respectively. Therefore,

The corresponding co-ordinate matrix with respect to the bases and is

Here, , i.e., the given map is invertible.

(b) The inverse linear map is given by

For the standard bases of the respective spaces, we get

The corresponding co-ordinate matrix with respect to the bases and is

Clearly,