Robust Affordable 3D Haptic Sensation via Learning Deformation Patterns
Abstract
Haptic sensation is an important modality for interacting with the real world. This paper proposes a general framework for inferring haptic forces on the surface of a 3D structure from internal deformations using a small number of physical sensors instead of employing dense sensor arrays. Using machine learning techniques, we optimize the sensor number and placement and are able to obtain high-precision force inference for a robotic limb using as few as 9 sensors. For the optimal and sparse placement of the measurement units (strain gauges), we employ data-driven methods based on data obtained by finite element simulation. We compare data-driven approaches with model-based methods relying on geometric distance and on information criteria such as entropy and mutual information. We validate our approach on a modified limb of the "Poppy" robot and obtain 8 mm localization precision.
I Introduction
We are witnessing a rapid development of robot technologies. Actuators and sensors have become increasingly compact and powerful. Nevertheless, robots are still far from matching human capabilities, especially when it comes to touch sensation. Haptic information is, however, essential for reliable interaction with the real world. It becomes evident that robots need to learn interaction patterns for mastering real-world challenges. For this, haptic sensors have to be robust in order to sustain long-lasting experiments. Besides robustness, other important aspects of robotic hardware are its price, availability, and performance. A low cost makes robotic technologies widely accessible and thus facilitates research.
Currently available large-area haptic sensor systems [1, 2] are expensive, complicated to integrate, and not robust enough to sustain long-term use. Array-shaped sensors [3] can localize stimulations but consist of large numbers of elements and require many wires. To reduce the hardware complexity, methods such as anisotropic electrical impedance tomography (EIT) [4] have been proposed. However, both types are generally not robust because they cover the surface like a skin, which makes them vulnerable to impacts with hard or sharp objects; a tiny crack can destroy the functionality of the entire sensor. The small-scale BioTac sensor [5, 6], which integrates multiple sensing modalities, has attracted attention since 2007, but due to structural limitations its functional area is only the finger pad instead of a whole 3D surface. TacTip [7] is an optical tactile sensor whose size ranges from a human fingertip to a human limb; it can detect contacting object shapes accurately, but force information is not part of the system.
In this paper, we aim at providing a low-cost, robust, and sufficiently precise method for inferring haptic forces on the surface of a 3D structure. Instead of relying on a dense array of sensors on the surface of the robot, we opt for a small number of physical sensors measuring internal deformations. This offers several conceptual advantages. First, the system is robust to environmental impacts because the sensors can be placed inside the structure. Second, the surface shape can be freely designed. Third, only a few channels have to be read out, which reduces both the energy consumption and the data rate.
On the downside, a sensor measurement does not directly correspond to the impacting force; instead, an inference mechanism is required to estimate the force. We propose a data-driven approach using machine learning algorithms to perform this inference efficiently. In order to require as few sensors as possible, we employ several optimization schemes to determine the optimal sensor placement.
The contributions of the paper are as follows. On the theory side, we:

- propose a new way of implementing a whole-surface haptic sensor,
- provide a method for determining the optimal number and positions of sensors using finite element simulations.

On the application side, we:

- provide an assembly method for attaching the strain gauges,
- design a hardware system to systematically collect data,
- implement the proposed system on a robotic limb.
II Method
We propose a method to implement a whole-surface haptic force sensor using a minimal number of deformation sensors (strain gauges) inside a 3D structure. Using machine learning, the haptic forces are inferred from the few measured deformations, as illustrated in Fig. 1. We assume that the surface is constructed such that it has an inner rigid support and a flexible outer shell.
In order to place the sensors optimally, we need to get access to data describing how forces applied to the structure propagate into deformations measurable by the sensors. We do this by finite element simulation. In our case, the Ansys [8] simulation tool was used. See Fig. 2.
Given the simulated deformation patterns for many force impacts (the dataset), the problem can be stated as follows: let d ∈ ℝᴺ denote the deformations (displacements) of the N points on the inside surface of the shell and p the position of the applied force, recorded for many different force locations. We are looking for a subset S of k locations among all N possible points such that the force locations can be well inferred, i.e.:

‖g(d_S) − p‖ ≤ ε,   (1)

where g is a learned mapping function, d_S are the deformations at the selected locations, and ε is the tolerated error.
Our approach to approximate this mapping is composed of the following steps:

- collect a dataset of deformations from the finite element simulation, see Section II-A,
- filter the possible sensor locations according to physical constraints, see Section II-B,
- learn a nonlinear regression model (SVR) to infer the haptic force positions for unseen stimulation locations, see Section II-C,
- select the number and positions of sensors needed for a certain predetermined accuracy, where we evaluate different techniques, see Section II-D,
- validate the prediction quality obtained from the differently selected sensor positions, see Section III-A.
Afterwards, we apply the optimally selected sensor positions and the inference model to a real robotic limb (Section III).
II-A Finite Element Simulation
In order to get the deformation patterns for a certain impact force, we use a finite element simulation of the 3D structure. The Ansys simulation tool [8] allows importing the 3D CAD description of the structure to be equipped with force sensation. We can apply forces at every location on the surface and record the deformations of all positions on the structure, discretized in a fine manner. The deformation pattern is illustrated in Fig. 2. For this example structure, we obtain around 4000 different force positions and 3000 different sensor positions.
II-B Filtering Feasible Sensor Positions
From all positions on the inside of the 3D structure, we need to filter those that allow for physical sensor placement. Several constraints are imposed by the sensor size, the placement restrictions, and the detection range. The strain gauge sensors cannot be placed on edges since they need a relatively flat surface. Placing them near highly rigid support structures is also disadvantageous because only small deformations occur there.
In Fig. 3(a) the unfolded surface of our example structure is displayed. The rigid support is at the top and the bottom of the structure, so we discard positions close to it. In order to get rid of candidate positions at the edges, we use a k-Nearest-Neighbor (kNN) criterion: for each candidate position, we consider the center of mass of its neighborhood, see Fig. 3(b). If the center of mass lies within a certain radius of the candidate, the position is kept (red points in Fig. 3(c)). In our example, around 2100 points remain.
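The center-of-mass criterion above can be sketched in a few lines. This is a minimal illustration on synthetic points; the function name, the neighborhood size `k`, and the radius scaling are our own choices for the sketch, not values from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def filter_edge_points(points, k=10, radius_frac=0.5):
    """Keep candidate positions whose k-NN center of mass stays close by.

    Near an edge, the neighborhood is one-sided, so its center of mass is
    pulled away from the candidate point; for interior points it stays
    nearby. `radius_frac` scales the mean neighbor distance into the
    acceptance radius (a tunable assumption, not a value from the paper).
    """
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)  # first neighbor is the point itself
    keep = np.empty(len(points), dtype=bool)
    for i in range(len(points)):
        com = points[idx[i, 1:]].mean(axis=0)       # center of mass of neighbors
        radius = radius_frac * dists[i, 1:].mean()  # acceptance radius
        keep[i] = np.linalg.norm(com - points[i]) <= radius
    return keep
```

On a regular grid, interior points survive this filter while corner and edge points are discarded, which matches the behavior shown in Fig. 3(b)-(c).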
II-B1 Reducing the Number of Candidates using Compressive Sensing
To make the optimal selection of sensor positions more efficient, we further reduce the number of candidates. Compressive sensing techniques [9] can be employed here; they are optimized to accurately reconstruct sparse or compressible signals from a limited number of measurements. A lossless reduction is not possible in our case, but we can bound the maximal tolerated reconstruction error.
We use PCA with QR pivoting [10]. At this point, we only give an intuitive understanding and elaborate the details below in Section II-D2. The method uses PCA to compute the principal components, which explain the variance in decreasing order. QR pivoting then selects those positions (sensors) that are most important for the top principal components.
In Fig. 3(c) the linear reconstruction error (unexplained variance) as a function of the number of selected sensor positions is displayed. In our example, for a fixed tolerated error, we can select 407 out of the 2162 candidate positions. These points are also marked in black in Fig. 3(a).
So far, we have automatically selected a set of candidate positions.
II-C Force Position Inference (SVR)
From the set of candidate positions, the task is now to find a smaller subset that is sufficient to infer forces applied anywhere on the structure. In this paper, we restrict ourselves to a method that infers the position of a single impacting force.
Support Vector Machines (SVM) are popular kernel-based algorithms [11] combining the strength of nonparametric techniques with the efficient storage requirements of parametrized models. In this paper, we face a regression task, for which the Support Vector Regression (SVR) method [12] is well suited. The idea is that the input is nonlinearly mapped into a high-dimensional feature space where a linear regression with maximum margin is performed.
The model is

f(x) = ⟨w, φ(x)⟩ + b,   (2)

where φ is the nonlinear feature map and w, b are the parameters. The regression error for each example is defined as max(0, |y − f(x)| − ε), i.e. only deviations from the target larger than ε count. In our case, the input x is the vector of deformations at the selected sensor positions and the target y is the position of the applied force point. In fact, we use one SVR model for each of the target dimensions.
SVR is formulated as a minimization of the following function:

min_{w,b} ½‖w‖² + C Σᵢ max(0, |yᵢ − f(xᵢ)| − ε),   (3)

where the hyperparameter C controls the trade-off between the complexity of the regression model (the norm of w) and the error.
Interestingly, only scalar products of elements in the feature space are computed, such that one can directly express the scalar product using an appropriate kernel function k(x, x′) = ⟨φ(x), φ(x′)⟩. A popular kernel function is the radial basis function (RBF) or Gaussian kernel:

k(x, x′) = exp(−γ‖x − x′‖²),   (4)

with the hyperparameter γ controlling the sensitivity to distance. We use the Python sklearn [13] implementation. We choose RBF kernels and use k-fold cross-validation to select the optimal C, ε, and γ.
As a remark, using all candidate positions and the full training data, SVR achieves a small average test error (hyperparameters selected by 5-fold cross-validation).
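To make the regression step concrete, the sketch below trains one RBF-kernel SVR per target dimension, as described above, on a synthetic stand-in for the FEM dataset. The deformation model, the sensor count, and the hyperparameter values are illustrative assumptions, not the paper's data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the FEM dataset: force locations on a 100x100 mm
# patch and deformation readings at 9 fixed "sensor" positions. The reading
# is assumed to decay smoothly with the sensor-to-force distance.
locations = rng.uniform(0, 100, size=(500, 2))   # (x, y) of applied force, mm
sensors = rng.uniform(0, 100, size=(9, 2))       # 9 sensor positions
deform = np.exp(-np.linalg.norm(locations[:, None] - sensors[None], axis=2) / 30.0)

X_tr, X_te, y_tr, y_te = train_test_split(deform, locations, random_state=0)

# One RBF-kernel SVR per target dimension, as in the paper.
models = [SVR(kernel="rbf", C=100.0, gamma="scale", epsilon=0.5).fit(X_tr, y_tr[:, d])
          for d in range(2)]
pred = np.column_stack([m.predict(X_te) for m in models])
err = np.linalg.norm(pred - y_te, axis=1).mean()  # mean localization error, mm
```

Even with only 9 deformation channels, the smooth force-to-deformation map lets the SVR localize unseen stimulations far better than chance.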
II-D Optimal Sensor Placement
In this section, we propose different ways to select sensor positions, which can be broadly grouped into data-driven methods and geometry/model-based methods.
Data-driven methods make use of previously collected data and select the subset of sensors with which the regression model performs best. Geometry/model-based methods do not need access to the measured data; instead, they rely only on the geometric positions of the sensors. If sufficient data is available, the data-driven methods can be more accurate because they have access to the actual dependency between the deformations at the sensor locations and the force position.
In this paper, we propose to use a greedy SVR approach. As a comparison, we also provide results using a linear compressive sensing method and two geometry-based methods.
II-D1 Nonlinear Method: Greedy Support Vector Regression
In principle, we want to select the combination of sensors that performs best on average at inferring the force position for unseen stimulations. The problem is that we would need to search through all possible combinations, which is intractable for a large number of candidates. Thus we employ a greedy strategy: start with the best single sensor position, then add the second sensor position that gives the best combined performance, and so forth. Performance is measured by k-fold cross-validation, see Alg. 1.
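A minimal version of the greedy forward-selection loop might look as follows. For brevity, this sketch scores candidates on only the first target dimension with default SVR hyperparameters, whereas the paper's Alg. 1 cross-validates the full inference; the function name and data are ours.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def greedy_svr_selection(D, targets, n_sensors, cv=5):
    """Greedy forward selection of sensor (column) indices in D.

    At each step, add the candidate column that gives the best mean
    cross-validated SVR score for predicting the first target dimension.
    (A full version would average the score over all target dimensions.)
    """
    selected, remaining = [], list(range(D.shape[1]))
    for _ in range(n_sensors):
        best_score, best_j = -np.inf, None
        for j in remaining:
            cols = selected + [j]
            score = cross_val_score(SVR(kernel="rbf"), D[:, cols],
                                    targets[:, 0], cv=cv).mean()
            if score > best_score:
                best_score, best_j = score, j
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```

Each round refits the regressor for every remaining candidate, so the cost is quadratic in the number of candidates; this is why the candidate set is pruned beforehand (Section II-B1).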
II-D2 Linear Method: PCA with QR Pivoting
As mentioned in Section II-B1, the number of sensors can be reduced using methods from compressive sensing. In this section, we provide details about the specific method, namely PCA with QR pivoting.
Compressive sensing depends on two major matrices: a feature transform basis Ψ and a subsampling matrix C. Ψ is designed to transform the raw measurements x into a sparse representation s that has only k nonzero elements:

x = Ψ s.   (5)

C optimally subsamples m measurements y = C x from x such that the sparse representation s can be most accurately reconstructed from the measurements using ℓ1-norm minimization:

min ‖s‖₁  s.t.  y = C Ψ s.   (6)

Eq. (6) is a linear optimization problem whose solution is conditioned on the operator C Ψ. The central challenge is to design a basis Ψ that compresses the raw data efficiently and to find a good C such that the operator C Ψ is well-conditioned.

PCA is a linear unsupervised dimension-reduction method [14]. It finds the directions of maximum variance in high-dimensional data and projects the data onto a lower-dimensional subspace while retaining most of the information. We keep the first r principal components as Ψ_r to ensure the sparsity of s and then select an optimal subsampling C to constrain the reconstruction error of Eq. (6) based on the condition-number criterion. The condition number of the operator C Ψ_r is:

κ(C Ψ_r) = σ_max(C Ψ_r) / σ_min(C Ψ_r).   (7)

Since C is a permutation measurement matrix, it can be designed via the column pivoting of Ψ_r. QR factorization with column pivoting is used to select the columns with the highest singular values, such that σ_min(C Ψ_r) is maximized and κ(C Ψ_r) is minimized. Details are shown in Alg. 2.
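The PCA plus QR-pivoting selection can be sketched compactly with NumPy/SciPy. The synthetic test data and the function name are our own; the paper's Alg. 2 operates on the FEM deformation matrix instead.

```python
import numpy as np
from scipy.linalg import qr

def pca_qr_sensor_selection(D, n_components, n_sensors):
    """Select sensor (column) indices via PCA + column-pivoted QR.

    D: (samples x candidate_positions) deformation matrix.
    The top right singular vectors span the dominant deformation patterns
    over positions; QR with column pivoting on that basis ranks the
    positions that best condition reconstruction from few measurements.
    """
    Dc = D - D.mean(axis=0)
    # Right singular vectors = principal directions over positions.
    _, _, Vt = np.linalg.svd(Dc, full_matrices=False)
    Psi_r = Vt[:n_components]            # (r x positions)
    # Column pivoting orders columns (positions) by importance.
    _, _, piv = qr(Psi_r, pivoting=True)
    return piv[:n_sensors]
```

Unlike the greedy SVR loop, this selection needs no regression model and runs in a single factorization, which is why it is used for the coarse pre-pruning step.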
II-D3 Model-Based Methods using Gaussian Processes
We want to compare the data-driven methods with those relying only on the geometric locations of the sensors. These assume that the geometry is homogeneous and all sensors have a fixed sensing radius; hence they are called model-based. A convenient model for predicting unmeasured sensor values is a Gaussian Process (GP) [15, 16]. It models a distribution over functions with a continuous domain, here functions from sensor location to deformation. It uses a similarity between locations (sensors), measured by a localized kernel, to construct a covariance structure. This in turn ensures that predicted values of similar locations are similar. Moreover, for a new location, a GP predicts not only a mean estimate but also the uncertainty, represented by a one-dimensional Gaussian distribution.
Gaussian Process and model/kernel selection
We start with formalizing the prediction procedure of a GP and then choose the right kernel.
Given a set of sensors S, their positions X_S, and their deformation data d_S, we can predict the distribution of deformations at a different sensor location x*. The mean and the (squared) standard deviation are given by

μ* = μ_* + k_*ᵀ K⁻¹ (d_S − μ_S),   (8)

σ_*² = k(x*, x*) − k_*ᵀ K⁻¹ k_*,   (9)

where μ_S is the vector of mean sensor values for the sensors in S (and μ_* the mean at the new location), K is the covariance/kernel matrix for all selected sensors with entries K_ij = k(x_i, x_j) + σ_n² δ_ij, δ_ij being the Kronecker delta and σ_n² a noise hyperparameter. Similarly, k_* is the vector of similarities k(x*, x_i) between the new sensor location and the selected ones.
The GP is a nonparametric model once the kernel k is chosen. Typical choices are polynomial, rational quadratic, and exponential kernels. In our application, the deformations vary smoothly and locally w.r.t. the location, as they can be described by the bending of a thin plate [17]. Thus, we use an exponential kernel

k(x, x′) = exp(−(d_G(x, x′) / l)^η),   (10)

with hyperparameters l for the length scale and η for the exponent of the distance term. The distance d_G is the approximate geodesic distance instead of the Euclidean distance, because the surface of the 3D structure is curved.
For hyperparameter selection, we use cross-validation [18], which was shown to be robust. The grid search for the parameters l and η is shown in Fig. 4(a), from which we choose the best-performing values.
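Equations (8) and (9) translate directly into code. The sketch below is our own simplification: it uses the exponential kernel with the Euclidean distance as a stand-in for the geodesic distance d_G, and a zero prior mean.

```python
import numpy as np

def exp_kernel(X1, X2, length=10.0, eta=1.0):
    """Exponential kernel exp(-(d/l)^eta); Euclidean distance stands in
    for the geodesic distance used in the paper."""
    d = np.linalg.norm(X1[:, None] - X2[None], axis=2)
    return np.exp(-(d / length) ** eta)

def gp_predict(Xs, ys, Xnew, length=10.0, eta=1.0, noise=1e-4):
    """Posterior mean and std of deformations at unmeasured locations,
    assuming a zero prior mean and unit prior variance (k(x, x) = 1)."""
    K = exp_kernel(Xs, Xs, length, eta) + noise * np.eye(len(Xs))
    k_star = exp_kernel(Xnew, Xs, length, eta)
    Kinv = np.linalg.inv(K)
    mu = k_star @ Kinv @ ys
    var = 1.0 - np.einsum("ij,jk,ik->i", k_star, Kinv, k_star)
    return mu, np.sqrt(np.maximum(var, 0.0))
```

Near a measured sensor the posterior collapses onto the data; far away it reverts to the prior with standard deviation close to 1, which is exactly the uncertainty signal the placement criteria below exploit.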
The probabilistic modeling of the data allows us to use information criteria for selecting the most informative sensor positions. The first method minimizes the uncertainty about the non-measured locations (Entropy); the second maximizes the Mutual Information (MI) between the selected sensors and the non-measured locations.
As before, we select the best sensor positions out of all permissible locations (red and black points in Fig. 3(a)).
II-D4 Picking Locations with Maximal Uncertainty – Entropy
Intuitively, a good design picks sensor locations S that minimize the uncertainty about the entire set of permissible locations V. This can be quantified by the conditional entropy of the unobserved locations given the observed ones, i.e. H(X_{V∖S} | X_S) [19]. Mathematically, we aim at:

S* = argmin_{S ⊂ V, |S| = k} H(X_{V∖S} | X_S),   (11)

see [19] for details. Since Eq. (11) involves a combinatorial search, we solve it in a greedy fashion, as in the SVR case in Section II-D1: in each step we add the location with the largest conditional entropy given the already selected ones. The entropy of a Gaussian distribution is analytically given as H = ½ ln(2πeσ²). The algorithm is detailed in Alg. 3.
This will automatically choose sensors far away from each other, as illustrated in Fig. 4(b). As a side effect, the selected locations tend to sit on the boundary of the space, which is inefficient because part of their detection disk then lies outside the structure.
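For Gaussians, the greedy entropy criterion reduces to repeatedly picking the location with the largest posterior variance given the already selected sensors. The following compact sketch is our own simplification of Alg. 3:

```python
import numpy as np

def greedy_entropy_placement(K, n_sensors, jitter=1e-9):
    """Greedily pick locations with maximal conditional entropy.

    K: full covariance (kernel) matrix over all candidate locations.
    For Gaussians, H(X_y | X_S) grows monotonically with the posterior
    variance sigma^2_{y|S}, so each step picks the location whose
    variance given the already selected sensors is largest.
    """
    n = K.shape[0]
    selected = []
    for _ in range(n_sensors):
        best_var, best_y = -np.inf, None
        for y in range(n):
            if y in selected:
                continue
            if selected:
                Ks = K[np.ix_(selected, selected)] + jitter * np.eye(len(selected))
                ky = K[np.ix_(selected, [y])].ravel()
                var = K[y, y] - ky @ np.linalg.solve(Ks, ky)
            else:
                var = K[y, y]
            if var > best_var:
                best_var, best_y = var, y
        selected.append(best_y)
    return selected
```

On a 1D line of candidates, the second pick lands at the far end from the first, illustrating the boundary-seeking behavior described above.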
IiD5 Mutual Information
Another criterion suggested in [19] is the Mutual Information (MI), which measures the shared information between selected and unselected locations:

S* = argmax_{S ⊂ V, |S| = k} I(X_S; X_{V∖S}),   (12)

where V here denotes the set of all locations, including those not permissible as sensor locations (all light blue points in Fig. 3(a)). An intuitive explanation is given in Fig. 4(d). Maximizing the MI between X_S and X_{V∖S} is also a combinatorial problem, so a greedy method is used as well. In each step we pick the location with the maximal additive Mutual Information gain:

y* = argmax_{y ∈ V∖S} H(X_y | X_S) − H(X_y | X_{V∖(S∪{y})}),   (13)

as detailed in Alg. 4.
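The greedy MI step of Eq. (13) compares two posterior variances per candidate: for Gaussians, the entropy difference is maximized by the largest variance ratio. A small sketch in the spirit of Alg. 4 (our own simplification, with a jitter term for numerical stability):

```python
import numpy as np

def _post_var(K, cond, y, jitter=1e-8):
    """Posterior variance of location y given the locations in `cond`."""
    if not cond:
        return K[y, y]
    Ks = K[np.ix_(cond, cond)] + jitter * np.eye(len(cond))
    ky = K[np.ix_(cond, [y])].ravel()
    return K[y, y] - ky @ np.linalg.solve(Ks, ky)

def greedy_mi_placement(K, n_sensors):
    """Greedy Mutual Information placement (in the style of [19]).

    Each step picks the location y maximizing the variance ratio
    sigma^2_{y|S} / sigma^2_{y|rest}: uncertain given the chosen sensors,
    yet informative about (well predicted by) everything else.
    """
    n = K.shape[0]
    selected = []
    for _ in range(n_sensors):
        best_gain, best_y = -np.inf, None
        for y in range(n):
            if y in selected:
                continue
            rest = [v for v in range(n) if v != y and v not in selected]
            gain = _post_var(K, selected, y) / _post_var(K, rest, y)
            if gain > best_gain:
                best_gain, best_y = gain, y
        selected.append(best_y)
    return selected
```

In contrast to the entropy criterion, interior candidates are well predicted by their neighbors on both sides (small denominator), so the MI criterion avoids the boundary, matching the more centered placements reported above.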
III Results
Using the methods presented above, we first determine the optimal sensor placement based on the simulation results. Afterwards, we apply this to a real robotic limb.
III-A Optimal Sensor Placement and Validation in Simulation
Based on the preprocessed dataset acquired using finite element simulation (see Sections II-A and II-B), we compare the performance of the sensor placement using the different selection methods.
In Fig. 5(a) we present the selected sensor positions. We notice that the PCA-QR method places the sensors on the edges of each beam, where the deformation has the highest variance. In contrast, the SVR method arranges sensors on two off-center parallel beams, where deformations from both sides can be measured. Some sensor positions suggested by SVR lie close to already selected sensors rather than at a more centered, expected position. However, the SVR-suggested positions not shown in Fig. 5(a) are predominantly distributed close to the upper and lower boundaries: the areas near the boundaries are more rigid and less sensitive to applied forces, so more sensors are needed there for high prediction precision. The model-based methods (Entropy, MI) rely only on geometric information. The Entropy criterion places the sensors toward the edges and distributes them homogeneously over the entire space. The Mutual Information criterion suggests more centered positions.
After picking the optimal sensor positions, we evaluate the four methods by comparing the prediction performance using the SVR force location inference (Section II-C). As shown in Fig. 5(b), the data-driven methods work generally better because they can exploit the structure of the data. Note that the PCA-QR method is not a greedy method and suggests a different combination of sensors for each sensor budget. For that reason, its prediction error is not guaranteed to decrease monotonically.
In general, we notice that a small number of sensors already leads to a relatively small prediction error (on unseen locations). Based on the simulation data, as few as 5 sensors, selected using the greedy SVR method (Section II-D1), yield a high precision, which improves further with 10 sensors.
III-A1 Probing Robustness to Failure
As any physical device is susceptible to failure, we also checked the robustness of the selected positions against failing sensors. We tested the prediction error with different degrees of sensor failure, i.e. with a varying number of the 10 sensors broken. Fig. 5(c) presents the results. The greedy SVR method has the highest robustness against sensor failures, with the performance degrading only gradually as more sensors fail.
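A failure test of this kind is easy to reproduce in simulation: train on intact channels and zero out k channels at test time. The sketch below generates its own smooth synthetic deformation field; all values are illustrative, not the paper's results.

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in for the FEM data: smooth deformation field, 10 sensors.
locs = rng.uniform(0, 100, size=(400, 2))
sensors = rng.uniform(0, 100, size=(10, 2))
D = np.exp(-np.linalg.norm(locs[:, None] - sensors[None], axis=2) / 30.0)
X_tr, X_te, y_tr, y_te = train_test_split(D, locs, random_state=1)

models = [SVR(kernel="rbf", C=100.0).fit(X_tr, y_tr[:, d]) for d in range(2)]

def mean_err(X):
    pred = np.column_stack([m.predict(X) for m in models])
    return np.linalg.norm(pred - y_te, axis=1).mean()

# Simulate k broken sensors by zeroing their channels at test time only,
# averaged over all possible subsets of failed channels.
baseline = mean_err(X_te)
degradation = {}
for k in (1, 2, 3):
    errs = [mean_err(np.where(np.isin(np.arange(10), b), 0.0, X_te))
            for b in combinations(range(10), k)]
    degradation[k] = np.mean(errs) / baseline  # relative error increase
```

Averaging over all failure subsets gives an error curve comparable in spirit to Fig. 5(c): the localization error grows with the number of dead channels, but the system keeps functioning.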
III-B Experimental Evaluation of the Hardware
We use the 10 sensor positions selected by the greedy SVR method and implement them on a hardware limb. The next subsections introduce the sensor choice, the sensor assembly process, the data acquisition, and finally the results of the force inference on the real system.
III-B1 Sensor Choice
In this section, we motivate the choice of the physical sensors. We chose strain gauge (SG) sensors because they are generally cheap, widely available, and relatively straightforward to use, although we also found cases of sensors broken during assembly. In order to cope with the strong deformations of the 3D-printed plastic robot parts, we selected SGs with a 20% elongation rate; in our setting, the SG's extension limit stays within the maximum elastic deformation of the limb. These SGs have a long lifetime and high fatigue strength. One drawback of an SG is that it only measures deformation along one direction. For our beam-shaped structure, the SGs are assembled along the beam directions, where the dominant deformation happens according to the FEM simulation.
Alternatively, we considered active sensors such as piezoelectric sensors [20], which transform mechanical deformation into electrical energy, and sensors based on the triboelectric effect [2], which charge through frictional contact. A typical problem of these sensors is that they cannot detect static loads. In terms of passive sensors, we considered capacitive ones, but they react differently to conducting and non-conducting objects. Another interesting category is optical sensors such as fiber Bragg gratings, which measure the deformation of a glass fiber via the wavelength change of the reflected light; however, the processing equipment for these sensors is rather involved, expensive, and bulky.
III-B2 Sensor Assembly
The selected strain gauge (SG) sensors have to be attached to the inside of the 3D plastic structure with care in order to avoid damage or malfunction. Since an SG only measures deformation along one direction, and the attachment sites lie inside the hollow object, the assembly procedure is challenging. We developed an assembly method with a specific support structure (SS), shown in Fig. 6(a). The SS has small arms that press each sensor onto the surface at the right position; the arms are held in place by a central axis during the adhesive curing process and can be pulled out afterwards for disassembly. The assembly process is described in Tab. I.
Step  Details

1  Wire the SG and cover it with scotch tape to isolate it from the adhesive.
2  Cover the SS with preservative film to isolate it from the adhesive.
3  Insert absorbent wool between SG and SS to absorb excess adhesive.
4  Position the SG on the SS.
5  Clean the internal surface of the skeleton and the SG surface.
6  Coat the SG and the internal skeleton surface with the prepared adhesive.
7  Pre-tighten the whole structure and let the adhesive cure.
8  Disassemble the SS and clean the surface.
III-B3 Data Acquisition
To acquire the dataset for training the machine learning algorithm, we have to record the sensor measurements for many force applications.
Amplifier Circuit
The conventional data acquisition circuit for an SG is the Wheatstone bridge [21], which measures changes in electrical resistance by balancing two legs of a bridge circuit. As an SG is sensitive not only to mechanical stress but also to temperature variations, a temperature compensation function has to be integrated into the circuit.
In our project, we adopt a half-bridge Wheatstone configuration, MCP609 operational amplifiers, and an Arduino Due. The board offers 12 I/O ports with 12 bits of resolution; the SGs' deformation signal is amplified by a factor of 330 and converted to 4096 discrete values over the input voltage range.
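Converting a raw 12-bit reading back to the pre-amplification bridge voltage is a one-liner. In this sketch, the 3.3 V reference is our assumption based on the Arduino Due, and the function name is ours:

```python
def adc_to_strain_voltage(counts, vref=3.3, bits=12, gain=330.0):
    """Convert a raw ADC reading to the pre-amplification bridge voltage.

    counts: integer ADC reading (0 .. 2**bits - 1)
    vref:   ADC reference voltage (assumed 3.3 V for an Arduino Due)
    gain:   amplifier gain (330x per the circuit above)
    """
    v_adc = counts / (2 ** bits - 1) * vref  # 4096 levels over [0, vref]
    return v_adc / gain                      # undo the amplification
```

With these assumed values, the full-scale reading of 4095 corresponds to a bridge voltage of 10 mV, which illustrates why the 330x amplification stage is needed before the ADC.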
Testbed
To collect a large number of force measurements in an automated way, we designed a testbed based on a modified 3D printer. The printer offers 3 Degrees of Freedom (DoF) given by the Cartesian translation axes, to which we add one axis of rotation. In order to measure the forces, the printhead of the printer is replaced by a force sensor tip (FC2231), see Fig. 6(b).
III-C Experimental Results
In this section, we validate the proposed system, HapDef, on the modified limb of the Poppy robot [22]. The limb is designed with an inner rigid support, to reduce the influence of forces at the joints and of the support structure's elasticity, and a flexible shell to detect the touch. We assembled 10 sensors according to the placement determined in Fig. 5(a). One of the sensors was malfunctioning, as indicated by a cross in the figure. Each sensor value is calibrated to be zero when no force is applied. While recording from the 9 remaining sensors, the force tip of the testbed stimulates the surface with different forces. More specifically, the force tip is moved towards the structure until contact is registered and is then advanced in small steps up to a maximum penetration depth. In this way, several different force magnitudes are obtained per location. We collect data for 3000 locations on the surface, avoiding the edges and boundaries so that the force tip does not slide off.
From each location, we first use the largest force stimulation and split the 3000 data points into 80% for training and 10% each for validation and testing. After training the Support Vector Regression and performing hyperparameter selection on the validation set, we evaluate the inference performance on the unseen force locations. Fig. 6(c) presents the results compared to the simulations. The hardware implementation achieves about half of the simulation accuracy: the average prediction precision of the force position is about 8 mm when using 9 sensors. Given the total surface area of the structure, this is a very high precision.
In addition, we train the SVR for different force amplitudes, varying from light to strong touch, and report the prediction precision in different force intervals. As shown in Tab. II, SVR has low prediction precision for light touches and high precision for strong touches, presumably because fewer sensors are activated by a light touch. The absolute prediction error for the force amplitude varies little w.r.t. the force strength. Consequently, strong touches anywhere on the surface can be reliably detected and used as a warning signal, which improves the haptic system's robustness.
Force Interval [N]  Position Error [mm]  Amplitude Error [N]

0 – 4.9     25.43 ± 13.36  1.05 ± 1.01
4.9 – 9.8   11.95 ± 11.85  1.19 ± 1.19
9.8 – 19.6   5.90 ± 7.79   1.42 ± 1.78
19.6 – 34.3  4.48 ± 6.29   1.54 ± 2.21
IV Discussion
We presented a method to obtain a robust haptic sensing system using only a small number of inexpensive deformation sensors. The performance of the sensing device is powered by a machine learning approach. After a learning period, the system can reliably localize touch all around a curved surface. Apart from being inexpensive, the system is also very durable, as the deformation sensors can be placed inside the structure. Only a few sensor values (here 9) need to be acquired and processed. The computational requirements during operation are also low, as the inference of the force location is done via Support Vector Regression; other machine learning methods, such as deep neural networks, are feasible as well.
We also compared different methods of computing optimal sensor locations for a very sparse sensor configuration. We found that data-driven methods outperform geometry-based methods. The right selection strategy can reduce the required number of sensors by 50% without significant loss in precision. In future work, we want to investigate multi-touch and accurate force magnitude prediction.
Acknowledgment
The authors thank the International Max Planck Research School for Intelligent Systems (IMPRSIS) and the China Scholarship Council (CSC) for supporting Huanbo Sun.
References
 [1] G. H. Büscher, R. Kõiva, C. Schürmann, R. Haschke, and H. J. Ritter, “Flexible and stretchable fabricbased tactile sensor,” Robot. Auton. Syst., vol. 63, no. P3, pp. 244–252, Jan. 2015. [Online]. Available: http://dx.doi.org/10.1016/j.robot.2014.09.007
 [2] S. Wang, L. Lin, and Z. L. Wang, “Triboelectric nanogenerators as selfpowered active sensors,” Nano Energy, vol. 11, pp. 436 – 462, 2015. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S2211285514002171
 [3] M. Shimojo, A. Namiki, M. Ishikawa, R. Makino, and K. Mabuchi, “A tactile sensor sheet using pressure conductive rubber with electricalwires stitched method,” Sensors Journal, IEEE, vol. 4, no. 5, pp. 589–596, Oct. 2004.
 [4] H. Lee, “Soft nanocomposite based multipoint, multidirectional strain mapping sensor using anisotropic electrical impedance tomography,” Scientific Reports, vol. 7, no. 39837, Jan. 2017. [Online]. Available: http://doi.org/10.1038/srep39837
 [5] J. A. Fishel and G. E. Loeb, "Sensing tactile microvibrations with the BioTac – comparison with human sensitivity," in Proc. IEEE RAS and EMBS Int. Conf. Biomedical Robotics and Biomechatronics (BioRob). IEEE, 2012.
 [6] N. Wettels and G. E. Loeb, "Haptic feature extraction from a biomimetic tactile sensor: Force, contact location and curvature," in Proc. IEEE Int. Conf. Robotics and Biomimetics. IEEE, 2011.
 [7] B. WardCherrier, N. Pestell, L. Cramphorn, B. Winstone, M. E. Giannaccini, J. Rossiter, and N. F. Lepora, “The tactip family: Soft optical tactile sensors with 3dprinted biomimetic morphologies,” Soft Robotics, vol. 5, no. 2, pp. 216–227, 2018, pMID: 29297773. [Online]. Available: https://doi.org/10.1089/soro.2017.0052
 [8] K. Lawrence, ANSYS Tutorial Release 13. SDC Publications, 2011.
 [9] E. Candès and M. Wakin, “An introduction to compressive sampling,” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21–30, 2008.
 [10] K. Manohar, B. W. Brunton, J. N. Kutz, and S. L. Brunton, “Datadriven sparse sensor placement,” CoRR, vol. abs/1701.07569, 2017.
 [11] C. Cortes and V. Vapnik, “Supportvector networks,” Mach. Learn., vol. 20, no. 3, pp. 273–297, Sept. 1995. [Online]. Available: https://doi.org/10.1023/A:1022627411411
 [12] A. J. Smola and B. Schölkopf, “A tutorial on support vector regression,” Statistics and Computing, vol. 14, no. 3, pp. 199–222, Aug. 2004. [Online]. Available: https://doi.org/10.1023/B:STCO.0000035301.49549.88
 [13] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikitlearn: Machine learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
 [14] I. Jolliffe, Principal Component Analysis. Springer Verlag, 1986.
 [15] N. Cressie, Statistics for spatial data, ser. Wiley series in probability and mathematical statistics: Applied probability and statistics. J. Wiley, 1993. [Online]. Available: https://books.google.de/books?id=4SdRAAAAMAAJ
 [16] D. J. C. MacKay, Information Theory, Inference & Learning Algorithms. New York, NY, USA: Cambridge University Press, 2002.
 [17] E. Ventsel and T. Krauthammer, Thin Plates and Shells: Theory: Analysis, and Applications. Taylor & Francis, 2001. [Online]. Available: https://books.google.de/books?id=veAngEACAAJ
 [18] C. E. Rasmussen, “Gaussian processes for machine learning.” MIT Press, 2006.
 [19] A. Krause, A. Singh, and C. Guestrin, “Nearoptimal sensor placements in gaussian processes: Theory, efficient algorithms and empirical studies,” Journal of Machine Learning Research (JMLR), vol. 9, pp. 235–284, February 2008.
 [20] T. H. Ng and W. H. Liao, “Sensitivity analysis and energy harvesting for a selfpowered piezoelectric sensor,” Journal of Intelligent Material Systems and Structures, vol. 16, no. 10, pp. 785–797, 2005. [Online]. Available: https://doi.org/10.1177/1045389X05053151
 [21] D. M. Stefanescu, "Strain gauges and Wheatstone bridges – basic instrumentation and new applications for electrical measurement of non-electrical quantities," Eighth International Multi-Conference on Systems, Signals and Devices, pp. 1–5, 2011.
 [22] M. Lapeyre, P. Rouanet, J. Grizou, S. Nguyen, F. Depraetre, A. Le Falher, and P.Y. Oudeyer, “Poppy project: Opensource fabrication of 3d printed humanoid robot for science, education and art,” in Digital Intelligence 2014, Nantes, France, Sept. 2014, p. 6. [Online]. Available: https://hal.inria.fr/hal01096338