Curved Surface Patches
for
Rough Terrain Perception
Dimitrios Kanoulas
July 24, 2014
Submitted in partial fulfillment of the requirements
for the degree of Doctor of Philosophy
to the
Faculty of the College
of Computer and Information Science
Northeastern University
Boston, Massachusetts
Colophon
This document was typeset in LaTeX using the classicthesis template developed by André Miede (http://code.google.com/p/classicthesis/), inspired by Robert Bringhurst’s seminal book on typography, “The Elements of Typographic Style”.
This material is based upon work supported by the National Science Foundation under Grant No. 1149235. The portion of this research related to human subjects was approved by the Northeastern University Institutional Review Board (IRB # 130621).
Curved Surface Patches for Rough Terrain Perception
© July 24, 2014, Dimitrios Kanoulas
You have no responsibility to live up to what other people think you ought to accomplish. I have no responsibility to be like they expect me to be. It’s their mistake, not my failing.
— Richard Feynman, Surely You’re Joking, Mr. Feynman!
Abstract
Attaining animal-like legged locomotion on rough outdoor terrain with sparse foothold affordances — a primary use case for legs versus other forms of locomotion — is a largely open problem. New advancements in control and perception have enabled bipeds to walk on flat and uneven indoor environments. But tasks that require reliable contact with unstructured world surfaces, for example walking on natural rocky terrain, need new perception and control algorithms.
This thesis introduces 3D perception algorithms for contact tasks such as foot placement in rough terrain environments. We introduce a new method to identify and model potential contact areas between the robot’s foot and a surface using a set of bounded curved patches. We present a patch parameterization model and an algorithm to fit and perceptually validate patches to 3D point samples. Having defined the environment representation using the patch model, we introduce a way to assemble patches into a spatial map. This map represents a sparse set of local areas potentially appropriate for contact between the robot and the surface. The process of creating such a map includes sparse seed point sampling, neighborhood searching, as well as patch fitting and validation. Various ways of sampling are introduced, including a real-time bio-inspired system for finding patches statistically similar to those that humans select while traversing rocky trails. These sparse patch algorithms are integrated with a dense volumetric fusion of range data from a moving depth camera, maintaining a dynamic patch map of relevant contact surfaces around a robot in real time. We integrate and test the algorithms as part of a real-time foothold perception system on a mini-biped robot, performing foot placements on rocks.
Publications
This dissertation is based on the work that has been presented in the following conference/workshop papers and posters:

Dimitrios Kanoulas, Marsette Vona. Bio-Inspired Rough Terrain Contact Patch Perception. In the 2014 IEEE International Conference on Robotics and Automation, ICRA 2014.

Dimitrios Kanoulas, Marsette Vona. The Surface Patch Library (SPL). In the 2014 IEEE International Conference on Robotics and Automation Workshop: MATLAB/Simulink for Robotics Education and Research, ICRA 2014.

Dimitrios Kanoulas. Surface Patches for Rough Terrain Perception. In the Northeast Robotics Colloquium, Second Edition (poster), NERC 2013.

Dimitrios Kanoulas, Marsette Vona. Sparse Surface Modeling with Curved Patches. In the 2013 IEEE International Conference on Robotics and Automation, ICRA 2013.

Marsette Vona, Dimitrios Kanoulas. Curved Surface Contact Patches with Quantified Uncertainty. In the IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2011.
Contents
 Introduction
 I Sparse Surface Modeling with Curved Patches

II Curved Patch Mapping & Tracking
 Patch Mapping
 Patch Tracking
 Application to Biped Locomotion
 Conclusions and Future Work
 III Appendix
 A Jacobians
 B The Logarithmic Map
 C Patch Fit Uncertainty Propagation
 D Taubin’s Residual Approximations
List of Figures
 1 Human locomotion considering a sparse set of footholds
 2 Patch mapping and tracking on the RPBP biped robot
 3 Our sensing apparatus: an RGBD camera with an affixed IMU
 4 A dense point cloud input from an RGBD camera
 5 Different types of error modeling for a 3D point cloud
 6 IMU calibration instances
 7 Biped approximates contact areas with bounded curved patches
 8 Examples of all patch types
 9 A paraboloid and a planar patch example
 10 Examples of all four different types of boundaries
 11 Rock data acquisition, error ellipsoid estimation for each 3D point, and patch fitting for manually segmented point clouds
 12 Input for the paraboloid patch fitting process
 13 Paraboloid patch fitting process
 14 Automatic fits for all patch types
 15 The “sidewall” effect reparameterization
 16 Residual, coverage, and curvature patch validation
 17 Bad residual due to an outlier sample point
 18 Perturbed sample points for each patch type
 19 Sorted residuals for 1000 random patches
 20 Coverage evaluation for all types of patch models
 21 Cell intersection cases for all types of boundaries
 22 Curvature validation example
 23 Patch Mapping system overview
 24 Local Volumetric Workspace
 25 Dense point cloud input along with IMU-derived gravity vector
 26 Background removal preprocessing
 27 Hiking saliency filtering
 28 Illustration of the Difference of Normals measure
 29 Illustration of the Difference of Normal-Gravity measure
 30 Illustration of the Distance to Fixation Point measure
 31 Grid cells in the volume
 32 Neighborhood searching
 33 Triangle mesh construction
 34 Depth jumps in triangle meshes
 35 Patch fitting
 36 Patch validation
 37 Homogeneous patch map
 38 Five rock datasets
 39 Qualitative patch fitting and validation results
 40 Patch size histogram comparison
 41 The sensing apparatus (Microsoft Kinect + CH Robotics UM6 IMU)
 42 Human-selected patches
 43 Patch fitting and validation
 44 Comparison of histograms of salient measures for human-selected and automatically identified patches
 45 Camera pose with respect to the volume frame
 46 KinectFusion point clouds
 47 Moving volume KinectFusion vs the original KinectFusion system
 48 The gravity vector from the IMU sensor
 49 The frustum of the virtual bird’s-eye view camera
 50 Bubble camera vs real camera ray casting
 51 Patch Mapping and Tracking
 52 Kinematics specifications of the RPBP robot
 53 Physical hardware of the RPBP robot
 54 Software interface for the RPBP robot
 55 RPBP robot rock patch tracking
 56 Four rocks table apparatus
 57 Trained patches for foot placement
 58 The motion sequences for foot placement on four rocks
 59 RPBP foot placement on rock 1
List of Tables
Chapter 1 Introduction
In 1986, Daniel Whitney in his article “Real Robots Don’t Need Jigs” Whitney (1986) highlighted the need for redesigning robots to complete tasks in very unstructured environments, under significant uncertainty. Almost three decades later, robots have achieved high efficiency in well-structured environments like factories and labs, but they are still not flexible enough to reliably deal with real-world tasks. Interest in uncertainty goes back to the beginning of robotics Mason (2012), but only over the last few years have mobile manipulators (e.g. Rusu et al. (2009); on Mobile Manipulation (2013)) and rough terrain robots (e.g. Maimone et al. (2007); Raibert et al. (2008)) started dealing with it efficiently, both in the environment and in their own state.
The Fukushima Daiichi nuclear disaster in 2011 had a profound impact on robotics. Despite rapid advancements in actuation and control, robots were unable to directly replace humans in hazardous tasks, like climbing in a damaged nuclear plant, searching rubble piles after a disaster Kajita and Sugihara (2009), or operating in human-traversable rough terrain. Legged locomotion in uneven 3D terrain is a key aspect of completing these and similar tasks, because legs can efficiently negotiate highly faceted 3D trails with more flexibility and mobility than other forms of locomotion such as wheels or tracks.
Recent advancements in control and perception have enabled bipeds to walk on flat Chestnutt et al. (2005) and uneven indoor terrains Nishiwaki et al. (2012). Major advances have also been made for outdoor quadrupeds and bipeds in rough terrain where the probability of blindly landing footholds is high Raibert et al. (2008) and uncertainty can be tolerated by low-level feedback control. Online footfall selection has been considered for quadrupeds and hexapods Krotkov and Simmons (1996); Bares and Wettergreen (1999); Kolter et al. (2009a); Plagemann et al. (2009); Kalakrishnan et al. (2010), but still, to the best of our knowledge, no physical humanoid has previously been shown to walk autonomously on unmodeled sparse 3D terrain. New advances in both perception and control Khatib and Chung (2014) are required; here we attempt to disentangle these two aspects to a degree and focus primarily on perception. The foothold selection problem is particularly interesting for bipeds with non-point feet that make contact with patches of terrain. Perception is one of the key enablers for finding such patches in the environment Katz et al. (2008); Behnke (2008); Brooks (2009). This brings us to our main hypothesis (Figure 1):
Main Hypothesis: Robots operating in many unstructured environments need to perceive sparse areas for potential contact. These can be detected and modeled using curved surface patches, and spatially mapped in real time.
1 Thesis Outline and Contributions
Sparsity of footholds for bipedal robots requires i) a model formulation for the local contact surface areas, ii) an online perception algorithm for finding them, iii) techniques for handling uncertainty and reliability, and iv) a method for creating a map of the detected local contact areas around the robot and localizing within it during motion. This thesis presents algorithms to address each of these four requirements. We have also developed and released the Surface Patch Library (SPL) Kanoulas and Vona (2011) which contains the software implementations we used to evaluate the algorithms in our experiments.
In Chapter 2 we describe the sensing system we use for acquiring data from the environment. This includes both a range sensor that produces a set of 3D point clouds over time and an Inertial Measurement Unit (IMU) that gives the corresponding gravity vector. We also discuss uncertainty models for the input 3D sample points associated with the sensor, along with some types of point cloud filtering, including outlier removal and smoothing. Finally, we introduce a way to calibrate the IMU with respect to the range sensor to which it is attached.
In Chapter 3 we describe the system for representing the environment. We introduce a set of 10 bounded curved-surface patch types (Figure 8 left, Vona and Kanoulas (2011)) suitable for modeling local contact regions both in the environment and on a robot. We present minimal geometric parameterizations using the exponential map for spatial pose, both in the usual 6-DoF case and also for patches with revolute symmetry that have only 5 DoF. We then give an algorithm to fit any patch type to 3D point samples of a surface, with quantified uncertainty both in the input points (including non-uniform variance, common in data from range sensors) and in the output patch. We also introduce an algorithm for validating the fitted patch for fit quality and fidelity to the actual data — extrapolations (like hole-filling) which are not directly supported by data are avoided (Kanoulas and Vona (2013)).
In Chapter 4 we define the notion of a volumetric working space around the robot and describe the patch mapping system, which builds a dynamic map of bounded curved patches fit randomly over an environment surface sampled by a range sensor. After data acquisition, the mapping algorithm is divided into four main steps. The first is a data preprocessing step, where a bilateral filter is applied to the cloud to reduce noise and a sample decimation filter is applied for performance. A bio-inspired saliency filter is also introduced for detecting points in a hiking-task scenario, so only relevant parts of the environment are considered for patch fitting. Recordings of human subjects traversing rough rocky trails were analyzed to give a baseline for target surface properties for foot placement. After filtering, the second step is the selection of seed points, where a random grid-based approach is introduced and applied to the filtered samples. Next is a neighborhood search around these points. Three different approaches for finding local neighborhoods were analyzed, which have different properties near surface discontinuities. The last step is to fit the pose, curvatures, and boundary of patches to the neighborhoods and to validate them, quantifying fit quality and ensuring that each patch is sufficiently representative of the actual data. We finally highlight the construction of a spatial map of the fitted patches around a robot.
In Chapter 5 we present the patch tracking method that completes the whole Patch Mapping and Tracking system. For tracking the camera pose at each frame, an adapted version of the Moving Volume KinectFusion Newcombe et al. (2011a); Roth and Vona (2012) algorithm is applied. This is the first time that this camera tracking method has been used for a bipedal locomotion application on physical hardware (KinectFusion without the moving volume algorithm is used in Ramos et al. (2013), though in simulation only). We improve the original algorithm for our particular application both by using the gravity vector from the IMU to keep the local map in a pose aligned to gravity, and by using a virtual camera, which lies above the robot looking down in the direction of gravity, to acquire a point cloud from a synthetic bird’s-eye viewpoint during walking. In contrast to the original real-camera raycasting method, which considers upcoming surfaces only, the advantage of our virtual camera version is that the raycasting considers the environment around and under the robot’s feet, even portions that were previously visible but are currently occluded by the robot itself.
In Chapter 6 we test the patch mapping and tracking system on a mini-biped robot platform developed in our lab called RPBP (Rapid Prototyped Biped). We ran two experiments. In the first (Figure 2, right) we test the system integrated on the robot independently of its control, making sure that shaking and vibration while the robot is walking do not hinder the tracking process. In the second (Figure 2, left) we first train the robot to place its foot on patches that were manually fitted on four different rocks. We then let the robot, starting from a fixed position, detect patches in the environment; if any of them matches one of the trained patches, it executes the corresponding foot placement motion. These experiments conclude the thesis, whose main contributions are as follows.
Contributions

A new sparse environment surface representation using a set of bounded curved patches suitable for modeling local contact regions both in the environment and on the robot.

A fast algorithm to fit these patches to 3D point samples of a surface, with quantified uncertainty both in the input points and in the output patch.

Fast residual, coverage, and curvature patch validation tests for evaluating the fidelity of fitted patches.

Bio-inspired rules for finding patches statistically similar to those selected by humans for hiking in rough terrain.

Real-time mapping of hundreds of patches near a walking biped in combination with dense volumetric depth map fusion and inertial sensing.
2 Related Work
Visual odometry has been used on several current walking robots including BigDog Howard (2008); Wooden et al. (2010) and the DLR Crawler Stelzer et al. (2012), though mainly for obstacle avoidance and traversability analysis, not detailed 3D foot placement or contact planning. Some steps have been made in that direction in Lang et al. (2007), where terrain is modeled using a Gaussian Process, but this was not applied for legged locomotion.
Online perception for foot placement has been recently implemented for quadrupeds and hexapods. In Plagemann et al. (2008, 2009) a continuous surface model is used for LittleDog locomotion, whereas in Belter et al. (2010) a local decision surface was used on a hexapod walking robot. In Kalakrishnan et al. (2009, 2010) a system learns optimal foothold choices from expert demonstration using terrain templates. Recently in Belter and Skrzypczynski (2012) a PTAM approach was used for updating the elevation map during locomotion.
In some cases foot placement has been done without perception by using a known 3D terrain map and online motion capture (e.g. Doshi et al. (2007); Kalakrishnan et al. (2010)). It is also common here to use perception for obstacle avoidance, terrain categorization, or gait selection without specific 3D foot placement Wooden et al. (2010); Stelzer et al. (2012). Quadrupedal and hexapedal studies are related to the bipedal case but often use a point-like contact model, whereas many bipeds have extended feet to support torques for balance and may need to consider foot-sized terrain patches.
To date only a few works have used online perception for bipedal foot placement in uneven or rough terrain. In Okada et al. (2003); Gutmann et al. (2004, 2008) planar segments were fitted to point cloud data for indoor scenes with slopes and steps, and in Nishiwaki et al. (2012) a laser scanning system is used in a similar context. In Ramos et al. (2013) KinectFusion Newcombe et al. (2011a) was used in simulation to avoid harsh impacts. A number of other works (e.g. Maier et al. (2012); Hornung et al. (2013); Maier et al. (2013)) introduced perception for obstacle detection and navigation in cluttered surfaces, where the foot placement involves stepping over or climbing up/down flat obstacles. Recently Brossette et al. (2013) presented preliminary results in multicontact planning for a fullsize humanoid using 3D perception for extracting planar contact surfaces for navigation.
This thesis introduces a novel way to detect curved contact patches in the environment, whereas most prior work has focused on flat surfaces. We integrate this perception system with foot placement on rocks for a physical free-standing biped robot. Though other rough-terrain walking robots have been developed, there is little prior work in real-time onboard 3D perception for biped foot placement. Finally, our approach to map and track the patches as the robot locomotes is based on a novel combination of our sparse patch map with a dense point cloud from newly available real-time depth map fusion algorithms.
Part I Sparse Surface Modeling with Curved Patches
Chapter 2 Input Data
Both perceiving the environment around a robot (exteroceptive perception) and sensing the robot’s own internal state (proprioceptive perception) are important aspects for driving planning and control actions in a real world scenario. Various perception sensors can be used for acquiring these important measurements (see Everett (1995); Siegwart et al. (2011)). In this thesis we use both exteroceptive range sensing for detecting upcoming 3D terrain contacts from a distance and proprioceptive inertial measurement unit (IMU) sensing for acquiring the robot’s orientation relative to gravity. In the next two sections we summarize the range and IMU sensors that provide the main inputs to our system.
3 Range Sensing
3D perception has gained a lot of interest over the last few years Rusu and Cousins (2011), mainly because low cost but high quality range sensors are now commonly available. Stereo and structured light systems, time-of-flight cameras, and laser scanners produce clouds of 3D sample points of environment surfaces in real time. Here we focus on organized point cloud data in the form of an image grid acquired from a single point of view. Initially we take such images directly from a depth camera. Then in Chapter 5 a considerable level of indirection is added: the actual depth sensor images are fused (over space and time) into a volumetric model, from which a simulated sensor extracts virtual depth images for patch mapping.
In this thesis either the Microsoft Kinect or the PrimeSense Carmine 1.09 (see Figure 3) has been used for acquiring 3D point clouds, depending on different experimental requirements (mainly involving range limits when the sensor is handheld or on the mini-biped). Both the Kinect and the Carmine consist of three parts: 1) an infrared (IR) projector, 2) an infrared (IR) camera, and 3) an RGB camera. For estimating the 3D point cloud a triangulation method is applied using the IR emitter and detector, which are separated by a baseline. As described in detail in Bradski and Kaehler (2008); Khoshelham (2011); Smisek et al. (2011); Khoshelham and Elberink (2012), given an image pixel with coordinates $(u, v)$ and disparity $d$ from triangulation, the corresponding 3D point $(x, y, z)$ expressed in the camera frame is:
(1) $z = fb/d$
(2) $x = (u - u_0)z/f$
(3) $y = (v - v_0)z/f$
using:
$(u, v)$, $d$: image pixel coordinates and disparity of the point (in pixels)
$(x, y, z)$: 3D point coordinates in camera frame (in physical units, e.g. m)
$b$: the baseline between IR camera and projector (in physical units)
$f$: IR camera focal length (in pixels)
$(u_0, v_0)$: the principal point (in pixels)
The origin of the camera frame is the center of projection, the $z$ axis points into the scene through the principal point $(u_0, v_0)$, the $x$ axis points to the right in the camera image, and the $y$ axis points down in the image. The 3D sample point coordinates in camera frame can also be expressed as a function of the unit direction vector $\hat{m}(u, v)$ of the measurement ray through pixel $(u, v)$ and the range $r$ of the data point along that vector as:
(4) $[x\ y\ z]^T = r\,\hat{m}(u, v)$
From the above equations, the back-projected 2D pixel corresponding to a 3D point $(x, y, z)$ can be calculated as:
(5) $u = fx/z + u_0$
(6) $v = fy/z + v_0$
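As a sanity check of the triangulation and back-projection relations above, the following sketch (in Python with NumPy) round-trips a pixel/disparity measurement through 3D and back. The intrinsic values `f`, `u0`, `v0`, and `b` are illustrative assumptions, not the actual Kinect calibration:

```python
import numpy as np

# Illustrative intrinsics (not the actual Kinect calibration):
f = 585.0               # IR camera focal length (pixels)
u0, v0 = 320.0, 240.0   # principal point (pixels)
b = 0.075               # IR camera-to-projector baseline (m)

def disparity_to_point(u, v, d):
    """Equations (1)-(3): triangulate a camera-frame 3D point from
    pixel coordinates (u, v) and disparity d."""
    z = f * b / d
    x = (u - u0) * z / f
    y = (v - v0) * z / f
    return np.array([x, y, z])

def project_point(p):
    """Equations (5)-(6): back-project a camera-frame 3D point to its pixel."""
    x, y, z = p
    return np.array([f * x / z + u0, f * y / z + v0])

p = disparity_to_point(400.0, 300.0, 30.0)   # a point ~1.46 m away
uv = project_point(p)                        # recovers (400, 300)
```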
Using either of these two range sensors we receive 30 Hz RGBD (red, green, blue, depth) data. (Spatial registration of the depth and RGB data used built-in calibration in the range sensor; RGB data is only used for visualization purposes in this thesis.) The structured light method used by the Kinect and Carmine does not work well in full sunlight, so when outdoor data were needed they were taken at twilight. Sunlight operation could be possible with other types of depth camera or stereo vision. Two point cloud examples of rocks along with their RGB images appear in Figure 4.
3.1 3D Point Cloud Uncertainty
A big challenge with range sensors is to quantify the uncertainty of the acquired data. The uncertainty can be due to inaccuracies in the sensor system or to triangulation errors (i.e. the correspondence problem Scharstein and Szeliski (2002)), and it is twofold: the data may include outlier points and noisy inliers.
Outlier Points
An outlier point is distant from the others and does not represent the underlying surface from where it was sampled. Ideally such points would be removed from the data. In many cases outliers appear along depth discontinuities due to occlusions, jumps in surfaces, or reflections. Veil points May et al. (2008); Steder et al. (2011), which are interpolated across a depth discontinuity, usually appear in data acquired by lidars (which we do not use in the thesis).
There are various methods in the literature for detecting outliers. One simple approach is to consider as inliers only the points that have a minimum number of neighbors within a fixed distance. A heuristic has been introduced in Steder et al. (2011) for finding points that belong to borders both in foreground and background and removing those in between as veil points. Other methods, for instance the one introduced in Rusu et al. (2008), use statistical analysis to remove neighborhood points that are more than a fixed number of standard deviations away from the median. Similarly, in Yao et al. (2011) another statistical method is proposed to identify and remove outliers by checking for large residuals during plane fitting. When dealing with static environments, either data fusion over time Newcombe et al. (2011a); Roth and Vona (2012) or outlier removal using octree raycasting as proposed in Blas et al. (2009) can also be used.
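A minimal sketch of the simplest of these strategies, counting neighbors within a fixed radius, is shown below. The distance computation is brute force (a k-d tree would be used in practice), and the radius and neighbor-count thresholds are illustrative assumptions:

```python
import numpy as np

def radius_outlier_removal(points, radius, min_neighbors):
    """Keep only points having at least min_neighbors other points within
    radius (brute-force O(n^2) pairwise distance computation)."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    neighbor_counts = (dists < radius).sum(axis=1) - 1  # exclude the point itself
    return points[neighbor_counts >= min_neighbors]

# Six tightly clustered points plus one isolated outlier:
cluster = np.linspace(0.0, 0.01, 6)[:, None] * np.ones((1, 3))
outlier = np.array([[1.0, 1.0, 1.0]])
cloud = np.vstack([cluster, outlier])
filtered = radius_outlier_removal(cloud, radius=0.05, min_neighbors=3)
# the isolated point is dropped; the six clustered points remain
```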
In this thesis we address outliers both in a preprocessing step, where a real-time discontinuity-preserving bilateral filter removes some outliers from the data (Section 13), and also when KinectFusion is used for tracking (Chapter 5), which inherently ignores some outliers when data fusion over time is applied.
Noisy Inlier Points
A noisy inlier point deviates from the ground truth that represents the underlying surface. To express the data noise we use Gaussian modeling with covariance matrices. Though this is not the only way to represent uncertainty, it does cover common situations. (The covariance matrices may also enable data fusion based on the Kalman filter, but in this thesis we do not explore that further.) There are various ways to estimate these covariance matrices, depending on the error model assumptions. Some assumptions can be the following (Figure 5):

Constant Error (Figure 5c): with constant nonnegative uncertainty $\sigma$ in range, independent of the sample range, and no uncertainty in pointing direction, the covariance matrix for a sample with unit measurement vector $\hat{m}$ is:
(7) $\Sigma = \sigma^2\,\hat{m}\hat{m}^T$
Linear Error (Figure 5d): with a nonnegative factor $\lambda$ that scales uncertainty linearly with the sample range, and no uncertainty in pointing direction, the covariance matrix for a sample with range $r$ and unit measurement vector $\hat{m}$ is:
(8) $\Sigma = (\lambda r)^2\,\hat{m}\hat{m}^T$
Quadratic Error (Figure 5e): with a nonnegative factor $\kappa$ that scales uncertainty quadratically with the sample range, and no uncertainty in pointing direction, the covariance matrix for a sample with range $r$ and unit measurement vector $\hat{m}$ is:
(9) $\Sigma = (\kappa r^2)^2\,\hat{m}\hat{m}^T$
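Each of these three ray-aligned models yields a rank-one covariance elongated along the measurement ray. A sketch, using illustrative parameter names (`sigma`, `lam`, `kappa` for the constant, linear, and quadratic factors, which are assumptions, not the thesis's notation):

```python
import numpy as np

def constant_cov(m_hat, sigma):
    """Constant-error sketch: fixed range uncertainty along unit ray m_hat."""
    return sigma**2 * np.outer(m_hat, m_hat)

def linear_cov(m_hat, r, lam):
    """Linear-error sketch: range uncertainty grows linearly with range r."""
    return (lam * r)**2 * np.outer(m_hat, m_hat)

def quadratic_cov(m_hat, r, kappa):
    """Quadratic-error sketch: range uncertainty grows quadratically with r."""
    return (kappa * r**2)**2 * np.outer(m_hat, m_hat)

m = np.array([0.0, 0.0, 1.0])      # ray straight along the optical axis
C = quadratic_cov(m, 2.0, 0.005)   # all variance concentrated along the ray
```

Because each covariance is an outer product of the ray direction with itself, it has rank one: there is zero variance perpendicular to the ray, matching the "no uncertainty in pointing direction" assumption.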
Stereo Error (Figure 5f): Murray and Little’s Murray and Little (2005) two-parameter error model for stereo disparity represents uncertainty by two nonnegative parameters $\sigma_p^2$ and $\sigma_d^2$:

$\sigma_p^2$ is the variance of the pointing error of the measurement vectors, represented as the variance in pixels of their intersections with the image plane.

$\sigma_d^2$ is the variance of the disparity matching error, also measured in pixels.
The covariance matrix for a 3D point $(x, y, z)$ in physical units is:
(10) $\Sigma = J\,\mathrm{diag}(\sigma_p^2, \sigma_p^2, \sigma_d^2)\,J^T$
where $J$ is the Jacobian of the triangulation map:
(11) $J = \dfrac{\partial(x, y, z)}{\partial(u, v, d)} = \dfrac{b}{d^2}\begin{pmatrix} d & 0 & -(u - u_0) \\ 0 & d & -(v - v_0) \\ 0 & 0 & -f \end{pmatrix}$
with $b$ the baseline (in physical units), $d$ the disparity (in pixels), $(u, v)$ the image pixel coordinates, and $f$ the IR camera focal length (in pixels).

Whether based on stereo or time-of-flight, range data exhibits heteroskedasticity (non-uniform variance) — typically there is much more uncertainty in range than aim Pathak et al. (2009); Murray and Little (2005), the variance changes with range, and because the measurement rays usually have a single center of projection, the error ellipsoids for the sampled points are not co-oriented: each is elongated in the direction of its own measurement ray (Figure 5).
Thus, to estimate range data covariances we apply the two-parameter pointing/disparity stereo error model proposed by Murray and Little (2005) (based on earlier work by others such as Matthies and Shafer (1990)) to estimate input sample covariances. The error model parameters we used for the Kinect are given in pixels; the pointing variance is from Konolige and Mihelich (2010), and the disparity variance was determined experimentally following Murray and Little (2005). (Corresponding values for the Bumblebee2 camera come from its specifications document.)
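The stereo model amounts to first-order propagation of pixel-space noise through the triangulation map. A sketch under that assumption, with illustrative (not calibrated) parameter values:

```python
import numpy as np

def stereo_covariance(u, v, d, f, u0, v0, b, var_p, var_d):
    """Propagate pixel-space noise diag(var_p, var_p, var_d) on (u, v, d)
    through the triangulation Jacobian J to a 3D covariance: J S J^T."""
    J = (b / d**2) * np.array([
        [d,   0.0, -(u - u0)],
        [0.0, d,   -(v - v0)],
        [0.0, 0.0, -f],
    ])
    S = np.diag([var_p, var_p, var_d])
    return J @ S @ J.T

# Illustrative values: a pixel near the image center at 30 px disparity.
Sigma = stereo_covariance(400.0, 300.0, 30.0,
                          585.0, 320.0, 240.0, 0.075,
                          var_p=0.25, var_d=0.25)
```

The resulting matrix is symmetric positive semidefinite, and its range-direction variance grows as the square of the depth, reproducing the heteroskedastic behavior described above.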
We use this error model when fitting patches to points sampled from a single sensor viewpoint (i.e. a single captured range image). In Chapter 5 we apply KinectFusion to the range data, which provides an alternate approach to handling inlier noise by averaging over many resamplings of the same environment surfaces. In some cases we also use either discontinuity-preserving bilateral Tomasi and Manduchi (1998) or median filters Huang et al. (1979) to reduce noise effects:

Median Filter The median filter replaces the central pixel of a fixed size window in the image with the median inside the window. The method can be very efficient Perreault and Hebert (2007) and effective for reducing noise and removing outliers from the data, while preserving discontinuities.

Bilateral Filter The bilateral filter replaces each pixel with a weighted average of its neighbors, where the weights fall off with both spatial distance and depth difference, making the filter nonlinear and discontinuity-preserving Paris and Durand (2006).
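A straightforward (unoptimized) sketch of the sliding-window median on a depth image; the window is clamped at the image borders rather than padded, and the window size is an assumption:

```python
import numpy as np

def median_filter_depth(depth, k=3):
    """Replace each pixel with the median of a k x k window around it,
    clamping the window at the image borders."""
    h, w = depth.shape
    r = k // 2
    out = np.empty_like(depth)
    for i in range(h):
        for j in range(w):
            win = depth[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            out[i, j] = np.median(win)
    return out

depth = np.ones((5, 5))
depth[2, 2] = 10.0                      # a single outlier spike
smoothed = median_filter_depth(depth)   # the spike is suppressed
```

Because the median ignores extreme values in the window, an isolated spike is removed without blurring true depth discontinuities, which is the property that makes it attractive for range images.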
3.2 3D Point Cloud Filtering
There are various other filtering methods for the acquired point clouds serving different purposes Rusu and Cousins (2011). Some used in this thesis are the following:

Passthrough: The passthrough filter removes points whose specified properties (e.g. x, y, z coordinates, intensity, etc.) are outside of some limits.

Radius Outlier Removal: Removes outliers by checking the number of points in a predefined radius neighborhood.

Decimation: Decimates the points by a given factor, discarding rows and columns of pixels in the image, e.g. a factor of 2 will discard all the even rows and columns.

Lower Resolution: Lowers the resolution by a given factor by block averaging, e.g. a factor of 2 will replace each 2by2 submatrix with its average value. It is similar to the median filter, but the latter can be more robust to outliers.

Voxel Grid: The approximate voxel grid filter downsamples the cloud by creating a 3D voxel grid and replacing all the points in each voxel with their centroid. This method leaves the point cloud unorganized. Some fast approximations have been introduced Rusu and Cousins (2011) to improve the efficiency of this filter.
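The decimation and lower-resolution filters above can be sketched in a few lines of array indexing; the image dimensions are assumed divisible by the factor in the block-averaging case:

```python
import numpy as np

def decimate(img, factor=2):
    """Keep every factor-th row and column, discarding the rest."""
    return img[::factor, ::factor]

def lower_resolution(img, factor=2):
    """Block-average each factor x factor tile (dimensions assumed
    divisible by factor)."""
    h, w = img.shape
    tiles = img.reshape(h // factor, factor, w // factor, factor)
    return tiles.mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
small = decimate(img)          # 2x2 subsampled image
avg = lower_resolution(img)    # 2x2 block-averaged image
```

Decimation is cheapest but keeps whatever noise lands on the retained pixels, while block averaging smooths as it downsamples; as noted above, a median over each block would be more robust to outliers.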
4 Inertial Measurement Unit (IMU)
The use of proprioceptive Inertial Measurement Units (IMUs) for sensing the direction of gravity is very useful for locomotion. Using a CH Robotics UM6 9-DoF IMU mounted on top of our range sensors (Figure 3), we receive 100 Hz IMU data spatiotemporally coregistered with the 30 Hz RGBD data received from the depth sensor. Though an IMU can also sense velocities and accelerations, in this work we use only the gravity direction as input to our algorithms. In this thesis temporal registration of the RGB, depth, and IMU datastreams is based on timestamps, and is approximate because the underlying operating systems used were not hard real-time. Spatial registration of the RGB and depth data is based on manufacturer hardware calibration and image warping implemented in the hardware driver. Spatial registration of the depth and IMU data uses a custom calibration algorithm described next.
4.1 IMU Calibration
Calibration is required for calculating the rotation transform that gives the orientation of the UM6 relative to the range sensor. Given a dataset of depth images of a flat horizontal surface that includes a dominant plane (e.g. a flat floor) and the corresponding UM6 orientation data, the gravity vector is calculated for each depth image in the UM6 coordinate frame from the UM6 orientation data. We pair each gravity vector with the corresponding one in the depth camera coordinate frame, which is estimated as the downward facing normal of the dominant plane. For all these pairs of gravity vectors we solve the orthogonal Procrustes problem Eggert et al. (1997) that gives the UM6 to Camera transform (Figure 6).
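The rotation estimation step can be sketched as the SVD solution of the orthogonal Procrustes problem. In this sketch (names and the synthetic test data are illustrative), columns of `A` hold gravity directions in the IMU frame and columns of `B` the paired directions estimated from the dominant plane in the camera frame; the determinant guard keeps the result a proper rotation rather than a reflection:

```python
import numpy as np

def procrustes_rotation(A, B):
    """Rotation R minimizing ||R A - B||_F (orthogonal Procrustes via SVD).
    Columns of A: gravity directions in the IMU frame; columns of B: the
    paired directions in the camera frame."""
    U, _, Vt = np.linalg.svd(B @ A.T)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflections
    return U @ D @ Vt

# Synthetic check: rotate known direction pairs by 30 degrees about z.
theta = np.pi / 6
c, s = np.cos(theta), np.sin(theta)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
A = np.array([[0.0, 0.3, -0.2],
              [0.0, 0.1,  0.4],
              [1.0, 0.95, 0.9]])
A = A / np.linalg.norm(A, axis=0)   # unit gravity directions as columns
B = R_true @ A
R_est = procrustes_rotation(A, B)   # recovers R_true
```

With noisy gravity-vector pairs the same SVD gives the least-squares rotation, which is why collecting depth images of the floor from varied sensor orientations improves the calibration.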
5 Related Work
Perception and Sensing
Much research on locomotion focuses on control or path planning and assumes known terrain models or uses motion capture systems to extract information about the robot and its position with respect to the terrain (e.g. Doshi et al. (2007)). Other systems, such as Chew et al. (1999), use only proprioceptive sensors for driving locomotion actions. For an autonomous real-world task, where there is no prior information about the environment, driving actions from high-level—but quantitative—perception using exteroceptive sensors is essential. Tactile sensors are helpful only once actual contact is taking place; for a priori information about the surface, range sensing is required. Some systems use only color cameras Filitchkin and Byl (2012), while others extract depth data using laser scanners Nishiwaki et al. (2012), stereo Stelzer et al. (2012), or time-of-flight Roennau et al. (2010) cameras. Several other walking robots have used depth or RGB-D sensors, as we do, including stereo vision on QRIO Gutmann et al. (2008), an Xtion on NAO Maier et al. (2012), and a depth camera on HRP2 Nishiwaki et al. (2012); Ramos et al. (2013); Brossette et al. (2013). Since sensors are noisy and measurement uncertainty is high, perception using range sensing and an IMU is a very challenging task, but it is rapidly advancing Rusu and Cousins (2011).
Uncertainty Representation
The importance of representing 3D range data uncertainty has been considered at least since the 1980s Matthies and Shafer (1990), when 2D Gennery (1980) and 3D Hallam (1983); Broida and Chellappa (1986) normal distributions were used, as well as additive zero-mean Gaussian noise modeling Faugeras et al. (1986) for 3D stereo measurements. In Matthies and Shafer (1990) non-Gaussian noise was considered for errors in the nonlinear triangulation operation, approximated with 3D Gaussian distributions; later in Johnson and Kang (1997) a cylindrical Gaussian distribution centered at the 3D point and oriented along the measurement ray was used for modeling stereo error uncertainty. In Dorai et al. (1997) uncertainty was modeled in the depth measurement using ellipses Hebert et al. (1993). Tasdizen and Whitaker Tasdizen and Whitaker (2003) assumed a Gaussian distribution for representing the depth noise with zero angular error. Gaussian modeling is not the only way to represent 3D point cloud uncertainty: Pauly, Mitra, and Guibas Pauly et al. (2004) considered the point cloud the result of a stochastic process corrupted by zero-mean additive noise to derive a likelihood and a confidence map for the data. Closed-form variance formulations Bae et al. (2009) and non-Gaussian distributions Pan et al. (2011) are alternative ways to represent the uncertainty of range data. Recently uncertainty models for the Kinect sensor have been introduced Khoshelham (2011); Smisek et al. (2011); Khoshelham and Elberink (2012), and a mixture of Gaussians was used in Dryanovski et al. (2013).
6 Summary and Future Work
In this chapter we introduced the input data acquisition process for both exteroceptive (range sensor) and proprioceptive (IMU sensor) data. We discussed error modeling for the range data and different types of filtering for 3D point clouds. We also described a method for calibrating the IMU sensor with respect to the RGBD camera on which it is mounted.
Data fusion is a key aspect of robotics and has been studied extensively. An interesting direction for bipedal locomotion perception is to fuse exteroceptive (range sensing) and proprioceptive (kinematics and tactile sensing) data for detecting contact areas in the environment. Exteroception can detect upcoming terrain contact areas from a distance, but with relatively high uncertainty. Kinematic proprioception senses the pose of contact areas on the robot itself—e.g. heel, toe, foot sole—potentially with relatively low uncertainty. Once a contact is established, the environment contact area can be re-measured exproprioceptively through kinematics and touch, possibly with reduced uncertainty compared to prior exteroception. Finally, the uncertainty representation plays an important role in 3D perception. Various ways of modeling uncertainty have been introduced, but there is not yet a generally accepted model and further investigation is needed.
Chapter \thechapter Environment Representation
Some of the most challenging open problems in robotics are those which require reliable contact with unstructured world surfaces while locomoting (Figure 7 right). To enable rough-terrain walking and climbing, a perception system that can spatially model and finely quantify potential 3D contact patches may be needed. Contact is well-studied (e.g. Mason and Salisbury (1985)) but, arguably, there is not yet any accepted general system for modeling the shape and pose of potential contact surface patches, including both patches on the robot (e.g. fingertips, foot soles, etc) and also in the surrounding environment. This is especially true when (a) curved, bounded patches with (b) geometrically meaningful minimal parameterizations and (c) quantified uncertainty are desired (Figure 7 left).
Why curved patches? Our interest is legged locomotion on large rocks, where flat areas can be rare. More broadly, contact surfaces in man-made environments are also often curved—railings, doorknobs, steering wheels, knobs, etc. Though curved surfaces can be approximated by sets of smaller planar patches Vaskevicius et al. (2010), the job can often be done with fewer and larger curved patches. Curved surface geometry is more complex, but reasoning about fewer, larger patches may still be an advantageous trade-off. For example, a spherical robot foot stepping into a divot on a rock might be modeled as the interaction between just one spherical and one elliptic paraboloid patch (on foot and rock, respectively). If the surfaces were approximated using collections of smaller planar patches, the interaction could require combinatorial reasoning about many possible contacting pairs.
By “geometrically meaningful minimal parameterizations” we mean that each patch is defined by the fewest possible parameters, and that these have direct geometric interpretations—rotations, translations, curvatures, lengths, and angles. Geometric (vs. algebraic) parameterizations also support reasoning Dai et al. (2007) about possible actions with patches, and allow some representation of spatial uncertainty with geometric error ellipsoids. Minimality is desirable because redundant (nonminimal) parameterizations can slow the numerical optimizations used in surface fitting Grassia (1998) and must be handled specially in uncertainty modeling Pathak et al. (2009).
It is often important to get both a best estimate of patch parameters and a quantification of the uncertainty therein. We develop full uncertainty quantifications based on Gaussian modeling with covariance matrices as described in Chapter \thechapter, by propagating the input 3D point cloud uncertainty Meyer (1992); Thrun et al. (2005) to the output patch. Though in this thesis we use dense volumetric depth map fusion for mapping (Chapter \thechapter), we also intend our models to be usable in sparse Kalman-type SLAM (Simultaneous Localization and Mapping Smith and Cheeseman (1986); Smith et al. (1990); Durrant-Whyte and Bailey (2006)) algorithms that maintain a dynamic local patch map (Figure 7) of contact patch features around a robot. Such a map could potentially include both environment surfaces and contact pads on the robot itself, which may themselves be uncertain due to kinematic error.
We first give the details of the patch models for representing the environment in local contact areas, followed by an algorithm to fit and validate a patch to noisy point cloud data from common types of range sensor. This fitting is the main step in using patches to represent surface shape and pose. We also demonstrate the algorithms in experiments with simulated and real range data. More experiments are presented in Chapters \thechapter and \thechapter in practical contexts including humans walking on rocky natural terrain and a biped robot walking near and stepping on rocks.
7 Patch Modeling
In Vona and Kanoulas (2011), we introduced a general-purpose set of ten curved and flat patch types (Figure 8, Table 1) suitable for both natural and man-made surfaces, balancing expressiveness with compactness of representation. Eight come from the general second-order polynomial approximation to a smooth surface at a given point—the principal quadric—which is always a paraboloid, possibly degenerated to a plane Petitjean (2002). We add two non-paraboloid types to better model common man-made spherical and cylindrical surfaces, and we pair each surface type with a specific boundary curve to capture useful symmetries and asymmetries. Each patch is parametrized using extrinsic and intrinsic parameters (see parametric surfaces in Mortenson (2006)) for its spatial pose and shape, respectively.
surface                bound    DoF  world frame equations
elliptic paraboloid    ellipse   10  (26,27,29)
hyperbolic paraboloid  ellipse   10  (26,27,29)
cylindric paraboloid   aa rect    9  (26,27,30,31)
circular paraboloid    circle     7  (26,27,29)
plane                  ellipse    8  (26,27,29)
plane                  circle     6  (26,27,29)
plane                  aa rect    8  (26,27,30,31)
plane                  c quad    11  (26,27,33,31)
sphere                 circle     7  (36,37,29)
circular cylinder      aa rect    9  (40,41,30,31)
7.1 Extrinsic and Intrinsic Surface Parameters
An instance of a patch will be a vector of real parameters which define both its shape (curvature and boundary) and its 3D rigid-body pose. We call the former intrinsic and the latter extrinsic parameters Srinivasan (2003). Different issues must be considered to achieve minimal parametrization for each, and the distinction also enables the option to model shape (intrinsic) and pose (extrinsic) uncertainty separately. Minimal intrinsic parametrization for the proposed patches will be given by (a) one parameter for each variable curvature, and (b) a minimal parametrization of the boundary curve. However, minimal extrinsic parametrization depends on the continuous symmetry class of the patch. For example, a patch with two different curvatures (Figure 9 left) has no continuous symmetry: its rigid body pose—any element in the special Euclidean group $SE(3)$—has six degrees of freedom (DoF).
But a planar patch with a circular boundary (Figure 9 right) has a continuous rotation symmetry and only five extrinsic DoF. Remarkably, it has been shown that there are exactly seven continuous symmetry classes in 3D Srinivasan (2003): revolute, prismatic, planar, spherical, cylindrical, helical, and general (the first six correspond to the lower kinematic pairs; the last represents no continuous symmetry). Since we only consider patches with boundaries, we need only the general (no continuous symmetry, 6 DoF pose) and revolute (one continuous rotation symmetry, 5 DoF pose) classes—continuous translation symmetries are not possible for bounded patches.
7.2 Pose Representation with the Exponential Map
We require two extrinsic parameterizations: one with six parameters for asymmetric patches and one with five parameters for patches with revolute symmetry. It is well known that, because the Lie manifold of the special orthogonal group $SO(3)$ (the rotation subgroup of $SE(3)$) is non-Euclidean, there is no singularity-free minimal parametrization of $SO(3)$. For the general 6-DoF case we thus select a minimal parametrization with singularities that are easiest to handle for our application. One of the core computations will be patch fitting by iterative optimization, and for this Grassia showed in Grassia (1998) that a useful pose representation is (we explicitly notate transposes; orientation is crucial, especially for Jacobians)

(12)  $\xi \triangleq [t^T\ r^T]^T \in \mathbb{R}^6,\quad t, r \in \mathbb{R}^3$

where $t$ is a translation and $r$ is an orientation vector giving an element of $SO(3)$ via an exponential map. Grassia observed that in this parametrization singularities are avoidable by a fast dynamic reparameterization, reviewed below.

We use Rodrigues' rotation formula for the exponential map (Grassia used quaternions):

(13)  $R(r) \triangleq I_{3\times3} + \frac{\sin\theta}{\theta}[r]_\times + \frac{1-\cos\theta}{\theta^2}[r]_\times^2,\quad \theta \triangleq \|r\|$

where $[r]_\times$ is the skew-symmetric cross-product matrix of $r$. Despite the divisions by $\theta$, (13) converges to $I_{3\times3}$ as $\theta \to 0$. For numerical stability we use the series expansion approximations $\sin\theta/\theta \approx 1 - \theta^2/6$ and $(1-\cos\theta)/\theta^2 \approx 1/2 - \theta^2/24$ for small $\theta$. As promised, the representation has a direct geometric interpretation: $t$ is just a translation, and $\theta$ (wlog $\theta \in [0,\pi]$) gives the right-hand-rule rotation angle about the spatial axis defined by the unit vector $r/\theta$. While exponential map approaches are not new Brockett (1983); Park (1994), matrices in $\mathfrak{se}(3)$, the Lie algebra of $SE(3)$, are typically used instead of $\xi$. Though elegant, the former do not satisfy our goals of minimal parametrization and direct geometric interpretation. (There is a 1:1 correspondence between matrices in $\mathfrak{se}(3)$ and elements of the $\xi$ parametrization; conceptually, we have invoked this correspondence and simplified the results directly in terms of $\xi$.)
Using the fact that counterclockwise rotation by $\theta$ is equivalent to clockwise rotation by $2\pi - \theta$, Grassia's reparametrization converts any $r$ into a canonical one $\bar r$ with $\bar\theta \triangleq \|\bar r\| \le \pi$ (for $\bar\theta = \pi$ there is still ambiguity between $\bar r$ and $-\bar r$; this can be resolved by a consistent sign policy):

(14)  $\bar r = \left(1 - \frac{2\pi}{\theta}\left\lceil\frac{\theta - \pi}{2\pi}\right\rceil\right) r$

$\bar r$ represents the same rotation as $r$, but stays away from the singularity surfaces where $\theta$ is a multiple of $2\pi$.
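The exponential map (13), its small-angle series fallback, and the canonicalizing reparameterization can be sketched as follows (a minimal numpy illustration; the threshold and function names are ours, not from the thesis):

```python
import numpy as np

def rodrigues(r):
    """Rotation matrix from rotation vector r via Rodrigues' formula (13)."""
    th = np.linalg.norm(r)
    if th < 1e-6:
        # Series expansions: sin(th)/th ~ 1 - th^2/6, (1-cos th)/th^2 ~ 1/2 - th^2/24.
        a, b = 1.0 - th**2 / 6.0, 0.5 - th**2 / 24.0
    else:
        a, b = np.sin(th) / th, (1.0 - np.cos(th)) / th**2
    rx = np.array([[0.0, -r[2], r[1]],
                   [r[2], 0.0, -r[0]],
                   [-r[1], r[0], 0.0]])  # cross-product matrix [r]_x
    return np.eye(3) + a * rx + b * (rx @ rx)

def canonicalize(r):
    """Reparameterize r to the same rotation with angle at most pi."""
    th = np.linalg.norm(r)
    if th > np.pi:
        th_mod = np.mod(th, 2.0 * np.pi)
        r = r * (th_mod / th)                     # drop whole turns
        if th_mod > np.pi:
            r = r * (1.0 - 2.0 * np.pi / th_mod)  # go the short way around
    return r
```

Canonicalizing after every optimization update keeps the rotation vector away from the $\theta = 2\pi$ singularity surfaces.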
Algebraically, $\xi$ corresponds to an element

$X(\xi) \triangleq \begin{bmatrix} R(r) & t \\ 0 & 1 \end{bmatrix}$

of $SE(3)$, a homogeneous rigid body transform, and can thus define the pose of a local coordinate frame $\mathcal{L}$ (and a patch therein) relative to a world frame $\mathcal{W}$: $R(r)$ is a basis for $\mathcal{L}$ and $t$ is its origin. The transformation of a point $q_\ell$ in $\mathcal{L}$ to $q_w$ in $\mathcal{W}$, and the reverse, are familiar functions

(15)  $q_w = x_{\mathcal{W}}(q_\ell, \xi) \triangleq R(r)\,q_\ell + t$

(16)  $q_\ell = x_{\mathcal{L}}(q_w, \xi) \triangleq R(r)^T (q_w - t)$

where (16) makes use of the inverse transform

(17)  $X^{-1}(\xi) = \begin{bmatrix} R(r)^T & -R(r)^T t \\ 0 & 1 \end{bmatrix}$
Equations (12–16) constitute our 6-DoF pose parametrization. For the 5-DoF case, observe that only one of the three basis vectors of $\mathcal{L}$ need be specified; rotation symmetry allows the others to make any mutually orthogonal triple. Only two DoF are required, equivalent to specifying a point on the unit sphere. We do this by reusing (12–16) with $r_z$ fixed at 0:

(18)  $\xi_5 \triangleq [t^T\ r_x\ r_y]^T \in \mathbb{R}^5$
The geometric interpretation of the reduced orientation vector $[r_x\ r_y\ 0]^T$ is the same as for $r$, except that the rotation axis is constrained to the $xy$ plane. In some contexts we may want to calculate an $r_{2D} \triangleq [r_x\ r_y\ 0]^T$ that induces a local coordinate frame with the same $\hat z$ axis as a given $r$. For any given canonical $r$, a canonical $r_{2D}$ always exists that satisfies

(19)  $R(r_{2D})\,\hat z = R(r)\,\hat z,\quad \hat z \triangleq [0\ 0\ 1]^T$

$r_{2D}$ can be calculated as

(20)  $r_{2D} = \alpha\,\frac{\hat z \times \hat z_r}{\|\hat z \times \hat z_r\|},\quad \hat z_r \triangleq R(r)\,\hat z,\quad \alpha \triangleq \operatorname{atan2}\!\left(\|\hat z \times \hat z_r\|,\ \hat z^T \hat z_r\right)$
As in Brockett's product of exponentials Brockett (1983), 6-DoF poses can be composed to make any kinematic chain. Let

(21)  $\xi_n, \ldots, \xi_1$

be the poses (equiv. transforms) in the chain from end to base in order from right to left. Then the pose $\xi_c$ of a patch attached to the end of the chain relative to the base is given by

(22)  $X(\xi_c) = X(\xi_n) \cdots X(\xi_1)$

substituting $\xi_5$ for 5-DoF patches, and recovering $\xi_c$ using the log map $r(R)$ corresponding to the inverse of (13). We give an algorithm to compute $r(R)$ in Appendix B.
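The homogeneous transform, its closed-form inverse (17), and chain composition can be sketched as follows (a minimal numpy illustration with hypothetical names; conventions as above):

```python
import numpy as np

def rodrigues(r):
    """Exponential map: rotation matrix from rotation vector r."""
    th = np.linalg.norm(r)
    a = 1.0 - th**2 / 6.0 if th < 1e-6 else np.sin(th) / th
    b = 0.5 - th**2 / 24.0 if th < 1e-6 else (1.0 - np.cos(th)) / th**2
    rx = np.array([[0.0, -r[2], r[1]],
                   [r[2], 0.0, -r[0]],
                   [-r[1], r[0], 0.0]])
    return np.eye(3) + a * rx + b * (rx @ rx)

def pose(t, r):
    """Homogeneous rigid transform X = [R(r) t; 0 1] for pose (t, r)."""
    X = np.eye(4)
    X[:3, :3] = rodrigues(np.asarray(r, float))
    X[:3, 3] = t
    return X

def inverse(X):
    """Closed-form rigid inverse [R^T  -R^T t; 0 1]."""
    R, t = X[:3, :3], X[:3, 3]
    Xi = np.eye(4)
    Xi[:3, :3], Xi[:3, 3] = R.T, -R.T @ t
    return Xi
```

A kinematic chain is then just the matrix product of the member transforms, with the patch pose recovered from the product via the log map.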
7.3 Patch Models
We now present the details of the ten surface patch models (Figure 8, Table 1) based on seven curved surface types. Five of these partition the paraboloids, including the important degenerate case of a plane; the other two add true spherical and circular cylinder patches, non-paraboloids that are common in man-made environments and on robots. For non-planar surfaces we select one specific parametrized boundary shape which trims the surface into a local patch. For planes we allow a choice of four boundary shapes.
The next two sections give the details of the paraboloid and non-paraboloid patch models. This particular system is not the only possible taxonomy; it reflects our design choices in an attempt to balance expressiveness vs minimality.
7.3.1 Paraboloids
The best-fit degree-two local polynomial approximation to any smooth surface $S$ at a given point $p$, called the principal quadric, is always a paraboloid—a quadric of one sheet with a central point of symmetry about which the surface has two independent curvatures in orthogonal directions (Figure 9 left). These are the principal curvatures $\kappa_x, \kappa_y$ of $S$ at $p$, and $p$ is the symmetry point. Defining $\hat x_\ell$ and $\hat y_\ell$ as unit vectors in the directions of the principal curvatures in the tangent plane to $S$ at $p$, the surface normal to $S$ at $p$ is $\hat z_\ell \triangleq \hat x_\ell \times \hat y_\ell$. If $S$ is considered to be embedded in a world coordinate frame $\mathcal{W}$, then $p$ is the origin and
$[\hat x_\ell\ \hat y_\ell\ \hat z_\ell]$ is a basis for the principal coordinate frame (all standard terms) of $S$ at $p$, which we also call the local frame $\mathcal{L}$ ($\mathcal{L}$ is also the Darboux frame Hameiri and Shimshoni (2002) of the paraboloid).
Using the log map, the transform $x_{\mathcal{L}}(\cdot, \xi)$ from (16) takes points from $\mathcal{W}$ to $\mathcal{L}$, enabling a short derivation of equations for a general paraboloid parametrized by the curvatures $\kappa \triangleq (\kappa_x, \kappa_y)$, $r$, and $t$. Starting in $\mathcal{L}$, where the paraboloid is in standard position, with $q \triangleq [x\ y\ z]^T$,

(24)  $0 = \kappa_x x^2 + \kappa_y y^2 - 2z$

(25)  $z = \tfrac{1}{2}\left(\kappa_x x^2 + \kappa_y y^2\right)$
are the implicit and explicit forms for the surface equation, respectively, with $q$ a point on the patch in $\mathcal{L}$ and $\kappa$ the parameters of the explicit form. Moving to $q_w$ in world frame $\mathcal{W}$ is accomplished by composing (24,25) with (15,16), yielding

(26)  $0 = \kappa_x x^2 + \kappa_y y^2 - 2z,\quad [x\ y\ z]^T \triangleq x_{\mathcal{L}}(q_w, \xi)$

(27)  $q_w = x_{\mathcal{W}}\!\left(\left[x\ y\ \tfrac{1}{2}(\kappa_x x^2 + \kappa_y y^2)\right]^T,\ \xi\right)$

Note that in this formulation $u \triangleq [x\ y]^T$ is always the projection of $q_w$ onto the local frame $xy$ plane:

(28)  $u = [I_{2\times2}\ 0]\ x_{\mathcal{L}}(q_w, \xi)$
In the general case $\kappa_x \ne \kappa_y$, giving 7- or 8-DoF paraboloids—6 pose DoF plus one or two nonzero curvatures (boundary parameterizations will add DoF). Six pose DoF are required because $\kappa_x \ne \kappa_y$ implies no continuous rotation symmetries, only discrete symmetries about $\hat z_\ell$. It is standard to separate three surface types for this case (Figure 8): elliptic paraboloids have two nonzero curvatures with equal signs, hyperbolic paraboloids have two nonzero curvatures with opposite signs, and cylindric paraboloids have one nonzero curvature. In all cases $\hat z_\ell$ is the outward-pointing surface normal and positive/negative curvatures correspond to concave/convex directions on the patch, respectively (to reduce ambiguity wlog fix an ordering of the curvatures, though some ambiguity is unavoidable due to bilateral symmetries).
Boundaries
We bound elliptic and hyperbolic paraboloid patches with ellipses in the $xy$ plane of the local frame $\mathcal{L}$, axis aligned and centered at the origin (Figure 10 (b)). If $d_x, d_y$ are the ellipse radii then the bounded patch is the subset of the full surface (24–27) where, with $u$ from (28), $u$ satisfies

(29)  $\left(\frac{x}{d_x}\right)^2 + \left(\frac{y}{d_y}\right)^2 \le 1$
For cylindric paraboloid patches, replace the ellipse boundary with an axis aligned rectangle with half-widths $d_x, d_y$ (Figure 10 (a)). In the $xy$ plane of $\mathcal{L}$ the vertices are

(30)  $c_1 \triangleq [d_x\ d_y]^T,\quad c_2 \triangleq [-d_x\ d_y]^T,\quad c_3 \triangleq [-d_x\ -d_y]^T,\quad c_4 \triangleq [d_x\ -d_y]^T$

in counterclockwise order, and the bounding condition can be stated as, with $u$ from (28),

(31)  $0 \le l(c_i, c_{i+1}, u),\quad i \in \{1,2,3,4\},\quad c_5 \triangleq c_1$

where $l(a, b, u)$ is the implicit form for a 2D line given two points on the line; $u$ is on or to the left of the directed line through $a$ towards $b$ iff

(32)  $0 \le l(a, b, u) \triangleq (b_x - a_x)(u_y - a_y) - (b_y - a_y)(u_x - a_x)$
For the special case $\kappa_x = \kappa_y$ we identify two more surface types (Figure 8): circular paraboloids have both curvatures equal and nonzero, and planes have both curvatures zero. Both of these have continuous rotation symmetry about $\hat z_\ell$, so we use the 5-DoF pose parametrization $\xi_5$, provided that the patch boundary also has the same continuous rotation symmetry. The latter holds for circular boundaries, which we use for circular paraboloids (Figure 10 (d)). Let $\kappa$ be the surface curvature and $d$ the bounding circle radius; circular paraboloids are then defined by (24–29) with $\kappa_x = \kappa_y = \kappa$, $d_x = d_y = d$, and with the dimensions of the function domains correspondingly reduced.
For the important case of paraboloids degenerated to planes we give a choice of four boundary types: ellipses, circles, rectangles, or general convex quadrilaterals (developed next). For all except circles, the planar patch loses its continuous rotation symmetry and requires full 6-DoF pose parametrization; the patch is defined by (24–27) with $\kappa_x = \kappa_y = 0$ (and correspondingly reduced function domains) and either (29) or (31). Planar patches with circular boundary are the same as circular paraboloids but with $\kappa = 0$.
For convex quadrilateral boundaries, keep $t$ at the intersection of the diagonals $c_1 c_3$ and $c_2 c_4$ (Figure 10 (c)), where $c_{1,\ldots,4}$ are the vertices in counterclockwise order in the $xy$ plane of local frame $\mathcal{L}$. Define $\gamma$ as half the angle between the diagonals and $d_{1,\ldots,4}$ as the half-diagonal lengths such that

(33)  $c_{1,3} \triangleq \pm d_{1,3}\,[\sin\gamma\ \cos\gamma]^T,\quad c_{2,4} \triangleq \pm d_{2,4}\,[-\sin\gamma\ \cos\gamma]^T$

Then the quadrilateral is defined by (31) using vertices (33) parametrized by $(\gamma, d_1, d_2, d_3, d_4)$. Convexity is ensured by construction, and only five parameters are needed even though a general planar quadrilateral has 8 DoF—the remaining three (a rotation about the plane normal and two in-plane translations) are contributed by the extrinsic pose.
7.3.2 Spheres and Circular Cylinders
Spheres and circular cylinders are common on robots and in man-made environments. Though still quadrics, neither is a paraboloid, suggesting two additional patch types (Figure 8). (We do not model complete spheres or cylinders, only bounded patches of hemispheres and half-cylinders.)
Again starting in local frame $\mathcal{L}$, the implicit and explicit equations of an upright hemisphere with apex at the origin and curvature $\kappa$ (hence possibly infinite radius $1/|\kappa|$) are, with $q \triangleq [x\ y\ z]^T$ and $\rho^2 \triangleq x^2 + y^2$ (in the limit as $\kappa \to 0$, (34–41) all reduce to planes),

(34)  $0 = \kappa\,(x^2 + y^2 + z^2) - 2z$

(35)  $z = \frac{\kappa\rho^2}{1 + \sqrt{1 - \kappa^2\rho^2}}$

Composing these with (15,16) gives the world frame forms

(36)  $0 = \kappa\,q^T q - 2z,\quad q \triangleq x_{\mathcal{L}}(q_w, \xi)$

(37)  $q_w = x_{\mathcal{W}}\!\left(\left[x\ y\ \tfrac{\kappa\rho^2}{1 + \sqrt{1 - \kappa^2\rho^2}}\right]^T,\ \xi\right)$
Circular half-cylinder surfaces are similar but (a) have no dependence on $y$ and (b) require 6-DoF pose:

(38)  $0 = \kappa\,(x^2 + z^2) - 2z$

(39)  $z = \frac{\kappa x^2}{1 + \sqrt{1 - \kappa^2 x^2}}$

(40)  $0 = \kappa\,(x^2 + z^2) - 2z,\quad [x\ y\ z]^T \triangleq x_{\mathcal{L}}(q_w, \xi)$

(41)  $q_w = x_{\mathcal{W}}\!\left(\left[x\ y\ \tfrac{\kappa x^2}{1 + \sqrt{1 - \kappa^2 x^2}}\right]^T,\ \xi\right)$
Boundaries
Sphere patches are bounded with a circle and circular cylinder patches with an axis aligned rectangle, reusing the bounding conditions (29) and (30,31), respectively, applied to the projection $u$ from (28) (Table 1).
8 Patch Fitting
Having defined the patch models, it is natural to consider recovering contact surface areas in the environment by fitting bounded curved patches to noisy point samples with quantified uncertainty both in the inputs (the points) and the outputs (the patch parameters), which is not a trivial problem (we have found very few prior reports on this particular fitting problem including boundaries and quantified uncertainty) (Figure 11). Though linear least squares (LLS) can fit a quadric surface to points Dai et al. (2007), and its extension to linear maximum likelihood fits data corrupted by white noise, the problem appears to become nonlinear when the points are heteroskedastic (i.e. have nonuniform variance). Also, we want to fit bounded paraboloids, spheres, and circular cylinders, not just unconstrained quadrics.
In Vona and Kanoulas (2011) we give a nonlinear fitting algorithm which handles these issues. It is based on a variation of LevenbergMarquardt iteration that fits a bounded curved patch to a set of 3D sample points. The algorithm minimizes a sumofsquares residual by optimizing the patch implicit and explicit parameters. The residual for an individual sample point is computed by scaling the value of the implicit form by the inverse of a firstorder estimate of its standard deviation, which is derived in turn from a covariance matrix modeling the sensor uncertainty for the point.
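The per-point residual computation can be sketched as follows—dividing the implicit-form value by a first-order standard deviation estimate propagated from the point covariance, $\sigma_i^2 = \nabla f(p_i)^T \Sigma_i \nabla f(p_i)$ (an illustrative sketch with hypothetical names, not the thesis' actual WLM implementation):

```python
import numpy as np

def scaled_residuals(points, covs, f, grad_f):
    """Uncertainty-scaled residuals r_i = f(p_i) / sigma_i for an implicit
    surface f, with sigma_i from first-order covariance propagation.

    points: list of 3-vectors; covs: list of 3x3 covariance matrices;
    f: implicit form of the surface; grad_f: its spatial gradient.
    """
    res = np.empty(len(points))
    for i, (p, S) in enumerate(zip(points, covs)):
        g = grad_f(p)
        sigma = np.sqrt(max(g @ S @ g, 1e-12))  # floor avoids division by zero
        res[i] = f(p) / sigma
    return res
```

A Levenberg–Marquardt step then minimizes the sum of squares of these residuals over the patch parameters; points with larger uncertainty along the surface normal contribute proportionally less.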
Next we describe the patch fitting algorithm. Elliptic, hyperbolic, circular, and cylindric paraboloid as well as planar patches are fitted automatically depending on the detected curvatures of the underlying surface. The non-paraboloids (spherical and circular cylinder patches) are fitted only if requested. Similarly, for planar patches the selection of the boundary curve type is only partially automatic—rectangles and convex quads are used only if requested.
8.1 Patch Fitting Algorithm
The inputs are (Figure 12):

sample points with covariance matrices (positive semidefinite)

the general surface type to fit, one of {parab, plane, sphere, ccyl} (taking the type as an input allows constrained fitting of specific types; the best type could also be found automatically by checking all possibilities for the best fit)

the boundary type, one of {ellipse, circle, aa rect, c quad}, required only if the surface type is plane (otherwise the boundary type is implied by the surface type)

a boundary containment probability
The outputs are:

the fitted patch type

parameters , where is the DoF of the patch type (Table 1)

covariance matrix
The algorithm proceeds in 2 stages (9 total steps), which include heuristics for avoiding local minima when solving the nonlinear system. The first three steps fit an unbounded surface; the rest are largely concerned with fitting the boundaries, which can include final resolution of the patch center and orientation (in steps 6 and 9) where the bounding shape breaks symmetries of the underlying surface. An illustration of the whole fitting process for a simulated paraboloid is shown in Figure 13.

Fit an Unbounded Surface

Plane fitting

Fit a plane with LLS, ignoring the point covariance matrices.

Unless the requested surface type is parab, refit the plane with weighted Levenberg–Marquardt (WLM), detailed below, including the point covariances, using (36) with $\kappa = 0$.

Set $t$ to the perpendicular projection of the data centroid on the plane.
Note that:

At the end of Step 1 the refined plane is defined by (i) the point $t$ on it, which is the projection of the data centroid onto the plane, and (ii) the plane normal, which is the third column of the rotation matrix. The third element of the rotation vector is zero, since it is currently a 2D orientation vector, because an unbounded plane is rotationally symmetric about its normal. Note also that an unbounded plane has only 3 DoF, not 5. The extra two DoF are constrained during fitting by keeping $t$ at the projection of the centroid of the data points onto the plane. There is a choice of boundary shapes for plane fitting, and all except circular will break the rotational symmetry of the unbounded plane. This is handled later in Step 9, where it may be necessary to extend the rotation vector from 2D to 3D.

(if parab): Since a general asymmetric paraboloid will be fitted by WLM in Step 2, a relatively expensive WLM in Step 1 would not improve the result. The plane fit by LLS in Step 1i serves to initialize the Step 2 WLM in the correct region of parameter space (which is important, because WLM will only find a local optimum), but all components of the pose (the only parameters estimated so far) will be replaced by the WLM in Step 2.

(if plane): Since the end goal is to fit a plane and there is no Step 2 WLM for planes, the refinement in Step 1ii is required—importantly, to obtain the covariance matrix. Note that Step 1ii uses a redundant parameterization, since the point on the plane (defined by $t$) has two extra DoF to slide around on the plane. This redundancy is compensated by continually adjusting $t$ to the projection of the data centroid onto the plane (a point well centered in the data is preferable, since all the planar patch boundary shapes—ellipse, circle, aa rect, convex quad—are origin-centered).

(if sphere): Since in this case the end goal is a spherical patch, and in Step 2 only an unbounded (i.e. complete) sphere will be fitted, the orientation of the patch must be extracted separately, as it does not come from the WLM. It is determined instead by the plane normal calculated here in Step 1. Thus the plane normal estimate is essential, including the uncertainty of the input data points. The LLS in Step 1i did not consider that uncertainty, so the plane fit must be refined with WLM.

(if ccyl): For a circular cylindrical patch the logic is similar to the spherical patch. The Step 2 WLM fits an unbounded cylinder, and the orientation of the final bounded cylindrical patch comes from a combination of the plane normal calculated here and the direction of the cylinder symmetry axis recovered by the WLM in Step 2.


Curvature Discrimination (if the requested surface type is parab)


Fit the Boundary
Having fitted an unbounded patch to the 3D data, the boundary needs to be determined. Principal Component Analysis (PCA) is applied to the points projected onto the local plane to find the enclosing boundary. A closed-form solution Rocha et al. (2002) for the corresponding eigendecomposition problem, using 1D and 2D moments, fits approximate boundaries. Note the difference between planar and non-planar patches: for the latter the principal direction axes are the axes of the local reference frame (implied by the principal curvatures of the underlying surface), whereas for planar patches the axes need to be determined by fitting the boundary shape.
Determine the Boundary Type (if the patch is a plane)
Set the boundary type: if the patch is not a plane then the type of the boundary is uniquely determined by the type of the patch (Table 1); otherwise it is taken from the user-supplied input.
Set the boundary containment scale factor from the containment probability Ribeiro (2004).

Initialize Bounding Parameters
Project the data to the local frame $xy$ plane using (28).
Set first and second data moments: , , , .

Cylindrical Paraboloid and Circular Cylinder Boundary Fitting
If the patch is a cylindric paraboloid or circular cylinder, set the rectangle half-widths from the standard deviation along the $\hat x_\ell$ axis of the local frame and the standard deviation along the $\hat y_\ell$ axis (the data are already zero-mean in the curved direction due to the nonzero principal curvature in that direction).

Circular Paraboloid and Sphere Boundary Fitting
If set .

Elliptic and Hyperbolic Boundary Fitting
If set .

Plane Boundary Fitting

If the patch is a plane, the center and normal will be available from either Step 1 or 3. The extents and rotation of the boundary curve are determined now by two-dimensional PCA. Set the principal angle and variances from the second central moments $m_{xx}, m_{yy}, m_{xy}$ (cf. Rocha et al. (2002))

(42)  $\lambda_{1,2} = \frac{m_{xx} + m_{yy}}{2} \pm \sqrt{\left(\frac{m_{xx} - m_{yy}}{2}\right)^2 + m_{xy}^2},\quad \theta = \tfrac{1}{2}\operatorname{atan2}\!\left(2 m_{xy},\ m_{xx} - m_{yy}\right)$
If set and from using (20).

If set .

If the boundary type is aa rect or c quad, set (we currently fit convex quads the same as aa rects, but note that the former are still useful for designed (vs fit) patches, e.g. part of a foot sole)
(43) 
If needed, determine the in-plane rotation of the boundary shape by setting

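The two-dimensional PCA used for plane boundary extents can be sketched via the 2×2 second central moment matrix (an illustrative numpy sketch with hypothetical names; Rocha et al. (2002) give fully closed-form moment expressions):

```python
import numpy as np

def boundary_frame_2d(uv):
    """Center, in-plane rotation, and axis extents of 2D projected points.

    uv: (N, 2) array of points projected to the local plane. Returns the
    centroid, the major-axis angle, and the (major, minor) standard deviations.
    """
    mu = uv.mean(axis=0)
    d = uv - mu
    C = d.T @ d / len(uv)                 # second central moment matrix
    evals, evecs = np.linalg.eigh(C)      # ascending eigenvalues
    angle = np.arctan2(evecs[1, 1], evecs[0, 1])  # major-axis direction
    return mu, angle, np.sqrt(evals[::-1])
```

Scaling the returned standard deviations by a containment factor derived from the boundary containment probability then gives the boundary extents.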
