Prioritized Kinematic Control of Joint-Constrained Head-Eye Robots using the Intermediate Value Approach

Steven Jens Jorgensen, Orion Campbell, Travis Llado, Jaemin Lee, Brandon Shang, and Luis Sentis. The Departments of Mechanical Engineering, Electrical and Computer Engineering, and Aerospace Engineering and Engineering Mechanics at The University of Texas at Austin. This work is partially supported by a NASA Space Technology Research Fellowship, Grant Number NNX15AQ42H. The authors are grateful to the members of the Human-Centered Robotics Lab at UT Austin for their input.

Existing gaze controllers for head-eye robots can only handle single fixation points. Here, a generic controller for head-eye robots capable of executing simultaneous and prioritized fixation trajectories in Cartesian space is presented. This enables the specification of multiple operational-space behaviors with priority such that the execution of a low-priority head orientation task does not disturb the satisfaction of a higher-priority eye gaze task. Through this approach, the head-eye robot inherently gains the biomimetic vestibulo-ocular reflex (VOR), which is the ability to stabilize gaze under self-generated movements. The described controller utilizes recursive null-space projections to encode joint-limit constraints and task priorities. To handle the solution discontinuity that occurs when joint-limit tasks are inserted or removed as constraints, the Intermediate Desired Value (IDV) approach is applied. Experimental validation of the controller's properties is demonstrated with the Dreamer humanoid robot. Our contributions are (1) the formulation of a desired gaze task as an operational-space orientation task, (2) the application details of the IDV approach for a prioritized head-eye controller that can handle intermediate joint constraints, and (3) a minimum-jerk specification for behavior and trajectory generation in Cartesian space.

I Introduction

Fig. 1: The Dreamer robot executing prioritized gaze tasks using the described projection-based controller, in which the eye gaze tasks have higher priority than the head gaze tasks. (a) Starting from the center, the eyes are to create a small square in a counter-clockwise direction and the head is commanded to create a big square (orange dotted-line) in a clockwise direction. The numbered arrows indicate the waypoint trajectory order. The red, blue, green spheres indicate the actual trajectory of the head, left eye, and right eye respectively. Due to joint limits, both trajectories cannot be accomplished. However, task priority and solution smoothness are preserved as joint limit tasks are automatically inserted via the Intermediate Desired Value approach. The figure shows the controller satisfying the higher prioritized eye gaze task by compromising the lower priority head gaze task. (b) Dreamer in the final configuration after executing the trajectories.

The control of a robot’s gaze behavior has practical use in Human-Robot Interaction, as gaze cues can be used to initiate and ensure joint attention [1], communicate intentions and engagement [2, 3], shape conversation roles [4], and convey non-verbal expressions or emotions pertinent during social interactions [5, 6]. The gaze behavior of anthropomorphic robots with a head-eye mechanism is even more important for human likability [7]. Here, the definition of a gaze task is extended to any end-effector that can point towards a fixation point. Thus, in addition to each eye having its own fixation point, the robot head can also have a fixation point.

While control methods are available for specifying 3D gaze fixation tasks [8, 9, 10, 11, 12], control formulations that can handle multiple 3D gaze fixation points with priorities for generic head-eye robots are largely lacking. The control formulation presented here addresses that need by focusing on the precise control of multiple gaze fixation points for generic head-eye robots. Concretely, the proposed controller can handle multiple gaze orientation tasks and execute the desired tasks with prioritization. Figure 1 shows how the described controller executes three orientation tasks (two for the eyes and one for the head) with priorities under joint limits.

The prioritized controller is based on the whole-body control of robots in the operational space using null-space projection [12], [13]. Using this formulation, control policies for any robot require merely identifying the correct Jacobians and operational task description. Thus, formulating a controller in this manner creates a generic head-eye controller.

Null-space projection techniques are popular for prioritized control of redundant robots [14, 15, 16, 17] as they are analyzable [18] and computationally efficient [19]. However, these controllers fail to satisfy task specifications unless joint limits are included. Since joint-limit tasks are intermediate, active only some of the time, the joint-limit constraints need to be constantly inserted into or removed from the task specification. Doing so changes the dimension of the task Jacobian, causing discontinuities when performing pseudo-inverses or optimizations [20]. To handle this issue, the Intermediate Desired Value (IDV) approach [21, 22] is utilized, which can automatically insert joint-limit tasks while preserving solution continuity.

The paper is organized as follows. Section II discusses related work on the control of head-eye robots. Section III describes the technical approach: (i) extracting the task Jacobian for head-eye robots, (ii) expressing the desired gaze fixation point as an operational-space task, (iii) detailing the IDV-based prioritized controller, and (iv) generating minimum-jerk Cartesian-space gaze trajectories. Sections IV and V show experimental results on the Dreamer robot and provide concluding discussions.

II Related Works

Due to the importance of gaze behavior, there are many approaches to implementing gaze controllers. For research applications that need immediate results, gaze control can be as simple as executing predetermined configurations to simulate gaze aversion in conversations [23]. Approximate gaze control can also be sufficient if the imitation of human cognition [24] or the study of biomimicry [25] is more important.

For robots that need precise gaze control with biomimetic behavior, implementations are split between achieving gaze in a 2D image space or at a 3D fixation point. Examples of the former create a mapping between joint positions and the optical flow of the 2D image space [26, 27, 28]. For the latter, reasoning about the robot kinematics and trigonometric constraints can give a direct inverse kinematics solution [8], but this is restricted to similarly configured robots. Other examples of 3D Cartesian controllers capable of executing biomimetic 3D gaze fixation tasks include [9], which combines human data and established state-space control methods, as well as a completely optimization-based method [10] to achieve 6-DoF Cartesian gaze control. However, the latter is specifically formulated for a robot with only two eyes and a single fixation point shared by the head and eyes.

Thus, the gaze control formulations above are not general enough for generic head-eye robots: they cannot handle multiple fixation points, and task priority is non-existent. A brute-force method is also available via nonlinear optimization with the Drake control toolbox [11], which can specify a single gaze task as a cone constraint and encode priorities as nonlinear constraints, but this can be more computationally expensive. Lastly, a prioritized operational-space formulation for gaze control was presented in [12]; however, it is limited to the control of head gaze only, its joint-limit task insertion suffers from the same discontinuity issues mentioned previously, and it has no method for escaping the joint-limit attractor.

III Technical Approach

III-A Robot Kinematics and Jacobian

Fig. 2: Dreamer has 7 degrees of freedom in its head. The 6-D spatial Jacobian is derived by first finding the screw axes of the kinematic chain (see Table I) and then recursively applying the adjoint mapping operator. This methodology for deriving the Jacobian is explicitly described in [29].
TABLE I: Dreamer Head Screw Axes with link lengths

The kinematics of Dreamer’s head are described by Fig. 2, which defines the head joints, the eye pitch joint, and the yaw joints for the right and left eyes.

Given an operational point on the robot’s body with linear and rotational components, the spatial change δx with respect to the world frame due to a joint change δq is described by

δx = J(q) δq
where J(q) is the 6-D spatial Jacobian of a robot with n joints. Deriving J can be performed by first finding the screw axes of the kinematic chain (see Table I), and then recursively finding the i-th column, J_i, of J using the adjoint mapping operator (see Ch. 3 and Ch. 4 of [29]). Note that J_i describes the spatial twist as a function of the first i joints.

Setting the operational point at the head and at each of the eyes, the spatial Jacobians of interest are

J_h(q),   J_re(q),   J_le(q)
where the subscripts h, re, and le indicate the head, right eye, and left eye respectively. As the gaze tasks do not command the translational operational-space directions, the focus here is only on controlling the rotational components of the operational space corresponding to roll, pitch, and yaw. Thus, for the Jacobian of the head, only the first three rows, corresponding to head roll, pitch, and yaw, are kept. For the Jacobians of the eyes, J_re and J_le, only the first two rows are kept to control eye pitch and yaw.
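The recursive column construction described above can be sketched in NumPy under the (ω, v) twist ordering of [29], where the first three rows of the Jacobian are the rotational components. The two-joint screw axes used below are illustrative placeholders, not Dreamer's actual Table I values.

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    """3x3 skew-symmetric matrix of a 3-vector."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def twist_matrix(S):
    """4x4 se(3) matrix of a 6-D screw axis S = (omega, v)."""
    T = np.zeros((4, 4))
    T[:3, :3] = skew(S[:3])
    T[:3, 3] = S[3:]
    return T

def adjoint(T):
    """6x6 adjoint map of a homogeneous transform T."""
    R, p = T[:3, :3], T[:3, 3]
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[3:, 3:] = R
    Ad[3:, :3] = skew(p) @ R
    return Ad

def space_jacobian(screws, q):
    """Column i is Ad_{exp([S_1]q_1) ... exp([S_{i-1}]q_{i-1})} S_i."""
    J = np.zeros((6, len(screws)))
    T = np.eye(4)
    for i, S in enumerate(screws):
        J[:, i] = adjoint(T) @ S
        T = T @ expm(twist_matrix(S) * q[i])
    return J
```

Keeping only the rotational rows for a gaze task then amounts to slicing, e.g. `space_jacobian(screws, q)[:3]` for roll, pitch, and yaw.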

III-B Defining the Instantaneous Desired Gaze Orientation given a Fixation Point

Fig. 3: The instantaneous desired gaze orientation. An orientation w.r.t. the world frame is constructed using the current orientation of the operational-space frame and the desired fixation point. The vector from the frame origin to the fixation point defines the desired unit-vector direction x_d, which also defines the normal of a plane. A unit vector from the fixed frame (here the world z-axis is used) is then projected onto the plane defined by x_d to construct z_d. Finally, y_d is constructed by taking the cross product of z_d and x_d.

The control structure presented here constantly steers the current head and eye orientations to point towards the corresponding desired fixation points. At every time step, an instantaneous desired gaze orientation is constructed.

Note that a rotation matrix with orthonormal unit-vector columns can be used to represent the orientation of a frame with respect to (w.r.t.) a reference frame. Thus, defining the instantaneous desired gaze orientation is equivalent to finding the instantaneous desired unit vectors. The subscripts w, c, and d indicate the world, current, and desired frames respectively; all unit vectors are expressed w.r.t. the world frame. Next, let p_fix and p_o be the locations of the fixation point and of the origin of the operational-space frame respectively. Using Figure 3 as a visual reference, we obtain

x_d = (p_fix − p_o) / ||p_fix − p_o||
z_d = (z_w − (z_w · x_d) x_d) / ||z_w − (z_w · x_d) x_d||
y_d = z_d × x_d
where z_w is a fixed-frame unit vector. Therefore the instantaneous desired orientation is R_d = [x_d, y_d, z_d]. The choice of the fixed-frame unit vector depends on user need and the desired generated behavior. In our case, the unit vectors and fixation points have positive world-frame x-coordinates, so the fixed-frame unit vector is selected to be the world z-axis.
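The construction above can be sketched in a few lines of NumPy. The choice z_w = (0, 0, 1) for the fixed reference vector is an assumption matching a world frame with z up; any fixed unit vector not parallel to the gaze direction works.

```python
import numpy as np

def gaze_orientation(p_fix, p_origin, z_w=np.array([0.0, 0.0, 1.0])):
    """Instantaneous desired gaze frame R_d = [x_d, y_d, z_d] in world frame.

    x_d points from the operational-frame origin to the fixation point;
    z_d is z_w projected onto the plane with normal x_d (z_w must not be
    parallel to x_d); y_d completes the right-handed frame.
    """
    x_d = p_fix - p_origin
    x_d = x_d / np.linalg.norm(x_d)
    z_d = z_w - np.dot(z_w, x_d) * x_d   # project z_w onto plane normal to x_d
    z_d = z_d / np.linalg.norm(z_d)
    y_d = np.cross(z_d, x_d)             # right-handed: x_d x y_d = z_d
    return np.column_stack([x_d, y_d, z_d])
```

The returned matrix is orthonormal with determinant +1, so it is a valid rotation matrix for the desired frame.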

III-C Defining the World Frame Orientation Error

Let R_c and R_d be the rotation matrices w.r.t. the world frame describing the robot’s current and desired end-effector orientation frames respectively. The goal is to find the rotation matrix, described in the world frame, that will bring R_c to R_d.

Recall that pre-multiplying a reference frame R_c (described as a rotation matrix) by a rotation matrix R results in an extrinsic rotation of frame R_c by R in the world frame. The rotation matrix which rotates frame R_c to R_d in the world frame is referred to as the orientation error matrix, R_err (this is equivalent to finding the total rotation performed by SLERP). It can be solved via

R_err = R_d R_cᵀ
Next, this rotational frame error is described in terms of quaternions; the reader is referred to the appendix of [29] for a primer on unit quaternions. The unit quaternion of a rotation is defined to be

q = { cos(θ/2), k sin(θ/2) }
where θ is the right-hand rotation angle about the unit-vector axis k. Note that k and θ are the axis-angle representation of the quaternion.

Given a rotation matrix R, the elements of its corresponding unit quaternion q can be obtained. For consistency, the unit quaternion which, when converted to its axis-angle representation, has an angle θ ∈ [0, π] is always selected. Then the quaternion error is

q_err = q_d ⊗ q_c⁻¹
where the inverse of a unit quaternion is simply its conjugate, i.e. its axis-angle representation with the angle negated, and the ⊗ operator is the unit-quaternion product.
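The quaternion operations above can be sketched in NumPy; a scalar-first [w, x, y, z] layout is assumed here as an illustrative convention.

```python
import numpy as np

def quat_from_axis_angle(axis, angle):
    """Unit quaternion [w, x, y, z]; axis must be a unit vector."""
    return np.concatenate([[np.cos(angle / 2)],
                           np.sin(angle / 2) * np.asarray(axis, float)])

def quat_mul(q1, q2):
    """Hamilton product q1 (x) q2."""
    w1, v1 = q1[0], q1[1:]
    w2, v2 = q2[0], q2[1:]
    return np.concatenate([[w1 * w2 - v1 @ v2],
                           w1 * v2 + w2 * v1 + np.cross(v1, v2)])

def quat_inverse(q):
    """For a unit quaternion the inverse is the conjugate,
    i.e. the same axis-angle with the angle negated."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def quat_error(q_des, q_cur):
    """World-frame rotation taking the current frame to the desired frame."""
    return quat_mul(q_des, quat_inverse(q_cur))
```

For two rotations about the same axis, the error is a rotation about that axis by the angle difference, which makes the convention easy to sanity-check.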

III-D The Operational Space Task for Orientation Control

Having specified the orientation error q_err, the operational-space task which will bring the current orientation R_c to the desired orientation R_d can now be specified.

To do so, we note that the quaternion error derived earlier is with respect to the world frame and that a quaternion can be decomposed into its axis-angle components, k_err and θ_err. Specifically, for q_err, the product of its axis-angle components, θ_err k_err, is equivalent to the angular velocity needed to rotate frame R_c to R_d in one second. For small time steps, the operational orientation task steers R_c towards R_d by defining the task as

δx = k_t θ_err k_err
with an appropriate operational task gain k_t.

III-E Orientation Control

Fig. 4: The gaze trajectory of a prioritized controller with no joint limit tasks (a and b), and with intermediate joint limit tasks (c). The task is to trace a small square for the eyes and a bigger square for the head; the arrows indicate the waypoint trajectory order. The eye tasks have higher priority than the head task. In (a), the gaze trajectory tasks for the head and eyes are within joint limits, so there is perfect gaze tracking. In (b), since joint limits are not part of the controller constraints, tracking for the eye tasks fails as the robot continues to generate commands in the eye joints. In (c), the low-priority head task is executed without disturbing the satisfaction of the higher-priority eye tasks.

III-E1 Head Orientation Control as a Single Task

For robot heads without eyes, only a single fixation-point orientation task is needed. The following resolved motion rate control [30] with our operational-space task definition is enough:

δq = J⁺ δx
where J⁺ is the Moore-Penrose pseudoinverse of the Jacobian. However, head-eye robots naturally have two fixation points, one for the head and the other for the eyes.
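A minimal sketch of one resolved-rate step; for a redundant robot the pseudoinverse returns the minimum-norm joint change that realizes the task.

```python
import numpy as np

def resolved_rate_step(J, dx):
    """One resolved motion rate control step: dq = pinv(J) @ dx."""
    return np.linalg.pinv(J) @ dx
```

At each control cycle, `dx` would be the operational-space orientation task computed from the quaternion error, and `dq` is integrated into the joint command.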

III-E2 Simultaneous Head-Eye Orientation Control as Separable Tasks

Note that for head-eye robots, the orientation tasks for the head and the eyes are separable, as the head and the eyes each have enough degrees of freedom to point towards their own feasible fixation points, where feasible means that the fixation point is reachable within the joint limits of the robot. In other words, the eye degrees of freedom and the head degrees of freedom can be independently coordinated to point at different fixation points.

Concretely, this can be done by constructing spatial Jacobians for the head and the eyes with zero columns corresponding to the eye and head joints respectively:

J_head = [ J_h   0 ],    J_eyes = [ 0   J_e ]
where J_h is the spatial Jacobian with head joints only and J_e is the spatial Jacobian with eye joints only. With our definition of operational-space tasks for the head and the eyes, stacking the Jacobians and the task vectors and using Eq. (14) will control the head-eye robot towards the fixation points.

However, this approach has two significant limitations: it has no notion of joint limits or of prioritization. Under eye joint limits, if a user cares more about the eye fixation point than the head’s fixation point, the user must analyze whether the gaze fixation point for the eye is reachable given the current head configuration. It would be more desirable to first satisfy the eye fixation point (priority 1) and then attempt to satisfy the head fixation point (priority 2).

III-E3 Simultaneous Head-Eye Orientation Control with Priorities

To enforce priorities for operational tasks 1 and 2, the following control structure may be implemented:

δq = J_1⁺ δx_1 + (J_2 N_1)⁺ (δx_2 − J_2 J_1⁺ δx_1)
where N_1 = I − J_1⁺ J_1 is the null-space projector due to task 1. The reader is referred to [14, 31] for a review of kinematic prioritized tasks and their recursive formulation.
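A minimal NumPy sketch of this two-level prioritized solution, under the standard pseudoinverse-based formulation of [14, 31]; this illustrates the structure rather than the full controller.

```python
import numpy as np

def prioritized_dq(J1, e1, J2, e2):
    """Two-level prioritized solution: task 2 acts only in task 1's null space."""
    J1p = np.linalg.pinv(J1)
    dq1 = J1p @ e1
    N1 = np.eye(J1.shape[1]) - J1p @ J1          # null-space projector of task 1
    dq2 = np.linalg.pinv(J2 @ N1) @ (e2 - J2 @ dq1)  # compensated secondary task
    return dq1 + dq2
```

When task 2 conflicts with task 1, the conflicting component is projected away and only the compatible part of task 2 is executed, so the high-priority task is always met exactly.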

While this approach has prioritization, it still has no notion of joint limits. This control approach is implemented in Fig. 4(a) and (b), where the eye gaze task has higher priority than the head gaze task. Since the formulation has no notion of joint limits, when the eye yaw joint limits are hit, the controller of Eq. (18) continues to generate velocity commands for the eye joints (see Fig. 4b). If the eyes or the task specification hit no joint limits, this formulation is correct (see Fig. 4a).

III-E4 Simultaneous Head-Eye Orientation Control with Joint Limits and Priorities

To address the limitations of the above controller, we introduce a task hierarchical framework with joint limits.

Let n be the number of robot joints, n_jl be the number of joints that have limits, and n_t be the number of operational tasks. Our prioritized controller will have n_jl + n_t prioritized tasks, with the joint-limit tasks having the highest priority. Each joint-limit task has a task Jacobian defined as a row vector,

J_jl,i = [ 0 ⋯ 0 1 0 ⋯ 0 ]
where the single unit entry is in the column corresponding to the limited joint. The joint-limit task Jacobian J_jl is expressed by stacking these row vectors. The lower-priority tasks will be the task Jacobians for the eye and head gaze tasks. Here, the prioritized gaze fixation-point tasks are the eye and head orientation tasks respectively.

However, each joint-limit task should only activate when the joint enters an activation buffer. We utilize the intermediate desired value task-transition formulation for smooth task transitions [21]. The control structure for this formulation is as follows:

δq_k = δq_{k−1} + (J_k N_{k−1})⁺ (x̄_k − J_k δq_{k−1}),    δq_0 = 0
where x̄_k is the desired intermediate value for the joint-limit tasks (defined below), J_k is the k-th task Jacobian (defined previously), and N_{k−1} is the null-space projector due to the higher-priority tasks 1, …, k−1, defined as

N_{k−1} = I − J̄_{k−1}⁺ J̄_{k−1},    J̄_{k−1} = [ J_1ᵀ ⋯ J_{k−1}ᵀ ]ᵀ
and N_k is also a null-space projector due to tasks 1, …, k, recursively defined as

N_k = N_{k−1} − (J_k N_{k−1})⁺ (J_k N_{k−1}),    N_0 = I
Here, a special case of the IDV is used in which the only intermediate tasks are due to joint limits. Thus, only the joint-limit task values x̄_i need to be computed recursively. The i-th joint-limit task value is computed as

x̄_i = h_i x_i + (1 − h_i) J_i δq^(i)
where x_i is the usual desired task value for the joint limit, h_i ∈ [0, 1] is the task activation parameter due to the joint configuration, and δq^(i) is the solution without joint-limit task i. Since only joint tasks will activate (h_i = 1) or deactivate (h_i = 0), h_i is the same activation function defined in [21]. Instead of permanently attracting the joint with the limit task [13], it is desirable that the joint attempts to leave the activation buffer so that the robot can regain the degree of freedom. Thus, the desired value for the joint-limit avoidance task is

x_i = k_jl (q_center,i − q_i)
where q_center,i is the center of the joint’s range and k_jl is an appropriate gain which will bring the joint away from the activation buffers.
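The activation and escape behavior described above can be sketched as follows. The cubic smoothstep ramp inside the buffer is an illustrative choice (the exact activation profile of [21] differs), and the buffer width and gain are placeholder parameters.

```python
import numpy as np

def activation(q, q_min, q_max, buffer):
    """Smooth activation h in [0, 1]: 0 in the safe region, 1 at a limit.

    Assumes buffer is smaller than half the joint range; a cubic
    smoothstep is used here as an illustrative ramp.
    """
    if q >= q_max - buffer:
        s = np.clip((q - (q_max - buffer)) / buffer, 0.0, 1.0)
    elif q <= q_min + buffer:
        s = np.clip(((q_min + buffer) - q) / buffer, 0.0, 1.0)
    else:
        return 0.0
    return 3 * s**2 - 2 * s**3

def escape_delta(q, q_min, q_max, gain):
    """Joint-limit task value: steer the joint back toward its range center."""
    q_center = 0.5 * (q_min + q_max)
    return gain * (q_center - q)
```

Blending the limit task with the solution computed without it, weighted by `activation`, is what keeps the overall solution continuous as limits engage and disengage.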

Finally, we define δq^(i), the task solution without joint-limit task i. Concretely, computing δq^(i) calls another instantiation of the prioritized control structure above, but without row i in the joint-limit task Jacobian. At each call, a row of J_jl is removed. As joint limits are the only intermediate values considered here, the base case for the recursion is the regular prioritized solution without any joint-limit tasks. Pseudocode of the algorithm in Python notation is provided in Algorithm 1.

III-F Minimum Jerk Trajectory Generation and Tracking

For gaze behavior generation, the controller and the task-error definition described in Sec. III-D can be used to follow trajectories designed in Cartesian space. Concretely, Cartesian trajectories can be constructed from the current gaze fixation point to a final point. A minimum-jerk trajectory [32] for each Cartesian dimension in the fixed frame can be constructed using the fifth-order polynomial defined below, with boundary conditions on the position, velocity, and acceleration at the initial and final times.

x(t) = a_0 + a_1 t + a_2 t² + a_3 t³ + a_4 t⁴ + a_5 t⁵
For a single dimension, the six coefficients can be found by solving the linear system formed by evaluating the polynomial and its first two derivatives at the initial and final times against the boundary conditions.
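As a sketch, the boundary-condition solve for one Cartesian dimension might look like the following; the function and argument names are illustrative.

```python
import numpy as np

def min_jerk_coeffs(t0, tf, b0, bf):
    """Coefficients a of x(t) = sum a_i t^i, i = 0..5, matching boundary
    conditions b0 = (x0, v0, acc0) at t0 and bf = (xf, vf, accf) at tf."""
    def rows(t):
        return np.array([
            [1, t, t**2,   t**3,    t**4,    t**5],   # position
            [0, 1, 2*t,  3*t**2,  4*t**3,  5*t**4],   # velocity
            [0, 0, 2,    6*t,    12*t**2, 20*t**3],   # acceleration
        ])
    A = np.vstack([rows(t0), rows(tf)])
    return np.linalg.solve(A, np.concatenate([b0, bf]))
```

For a rest-to-rest move this recovers the classic minimum-jerk profile 10s³ − 15s⁴ + 6s⁵ in normalized time.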

To perform gaze tracking on a given Cartesian trajectory, at each time step the instantaneous desired orientation is constructed using Eqs. (5-8) with the fixation point set to the current trajectory point. Then the rotation error is extracted with Eqs. (10) and (12), and the operational-space task at this time step with Eq. (13). This is the input to the operational-space controller in Sec. III-E.

// Initialize the joint-limit tasks and the operational-space tasks
procedure PrioritizedIDV(joint-limit tasks, operational tasks):
    procedure SolveWithoutLimit(i):
        // Compute the solution without joint-limit task i
        return PrioritizedIDV(joint-limit tasks without task i, operational tasks)
    if any joint is inside its activation buffer:
        // Stack the active joint-limit constraints
        for each active joint-limit task i:
            // Compute the IDV for joint i
            // (note the recursive call to SolveWithoutLimit(i))
        // Combine the joint-limit and operational task lists, limits first
    // Pre-compute the null-space projectors for the prioritized task list
    for each task k in the prioritized list:
        // Accumulate task k's contribution to the solution
    return the prioritized solution
Algorithm 1: Recursive Formulation of a Prioritized Controller with Intermediate Values

IV Controller Experiments and Results

Fig. 5: The higher-priority eye gaze tasks are commanded to look at a fixed point and the lower-priority head gaze task is commanded to trace a square, which causes the eye joint limits to be hit during task execution. (a) Shows the head-eye configuration at the specified waypoints. (b) Shows the actual trajectory of the head and eye gaze tasks. The dotted-orange and solid red lines indicate the desired and actual head gaze trajectory respectively, following task priority constraints. The desired and actual eye gaze positions remain at the fixation point. (c) Shows that Cartesian gaze error is present only on the head gaze task. (d) Shows the minimum-jerk based trajectories for the head with perfect fixation-point tracking for the eyes. (e) Shows the eye joint positions and the corresponding joint-limit activation values.

Controller validation is performed on the real Dreamer robot as shown in Fig. 5. The robot is tasked with three orientation trajectories, two for the eyes and one for the head, with the eye gaze tasks having higher priority than the head gaze task. The eyes are commanded to stay fixated on a 3D point directly in front of the robot, while the head is commanded to trace a square by following waypoints defining a minimum-jerk trajectory. Since both tasks cannot be accomplished simultaneously, the controller must maintain the eye fixation point and execute the lower-priority head gaze task only to the extent that it does not interfere.

As Fig. 5 shows, our controller preserves task prioritization even under joint limits (Fig. 5a and e). Notice that only the head gaze task has a Cartesian 2-norm error (Fig. 5c) and the eye gaze Cartesian positions are tracked perfectly (Fig. 5d). Finally, the joint-limit avoidance tasks are continuously inserted and removed, with the corresponding activation values, as the eye joints approach their limits (Fig. 5e). Due to task prioritization with joint-limit awareness, the controller maintains the gaze fixation task. Note that the biomimetic behavior of the vestibulo-ocular reflex (VOR) [33] naturally emerges in our controller.

V Discussion and Conclusions

Inspired by projection-based whole-body controllers, a generic controller with task prioritization for joint-constrained head-eye robots is presented and experimentally validated on the Dreamer humanoid robot. To formulate simultaneous gaze tasks as operational-space inputs to the controller, the construction of the instantaneous desired orientation was presented. To handle intermediate joint limits without solution discontinuity, the IDV approach is utilized and described in detail with accompanying pseudocode. Finally, gaze behavior is generated via tracking of minimum-jerk trajectories in Cartesian space.

The Cartesian specification of gaze trajectories transforms the problem of trajectory generation from joint space to Cartesian space, which has lower dimension. As future work, emotive behavior generation using Cartesian-space trajectories may enable the transfer of head-eye behaviors, such as expressing different emotions, across many robots.

To conclude, the presented head-eye controller addresses the missing capability of handling multiple 3D gaze tasks with priorities under joint limits. This generic controller can enable users to execute precise gaze control for enhancing human-robot interaction.


  • [1] C.-M. Huang and A. L. Thomaz, “Effects of responding to, initiating and ensuring joint attention in human-robot interaction,” in RO-MAN, 2011 IEEE.   IEEE, 2011, pp. 65–71.
  • [2] A. Moon, D. M. Troniak, B. Gleeson, M. K. Pan, M. Zheng, B. A. Blumer, K. MacLean, and E. A. Croft, “Meet me where i’m gazing: how shared attention gaze affects human-robot handover timing,” in Proceedings of the 2014 ACM/IEEE International Conference on Human-robot interaction.   ACM, 2014, pp. 334–341.
  • [3] C. Breazeal, C. D. Kidd, A. L. Thomaz, G. Hoffman, and M. Berlin, “Effects of nonverbal communication on efficiency and robustness in human-robot teamwork,” in Intelligent Robots and Systems, 2005.(IROS 2005). 2005 IEEE/RSJ International Conference on.   IEEE, 2005, pp. 708–713.
  • [4] B. Mutlu, T. Shiwa, T. Kanda, H. Ishiguro, and N. Hagita, “Footing in human-robot conversations: how robots might shape participant roles using gaze cues,” in Proceedings of the 4th ACM/IEEE International Conference on Human robot Interaction.   ACM, 2009, pp. 61–68.
  • [5] H. Admoni and B. Scassellati, “Social eye gaze in human-robot interaction: A review,” Journal of Human-Robot Interaction, vol. 6, no. 1, pp. 25–63, 2017.
  • [6] C. L. Kleinke, “Gaze and eye contact: a research review.” Psychological bulletin, vol. 100, no. 1, p. 78, 1986.
  • [7] C. F. DiSalvo, F. Gemperle, J. Forlizzi, and S. Kiesler, “All robots are not created equal: the design and perception of humanoid robot heads,” in Proceedings of the 4th conference on Designing interactive systems: processes, practices, methods, and techniques.   ACM, 2002, pp. 321–326.
  • [8] A. Takanishi, T. Matsuno, and I. Kato, “Development of an anthropomorphic head-eye robot with two eyes-coordinated head-eye motion and pursuing motion in the depth direction,” in Intelligent Robots and Systems, 1997. IROS’97., Proceedings of the 1997 IEEE/RSJ International Conference on, vol. 2.   IEEE, 1997, pp. 799–804.
  • [9] M. Lopes, A. Bernardino, J. Santos-Victor, K. Rosander, and C. von Hofsten, “Biomimetic eye-neck coordination,” in Development and Learning, 2009. ICDL 2009. IEEE 8th International Conference on Development and Learning.   IEEE, 2009, pp. 1–8.
  • [10] A. Roncone, U. Pattacini, G. Metta, and L. Natale, “A cartesian 6-dof gaze controller for humanoid robots.” in Robotics: Science and Systems, 2016.
  • [11] R. Tedrake and the Drake Development Team, “Drake: A planning, control, and analysis toolbox for nonlinear dynamical systems,” 2016.
  • [12] L. Sentis, “Synthesis and control of whole-body behaviors in humanoid systems,” Ph.D. dissertation, Stanford University, July 2007.
  • [13] L. Sentis and O. Khatib, “Control of free-floating humanoid robots through task prioritization,” in Robotics and Automation, 2005. ICRA 2005. Proceedings of the 2005 IEEE International Conference on.   IEEE, 2005, pp. 1718–1723.
  • [14] P. Baerlocher and R. Boulic, “Task-priority formulations for the kinematic control of highly redundant articulated structures,” in Intelligent Robots and Systems, 1998. Proceedings., 1998 IEEE/RSJ International Conference on, vol. 1.   IEEE, 1998, pp. 323–329.
  • [15] L. Sentis and O. Khatib, “Prioritized multi-objective dynamics and control of robots in human environments,” in Humanoid Robots, 2004 4th IEEE/RAS International Conference on, vol. 2.   IEEE, 2004, pp. 764–780.
  • [16] H. Sugiura, M. Gienger, H. Janssen, and C. Goerick, “Real-time collision avoidance with whole body motion control for humanoid robots,” in Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on.   IEEE, 2007, pp. 2053–2058.
  • [17] A. Albu-Schäffer, S. Haddadin, C. Ott, A. Stemmer, T. Wimböck, and G. Hirzinger, “The dlr lightweight robot: design and control concepts for robots in human environments,” Industrial Robot: an International Journal, vol. 34, no. 5, pp. 376–385, 2007.
  • [18] G. Antonelli, F. Arrichiello, and S. Chiaverini, “Stability analysis for the null-space-based behavioral control for multi-robot systems,” in Decision and Control, 2008. CDC 2008. 47th IEEE Conference on.   IEEE, 2008, pp. 2463–2468.
  • [19] K.-S. Chang and O. Khatib, “Operational space dynamics: Efficient algorithms for modeling and control of branching mechanisms,” in Robotics and Automation, 2000. Proceedings. ICRA’00. IEEE International Conference on, vol. 1.   IEEE, 2000, pp. 850–856.
  • [20] F. Keith, P.-B. Wieber, N. Mansard, and A. Kheddar, “Analysis of the discontinuities in prioritized tasks-space control under discreet task scheduling operations,” in Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on.   IEEE, 2011, pp. 3887–3892.
  • [21] J. Lee, N. Mansard, and J. Park, “Intermediate desired value approach for task transition of robots in kinematic control,” IEEE Transactions on Robotics, vol. 28, no. 6, pp. 1260–1277, 2012.
  • [22] H. Han and J. Park, “Robot control near singularity and joint limit using a continuous task transition algorithm,” International Journal of Advanced Robotic Systems, vol. 10, no. 10, p. 346, 2013.
  • [23] S. Andrist, X. Z. Tan, M. Gleicher, and B. Mutlu, “Conversational gaze aversion for humanlike robots,” in Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction.   ACM, 2014, pp. 25–32.
  • [24] C. Breazeal and B. Scassellati, “A context-dependent attention system for a social robot,” rn, vol. 255, p. 3, 1999.
  • [25] T. Shibata and S. Schaal, “Biomimetic gaze stabilization,” World Scientific Series in Robotics and Intelligent Systems, vol. 24, pp. 31–52, 2000.
  • [26] R. A. Brooks, C. Breazeal, M. Marjanovic, B. Scassellati, and M. M. Williamson, “The cog project: Building a humanoid robot,” Lecture Notes in Computer Science, pp. 52–87, 1999.
  • [27] A. Edsinger-Gonzales, “Manipulating machines: Designing robots to grasp our world,” Ph.D. dissertation, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2005.
  • [28] S. Vijayakumar, J. Conradt, T. Shibata, and S. Schaal, “Overt visual attention for a humanoid robot,” in Intelligent Robots and Systems, 2001. Proceedings. 2001 IEEE/RSJ International Conference on, vol. 4.   IEEE, 2001, pp. 2332–2337.
  • [29] K. M. Lynch and F. C. Park, Modern Robotics: Mechanics, Planning, and Control.   Cambridge University Press, 2017.
  • [30] A. Liegeois, “Automatic supervisory control of the configuration and behavior of multibody mechanisms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 7, no. 12, pp. 868–871, 1977.
  • [31] B. Siciliano and J.-J. E. Slotine, “A general framework for managing multiple tasks in highly redundant robotic systems,” in Proceedings of the 5th International Conference on Advanced Robotics, vol. 2, 1991, pp. 1211–1216.
  • [32] T. Flash and N. Hogan, “The coordination of arm movements: an experimentally confirmed mathematical model,” Journal of Neuroscience, vol. 5, no. 7, pp. 1688–1703, 1985.
  • [33] M. Fetter, “Vestibulo-ocular reflex,” in Neuro-Ophthalmology.   Karger Publishers, 2007, vol. 40, pp. 35–51.