We have been studying the control of eye movements for the Babybot. Various behaviors were implemented that allow the robot to visually explore its environment in search of interesting objects and events. The Babybot's simplified visual system can detect color and motion. The robot also uses binocular disparity to gather information about depth in the visual scene and to control vergence.
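The idea of disparity-driven vergence can be sketched as a simple closed loop: the vergence angle is adjusted until the measured binocular disparity is nulled. The proportional gain, baseline, and geometry below are illustrative assumptions, not the Babybot's actual controller.

```python
import math

def target_disparity(vergence_deg, target_dist_m, baseline_m=0.07):
    """Disparity (deg) between the current vergence angle and the
    angle that would fixate a target at target_dist_m.
    baseline_m (interocular distance) is an assumed value."""
    required = 2.0 * math.degrees(math.atan(baseline_m / (2.0 * target_dist_m)))
    return required - vergence_deg

def verge(target_dist_m, gain=0.5, steps=50, tol=0.01):
    """Proportional controller: move the eyes a fraction of the
    measured disparity on each step until it is (nearly) zero."""
    vergence = 0.0
    for _ in range(steps):
        d = target_disparity(vergence, target_dist_m)
        if abs(d) < tol:
            break
        vergence += gain * d
    return vergence
```

With these assumed numbers, a target at 1 m requires a vergence angle of about 4 degrees, which the loop reaches in a handful of iterations; the returned angle also implicitly encodes the target's depth.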
The head is equipped with three gyroscopes, which the Babybot uses to develop the sense of a stable world. Visual stability is important for perceiving the world correctly: visual processing is simpler if the image on the retina does not move too much. The gyroscope signals can also be used to actively coordinate the movement of the head with that of the eyes (compensatory eye movements). Our experiments investigated different types of self-supervised learning to automatically tune the performance of this class of eye movements.
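One way such self-supervised tuning can work is to use the residual image motion (retinal slip) after each head rotation as the error signal for adapting the compensatory gain, so no external teacher is needed. The following is a minimal sketch under assumed values for the oculomotor plant gain and learning rate; it is not the experiments' actual learning rule.

```python
import random

def adapt_vor_gain(plant=0.8, eta=0.05, trials=2000, seed=0):
    """Self-supervised adaptation of a compensatory eye-movement gain.
    plant, eta, and trials are assumed values for this sketch."""
    rng = random.Random(seed)
    g = 0.0                                   # compensatory gain, initially off
    for _ in range(trials):
        head_vel = rng.uniform(-1.0, 1.0)     # head rotation sensed by the gyros
        eye_vel = -g * plant * head_vel       # counter-rotation of the eyes
        slip = head_vel + eye_vel             # residual image motion on the retina
        g += eta * slip * head_vel            # delta rule: reduce future slip
    return g
```

The gain converges to the value that cancels the head motion (1/plant here), which is exactly the condition for a visually stable world.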
We also carried out experiments on learning saccades (fast eye movements) toward visually identified targets. Neural network techniques were employed for this task.
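Saccade learning of this kind can be illustrated, in its simplest one-dimensional form, as learning the gain that maps a retinal target error into a motor command, with the post-saccadic error driving the update. The plant gain, learning rate, and scaling below are hypothetical; a full implementation would use a neural network over two-dimensional retinal positions.

```python
import random

def learn_saccade_map(plant_gain=1.3, eta=0.1, trials=1000, seed=1):
    """Trial-and-error learning of a 1-D saccade gain (assumed setup).
    The eye plant scales commands by plant_gain; the learner does not
    know this and must discover the compensating motor gain w."""
    rng = random.Random(seed)
    w = 0.5                                     # initial motor gain (guess)
    for _ in range(trials):
        target = rng.uniform(-10.0, 10.0)       # retinal error of the target (deg)
        command = w * target                    # motor command for the saccade
        landing = plant_gain * command          # where the eye actually lands
        post_error = target - landing           # residual retinal error
        w += eta * post_error * target / 100.0  # delta-rule update
    return w
```

After training, w approaches 1/plant_gain, so saccades land on target in a single jump; the same error-driven scheme generalizes to a network learning the full retinotopic-to-motor map.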
Controlling the movement of the eyes and head is one of the first steps required in learning complex cognitive tasks; it also lies at the foundation of reaching and manipulation.