MIRROR IST–2000-28159 Mirror Neurons based Object Recognition
DIST - University of Genoa
Preliminary experiments in pre-grasp orientation
Here we study the pre-grasp orientation of the robot end-effector. The task is the insertion of the end-effector into a slit: the robot learns how to pre-orient its wrist so that the action succeeds. The "insertion task", considered here as a simplified form of grasping, is used to study how the preparation of a motor action can be learned.
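A minimal sketch of this learning step is given below, assuming the slit orientation is estimated from vision and each trial returns a binary success signal; all function names are hypothetical and this is not the code running on the robot.

```python
# Hypothetical sketch: learn the wrist pre-orientation for the insertion task
# from successful trials only (failed trials carry no target orientation).
import numpy as np

successes = []   # (slit_angle_deg, wrist_angle_deg) pairs from successful insertions

def record_trial(slit_angle_deg, wrist_angle_deg, succeeded):
    """Exploration phase: keep the wrist angle only when the insertion worked."""
    if succeeded:
        successes.append((slit_angle_deg, wrist_angle_deg))

def fit_preorientation_map():
    """Least-squares fit of the wrist angle as a linear function of the slit angle."""
    x = np.array([s for s, _ in successes], dtype=float)
    y = np.array([w for _, w in successes], dtype=float)
    A = np.vstack([x, np.ones_like(x)]).T
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda slit_angle_deg: a * slit_angle_deg + b

# Usage (after a few exploratory trials):
#   wrist_cmd = fit_preorientation_map()(estimate_slit_angle(image))
# where estimate_slit_angle is a placeholder for the visual measurement.
```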
Learning to act on objects
In this experiment we show how a humanoid robot uses its arm to try some simple pushing actions on an object, while using vision and proprioception to learn the effects of its actions (first video). Afterwards, this knowledge is used to position the arm so as to push/pull the target in a desired direction (second and third videos).
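The sketch below illustrates one simple way this kind of action knowledge could be stored and reused, assuming a small discrete repertoire of pushes and a visual measurement of the object displacement after each one; the action names and interfaces are illustrative, not the robot's actual API.

```python
# Illustrative affordance table: observed displacements per action during
# exploration, and selection of the action that best matches a desired direction.
import numpy as np

ACTIONS = ["push_left", "push_right", "pull_in", "push_away"]   # hypothetical repertoire
effects = {a: [] for a in ACTIONS}    # observed displacement vectors per action

def record_effect(action, displacement_xy):
    """Exploration: store the object displacement (image coordinates) caused by an action."""
    effects[action].append(np.asarray(displacement_xy, dtype=float))

def choose_action(desired_direction_xy):
    """Exploitation: pick the action whose mean observed effect best matches the goal."""
    goal = np.asarray(desired_direction_xy, dtype=float)
    goal = goal / np.linalg.norm(goal)
    best, best_score = None, -np.inf
    for action, observed in effects.items():
        if not observed:
            continue
        mean = np.mean(observed, axis=0)
        score = float(np.dot(mean / np.linalg.norm(mean), goal))   # cosine similarity
        if score > best_score:
            best, best_score = action, score
    return best
```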
Mirror neurons
We use a precursor of manipulation, i.e. simple poking and prodding, and show how it facilitates object segmentation, a long-standing problem in machine vision. The robot can familiarize itself with the objects in its environment by acting upon them. It can then recognize other actors (such as humans) in the environment through their effect on the objects it has learned about.
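The sketch below shows the core of such a segmentation-by-poking step in a highly simplified form: if the only thing that moves right after contact is the poked object, the difference between the frames just before and just after impact outlines it. This is an illustration of the principle, not the actual implementation.

```python
# Simplified segmentation by poking: frame differencing around the moment of
# impact, keeping the largest connected region of change.
import numpy as np
from scipy import ndimage

def segment_by_poking(frame_before, frame_after, diff_threshold=25):
    """Return a binary mask of the largest moving region (grayscale uint8 frames)."""
    diff = np.abs(frame_after.astype(int) - frame_before.astype(int))
    moving = diff > diff_threshold
    labels, n = ndimage.label(moving)
    if n == 0:
        return np.zeros_like(moving)
    sizes = ndimage.sum(moving, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)   # keep the largest blob only
```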
Setup for the acquisition of visual and motor data from human subjects during grasping actions
The main goal here is to build a setup to acquire data from human subjects performing different types of grasps. We are able to record motor data (position and orientation of the hand, positions of the fingers) as well as visual data (sequences of stereo images).
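For illustration, one time step of such a recording could be organised as in the sketch below; the field names, units and layout are assumptions, not the actual file format of the setup.

```python
# Hypothetical record layout for one time step of the grasping acquisition setup.
from dataclasses import dataclass
from typing import Tuple
import numpy as np

@dataclass
class GraspSample:
    timestamp: float                                # seconds since the start of the trial
    hand_position: Tuple[float, float, float]       # hand tracker position (assumed mm)
    hand_orientation: Tuple[float, float, float]    # hand tracker orientation (assumed Euler angles, deg)
    finger_joints: np.ndarray                       # data-glove joint angles, one per sensor
    left_image: np.ndarray                          # left camera frame (H x W x 3)
    right_image: np.ndarray                         # right camera frame (H x W x 3)
```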
IST - Instituto Superior Técnico in Lisbon
3D reconstruction and depth segmentation from log-polar images
The process takes a pair of log-polar images and computes a dense disparity map that allows for depth segmentation of the scene. It is based on a set of disparity channels whose responses are combined in a probabilistic framework to obtain the final depth map. An important aspect is that depth discontinuities are preserved, which makes the maps useful for figure-ground segmentation based on depth cues. See DI-2.3 for more details.
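The sketch below illustrates the disparity-channel principle on ordinary Cartesian images for simplicity: each channel measures the match quality at one candidate disparity, the channel responses are normalised into a per-pixel posterior, and the depth map is taken as its expectation. It is a toy illustration of the idea, not the project's implementation, and it ignores the log-polar geometry and image borders.

```python
# Toy disparity-channel combination: matching costs over a discrete set of
# disparities are turned into a per-pixel posterior and averaged.
import numpy as np

def disparity_map(left, right, disparities=range(0, 16), beta=0.05):
    """left, right: rectified grayscale float images of equal size; returns per-pixel disparity."""
    h, w = left.shape
    costs = np.zeros((len(disparities), h, w))
    for i, d in enumerate(disparities):
        shifted = np.roll(right, d, axis=1)     # shift right image by d pixels (wrap-around ignored)
        costs[i] = (left - shifted) ** 2        # per-pixel matching cost of channel d
    responses = np.exp(-beta * costs)           # channel responses act as likelihoods
    posterior = responses / responses.sum(axis=0, keepdims=True)
    d_values = np.array(list(disparities), dtype=float).reshape(-1, 1, 1)
    return (posterior * d_values).sum(axis=0)   # expected disparity per pixel
```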
The first four videos illustrate depth maps obtained when looking at a person or at a hand. The segmentation results are also shown, both for the hand and for the upper body. The fifth (last) video illustrates the cortical (log-polar) images as they are represented and processed internally by the system.
Gesture Imitation
These videos illustrate the approach developed for an artificial system to imitate arm gestures performed by a demonstrator. When the demonstrator performs a gesture (first video), the system starts by segmenting the hand in the images based on skin-color information. This information is then used with the View Point transformation (see DI-2.3) to align the demonstrator's gestures with the point of view of the system. Finally, the Sensory Motor map is applied to generate the appropriate arm configurations, as shown in the second video.
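A compressed sketch of this three-stage pipeline is given below; the skin-color thresholds are illustrative, and view_point_transform and sensory_motor_map are placeholders standing in for the corresponding project components, not their actual interfaces.

```python
# Hypothetical end-to-end sketch of the imitation pipeline described above.
import numpy as np

def segment_skin(frame_hsv, h_range=(0, 25), s_min=40):
    """Crude skin-color mask in HSV space (thresholds are illustrative only)."""
    h, s = frame_hsv[..., 0], frame_hsv[..., 1]
    return (h >= h_range[0]) & (h <= h_range[1]) & (s >= s_min)

def imitate(frame_hsv, view_point_transform, sensory_motor_map):
    """Hand mask -> gesture point in the imitator's viewpoint -> arm configuration."""
    mask = segment_skin(frame_hsv)
    if not mask.any():
        return None                              # no hand visible in this frame
    hand_xy = np.argwhere(mask).mean(axis=0)     # centroid of the skin-colored region
    ego_xy = view_point_transform(hand_xy)       # align to the system's point of view
    return sensory_motor_map(ego_xy)             # joint angles for the robot arm
```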
DP - University of Uppsala
Rotating rod experiment
Infants' ability to adjust hand orientation when grasping a rotating rod has been studied. The rod to be reached for was either stationary or rotating. The results show that reaching movements are adjusted to the rotating rod in a prospective way, and that the rotation affects the grasping of the rod but not the approach to it.
DBS - University of Ferrara
In-vivo recordings of mirror neurons in behaving monkeys
Different classes of neurons are recorded during grasping actions performed under different visual-feedback conditions. The videos show grasping actions performed in four conditions: 1) with ambient illumination; 2) in the dark; 3) with a flash of light at the instant of maximum finger aperture; 4) with a flash of light at the instant of touch.