Much of the information used in our reaching and grasping models is visual, or can be extracted from visual information (distance, position, size, orientation, etc.).

In the last few weeks, we added an image processing module built upon the OpenCV library to our project, and we use several OpenCV functions to process the iCub's camera image to compute the target object's size and orientation.

Take a look at the images that describe the image processing:

Image processing in our Reaching & Grasping iCub Application



In our work, we study how machine learning methods can be used to let the iCub learn to reach for and grasp objects.
The simulator offers various proprioceptive information about its state. However, we could get only the position of the center of the iCub's hand, which is not enough to calculate, for example, the hand's orientation in space.

Therefore, I modified the iCub simulator source code so that it also provides the positions of other hand parts, such as the thumb's 1st joint, the middle finger's 1st joint, and so on. With this additional information I am able to calculate the hand's orientation in 3D space using trigonometric functions.

In the following screenshot, I retrieved the positions of many iCub hand parts (identified by indices from iCub.cpp) and created static boxes at those places.

iCub additional hand parts positions


Today I implemented a simple algorithm to lift the object when the iCub successfully grasps it.
While the iCub was learning (online learning of grasping with CACLA), I took this very nice screenshot of a successful grasp and lift of the grasped object.

iCub makes a successful grasp and lift while learning

Video – iCub learns grasping (after 200-300 learning episodes)


The iCub simulator is built on the Open Dynamics Engine (ODE) and OpenGL. We found out that in the simulator all objects have very small weight, and that we cannot set the weight or density of objects. Since in our work we expect the iCub to grasp objects with different weights, we need to overcome this problem.

So far, I have found that when an object is created, its density is set by the iCub simulator constant DENSITY (world.h), which is set to 1.0.

I hope it might work if I modify the world.cpp methods responsible for object creation to take density as an input parameter.

I also have to check out this post.


In the last two months, I ran several experiments in my iCub simulator.


I enhanced my reaching module and tested it with various parameters and state/action representations.

In each experiment, I trained the reaching CACLA for 1000-2000 episodes and tested it on the fly after every 50 episodes.
In most experiments I used an exploration factor λ = 1.0 that decreased continuously after each episode (I multiply λ by 0.995 after each episode, so it drops to about 0.37 after 200 episodes and 0.1 after 450 episodes). Therefore, we can say that the iCub was mostly exploring in the first 350-400 episodes and then just exploiting and fine-tuning.

The initial arm position in space was generated randomly before every episode from a defined subspace. The final state was set to a single point (in 3D Cartesian coordinates) and was not changed in most of the experiments. This task is much easier than approximating the whole space with the robot arm, and I did it because I needed to test my implementation, the behaviour of the networks with different parameters, etc.

The reward function for CACLA was simply the Euclidean distance between the hand and the final position, scaled to <-1, 1>.
I also tried a version where I squared the final reward (preserving its sign), because I thought that the iCub satisfies itself when it gets close to the object and is only very slightly motivated to get even closer. However, I found that learning was more difficult for the iCub with this reward function.

The state representations I used were:

  • 3D coordinates of the target position (x, y, z)
  • the 4 DoF we are manipulating plus the 3D coordinates of the target position (a, b, c, d, x, y, z), where the DoF values were scaled to <-1, 1>
  • 3D coordinates of the hand center plus the 3D coordinates of the target position (hx, hy, hz, x, y, z)

The action generated by the actor was a 4-dimensional vector (a’, b’, c’, d’), interpreted as target absolute angles for the particular DoF. In the first experiments I also tried a relative change of angle (this takes more time to learn, so I didn’t use it in later experiments).


I’ve made my first grasping experiments. In these, I learn to grasp a static object located in space (not on a table) by controlling 8 DoF; however, I placed some constraints here.

The actor generates a 3-dimensional vector (t, p, f), where each component is in the range <-1, 1> and is rescaled to iCub DoF absolute angles. Component t controls thumb flexion, p controls palm flexion, and f simultaneously controls the flexion of all other fingers.


iCub learns grasping

iCub learns grasping 2

iCub grasping large cube

Another view of large cube grasping
