Researchers Combine Low-Cost Tactile Sensor With Machine Learning To Develop Robots That Feel
Researchers from ETH Zürich have announced that they have leveraged machine learning to develop a low-cost tactile sensor. The sensor measures force distribution with high resolution and accuracy, features that enable a robotic arm to grasp sensitive, fragile objects with greater dexterity. Giving robotic grippers the ability to feel is an important step toward making them more capable and efficient.
In humans, the sense of touch allows us to pick up fragile or slippery items without fear of crushing or dropping them. If an object is about to fall through our fingers, we adjust the strength of our grip accordingly. Scientists want robotic grippers that pick up products to have a similar type of feedback to the one humans get from their sense of touch. The new sensor the researchers have created is said to be a significant step toward a "robotic skin."
The sensor consists of an elastic silicone skin embedded with colored plastic microbeads and a regular camera fixed to its underside. When the sensor comes into contact with an object, an indentation appears in the silicone skin, changing the pattern of the microbeads. A fisheye lens on the underside of the sensor registers these changes, which are then used to calculate the force distribution across the sensor.
The robotic skin the scientists developed can distinguish between several forces acting on the sensor surface at once and calculate them with high resolution and accuracy, including the direction from which each force is acting. To determine how the forces displace the microbeads, and in which directions, the team relies on a controlled set of experiments and the data those experiments produce.
This approach allows the team to precisely control and systematically vary the location of the contact, the force distribution, and the size of the object making contact. Machine learning lets the researchers record several thousand instances of contact and precisely match them with the corresponding changes in the bead pattern. The team is also working on larger sensors equipped with several cameras that can recognize objects of complex shape, as well as on making the sensor thinner.
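The core idea, mapping observed bead displacements back to the contact force, can be illustrated with a toy regression. The sketch below is a simplified, hypothetical stand-in for the team's learned model: it assumes a linear relationship between a 3-D contact force and the 2-D displacement of each tracked bead, generates synthetic "recorded contacts," and fits the inverse map with least squares (the real sensor's mapping is nonlinear and is learned from thousands of real contact instances). The bead count, sample count, and noise level are illustrative choices, not values from the research.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 16 tracked microbeads, each contributing an (dx, dy)
# displacement seen by the camera, flattened into a 32-dimensional vector.
n_beads, n_samples = 16, 2000

# Assumed "physics": a fixed linear map from a 3-D contact force
# (Fx, Fy, Fz) to bead displacements, plus small camera noise.
true_map = rng.normal(size=(2 * n_beads, 3))
forces = rng.uniform(-1.0, 1.0, size=(n_samples, 3))
displacements = forces @ true_map.T \
    + 0.01 * rng.normal(size=(n_samples, 2 * n_beads))

# Fit the inverse model from the recorded contacts:
# bead displacements -> contact force vector.
coef, *_ = np.linalg.lstsq(displacements, forces, rcond=None)

# Estimate the force behind a new, unseen bead pattern.
f_true = np.array([0.3, -0.2, 0.8])
d_new = f_true @ true_map.T
f_pred = d_new @ coef
print(f_pred)
```

Because the synthetic data really is near-linear, the recovered force closely matches the one used to generate the displacement pattern; the published sensor faces the harder problem of a soft, nonlinear skin, which is why a learned model trained on systematically varied contacts is needed.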