A deep learning framework to estimate the pose of robotic arms and predict their movements


Pose detection and pose prediction flowchart. Credit: Rodrigues et al.

As robots are gradually introduced into various real-world environments, developers and roboticists will need to ensure that they can operate safely around people. In recent years, researchers have introduced various approaches for estimating the positions and predicting the movements of robots in real time.

Researchers at the Universidade Federal de Pernambuco in Brazil recently developed a new deep learning model to estimate the pose of robotic arms and predict their movements. Introduced in a paper pre-published on arXiv, this model is specifically designed to improve the safety of robots while they collaborate or communicate with humans.

“Motivated by the need to anticipate accidents during human-robot interaction (HRI), we are investigating a framework that improves the safety of people working in close proximity to robots,” Djamel Sadok, one of the researchers who conducted the study, told TechXplore. “Pose detection is seen as an important part of the total solution. To this end, we propose a new architecture for pose detection based on Self-Calibrated Convolutions (SCConv) and the Extreme Learning Machine (ELM).”

The ability to estimate a robot's pose is an essential step toward predicting its future movements and intentions, and in turn reducing the risk of it colliding with objects in its environment. The pose estimation and motion prediction approach introduced by Sadok and his colleagues has two major components, namely an SCConv model and an ELM model.

The SCConv component improves the model's ability to capture spatial and channel-wise dependencies. The ELM, on the other hand, is known as an efficient approach for classifying data.
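The distinguishing idea of an ELM is that the hidden-layer weights are random and fixed, so only the output weights need to be solved, in closed form, via a least-squares fit. The sketch below is a minimal, generic NumPy illustration of that idea; the toy data, hidden-layer size, and activation choice are our own assumptions, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=64):
    """Fit an ELM: random hidden layer, output weights via least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                 # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: classify two well-separated Gaussian blobs (illustrative data).
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
Y = np.array([[1, 0]] * 50 + [[0, 1]] * 50)      # one-hot class labels
W, b, beta = elm_fit(X, Y)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
acc = (pred == Y.argmax(axis=1)).mean()
```

Because no gradient descent is involved, training amounts to a single pseudoinverse computation, which is what makes the ELM attractive as a fast classification head on top of learned features.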

“We determined that there were no existing studies combining these two technologies in the context of our application,” Sadok explained. “Therefore, we decided to see if such a combination improves our application. We also extended the framework with motion prediction, taking the detected pose into account, using recurrent neural networks (RNNs).”


Scenario example – Human and robot. Credit: Rodrigues et al.

First, Sadok and his colleagues compiled a custom dataset of images showing scenes in which a robotic arm interacts with a nearby human user. To create these images, they used the UR-5, a robotic arm made by Universal Robots.

The researchers annotated these images, labeling the robotic arm in each frame. This allowed them to use the new dataset to train SCNet, the SCConv-based component of their framework.

“Our goal was to improve on the error observed with other well-known architectures, such as VGG or ResNet,” Sadok said. “To extract features, we used SCNet and applied the ELM at the end of the network. Then we used the Long Short-Term Memory (LSTM) algorithm and the Gated Recurrent Unit (GRU) to predict the motion. We consider this a new approach to solving this problem.”
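For readers unfamiliar with the recurrent models mentioned here, a GRU carries a hidden state forward through time, with update and reset gates deciding how much of the past to keep at each step. The following is a minimal NumPy sketch of a single GRU cell run over a made-up joint trajectory; the weights, dimensions, and data are purely illustrative and unrelated to the authors' trained model:

```python
import numpy as np

rng = np.random.default_rng(1)

def gru_cell(x, h, params):
    """One GRU step: gates decide how much of the old state to keep."""
    Wz, Uz, bz, Wr, Ur, br, Wn, Un, bn = params
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(x @ Wz + h @ Uz + bz)            # update gate
    r = sig(x @ Wr + h @ Ur + br)            # reset gate
    n = np.tanh(x @ Wn + (r * h) @ Un + bn)  # candidate hidden state
    return (1 - z) * h + z * n               # blend old state and candidate

def init_params(n_in, n_hid):
    mk = lambda *shape: rng.normal(0, 0.1, shape)
    return (mk(n_in, n_hid), mk(n_hid, n_hid), np.zeros(n_hid),
            mk(n_in, n_hid), mk(n_hid, n_hid), np.zeros(n_hid),
            mk(n_in, n_hid), mk(n_hid, n_hid), np.zeros(n_hid))

# Toy usage: feed a fake 2-D joint trajectory through the cell, one step at a time.
params = init_params(n_in=2, n_hid=8)
h = np.zeros(8)
for t in range(10):
    pose_t = np.array([np.sin(0.3 * t), np.cos(0.3 * t)])  # synthetic joint positions
    h = gru_cell(pose_t, h, params)
# h now summarizes the observed trajectory; in a prediction setup, a small
# output layer on h would regress the next pose.
```

An LSTM follows the same recurrent pattern but with a separate memory cell and three gates instead of two, which is why the two are often evaluated side by side for sequence prediction, as the authors did here.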

Sadok and his colleagues evaluated the performance of their framework in a series of initial tests, using it to estimate the pose and predict the future movements of a UR-5 arm as it assisted a human user with maintenance-related tasks. The framework showed promising results, detecting the robotic arm's pose and predicting its future movements with good accuracy.

“We believe that our main contributions are a framework capable of detecting a robotic arm's pose and its movements, improving safety around the arm,” said Sadok. “We also expanded the applicability of SCConv and ELM and validated their combined capabilities.”

In the future, the framework developed by this team of researchers can be used to improve the safety of both existing and newly developed robotic systems. In addition, the SCConv and ELM algorithms they used could be adapted and applied to other tasks, such as estimating human poses, object detection, and object classification.

“We now plan to extend our framework to human pose detection and to jointly provide robot and human pose estimation,” added Sadok. “By combining both sources of data, we can work on jointly predicting both movements, avoiding even more of the risks arising from their interaction, such as in a factory setting, and better classifying the risk level.”


More information:
Iago Richard Rodrigues et al, A framework for robotic arm pose estimation and motion prediction based on deep and extreme learning models, arXiv:2205.13994 [cs.RO]

© 2022 Science X Network

Citation: A deep learning framework to estimate the pose of robotic arms and predict their movements (2022, June 22) retrieved June 22, 2022 from -arms .html

This document is copyrighted. Other than fair dealing for personal study or research, nothing may be reproduced without written permission. The content is provided for informational purposes only.
