About the project
In cooperation with MetraLabs and AiF Projekt GmbH, Reutlingen University is developing AI-on-the-edge and deep-learning-based follow-me functionality for service robots. The project results will later be used in particular for service robots that visually check stock levels in food retail. The corresponding procedures are being tested in the project on a Tory SHELF platform.
The Follow-Me function, previously developed at Reutlingen University, was improved by building a control system around a Kalman filter.
Path planning using Kalman filter
A Kalman filter executes two steps in succession. The first is the prediction step, which forecasts the value of the controlled variable in the next time step, using filter information such as the current speed. The second is the correction step: the filter receives a measured value from a sensor, compares it with the forecast, and forms a weighted estimate of the controlled variable. Because the predicted value is available before the measurement, the robot can react to a movement more quickly.
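The predict/correct cycle described above can be sketched as a minimal 1-D Kalman filter. The class name, noise values, and the simple velocity update are illustrative assumptions, not the project's actual parameters:

```python
# Minimal 1-D Kalman filter sketch: predict from current velocity,
# then correct against a sensor measurement. All values are assumed.

class Kalman1D:
    def __init__(self, q=0.01, r=0.1):
        self.x = 0.0   # estimated position of the person
        self.v = 0.0   # estimated velocity
        self.p = 1.0   # estimate variance
        self.q = q     # process noise (assumed)
        self.r = r     # measurement noise (assumed)

    def predict(self, dt):
        # Prediction step: extrapolate the position using the
        # current speed, as the article describes.
        self.x += self.v * dt
        self.p += self.q
        return self.x

    def correct(self, z, dt):
        # Correction step: weight the forecast against the
        # measurement z via the Kalman gain.
        k = self.p / (self.p + self.r)
        innovation = z - self.x
        self.x += k * innovation
        self.v += k * innovation / dt   # crude velocity update
        self.p *= (1.0 - k)
        return self.x

kf = Kalman1D()
for z in [0.10, 0.22, 0.31, 0.44, 0.50]:   # noisy position readings
    kf.predict(dt=0.1)
    estimate = kf.correct(z, dt=0.1)
```

Because the gain weights the forecast against the measurement, the estimate tracks the person smoothly instead of jumping with every noisy reading.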
The robot can now follow a person around sharp turns, for example through doorways. Previously, if the person left the robot's field of view around a corner, the robot drove to the last recognized position, stopped, and lost them. The person's movement is now analyzed to determine the direction in which they leave the robot's field of view. If the person disappears from view, the robot drives to the last recognized position and then turns in the person's last recognized direction of movement. A person who has turned a corner can thus be found again, and Follow-Me continues.
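The "drive to the last position, then turn toward the last movement direction" behavior can be sketched from a short history of observed person positions. The function names and the two-point heading estimate are assumptions for illustration:

```python
import math

# Sketch of the lost-person recovery behavior: estimate the person's
# last heading from their position history, drive to the last seen
# position, then turn toward that heading. Names are illustrative.

def last_heading(history):
    """Direction (radians) of the person's last observed movement,
    from the two most recent (x, y) positions."""
    (x0, y0), (x1, y1) = history[-2], history[-1]
    return math.atan2(y1 - y0, x1 - x0)

def on_person_lost(history, drive_to, turn_to):
    # drive_to / turn_to stand in for the robot's motion commands.
    drive_to(history[-1])           # go to last recognized position
    turn_to(last_heading(history))  # face the person's last direction
```

A real implementation would likely smooth the heading over several frames rather than using only the last two positions.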
The Follow-Me function also includes obstacle detection through active path planning. If an obstacle is detected, an alternative route to the person that avoids the obstacle is calculated.
Obstacle detection using neural network
The existing laser-based real-time obstacle detection was expanded with an RGB-D camera. This captures and transfers 3D environment information in large volumes and makes it possible to generate a point cloud that describes the surrounding space. Objects are recognized from these point clouds using a neural network developed by the university.
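Generating a point cloud from an RGB-D depth image is typically done by back-projecting each pixel through the pinhole camera model. A minimal sketch, assuming made-up camera intrinsics (fx, fy, cx, cy) rather than the project's actual calibration:

```python
import numpy as np

# Sketch: back-project a depth image into an (N, 3) point cloud using
# the pinhole camera model. Intrinsics are assumed example values.

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth                       # depth in meters per pixel
    x = (u - cx) * z / fx           # back-project horizontally
    y = (v - cy) * z / fy           # back-project vertically
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Drop pixels with no valid depth reading (z == 0).
    return pts[pts[:, 2] > 0]
```

The resulting (N, 3) array describes the surrounding space and could serve as input to a point-cloud-based object recognition network.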
Distance measurement using RGB camera
The RGB-D camera outputs a depth image in addition to the RGB image. This means there is a distance measurement for almost every pixel of the RGB image, which can be used to determine the distance between an object and the robot. With this distance information, the robot can, for example, follow a person at a constant distance of one meter without crowding them.
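Keeping a constant one-meter following distance from the depth measurement can be sketched as a simple proportional controller. The gain, speed limit, and function name are assumptions; only the one-meter target comes from the text:

```python
# Sketch of distance keeping: a proportional controller that holds the
# robot about 1 m behind the person. Gain and limits are assumed.

TARGET_DIST = 1.0   # meters, from the article
K_P = 0.8           # proportional gain (assumed)

def forward_speed(measured_dist, max_speed=1.2):
    """Forward speed command from the depth-camera distance (m)
    to the tracked person."""
    error = measured_dist - TARGET_DIST
    speed = K_P * error
    # Clamp: never drive backward into the person, never overspeed.
    return max(0.0, min(max_speed, speed))
```

At exactly one meter the commanded speed is zero; if the person pulls ahead the robot speeds up, and if they stop the robot stops short of crowding them.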
In addition, the neural network can classify objects and assign them to safety classes. Depending on the safety class, a minimum distance between the robot and the object can be set, enabling safe route planning. Recognizing individual objects also makes it possible to identify the store department in which the robot is located.
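The mapping from detected object class to safety class to minimum clearance might look like the following. The class names, safety levels, and distances here are invented examples, not the project's actual tables:

```python
# Sketch: map detected object labels to safety classes, and safety
# classes to minimum clearances for the path planner. All values
# are assumed examples.

SAFETY_CLASS = {
    "person": "high",
    "shopping_cart": "medium",
    "shelf": "low",
}

MIN_CLEARANCE = {   # meters the planner must keep, per safety class
    "high": 1.0,
    "medium": 0.5,
    "low": 0.2,
}

def min_distance(label):
    """Minimum clearance the planner should keep from this object.
    Unknown labels fall back to the medium safety class."""
    return MIN_CLEARANCE[SAFETY_CLASS.get(label, "medium")]
```

The planner can then treat each detected object as an obstacle inflated by its class-specific clearance, which is one way the safe route planning described above could be realized.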