Advanced 3D Perception for Mobile Robot Manipulators

About the project

Name: Advanced 3D Perception for Mobile Robot Manipulators

Host institution: J. J. Strossmayer University of Osijek, Faculty of Electrical Engineering Osijek

Funding: Croatian Science Foundation, project no. IP-2014-09-3155

Budget: 948,731.00 HRK

Duration: 4 years

Principal investigator: Associate Professor Robert Cupec, PhD

Project team:

Assistant Professor Emmanuel Karlo Nyarko, PhD
Assistant Professor Damir Filko, PhD
Assistant Professor Ratko Grbić, PhD
Ivan Vidović, PhD
Petra Đurović, mag. ing. comp.
Assistant Professor Tomislav Keser, PhD
Assistant Professor Tomislav Matić, PhD
Assistant Professor Ivan Aleksi, PhD
Assistant Professor Ivica Lukić, PhD
Marina Peko, dipl. ing.

Project goal

To develop new or improve existing robot vision methods based on 3D sensors for application in mobile robot manipulation tasks.

Motivation

Robots have been successfully applied in industry for several decades. Their speed and high-precision positioning ability make them the best solution to many technical problems in a variety of industrial production processes. Today, robots are indispensable tools in almost all industrial branches, with practically unlimited potential for future industrial applications. Despite these strengths, most of today's industrial robots are essentially precise positioning tools which execute a finite number of precisely defined trajectories and can operate only in highly structured work environments. They are still difficult to use, train and program. The key to more flexible, more self-adaptable and easier-to-implement robot-aided manufacturing is perception-based artificial intelligence. An industrial robot of the future should be able to sense and interact with the human world, achieve a high degree of autonomy in an unstructured and dynamic environment and, finally, be easy to train and control by almost anyone, not only by trained technical specialists. Furthermore, in next-generation manufacturing, mobile robots will be of special interest since their mobility expands a robot's workspace to the entire production facility. In order to exploit this ability to the full extent and perform tasks which include moving from one location to another, a robot needs to be able to localize itself within its work environment.

Background

Since environment perception capabilities are crucial for achieving highly autonomous operation of a mobile robot, various vision systems are employed for environment perception. Even though 2D vision systems are currently the most commonly used on the manufacturing floor, 3D information can be of great help in scene analysis. 3D sensors such as laser range finders, stereo vision systems, time-of-flight cameras and structured-light sensors can provide scans of an observed scene in the form of sets of 3D points representing the surfaces of objects visible from a particular viewpoint. Such 3D point sets, called point clouds, provide information about the shape of the geometric structures which appear in the scene. Combining a 3D sensor with a 'standard' RGB camera yields an RGB-D camera, which provides a colored point cloud commonly referred to as an RGB-D image. Low-cost RGB-D cameras have recently appeared on the market, triggering an explosion of research in the field of 3D point cloud processing. As a result of this intensive research, a number of high-performance tools have been developed which enable real-time 3D object and scene reconstruction from range images obtained by 3D cameras. Furthermore, research in robot localization and object recognition has reached a mature stage, and state-of-the-art object recognition systems provide high precision and recall. However, there are still challenges in this research field which motivate this project. Although very successful methods exist for the recognition of objects with precisely defined shapes, object classification, i.e. the ability of an artificial system to assign previously unseen objects to classes of similar objects, is still an open problem. Furthermore, state-of-the-art object recognition algorithms which achieve high precision and recall still require relatively long computation times.
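To make the point cloud representation concrete, the following minimal sketch shows how a depth image from an RGB-D camera can be back-projected into a colored point cloud using the pinhole camera model. This is an illustration only, not a method developed in the project; the function name and the intrinsic parameters (fx, fy, cx, cy) are hypothetical placeholders.

    import numpy as np

    def depth_to_point_cloud(depth, rgb, fx, fy, cx, cy):
        """Back-project a depth image (in meters) into an N x 6 array of
        colored 3D points (x, y, z, r, g, b) using pinhole intrinsics."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
        x = (u - cx) * depth / fx                       # back-project each
        y = (v - cy) * depth / fy                       # pixel into 3D
        points = np.stack((x, y, depth), axis=-1).reshape(-1, 3)
        colors = rgb.reshape(-1, 3)
        valid = points[:, 2] > 0                        # drop pixels with no depth
        return np.hstack((points[valid], colors[valid]))

    # Example with a synthetic 4 x 4 frame (all values illustrative):
    depth = np.full((4, 4), 1.5)                # flat surface 1.5 m away
    rgb = np.zeros((4, 4, 3), dtype=np.uint8)   # black image
    cloud = depth_to_point_cloud(depth, rgb, fx=525.0, fy=525.0, cx=2.0, cy=2.0)

Each row of the resulting array is one point of the colored point cloud; a real sensor pipeline would additionally handle invalid depth readings and lens distortion.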

Project description

The proposed research considers the application of 3D perception sensors in the execution of prototypical tasks which combine object manipulation with autonomous motion in an unstructured environment. The focus of the research is on robot perception based on 3D point clouds obtained by 3D sensors, while motion planning and grasping are involved only for the purpose of evaluating the developed vision-based solutions. The research will be conducted in three directions, which define three work packages:

  1. point cloud segmentation and 3D modelling;
  2. object recognition;
  3. mobile robot localization.

In order to facilitate human-robot communication, a point cloud segmentation method will be developed which provides an environment model in the form of separate geometric objects, compatible with human perception.

The research in the field of object recognition will consider several approaches, from the standard pipeline, consisting of feature detection, hypothesis generation by feature matching and hypothesis evaluation, to approaches based on machine learning, such as artificial neural networks and decision trees. This research will aim to improve computational efficiency as well as to achieve high precision and recall. Furthermore, object classification approaches will be investigated, with the focus on object classes important for various mobile manipulation tasks.

The developed object recognition methods will be applied to place recognition within the framework of a mobile robot navigation system. By integrating place recognition with the ability to use sequence information and active vision, which ensures that the camera is pointed in the direction that maximizes the information content inside the field of view, a robot localization system with high precision and recall will be obtained. All developed scene interpretation and object/place recognition methods will be evaluated in a series of experiments with a real robot, in which the considered methods will be applied to solving prototypical mobile manipulation tasks.
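To make the standard recognition pipeline concrete, the following minimal sketch illustrates its last two stages, hypothesis generation by descriptor matching and hypothesis evaluation, on plain NumPy arrays. The brute-force matching, the ratio-test threshold and the inlier-fraction score are illustrative assumptions, not the methods to be developed in this project.

    import numpy as np

    def match_features(scene_desc, model_desc, ratio=0.8):
        """Generate correspondence hypotheses by nearest-neighbor descriptor
        matching with a ratio test (brute force, for illustration only)."""
        matches = []
        for i, d in enumerate(scene_desc):
            dists = np.linalg.norm(model_desc - d, axis=1)
            nearest, second = np.argsort(dists)[:2]
            if dists[nearest] < ratio * dists[second]:
                matches.append((i, nearest))  # scene feature i <-> model feature
        return matches

    def evaluate_hypothesis(scene_pts, model_pts, R, t, inlier_dist=0.01):
        """Score a pose hypothesis (R, t) by the fraction of transformed model
        points that lie within inlier_dist of some scene point."""
        transformed = model_pts @ R.T + t
        d = np.linalg.norm(scene_pts[:, None, :] - transformed[None, :, :], axis=2)
        return np.mean(d.min(axis=0) < inlier_dist)

In a complete system, the descriptors would come from 3D feature detectors, the pose hypotheses would be estimated from the matched correspondences (e.g. in a RANSAC-like loop), and a probabilistic evaluation such as the one proposed in this project would replace the simple inlier fraction.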

Expected results

The expected results of the proposed research are:

  1. development of a novel algorithm for segmentation of point clouds into objects of interest for robot manipulation;
  2. a human-robot communication interface which provides the user with a simple and intuitive means of specifying the objects the robot should manipulate, as well as the surfaces on which these objects should be placed;
  3. novel efficient hypothesis generation methods;
  4. novel probabilistic approach to hypothesis evaluation;
  5. a novel algorithm for the recognition of objects of variable shape based on graph matching;
  6. mobile robot localization system based on the object recognition methods developed in the project.

The final result of the project will be a mobile robot manipulator control system which allows simple task definition, recognition of objects and places relevant for a particular task, vision-based positioning of the robot manipulator, and navigation in unstructured environments, with the purpose of completing various mobile manipulation tasks.