The OpenDR Toolkit has been released!

We are glad to announce the first public release of the Open Deep Learning Toolkit for Robotics (OpenDR)! After months of development, integration, and debugging, the toolkit is finally available. You can download it through GitHub, pip, and Docker Hub.

OpenDR provides an intuitive and easy-to-use Python interface, a C API for selected tools, a wealth of usage examples and supporting tools, as well as ready-to-use ROS nodes. The toolkit provides more than 20 methods for human pose estimation, face detection and recognition, facial expression recognition, semantic and panoptic segmentation, video- and skeleton-based action recognition, image-, multimodal-, and point-cloud-based object detection, 2D and 3D object tracking, speech command recognition, heart anomaly detection, navigation for wheeled robots, and grasping. A set of data generation utilities, a hyperparameter tuning tool, and a framework for easily applying reinforcement learning both in simulation and in real robotics applications are also included.

All methods and their parameters are thoroughly documented, demonstration examples showcase their functionality, and continuous integration tests ensure both the consistency of the code and that no conflicts arise between the different tools. OpenDR is also built to support the Webots open-source robot simulator, and it follows industry standards such as the ONNX model format.

We look forward to receiving your feedback, bug reports, and suggestions for improvement!