Social perception with ROS4HRI
ROS for Human-Robot Interaction (or ROS4HRI [ros4hri]) is the main API that your robot implements to represent information about the humans surrounding and interacting with the robot.
ROS4HRI refers to a set of conventions and tools that help develop Human-Robot Interaction capabilities. The specification (originally developed by PAL Robotics) is available online as ROS REP-155.
The ROS4HRI API defines several types of identifiers (IDs).
We have implemented the following main parts of the specification in your robot:
We follow the ROS4HRI human model representation, as a combination of a permanent identity (person) and transient parts that are intermittently detected (e.g. face, skeleton, voice);
In PAL OS edge, we specifically support:
face detection and recognition (including extraction of facial landmarks);
single body detection, and 2D and 3D skeleton tracking;
speech recognition (without support for voice separation or voice identification);
probabilistic fusion of faces, bodies and voices.
We follow the ROS4HRI topic naming conventions, with all human-related messages published under the /humans/ topic namespace;
We follow the ROS4HRI kinematic model of the human and 3D tf frame conventions (naming, orientation), as specified here.
In addition, your robot also provides implementations of:
gaze estimation
automatic engagement detection
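As a concrete illustration of the naming conventions above, the small helpers below build the topic and tf frame names that REP-155 associates with a given ID. This is an indicative sketch: the per-feature subtopics and the example ID are illustrative, so check the specification for the complete list.

```python
# Illustrative sketch of the ROS4HRI (REP-155) naming conventions.
# The example ID 'b7d3f' is made up; real IDs are generated at runtime.

def tracked_topic(feature: str) -> str:
    """Topic listing the IDs currently tracked for a feature
    ('faces', 'bodies', 'voices' or 'persons')."""
    return f"/humans/{feature}/tracked"

def feature_namespace(feature: str, feature_id: str) -> str:
    """Namespace under which the sub-topics for one detection live."""
    return f"/humans/{feature}/{feature_id}"

def face_frames(face_id: str) -> dict:
    """tf frames conventionally attached to a detected face."""
    return {"face": f"face_{face_id}", "gaze": f"gaze_{face_id}"}

# Example: a face detected with (hypothetical) ID 'b7d3f'
print(tracked_topic("faces"))               # /humans/faces/tracked
print(feature_namespace("faces", "b7d3f"))  # /humans/faces/b7d3f
print(face_frames("b7d3f")["gaze"])         # gaze_b7d3f
```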
How to use ROS4HRI?
The ROS4HRI topics are documented in Social perception topics.
To ease access to these topics, you can use pyhri (Python) or libhri (C++). These two open-source libraries are developed by PAL Robotics under an Apache 2.0 license, and are available on GitHub: https://github.com/ros4hri/libhri
Next steps
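For instance, a minimal node listing the currently tracked faces with pyhri might look like the following. This is a sketch in the ROS 1 style shown in the pyhri README; the exact import, constructor, and initialisation depend on your ROS distribution, so refer to the library's own examples.

```python
# Sketch: list tracked faces with pyhri (ROS 1 style API; adapt to your
# ROS distribution). Assumes a running ROS4HRI perception pipeline.
import rospy
from pyhri import HRIListener

rospy.init_node("face_listing_example")

# HRIListener subscribes to the /humans/ topics and keeps an up-to-date
# view of detected faces, bodies, voices and persons.
hri_listener = HRIListener()

rate = rospy.Rate(1)
while not rospy.is_shutdown():
    # hri_listener.faces maps face IDs to objects exposing, e.g.,
    # the face's region of interest and its tf frame
    for face_id, face in hri_listener.faces.items():
        rospy.loginfo(f"Currently seeing face {face_id}")
    rate.sleep()
```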
Get started with ROS4HRI and play with the social perception capabilities with the following tutorials:
How to launch the different components for person detection
How to access information of a face, a skeleton or a person?
Detect people oriented toward the robot (Python)
Detect people around the robot (C++)
See also
The REP-155 (aka ROS4HRI) specification, on the ROS website.
The ROS wiki contains useful resources about ROS4HRI.
References
[ros4hri] Y. Mohamed and S. Lemaignan, "ROS for Human-Robot Interaction", IROS 2021, doi: 10.1109/IROS51168.2021.9636816.