How to launch the different components for person detection
This how-to describes how to test the hri_person_manager node in your local environment to familiarise yourself with the ROS4HRI pipeline. The node aggregates information about faces, bodies and voices into a single structure, the person, which facilitates the management of the human features detected around the robot.
Prerequisites
You will need the hri_face_detect and hri_face_identification ROS nodes. These nodes will feed hri_person_manager. They are pre-installed in the PAL Docker images.
Check Developing with the PAL Developer Docker image if you have not yet installed your Docker image.
You will also need a webcam and a calibration file for the camera. If you don't have one, you can find a standard calibration file sample here that you can temporarily use for this tutorial. If you use this calibration file, do not expect the face pose estimation to be precise. A minimal example of such a file is sketched below.
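For reference, webcam calibrations use the standard camera_info_manager YAML format. The following is only a generic 640x480 sketch with placeholder intrinsics (the numbers are rough assumptions, not a real calibration); note camera_name: default_cam, the camera name this how-to assumes:

image_width: 640
image_height: 480
camera_name: default_cam
camera_matrix:
  rows: 3
  cols: 3
  data: [500.0, 0.0, 320.0, 0.0, 500.0, 240.0, 0.0, 0.0, 1.0]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [0.0, 0.0, 0.0, 0.0, 0.0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
projection_matrix:
  rows: 3
  cols: 4
  data: [500.0, 0.0, 320.0, 0.0, 0.0, 500.0, 240.0, 0.0, 0.0, 0.0, 1.0, 0.0]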
Testing
In terminal 1, start gscam:
ros2 run gscam gscam_node --ros-args -p gscam_config:='v4l2src device=/dev/video0 ! video/x-raw,framerate=30/1 ! videoconvert' \
-p use_sensor_data_qos:=True \
-p camera_name:=camera \
-p frame_id:=default_cam \
-p camera_info_url:="file:///<calibration_file_location>"
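As a quick sanity check, you can confirm that images are flowing (gscam publishes with a best-effort, sensor-data QoS, so depending on your ROS 2 version you may need to pass an explicit QoS profile to the subscription):

$ ros2 topic hz /camera/image_raw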
Note
You must provide a calibration file for your camera (like this generic file for webcams), otherwise hri_face_detect will not be able to publish the TF frames of the faces. Note that the name of the camera in the calibration file should be default_cam, as this how-to assumes that name for the camera TF frame (hence frame_id:=default_cam in the command above).
Without calibration, hri_person_manager will still partially work, but, for example, the proximal zones will not be published and the person TF frames will not be available.
Note
Depending on your setup, your camera might be available under a different device name (e.g., /dev/video1). If so, change the device= setting in the gscam_config string accordingly in the command above.
In terminal 2, start hri_face_detect:
First, configure the remappings for the node: by default, gscam publishes the camera image on /camera/image_raw and the camera info on /camera/camera_info. We need to tell hri_face_detect about it:
mkdir -p $HOME/.pal/config
nano $HOME/.pal/config/ros4hri-tutorials.yml
Then, paste the following content:
/hri_face_detect:
  remappings:
    image: /camera/image_raw
    camera_info: /camera/camera_info
Press Ctrl+O to save, then Ctrl+X to exit.
Then, launch the node:
$ ros2 launch hri_face_detect face_detect.launch.py
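Once the node is running and your face is visible in the camera image, the detected face IDs should show up on the ROS4HRI topic /humans/faces/tracked:

$ ros2 topic echo /humans/faces/tracked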
In terminal 3, start hri_face_identification:
$ ros2 launch hri_face_identification face_identification.launch.py
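hri_face_identification publishes its face-to-person association candidates on the ROS4HRI /humans/candidate_matches topic; you can inspect them with:

$ ros2 topic echo /humans/candidate_matches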
In terminal 4, publish a static (identity) transform between the camera and /map:
$ ros2 run tf2_ros static_transform_publisher --frame-id map --child-frame-id default_cam
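You can verify that the transform is being broadcast (assuming the camera frame is named default_cam, as above):

$ ros2 run tf2_ros tf2_echo map default_cam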
In terminal 5, start the hri_person_manager node:
$ ros2 launch hri_person_manager person_manager.launch.py
In terminal 6, check the output of /humans/persons/tracked, /humans/persons/in_{personal,social,public}_space, /humans/persons/<id>/*, etc. For instance, you should see your own person ID:
$ ros2 topic echo /humans/persons/in_personal_space
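The output is a stream of hri_msgs/IdsList messages. The person IDs are generated at runtime by hri_person_manager, so the values below are purely illustrative:

header:
  stamp:
    sec: 1700000000
    nanosec: 0
  frame_id: ''
ids:
- wdpzq
---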
Optionally, you can display the face-person association graph:
In terminal 7, run:
$ ros2 run hri_person_manager show_humans_graph
In terminal 8, view the output:
$ evince /tmp/graph.pdf
That's all! You can now start developing your own code using the available libraries in C++ and Python, or continue playing with other tutorials on the Social perception main page.
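As a starting point, here is a minimal Python sketch (plain rclpy with the hri_msgs package; the node and callback names are illustrative, not part of any library API) that prints the list of currently tracked persons:

# person_list_listener.py: a minimal sketch that subscribes to the
# ROS4HRI /humans/persons/tracked topic and logs the tracked person IDs.
import rclpy
from rclpy.node import Node
from hri_msgs.msg import IdsList


class PersonListListener(Node):
    def __init__(self):
        super().__init__('person_list_listener')
        # /humans/persons/tracked carries the IDs of the currently
        # tracked persons as an IdsList message
        self.create_subscription(
            IdsList, '/humans/persons/tracked', self.on_persons, 10)

    def on_persons(self, msg):
        self.get_logger().info(f'Tracked persons: {msg.ids}')


def main():
    rclpy.init()
    rclpy.spin(PersonListListener())
    rclpy.shutdown()


if __name__ == '__main__':
    main()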