List of ROS Topics#

This page lists all the public topics exposed in PAL OS 24.9.

Caution

Only the topics contributing to the public API of PAL OS 24.9 are listed here.

Additional ROS topics might be present on the robot for internal purposes. They are, however, not part of the documented and supported robot API.

Alphabetic index#

By capability#

💬 Communication#

  • /active_listening (documentation) Whether or not recognized speech should be further processed (eg by the chatbot). See overview_nlp for details.

  • /chatbot/trigger (documentation) Publish chatbot intents you want to trigger on this topic. This is especially useful to implement a proactive behaviour, where the robot starts the conversation itself.

    See overview_nlp for details.

  • /humans/voices/*/audio (documentation) The audio stream of the voice.

  • /humans/voices/*/is_speaking (documentation) Whether verbal content is currently recognised in this voice’s audio stream.

  • /humans/voices/*/speech (documentation) The recognised text, as spoken by this voice.

  • /humans/voices/tracked (documentation) The list of voices currently detected by the robot. A minimal subscription sketch is given below.
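
The per-voice topics above follow the /humans/voices/<voice_id>/... namespacing pattern of REP-155. The sketch below shows one way to consume them; it assumes the hri_msgs message definitions (IdsList for the tracked list, LiveSpeech with a final field for the recognised text), which should be checked against the types actually advertised on your robot.

    import rclpy
    from rclpy.node import Node
    from hri_msgs.msg import IdsList, LiveSpeech  # assumed REP-155 message types


    class SpeechListener(Node):
        def __init__(self):
            super().__init__('speech_listener')
            self._speech_subs = {}
            # /humans/voices/tracked publishes the ids of the currently detected voices
            self.create_subscription(IdsList, '/humans/voices/tracked',
                                     self._on_voices, 10)

        def _on_voices(self, msg):
            for voice_id in msg.ids:
                if voice_id not in self._speech_subs:
                    # one speech subscription per newly detected voice
                    self._speech_subs[voice_id] = self.create_subscription(
                        LiveSpeech, f'/humans/voices/{voice_id}/speech',
                        lambda m, v=voice_id: self.get_logger().info(f'[{v}] {m.final}'),
                        10)


    if __name__ == '__main__':
        rclpy.init()
        rclpy.spin(SpeechListener())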

📜 Developing applications#

  • /intents (documentation) An intent, encoding a desired activity to be scheduled by the robot (not to be confused by the chatbot intents). Read more about Intents.

😄 Expressive interactions#

  • /look_at (documentation) Set a target for the robot to look at. Uses both the eyes and the head position. A publishing sketch is given at the end of this section.

  • /robot_face/background_image (documentation) Displays a ROS video stream as background of the robot’s face/eyes. See Background and overlays for details.

  • /robot_face/expression (documentation) Set the expression of ARI eyes. See Robot face and expressions for details.

  • /robot_face/image_raw/* (documentation) The left and right images to be displayed on the robot’s eyes. Published by default by the expressive_eyes node. If you want to publish your own face on this topic, you might want to first stop the expressive_eyes node.

  • /robot_face/look_at (documentation) Sets the direction of the eyes. If you want to control the gaze direction, use /look_at instead. See attention-management for details.
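
As an illustration, the following sketch makes the robot glance at a point roughly one metre in front of it. It assumes that /look_at accepts geometry_msgs/PointStamped messages and that base_link is a valid source frame; both assumptions should be verified against the topic's actual interface (for instance with ros2 topic info /look_at).

    import rclpy
    from rclpy.node import Node
    from geometry_msgs.msg import PointStamped  # assumed message type for /look_at


    def main():
        rclpy.init()
        node = Node('look_at_example')
        pub = node.create_publisher(PointStamped, '/look_at', 10)

        target = PointStamped()
        target.header.frame_id = 'base_link'  # frame the target is expressed in (assumed)
        target.header.stamp = node.get_clock().now().to_msg()
        target.point.x = 1.0   # one metre ahead of the robot
        target.point.z = 1.5   # roughly head height

        rclpy.spin_once(node, timeout_sec=0.5)  # let the publisher connect
        pub.publish(target)
        node.destroy_node()
        rclpy.shutdown()


    if __name__ == '__main__':
        main()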

βš™οΈ Robots hardware#

  • /audio_in/channel0 (documentation) Merged audio channel of the ReSpeaker’s 4 microphones.

  • /audio_in/channel1 (documentation) Audio stream from the ReSpeaker’s first microphone.

  • /audio_in/channel2 (documentation) Audio stream from the ReSpeaker’s second microphone.

  • /audio_in/channel3 (documentation) Audio stream from the ReSpeaker’s third microphone.

  • /audio_in/channel4 (documentation) Audio stream from the ReSpeaker’s fourth microphone.

  • /audio_in/channel5 (documentation) Monitor audio stream from the ReSpeaker’s audio input (used for self-echo cancellation).

  • /audio_in/raw (documentation) Merged input audio channel from the microphones. For robots equipped with a ReSpeaker array, this is an alias for /audio_in/channel0.

  • /audio_in/sound_direction (documentation) The estimated Direction of Arrival of the detected sound.

  • /audio_in/sound_localization (documentation) The estimated sound source location.

  • /audio_in/speech (documentation) Raw audio data of detected speech (published once the person has finished speaking).

  • /audio_in/status_led (documentation) The topic controlling the ReSpeaker microphone LEDs. Do not use this topic directly. Instead, use /pal_led_manager/do_effect.

  • /audio_in/voice_detected (documentation) Publishes a boolean indicating if a voice is currently detected (ie, whether someone is currently speaking)

  • /audio_out/raw (documentation) Audio data published on this topic is directly played on the robot’s loudspeakers.

  • /base_imu (documentation) Inertial data from the IMU.

  • /end_effector_camera/camera_info (documentation) Intrinsic and distortion parameters of the RGB endoscopic camera.

  • /end_effector_camera/image_raw (documentation) RGB image of the endoscopic camera

  • /end_effector_left_camera/camera_info (documentation) Intrinsic and distortion parameters of the RGB endoscopic camera for the left arm.

  • /end_effector_left_camera/image_raw (documentation) RGB image of the endoscopic camera for the left arm.

  • /end_effector_right_camera/camera_info (documentation) Intrinsic and distortion parameters of the RGB endoscopic camera for the right arm.

  • /end_effector_right_camera/image_raw (documentation) RGB image of the endoscopic camera for the right arm.

  • /head_front_camera/color/camera_info (documentation) Camera calibration and metadata

  • /head_front_camera/color/image_raw/* (documentation) Color rectified image. RGB format

  • /head_front_camera/image_throttle/compressed (documentation) Compressed head image.

  • /joint_states (documentation) The current state of the robot’s joints (eg angular position of each joint). A subscription sketch is given at the end of this section.

  • /joint_torque_states (documentation) The current state of the robot’s joints, with the effort field reporting the measured torque instead of the motor current.

  • /sonar_base (documentation) Readings of the sonar.

  • /torso_back_camera/fisheye1/camera_info (documentation) Camera calibration and metadata (fisheye1)

  • /torso_back_camera/fisheye1/image_raw/* (documentation) Fisheye image

  • /torso_back_camera/fisheye2/camera_info (documentation) Camera calibration and metadata (fisheye2)

  • /torso_back_camera/fisheye2/image_raw/* (documentation) Fisheye image (fisheye2)

  • /torso_front_camera/aligned_depth_to_color/camera_info (documentation) Intrinsic parameters of the aligned depth-to-color image

  • /torso_front_camera/aligned_depth_to_color/image_raw/* (documentation) Aligned depth-to-color image

  • /torso_front_camera/color/camera_info (documentation) Camera calibration and metadata

  • /torso_front_camera/color/image_raw/* (documentation) Color rectified image. RGB format

  • /torso_front_camera/depth/camera_info (documentation) Camera calibration and metadata

  • /torso_front_camera/depth/color/points (documentation) Registered XYZRGB point cloud.

  • /torso_front_camera/depth/image_rect_raw/* (documentation) Rectified depth image

  • /torso_front_camera/infra1/camera_info (documentation) Camera calibration and metadata (infra1 and infra2)

  • /torso_front_camera/infra1/image_rect_raw/* (documentation) Raw uint16 IR image

  • /torso_front_camera/infra2/image_rect_raw/compressed (documentation) Compressed raw uint16 IR image (infra2)

  • /wrist_ft (documentation) Force and torque vectors currently detected by the Force/Torque sensor.

  • /xtion/depth_registered/camera_info (documentation) Intrinsic parameters of the depth image.

  • /xtion/depth_registered/image_raw (documentation) 32-bit depth image. Every pixel contains the depth of the corresponding point in meters.

  • /xtion/depth_registered/points (documentation) Point cloud computed from the depth image.

  • /xtion/rgb/camera_info (documentation) Intrinsic and distortion parameters of the RGB camera.

  • /xtion/rgb/image_raw (documentation) RGB image.

  • /xtion/rgb/image_rect_color (documentation) Rectified RGB image.
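
Most of the sensor topics above use standard ROS 2 interface types and can be consumed directly. For example, the sketch below prints the angular position of each joint reported on /joint_states (sensor_msgs/JointState); the same pattern applies to the camera, IMU and sonar topics with their respective message types.

    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import JointState


    class JointEcho(Node):
        def __init__(self):
            super().__init__('joint_echo')
            self.create_subscription(JointState, '/joint_states', self._on_joints, 10)

        def _on_joints(self, msg):
            # msg.name and msg.position are aligned lists
            for name, position in zip(msg.name, msg.position):
                self.get_logger().info(f'{name}: {position:.3f} rad')


    if __name__ == '__main__':
        rclpy.init()
        rclpy.spin(JointEcho())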

🛠 Robot management#

👋 Gestures and motions#

  • /arm_controller/joint_trajectory (documentation) Sequence of positions that the joints have to reach in given time intervals.

  • /arm_left_controller/command (documentation)

  • /arm_left_controller/safe_command (documentation)

  • /arm_right_controller/command (documentation)

  • /arm_right_controller/safe_command (documentation)

  • /gripper_controller/joint_trajectory (documentation) Sequence of positions that the joints have to reach in given time intervals.

  • /hand_left_controller/command (documentation)

  • /hand_right_controller/command (documentation)

  • /head_controller/command (documentation)

  • /head_controller/joint_trajectory (documentation) Sequence of positions that the joints have to reach in given time intervals.

  • /torso_controller/command (documentation) This topic takes a sequence of positions that the torso joint needs to reach at given time intervals.

  • /torso_controller/joint_trajectory (documentation) Sequence of positions that the joints have to reach in given time intervals. A publishing sketch is given at the end of this section.

  • /torso_controller/safe_command (documentation) This topic takes a sequence of positions that the torso joint needs to reach at given time intervals; the motion is only executed if it does not lead to a self-collision.
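
Assuming these controllers expose the standard ros2_control JointTrajectoryController interface, the joint_trajectory topics take trajectory_msgs/JointTrajectory messages. The sketch below sends the torso to a single waypoint over two seconds; the joint name torso_lift_joint is an assumption and should be replaced with the names reported on /joint_states for your robot.

    import rclpy
    from rclpy.node import Node
    from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint
    from builtin_interfaces.msg import Duration


    def main():
        rclpy.init()
        node = Node('torso_move_example')
        pub = node.create_publisher(JointTrajectory,
                                    '/torso_controller/joint_trajectory', 10)

        traj = JointTrajectory()
        traj.joint_names = ['torso_lift_joint']    # assumed joint name, check /joint_states
        point = JointTrajectoryPoint()
        point.positions = [0.15]                   # target position (illustrative)
        point.time_from_start = Duration(sec=2)    # reach it within two seconds
        traj.points.append(point)

        rclpy.spin_once(node, timeout_sec=0.5)     # let the publisher connect
        pub.publish(traj)
        node.destroy_node()
        rclpy.shutdown()


    if __name__ == '__main__':
        main()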

🧭 Navigation#

  • /amcl_pose (documentation) Current robot pose estimated by amcl

  • /cmd_vel (documentation) Command Velocity topic used by the autonomous navigation

  • /dlo_ros/odom (documentation) Laser Odometry (fused with wheel odometry)

  • /eulero_manager/feedback (documentation) Visualizes and modifies environmental metadata using RViz.

  • /eulero_manager/update (documentation) Visualizes and modifies environmental metadata using RViz.

  • /global_costmap/costmap (documentation) Global Costmap used by the Planner (Global Planner)

  • /global_costmap/footprint (documentation) Robot Footprint used by the Planner (Global Planner) and for the creation of the inflation layer

  • /goal_pose (documentation) Goal Position and Orientation that the autonomous navigation system will try to reach. A publishing sketch is given at the end of this section.

  • /initialpose (documentation) Initial robot pose estimate, used as the initial guess for amcl

  • /input_joy/cmd_vel (documentation) Command Velocity topic used by the joystick

  • /joy (documentation) Raw joystick readings coming from the joystick driver

  • /joy_priority (documentation) Trigger used to assign the priority either to the joystick or to the autonomous navigation

  • /joy_vel (documentation) Raw command velocity message coming from the joystick and used by the joystick relay

  • /keepout_map_mask/mask (documentation) Nav2 Costmap Filter containing Keepout Areas (Virtual Obstacles) and Highways

  • /keepout_map_mask/mask_info (documentation) Nav2 Costmap Filter metadata for the Keepout Filter

  • /key_vel (documentation) Command Velocity topic used by the key teleop node to move the robot using the keyboard

  • /local_costmap/costmap (documentation) Local Costmap used by the Controller (Local Planner)

  • /local_costmap/footprint (documentation) Robot Footprint used by the Controller (Local Planner)

  • /local_plan (documentation) Plan generated by the Controller (Local Planner)

  • /map (documentation) Current map of the environment (OccupancyGrid) used by the navigation stack

  • /map_metadata (documentation) Metadata for the current map (OccupancyGrid) with its size, origin, etc.

  • /mobile_base_controller/cmd_vel_unstamped (documentation) Command Velocity topic directly connected to the wheels controller

  • /mobile_base_controller/odom (documentation) Wheel odometry

  • /particle_cloud (documentation) Array of poses representing the different robot pose hypotheses managed by amcl

  • /pause_navigation (documentation) Trigger used to disable or enable the command velocity topic used by the autonomous navigation

  • /phone_vel (documentation) Command Velocity topic used by mobile devices to move the robot

  • /plan (documentation) Plan generated by the Planner (Global Planner)

  • /pose (documentation) Pose of the base_frame in the configured map_frame, along with the covariance calculated from the scan match, as estimated by slam_toolbox_sync

  • /rviz_joy_vel (documentation) Command Velocity topic used by RViz to move the robot

  • /scan (documentation) Ready-to-use, filtered scan readings. If the robot has multiple lasers, this topic contains their merged readings.

  • /scan_front_raw (documentation) Only available on multi-laser robots. Raw scan message coming from the front laser driver

  • /scan_raw (documentation) On multi-laser robots, the merged scan message; otherwise, the raw scan message coming from the laser driver

  • /scan_rear_raw (documentation) Only available on multi-laser robots. Raw scan message coming from the rear laser driver

  • /slam_toolbox/graph_visualization (documentation) Visualization of the graph generated by slam_toolbox_sync

  • /slam_toolbox/scan_visualization (documentation) Visualization of the scan used by slam_toolbox_sync

  • /slam_toolbox/update (documentation) Visualizes the graph in RViz and allows the user to change the position of the graph’s nodes.

  • /speed_limit (documentation) Configure the Controller (Local Planner) to move at a reduced speed

  • /speed_map_mask/mask (documentation) Nav2 Costmap Filter containing Speed Areas

  • /speed_map_mask/mask_info (documentation) Nav2 Costmap Filter metadata for the Speed Area Filter

  • /target_detector/goal (documentation) Topic from which the Target Detector Server reads the Target goal

  • /target_detector_server/image (documentation) Topic to publish the detected image for debugging

  • /updated_goal (documentation) Updated position and orientation of a detected target
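
The navigation command topics use standard interface types: assuming the usual Nav2 setup, /goal_pose takes geometry_msgs/PoseStamped and the cmd_vel topics take geometry_msgs/Twist. As an example, the sketch below asks the navigation stack to drive the robot to a point one metre ahead of the map origin; the coordinates are illustrative only.

    import rclpy
    from rclpy.node import Node
    from geometry_msgs.msg import PoseStamped


    def main():
        rclpy.init()
        node = Node('send_goal_example')
        pub = node.create_publisher(PoseStamped, '/goal_pose', 10)

        goal = PoseStamped()
        goal.header.frame_id = 'map'               # navigation goals are expressed in the map frame
        goal.header.stamp = node.get_clock().now().to_msg()
        goal.pose.position.x = 1.0                 # illustrative coordinates
        goal.pose.position.y = 0.0
        goal.pose.orientation.w = 1.0              # no rotation

        rclpy.spin_once(node, timeout_sec=0.5)     # let the publisher connect
        pub.publish(goal)
        node.destroy_node()
        rclpy.shutdown()


    if __name__ == '__main__':
        main()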

💡 Knowledge and reasoning#

  • /kb/active_concepts (documentation) Lists the symbolic concepts that are currently active (an active concept is a concept of rdf:type ActiveConcept).

    See the KnowledgeCore API for details.

  • /kb/add_fact (documentation) Statements published to this topic are added to the knowledge base. The string must represent a <s, p, o> triple, with terms separated by a space.

    See the KnowledgeCore API for details. A publishing sketch is given at the end of this section.

  • /kb/events/* (documentation) Event notifications for previously subscribed events. See /kb/events for details.

  • /kb/remove_fact (documentation) Statements published to this topic are removed from the knowledge base. The string must represent a <s, p, o> triple, with terms separated by a space.

    See the KnowledgeCore API for details.
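
Since /kb/add_fact and /kb/remove_fact take plain <s, p, o> strings, adding a fact can be as simple as the sketch below. It assumes the topics use std_msgs/String (to be confirmed, for instance with ros2 topic info /kb/add_fact), and the triple itself is purely illustrative.

    import rclpy
    from rclpy.node import Node
    from std_msgs.msg import String  # assumed message type for /kb/add_fact


    def main():
        rclpy.init()
        node = Node('kb_example')
        pub = node.create_publisher(String, '/kb/add_fact', 10)

        # a <s, p, o> triple, with the three terms separated by spaces (illustrative)
        fact = String(data='myself sees person_anna')

        rclpy.spin_once(node, timeout_sec=0.5)  # let the publisher connect
        pub.publish(fact)
        node.destroy_node()
        rclpy.shutdown()


    if __name__ == '__main__':
        main()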

👥 Social perception#

  • /hri_face_detect/ready (documentation) A semaphore topic: True is published when the node is ready.

  • /hri_face_identification/ready (documentation) A semaphore topic: True is published when the node is ready.

  • /humans/bodies/*/cropped (documentation) The cropped image of the detected body.

  • /humans/bodies/*/joint_states (documentation) For each detected human body, the joint state of the person’s skeleton.

  • /humans/bodies/*/position (documentation) Filtered body position, representing the point between the hips of the tracked body. Only published when parameter use_depth = true.

  • /humans/bodies/*/roi (documentation) Region of the whole body in the source image

  • /humans/bodies/*/skeleton2d (documentation) The 2D points of the detected skeleton, in the image space

  • /humans/bodies/*/velocity (documentation) Filtered body velocity. Only published when parameter use_depth = true.

  • /humans/bodies/tracked (documentation) The list of bodies currently seen by the robot.

  • /humans/candidate_matches (documentation) Potential matches (eg, associations) between detected faces, bodies, voices and persons.

  • /humans/faces/*/aligned (documentation) Aligned (eg, the two eyes are horizontally aligned) version of the cropped face, with same resolution as /humans/faces/*/cropped.

  • /humans/faces/*/cropped (documentation) Cropped face image, if necessary scaled, centered and 0-padded to match a 128x128px image.

  • /humans/faces/*/landmarks (documentation) 2D facial landmarks extracted from the face

  • /humans/faces/*/roi (documentation) Region of the face in the source image

  • /humans/faces/tracked (documentation) The list of faces currently seen by the robot.

  • /humans/persons/*/alias (documentation) If this person has been merged with another, this topic contains the person ID of the new person

  • /humans/persons/*/anonymous (documentation) If true, the person is anonymous, ie has not yet been identified, and has not been issued a permanent ID. Latched topic.

  • /humans/persons/*/body_id (documentation) Body matched to that person (if any). Latched topic.

  • /humans/persons/*/engagement_status (documentation) Engagement status of the person with the robot.

  • /humans/persons/*/face_id (documentation) Face matched to that person (if any). Latched topic.

  • /humans/persons/*/location_confidence (documentation) Location confidence; 1 means person currently seen, 0 means person location unknown. See REP-155 Person Frame section for details.

  • /humans/persons/*/voice_id (documentation) Voice matched to that person (if any). Latched topic.

  • /humans/persons/known (documentation) The list of all the persons known by the robot, either currently seen or not.

  • /humans/persons/tracked (documentation) The list of persons currently seen by the robot. A minimal subscription sketch is given below.
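
The persons, faces and bodies topics follow the same tracked-list plus per-id sub-topic pattern as the voices. The sketch below, assuming the hri_msgs IdsList message from REP-155, simply logs which persons are currently tracked; per-person details such as engagement_status can then be read from the corresponding /humans/persons/<person_id>/... topics.

    import rclpy
    from rclpy.node import Node
    from hri_msgs.msg import IdsList  # assumed REP-155 message type


    class PersonTracker(Node):
        def __init__(self):
            super().__init__('person_tracker')
            self.create_subscription(IdsList, '/humans/persons/tracked',
                                     self._on_persons, 10)

        def _on_persons(self, msg):
            # msg.ids holds the ids of the persons currently seen by the robot
            self.get_logger().info('tracked persons: %s' % ', '.join(msg.ids))


    if __name__ == '__main__':
        rclpy.init()
        rclpy.spin(PersonTracker())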

🖥️ Touchscreen#