List of ROS Topics#

This page lists all the topics exposed by SDK 23.12.

Caution

Only the topics contributing to the public API of pal-sdk-23.12 are listed here.

Additional ROS topics might be present on the robot for internal purposes; they are, however, not part of the documented and supported robot API.

Alphabetical index#

By capability#

Developing applications#

  • /intents (documentation) An intent, encoding a desired activity to be scheduled by the robot (not to be confused with the chatbot intents). Read more about Intents.

Expressive interactions#

Robot hardware#

  • /audio/channel0 (documentation) Merged audio channel of the ReSpeaker’s 4 microphones

  • /audio/channel1 (documentation) Audio stream from the ReSpeaker’s first microphone.

  • /audio/channel2 (documentation) Audio stream from the ReSpeaker’s second microphone.

  • /audio/channel3 (documentation) Audio stream from the ReSpeaker’s third microphone.

  • /audio/channel4 (documentation) Audio stream from the ReSpeaker’s fourth microphone.

  • /audio/channel5 (documentation) Monitor audio stream from the ReSpeaker’s audio input (used for self-echo cancellation).

  • /audio/raw (documentation) Merged audio channel of the ReSpeaker’s 4 microphones (alias for /audio/channel0).

  • /audio/sound_direction (documentation) The estimated Direction of Arrival of the detected sound.

  • /audio/sound_localization (documentation) The estimated sound source location.

  • /audio/speech (documentation) Raw audio data of detected speech (published once the person has finished speaking).

  • /audio/status_led (documentation) The topic controlling the ReSpeaker microphone LEDs. Do not use this topic directly. Instead, use /pal_led_manager/do_effect.

  • /audio/voice_detected (documentation) Publishes a boolean indicating if a voice is currently detected (ie, whether someone is currently speaking)

  • /head_front_camera/color/camera_info (documentation) Camera calibration and metadata

  • /head_front_camera/color/image_raw/* (documentation) Color rectified image. RGB format

  • /head_front_camera/image_throttle/compressed (documentation) Compressed head image.

  • /joint_states (documentation) The current state of the robot’s joints (eg angular position of each joint).

  • /joint_torque_states (documentation) The current state of the robot’s joints (eg angular position of each joint), with the effort field reporting the measured torque instead of the motor current.

  • /mobile_base_controller/cmd_vel (documentation) Set the desired linear and angular velocity of the robot (in m/s and rad/s respectively).

  • /scan (documentation) Laser scan readings of ARI’s back LIDAR.

  • /torso_back_camera/fisheye1/camera_info (documentation) Camera calibration and metadata (fisheye1)

  • /torso_back_camera/fisheye1/image_raw/* (documentation) Fisheye image

  • /torso_back_camera/fisheye2/camera_info (documentation) Camera calibration and metadata (fisheye2)

  • /torso_back_camera/fisheye2/image_raw/* (documentation) Fisheye image

  • /torso_front_camera/aligned_depth_to_color/camera_info (documentation) Intrinsic parameters of the aligned depth-to-color image

  • /torso_front_camera/aligned_depth_to_color/image_raw/* (documentation) Aligned depth to color image

  • /torso_front_camera/color/camera_info (documentation) Camera calibration and metadata

  • /torso_front_camera/color/image_raw/* (documentation) Color rectified image. RGB format

  • /torso_front_camera/depth/camera_info (documentation) Camera calibration and metadata

  • /torso_front_camera/depth/color/points (documentation) Registered XYZRGB point cloud.

  • /torso_front_camera/depth/image_rect_raw/* (documentation) Rectified depth image

  • /torso_front_camera/infra1/camera_info (documentation) Camera calibration and metadata (infra1 and infra2)

  • /torso_front_camera/infra1/image_rect_raw/* (documentation) Raw uint16 IR image

  • /torso_front_camera/infra2/image_rect_raw/compressed (documentation)

  • /base_imu (documentation) Inertial data from the IMU.

  • /sonar_base (documentation) Readings of the sonar.

  • /xtion/depth_registered/camera_info (documentation) Intrinsic parameters of the depth image.

  • /xtion/depth_registered/image_raw (documentation) 32-bit depth image. Every pixel contains the depth of the corresponding point in meters.

  • /xtion/depth_registered/points (documentation) Point cloud computed from the depth image.

  • /xtion/rgb/camera_info (documentation) Intrinsic and distortion parameters of the RGB camera.

  • /xtion/rgb/image_raw (documentation) RGB image.

  • /xtion/rgb/image_rect_color (documentation) Rectified RGB image.

  • /wrist_ft (documentation) Force and torque vectors currently detected by the Force/Torque sensor.
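
For example, velocity commands can be streamed to /mobile_base_controller/cmd_vel with a minimal rospy sketch. This assumes a ROS 1 environment on the robot and the standard geometry_msgs/Twist message; the vel_for_radius helper is ours for illustration, not part of the SDK:

```python
import os

def vel_for_radius(v, r):
    """Linear/angular velocity pair (m/s, rad/s) that drives an arc of radius r metres."""
    return (v, v / r)

def main():
    import rospy
    from geometry_msgs.msg import Twist
    rospy.init_node("cmd_vel_demo")
    pub = rospy.Publisher("/mobile_base_controller/cmd_vel", Twist, queue_size=1)
    v, w = vel_for_radius(0.3, 1.0)  # 0.3 m/s along a 1 m radius arc
    msg = Twist()
    msg.linear.x = v
    msg.angular.z = w
    rate = rospy.Rate(10)  # velocity commands must be streamed continuously
    while not rospy.is_shutdown():
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__" and "ROS_MASTER_URI" in os.environ:
    main()  # only runs when a ROS master is configured
```

Note that most base controllers stop the robot if the command stream is interrupted, hence the 10 Hz publishing loop.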

Robot management#

Gestures and motions#

Navigation#

  • /current_zone_of_interest (documentation) Name of the zone of interest (zoi) where the robot is currently located, if any.

  • /map (documentation)

  • /mobile_base_controller/odom (documentation)

  • /move_base_simple/goal (documentation) Direct interface to request the robot to move to a given position

  • /move_base/current_goal (documentation)

  • /pause_navigation (documentation) Returns whether the navigation is currently paused (eg because the robot is charging)
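
A navigation goal can be sent directly by publishing a geometry_msgs/PoseStamped on /move_base_simple/goal. The sketch below assumes a ROS 1 environment with a map loaded and a goal expressed in the map frame; the yaw_to_quaternion helper is ours, not part of the SDK:

```python
import math
import os

def yaw_to_quaternion(yaw):
    """Quaternion (x, y, z, w) for a rotation of `yaw` radians about the z axis."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

def main():
    import rospy
    from geometry_msgs.msg import PoseStamped
    rospy.init_node("send_goal")
    pub = rospy.Publisher("/move_base_simple/goal", PoseStamped,
                          queue_size=1, latch=True)
    goal = PoseStamped()
    goal.header.frame_id = "map"
    goal.header.stamp = rospy.Time.now()
    goal.pose.position.x = 1.0
    goal.pose.position.y = 2.0
    (goal.pose.orientation.x,
     goal.pose.orientation.y,
     goal.pose.orientation.z,
     goal.pose.orientation.w) = yaw_to_quaternion(math.pi / 2)  # face +y
    pub.publish(goal)
    rospy.sleep(1.0)  # give the latched message time to go out

if __name__ == "__main__" and "ROS_MASTER_URI" in os.environ:
    main()  # only runs when a ROS master is configured
```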

Knowledge and reasoning#

  • /kb/add_fact (documentation) Statements published to this topic are added to the knowledge base. The string must represent a <s, p, o> triple, with terms separated by a space. See KnowledgeCore documentation for details.

  • /kb/events/* (documentation) Event notifications for previously subscribed events. See /kb/events.

  • /kb/remove_fact (documentation) Statements published to this topic are removed from the knowledge base. The string must represent a <s, p, o> triple, with terms separated by a space. See KnowledgeCore documentation for details.
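
Both topics take a plain string holding the space-separated <s, p, o> triple. A sketch, assuming a ROS 1 environment and std_msgs/String as the message type; the triple helper is ours for illustration:

```python
import os

def triple(s, p, o):
    """Format a <s, p, o> triple as the space-separated string the topic expects."""
    for term in (s, p, o):
        if " " in term:
            raise ValueError("triple terms must not contain spaces: %r" % term)
    return "%s %s %s" % (s, p, o)

def main():
    import rospy
    from std_msgs.msg import String
    rospy.init_node("kb_demo")
    add = rospy.Publisher("/kb/add_fact", String, queue_size=1, latch=True)
    add.publish(String(data=triple("ari", "rdf:type", "Robot")))
    rospy.sleep(1.0)  # give the latched message time to go out

if __name__ == "__main__" and "ROS_MASTER_URI" in os.environ:
    main()  # only runs when a ROS master is configured
```

Publishing the same string on /kb/remove_fact removes the statement again.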

Social perception#

  • /hri_face_detect/ready (documentation) A semaphore topic: True is published when the node is ready.

  • /hri_face_identification/ready (documentation) A semaphore topic: True is published when the node is ready.

  • /humans/bodies/*/cropped (documentation) The cropped image of the detected body.

  • /humans/bodies/*/joint_states (documentation) For each detected human body, the joint state of the person’s skeleton.

  • /humans/bodies/*/position (documentation)

  • /humans/bodies/*/roi (documentation) Region of the whole body in the source image

  • /humans/bodies/*/skeleton2d (documentation) The 2D points of the detected skeleton, in the image space

  • /humans/bodies/*/velocity (documentation)

  • /humans/bodies/tracked (documentation) The list of bodies currently seen by the robot.

  • /humans/candidate_matches (documentation) Potential matches (eg, associations) between detected faces, bodies, voices and persons.

  • /humans/faces/*/aligned (documentation) Aligned (eg, the two eyes are horizontally aligned) version of the cropped face, with same resolution as /humans/faces/*/cropped.

  • /humans/faces/*/cropped (documentation) Cropped face image, if necessary scaled, centered and 0-padded to match the /humans/faces/width and /humans/faces/height ROS parameters.

  • /humans/faces/*/landmarks (documentation) 2D facial landmarks extracted from the face

  • /humans/faces/*/roi (documentation) Region of the face in the source image

  • /humans/faces/tracked (documentation) The list of faces currently seen by the robot.

  • /humans/persons/*/alias (documentation) If this person has been merged with another, this topic contains the person ID of the new person

  • /humans/persons/*/anonymous (documentation) If true, the person is anonymous, ie has not yet been identified, and has not been issued a permanent ID. Latched topic.

  • /humans/persons/*/body_id (documentation) Body matched to that person (if any). Latched topic.

  • /humans/persons/*/engagement_status (documentation) Engagement status of the person with the robot.

  • /humans/persons/*/face_id (documentation) Face matched to that person (if any). Latched topic.

  • /humans/persons/*/location_confidence (documentation) Location confidence; 1 means person currently seen, 0 means person location unknown. See REP-155 Person Frame section for details.

  • /humans/persons/*/voice_id (documentation) Voice matched to that person (if any). Latched topic.

  • /humans/persons/known (documentation) The list of all the persons known by the robot, either currently seen or not.

  • /humans/persons/tracked (documentation) The list of persons currently seen by the robot.
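
To react when a new person appears, subscribe to /humans/persons/tracked. The sketch below assumes a ROS 1 environment with the social perception pipeline running, and that the topic carries an hri_msgs/IdsList (a string array field `ids`), following the ROS4HRI convention; check the linked documentation for the exact type. The newly_seen helper is ours:

```python
import os

def newly_seen(previous_ids, current_ids):
    """IDs present in the current list but not in the previous one."""
    return set(current_ids) - set(previous_ids)

def main():
    import rospy
    from hri_msgs.msg import IdsList  # assumed message type, see topic documentation
    seen = set()

    def on_tracked(msg):
        for pid in newly_seen(seen, msg.ids):
            rospy.loginfo("new person tracked: %s", pid)
        seen.clear()
        seen.update(msg.ids)

    rospy.init_node("person_watcher")
    rospy.Subscriber("/humans/persons/tracked", IdsList, on_tracked)
    rospy.spin()

if __name__ == "__main__" and "ROS_MASTER_URI" in os.environ:
    main()  # only runs when a ROS master is configured
```

The per-person subtopics listed above (face_id, engagement_status, …) can then be subscribed to under /humans/persons/<id>/.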

Speech and language processing#

  • /active_listening (documentation) Whether or not recognized speech should be further processed (eg by the chatbot). See Dialogue management for details.

  • /chatbot/trigger (documentation) Publish chatbot intents you want to trigger here. This is especially useful to implement pro-active behaviour, where the robot itself starts the conversation.

    See Dialogue management for details.

  • /humans/voices/*/audio (documentation) The audio stream of the voice.

  • /humans/voices/*/is_speaking (documentation) Whether verbal content is currently recognised in this voice’s audio stream.

  • /humans/voices/*/speech (documentation) The recognised text, as spoken by this voice.

  • /humans/voices/tracked (documentation) The list of voices currently detected by the robot.

  • /web_subtitles (documentation)
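
A pro-active behaviour could trigger a chatbot intent like this. The exact message type of /chatbot/trigger is not shown on this page; this sketch assumes a plain std_msgs/String carrying the intent name, and the intent_name helper (and the "greet_visitor" intent) are hypothetical — check the topic's linked documentation before relying on this:

```python
import os

def intent_name(raw):
    """Hypothetical helper: normalise a chatbot intent name (not part of the SDK)."""
    name = raw.strip()
    if not name:
        raise ValueError("empty intent name")
    return name

def main():
    import rospy
    from std_msgs.msg import String  # assumed message type, see topic documentation
    rospy.init_node("proactive_demo")
    pub = rospy.Publisher("/chatbot/trigger", String, queue_size=1, latch=True)
    pub.publish(String(data=intent_name("greet_visitor")))
    rospy.sleep(1.0)  # give the latched message time to go out

if __name__ == "__main__" and "ROS_MASTER_URI" in os.environ:
    main()  # only runs when a ROS master is configured
```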

Touchscreen#