List of ROS Topics#
This page lists all the topics exposed by SDK 23.12.
Caution
Only the topics contributing to the public API of pal-sdk-23.12 are listed here.
Additional ROS topics may be present on the robot for internal purposes. They are, however, not part of the documented and supported robot API.
Alphabetic index#
/active_listening
/arm_left_controller/command
/arm_left_controller/safe_command
/arm_right_controller/command
/arm_right_controller/safe_command
/audio/channel0
/audio/channel1
/audio/channel2
/audio/channel3
/audio/channel4
/audio/channel5
/audio/raw
/audio/sound_direction
/audio/sound_localization
/audio/speech
/audio/status_led
/audio/voice_detected
/base_imu
/chatbot/trigger
/current_zone_of_interest
/diagnostics
/diagnostics_agg
/hand_left_controller/command
/hand_right_controller/command
/head_controller/command
/head_front_camera/color/camera_info
/head_front_camera/color/image_raw/*
/head_front_camera/image_throttle/compressed
/hri_face_detect/ready
/hri_face_identification/ready
/humans/bodies/*/cropped
/humans/bodies/*/joint_states
/humans/bodies/*/position
/humans/bodies/*/roi
/humans/bodies/*/skeleton2d
/humans/bodies/tracked
/humans/bodies/*/velocity
/humans/candidate_matches
/humans/faces/*/aligned
/humans/faces/*/cropped
/humans/faces/*/landmarks
/humans/faces/*/roi
/humans/faces/tracked
/humans/persons/*/alias
/humans/persons/*/anonymous
/humans/persons/*/body_id
/humans/persons/*/engagement_status
/humans/persons/*/face_id
/humans/persons/known
/humans/persons/*/location_confidence
/humans/persons/tracked
/humans/persons/*/voice_id
/humans/voices/*/audio
/humans/voices/*/is_speaking
/humans/voices/*/speech
/humans/voices/tracked
/intents
/interaction_logger
/interaction_profile_manager/parameter_updates
/joint_states
/joint_torque_states
/kb/add_fact
/kb/events/*
/kb/remove_fact
/left_eye
/look_at
/look_at_with_style
/map
/mobile_base_controller/cmd_vel
/mobile_base_controller/odom
/move_base/current_goal
/move_base_simple/goal
/pause_navigation
/power/battery_level
/power/is_charging
/power/is_docked
/power/is_emergency
/power/is_plugged
/power_status
/right_eye
/robot_face
/robot_face/background_image
/robot_face/expression
/robot_face/look_at
/scan
/sonar_base
/torso_back_camera/fisheye1/camera_info
/torso_back_camera/fisheye1/image_raw/*
/torso_back_camera/fisheye2/camera_info
/torso_back_camera/fisheye2/image_raw/*
/torso_front_camera/aligned_depth_to_color/camera_info
/torso_front_camera/aligned_depth_to_color/image_raw/*
/torso_front_camera/color/camera_info
/torso_front_camera/color/image_raw/*
/torso_front_camera/depth/camera_info
/torso_front_camera/depth/color/points
/torso_front_camera/depth/image_rect_raw/*
/torso_front_camera/infra1/camera_info
/torso_front_camera/infra1/image_rect_raw/*
/torso_front_camera/infra2/image_rect_raw/compressed
/touch_web_state
/user_input
/web/go_to
/web_subtitles
/wrist_ft
/xtion/depth_registered/camera_info
/xtion/depth_registered/image_raw
/xtion/depth_registered/points
/xtion/rgb/camera_info
/xtion/rgb/image_raw
/xtion/rgb/image_rect_color
By capability#
Developing applications#
/intents (documentation): An intent, encoding a desired activity to be scheduled by the robot (not to be confused with chatbot intents). Read more about Intents.
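The snippet below is a minimal sketch of triggering an activity by publishing to /intents. It assumes a ROS 1 (rospy) setup and the hri_actions_msgs/Intent message type; the intent constant and the JSON payload are illustrative, so check the topic's linked documentation for the exact fields.

    import rospy
    from hri_actions_msgs.msg import Intent  # assumed message type

    rospy.init_node("intent_publisher")
    pub = rospy.Publisher("/intents", Intent, queue_size=1)
    rospy.sleep(0.5)  # give subscribers time to connect

    msg = Intent()
    msg.intent = Intent.ENGAGE_WITH            # assumed predefined intent constant
    msg.data = '{"recipient": "person_a6bc"}'  # hypothetical JSON payload
    pub.publish(msg)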
Expressive interactions#
/left_eye (documentation): The image to be displayed on the robot's left eye.
/look_at (documentation): Set a target for the robot to look at. Uses both the eyes and the head position (see the sketch after this list).
/look_at_with_style (documentation): Set a target for the robot to look at, while letting you configure how the gaze motion is performed.
/right_eye (documentation): The image to be displayed on the robot's right eye.
/robot_face (documentation): The left and right images to be displayed on the robot's eyes. Published by default by the expressive_eyes node. If you want to publish your own face on this topic, you might want to first stop the expressive_eyes node.
/robot_face/expression (documentation): Set the expression of ARI's eyes. See Robot face and expressions for details.
/robot_face/background_image (documentation): Displays a ROS video stream as the background of the robot's face/eyes. See Background and overlays for details.
/robot_face/look_at (documentation): Sets the direction of the eyes. To control the gaze direction, use /look_at instead. See Controlling the attention and gaze of the robot for details.
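As an illustration of the gaze API, here is a minimal rospy sketch that asks the robot to look at a point. It assumes ROS 1 and that /look_at expects a geometry_msgs/PointStamped; the reference frame and coordinates are arbitrary examples.

    import rospy
    from geometry_msgs.msg import PointStamped  # assumed message type for /look_at

    rospy.init_node("look_at_demo")
    pub = rospy.Publisher("/look_at", PointStamped, queue_size=1)
    rospy.sleep(0.5)  # give subscribers time to connect

    target = PointStamped()
    target.header.stamp = rospy.Time.now()
    target.header.frame_id = "base_link"  # assumed reference frame
    target.point.x = 1.0  # 1 m in front of the robot
    target.point.y = 0.5  # slightly to the left
    target.point.z = 1.4  # roughly head height
    pub.publish(target)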
Robot hardware#
/audio/channel0 (documentation): Merged audio channel of the ReSpeaker's 4 microphones.
/audio/channel1 (documentation): Audio stream from the ReSpeaker's first microphone.
/audio/channel2 (documentation): Audio stream from the ReSpeaker's second microphone.
/audio/channel3 (documentation): Audio stream from the ReSpeaker's third microphone.
/audio/channel4 (documentation): Audio stream from the ReSpeaker's fourth microphone.
/audio/channel5 (documentation): Monitor audio stream from the ReSpeaker's audio input (used for self-echo cancellation).
/audio/raw (documentation): Merged audio channel of the ReSpeaker's 4 microphones (alias for /audio/channel0).
/audio/sound_direction (documentation): The estimated direction of arrival of the detected sound.
/audio/sound_localization (documentation): The estimated sound source location.
/audio/speech (documentation): Raw audio data of detected speech (published once the person has finished speaking).
/audio/status_led (documentation): Controls the ReSpeaker microphone LEDs. Do not use this topic directly; use /pal_led_manager/do_effect instead.
/audio/voice_detected (documentation): Publishes a boolean indicating whether a voice is currently detected (i.e., whether someone is currently speaking).
/head_front_camera/color/camera_info (documentation): Camera calibration and metadata.
/head_front_camera/color/image_raw/* (documentation): Color rectified image, RGB format.
/head_front_camera/image_throttle/compressed (documentation): Compressed head image.
/joint_states (documentation): The current state of the robot's joints (e.g., the angular position of each joint).
/joint_torque_states (documentation): The current state of the robot's joints, with the effort field reporting the measured torque instead of the current.
/mobile_base_controller/cmd_vel (documentation): Set the desired linear (in m/s) and angular (in rad/s) velocity of the robot (see the sketch after this list).
/scan (documentation): Laser scan readings of ARI's back LIDAR.
/torso_back_camera/fisheye1/camera_info (documentation): Camera calibration and metadata (fisheye1).
/torso_back_camera/fisheye1/image_raw/* (documentation): Fisheye image (fisheye1).
/torso_back_camera/fisheye2/camera_info (documentation): Camera calibration and metadata (fisheye2).
/torso_back_camera/fisheye2/image_raw/* (documentation): Fisheye image (fisheye2).
/torso_front_camera/aligned_depth_to_color/camera_info (documentation): Intrinsic parameters of the aligned depth-to-color image.
/torso_front_camera/aligned_depth_to_color/image_raw/* (documentation): Aligned depth-to-color image.
/torso_front_camera/color/camera_info (documentation): Camera calibration and metadata.
/torso_front_camera/color/image_raw/* (documentation): Color rectified image, RGB format.
/torso_front_camera/depth/camera_info (documentation): Camera calibration and metadata.
/torso_front_camera/depth/color/points (documentation): Registered XYZRGB point cloud.
/torso_front_camera/depth/image_rect_raw/* (documentation): Rectified depth image.
/torso_front_camera/infra1/camera_info (documentation): Camera calibration and metadata (infra1 and infra2).
/torso_front_camera/infra1/image_rect_raw/* (documentation): Raw uint16 IR image.
/torso_front_camera/infra2/image_rect_raw/compressed (documentation)
/base_imu (documentation): Inertial data from the IMU.
/sonar_base (documentation): Readings of the sonar.
/xtion/depth_registered/camera_info (documentation): Intrinsic parameters of the depth image.
/xtion/depth_registered/image_raw (documentation): 32-bit depth image. Every pixel contains the depth of the corresponding point in meters.
/xtion/depth_registered/points (documentation): Point cloud computed from the depth image.
/xtion/rgb/camera_info (documentation): Intrinsic and distortion parameters of the RGB camera.
/xtion/rgb/image_raw (documentation): RGB image.
/xtion/rgb/image_rect_color (documentation): Rectified RGB image.
/wrist_ft (documentation): Force and torque vectors currently measured by the force/torque sensor.
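To illustrate the base velocity interface, here is a minimal rospy sketch that drives the robot along a gentle arc. It assumes ROS 1 and the standard geometry_msgs/Twist type; velocity commands typically need to be re-published continuously, or the controller stops the base.

    import rospy
    from geometry_msgs.msg import Twist

    rospy.init_node("base_motion_demo")
    pub = rospy.Publisher("/mobile_base_controller/cmd_vel", Twist, queue_size=1)

    cmd = Twist()
    cmd.linear.x = 0.2   # forward speed, m/s
    cmd.angular.z = 0.3  # rotation speed, rad/s

    rate = rospy.Rate(10)  # re-publish at 10 Hz so the command does not time out
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()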
Robot management#
/diagnostics (documentation)
/diagnostics_agg (documentation): see the sketch after this list.
/interaction_logger (documentation): Logs arbitrary strings to a CSV file. See How-to: Log and retrieve data from the robot for details.
/interaction_profile_manager/parameter_updates (documentation)
/power_status (documentation)
/power/battery_level (documentation)
/power/is_charging (documentation)
/power/is_docked (documentation)
/power/is_emergency (documentation)
/power/is_plugged (documentation)
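The sketch below monitors the aggregated diagnostics and logs anything in error. It assumes ROS 1 and the standard diagnostic_msgs/DiagnosticArray type used by the ROS diagnostics stack.

    import rospy
    from diagnostic_msgs.msg import DiagnosticArray

    def on_diagnostics(msg):
        for status in msg.status:
            # status.level: 0 = OK, 1 = WARN, 2 = ERROR, 3 = STALE
            if status.level >= 2:
                rospy.logwarn("%s: %s", status.name, status.message)

    rospy.init_node("diagnostics_monitor")
    rospy.Subscriber("/diagnostics_agg", DiagnosticArray, on_diagnostics)
    rospy.spin()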
Gestures and motions#
/arm_left_controller/command (documentation)
/arm_left_controller/safe_command (documentation)
/arm_right_controller/command (documentation)
/arm_right_controller/safe_command (documentation)
/hand_left_controller/command (documentation)
/hand_right_controller/command (documentation)
/head_controller/command (documentation): see the sketch after this list.
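These controller command topics follow the usual ros_control convention, so a minimal rospy sketch for moving the head could look like the following. It assumes ROS 1, the trajectory_msgs/JointTrajectory type, and the joint names head_1_joint/head_2_joint, which you should verify against /joint_states.

    import rospy
    from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint

    rospy.init_node("head_motion_demo")
    pub = rospy.Publisher("/head_controller/command", JointTrajectory, queue_size=1)
    rospy.sleep(0.5)  # give the controller time to connect

    traj = JointTrajectory()
    traj.joint_names = ["head_1_joint", "head_2_joint"]  # assumed joint names

    point = JointTrajectoryPoint()
    point.positions = [0.3, -0.2]                # pan and tilt targets, in radians
    point.time_from_start = rospy.Duration(1.5)  # reach the target in 1.5 s
    traj.points = [point]

    pub.publish(traj)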
Knowledge and reasoning#
/kb/add_fact (documentation): Statements published to this topic are added to the knowledge base. The string must represent a <s, p, o> triple, with terms separated by a space (see the sketch after this list). See the KnowledgeCore documentation for details.
/kb/events/* (documentation): Event notifications for previously subscribed events. See /kb/events.
/kb/remove_fact (documentation): Statements published to this topic are removed from the knowledge base. The string must represent a <s, p, o> triple, with terms separated by a space. See the KnowledgeCore documentation for details.
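Since facts are plain <s, p, o> strings, adding one is a one-liner. The sketch below assumes ROS 1 and that these topics carry std_msgs/String messages; the triple itself is an arbitrary example.

    import rospy
    from std_msgs.msg import String  # assumed message type for /kb/add_fact

    rospy.init_node("kb_demo")
    add_fact = rospy.Publisher("/kb/add_fact", String, queue_size=1)
    rospy.sleep(0.5)  # give the knowledge base time to connect

    # A <s, p, o> triple, with terms separated by spaces
    add_fact.publish(String(data="ari rdf:type Robot"))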
Speech and language processing#
/active_listening (documentation): Whether or not recognized speech should be further processed (e.g., by the chatbot). See Dialogue management for details.
/chatbot/trigger (documentation): Publish here the chatbot intents you want to trigger. This is especially useful to implement a proactive behaviour, where the robot itself starts the conversation. See Dialogue management for details.
/humans/voices/*/audio (documentation): The audio stream of the voice.
/humans/voices/*/is_speaking (documentation): Whether verbal content is currently recognised in this voice's audio stream.
/humans/voices/*/speech (documentation): The recognised text, as spoken by this voice (see the sketch after this list).
/humans/voices/tracked (documentation): The list of voices currently detected by the robot.
/web_subtitles (documentation)
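Here is a minimal sketch of consuming recognised speech, assuming ROS 1 and the ROS4HRI hri_msgs types (IdsList for the tracked list, LiveSpeech for the per-voice transcript); verify both against the linked topic documentation.

    import rospy
    from hri_msgs.msg import IdsList, LiveSpeech  # assumed ROS4HRI message types

    def on_speech(msg):
        if msg.final:  # the final (non-incremental) transcript
            rospy.loginfo("Heard: %s", msg.final)

    seen = set()

    def on_voices(msg):
        for voice_id in msg.ids:
            if voice_id not in seen:  # subscribe once per newly tracked voice
                seen.add(voice_id)
                rospy.Subscriber("/humans/voices/%s/speech" % voice_id,
                                 LiveSpeech, on_speech)

    rospy.init_node("speech_listener")
    rospy.Subscriber("/humans/voices/tracked", IdsList, on_voices)
    rospy.spin()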
Touchscreen#
/touch_web_state (documentation)
/user_input (documentation)
/web/go_to (documentation): Sets the webpage to be displayed on the touchscreen (see the sketch after this list).
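As a sketch of switching the displayed page, assuming ROS 1 and a pal_web_msgs/WebGoTo message; both the package and its fields are assumptions here, so check the linked topic documentation before relying on them.

    import rospy
    from pal_web_msgs.msg import WebGoTo  # assumed message type

    rospy.init_node("touchscreen_demo")
    pub = rospy.Publisher("/web/go_to", WebGoTo, queue_size=1)
    rospy.sleep(0.5)  # give the web server time to connect

    page = WebGoTo()
    page.type = WebGoTo.TOUCH_PAGE  # assumed constant selecting a local touchscreen page
    page.value = "index"            # hypothetical page name
    pub.publish(page)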