PAL OS 25.01 Highlights
Communication
Automatic Speech Recognition
Learn more!
TIAGo Pro non-verbal interaction
When the robot is not speaking, it can still communicate using sounds or non-verbal utterances (think of the “beeps” of R2-D2). Recent research on the topic: https://liu.diva-portal.org/smash/record.jsf?pid=diva2%3A1754957&dswid=9259
new syntax for TTS annotation
We have introduced a new, simpler - yet more powerful - syntax to annotate speech with actions. This makes it easy to start, stop and synchronize speech with gestures, facial expressions, LED effects, etc.
Learn more!
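As a purely illustrative sketch, here is how annotated speech could be sent from a ROS 2 Python node. The /tts_input topic, the std_msgs/String interface and the <gesture(wave)> marker are all hypothetical placeholders, not the actual PAL TTS interface or annotation syntax; the linked page documents the real ones.

    # Hypothetical sketch: publishing annotated speech as a plain string.
    # The topic name, message type and <...> markers below are placeholders.
    import rclpy
    from rclpy.node import Node
    from std_msgs.msg import String

    rclpy.init()
    node = Node("tts_annotation_demo")
    pub = node.create_publisher(String, "/tts_input", 10)  # placeholder topic

    msg = String()
    msg.data = "Hello! <gesture(wave)> Nice to meet you."  # hypothetical markup
    pub.publish(msg)

    rclpy.spin_once(node, timeout_sec=0.5)  # let the message go out
    node.destroy_node()
    rclpy.shutdown()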
Internationalisation support
PAL robots support multiple languages. You can switch from one language to another at any time (note that the languages actually available on your robot depend on your purchase options).
Learn more!
automatic speech recognition (single voice)
Learn more!
voice synthesis (Text-to-speech, TTS)
Soft Wake-up word detection
Wake-up keywords can (optionally) be configured to start or stop speech processing on the robot.
Learn more!
“Chit chat” chatbot available in Catalan
“Chit chat” chatbot available in French
“Chit chat” chatbot available in Italian
“Chit chat” chatbot available in Spanish
“Chit chat” chatbot available in English (US + UK)
Developer experience
rpk: robot app template generator
Use the command-line rpk tool to quickly create a range of apps for your robot. It includes templates for custom skills, tasks and mission controllers.
Learn more!
PAL Interaction simulator
PAL Command-line interface
A new command-line tool, pre-installed on PAL robots, to easily list and configure what services run on the robot.
Learn more!
Gazebo simulation
A complete simulated version of the robot is available in Gazebo Classic. All capabilities of the real robot are available in simulation, improving the developer experience and simplifying deployment from a simulated environment to the real world.
RViz sensor visualization
Visualize the robot model and its sensor data in RViz2 to simplify the development of robotic applications.
Application example: Self Presentation
An application where a presentation is given by the robot itself, explaining its capabilities and use cases.
PAL Deploy
A utility that enables developers to quickly deploy applications or bug fixes on the robot.
Docker Development images
A personalized Docker image, delivered together with the robot, that contains the ROS 2 development environment required for developing applications for the robot. It also contains the simulation of the robot with its exact configuration.
WebGUI
A Web User Interface that is accessible through any browser on a laptop, tablet or phone. The WebGUI shows information about the operating status of the robot.
Module manager
The Module Manager is the central program on the robot that manages the numerous applications of the robot. It provides an interface to start, stop, check the status and see the logs of any application running on the robot. The Module Manager also configures which applications are run on start-up on the robot.
PAL Documentation centre
The PAL Documentation Centre is the central place where all information about the robot is available.
Full ROS 2 documentation
Updated joystick controls
Introduction of a dead-man button (“RB”) on the joystick. For safety reasons, the user has to keep this button pressed to move the robot.
Learn more!
Cyclone DDS integration
Improved communication protocol between the robot and the development environment.
Hardware
Access to raw and pre-processed audio channels of the microphone array
The robot now exposes the four individual audio channels of its microphone array, as well as a pre-processed merged channel, ideal for feeding into an audio pipeline (e.g. for speech recognition).
Learn more!
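A minimal sketch of tapping the merged channel from Python. The /audio/merged topic name is a placeholder, and the audio_common_msgs/AudioData message type is an assumption; check the audio documentation for the actual names on your robot.

    # Sketch: subscribing to the pre-processed microphone channel.
    # ASSUMPTIONS: /audio/merged is a placeholder topic name, and the
    # message type audio_common_msgs/AudioData (uint8[] data) is assumed.
    import rclpy
    from rclpy.node import Node
    from audio_common_msgs.msg import AudioData

    class AudioTap(Node):
        def __init__(self):
            super().__init__("audio_tap")
            self.create_subscription(AudioData, "/audio/merged", self.on_audio, 10)

        def on_audio(self, msg):
            # msg.data is a raw byte buffer: hand it to your ASR pipeline here
            self.get_logger().info(f"received {len(msg.data)} audio bytes")

    rclpy.init()
    rclpy.spin(AudioTap())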
Speaker and microphone
Support for the robot's speakers and microphones in ROS 2.
Learn more!
2D Laser Sensors: Sick 561
Integration of the Sick 561 LiDAR in ROS 2
2D Laser Sensors: Sick 571
Integration of the Sick 571 LiDAR in ROS 2
2D Laser Sensors: Hokuyo
Integration of the Hokuyo LiDAR in ROS 2
RGBD Cameras: RealSense D435
Integration of the RealSense D435 camera in ROS 2
RGBD Cameras: RealSense D435i
Integration of the RealSense D435i camera in ROS 2
RGBD Cameras: Orbbec Astra
Integration of the Orbbec Astra camera in ROS 2
RGBD Cameras: Orbbec Astra S
Integration of the Orbbec Astra S camera in ROS 2
PAL Gripper end effector
Integration of the PAL Gripper in ROS 2
Robotiq 2F-85/140 end effector
Integration of the Robotiq 2F-85/140 in ROS 2
PAL Hey-5 end effector
Integration of the PAL Hey-5 in ROS 2
PAL Pro gripper end effector
Integration of the PAL Pro Gripper in ROS 2
Interactions
Rich set of styles for more expressive gazing
Automatic engagement detection
Live subtitles on robots’ screen
Display live captions of what the robot hears and says on its screen (currently only available on ARI)
Faster and smoother eye animations on ARI and TIAGo Pro, thanks to OpenGL acceleration
The eye and face animations of the robot are now accelerated with OpenGL, providing a small performance gain.
Learn more!
Expressive procedural eyes
Directly display one of the 26 available expressions, or set the expression using the (valence, arousal) emotion model.
Learn more!
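As an illustration, a small Python sketch that sets an expression, assuming the ROS4HRI hri_msgs/Expression message and a /robot_face/expression topic (the topic name and exact field set are assumptions; verify them on your robot):

    # Sketch: setting the robot's facial expression.
    # ASSUMPTIONS: the topic name and the exact Expression fields may differ.
    import rclpy
    from rclpy.node import Node
    from hri_msgs.msg import Expression

    rclpy.init()
    node = Node("expression_demo")
    pub = node.create_publisher(Expression, "/robot_face/expression", 10)

    msg = Expression()
    msg.expression = "happy"   # one of the named expressions
    # ...or drive the face continuously with the (valence, arousal) model:
    msg.valence = 0.8          # pleasant
    msg.arousal = 0.5          # moderately excited
    pub.publish(msg)

    rclpy.spin_once(node, timeout_sec=0.5)
    rclpy.shutdown()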
Motions/manipulation
Gaze manager for neck-head-eye coordination
The robot can now smoothly combine quick eye motions with head movement, to create a natural-looking gazing behaviour.
Learn more!
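For illustration, asking the robot to gaze at a 3D point could look like the following sketch; the /look_at topic and its geometry_msgs/PointStamped interface are assumptions, see the linked page for the actual gaze API:

    # Sketch: asking the gaze manager to look at a 3D point.
    # ASSUMPTION: the /look_at topic taking geometry_msgs/PointStamped
    # is a placeholder interface.
    import rclpy
    from rclpy.node import Node
    from geometry_msgs.msg import PointStamped

    rclpy.init()
    node = Node("gaze_demo")
    pub = node.create_publisher(PointStamped, "/look_at", 10)

    target = PointStamped()
    target.header.frame_id = "base_link"  # point expressed in the robot's base frame
    target.point.x, target.point.y, target.point.z = 1.5, 0.5, 1.6  # head-height point
    pub.publish(target)

    rclpy.spin_once(node, timeout_sec=0.5)
    rclpy.shutdown()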
ROS 2 Control integration
The control stack of the robots is built on top of the ROS 2 control framework. The framework leverages the real-time kernel featured on the robot, both for the hardware components that handle the communication with the robot's actuators, and for the ROS 2 controllers that either compute commands for the joints (such as joint_trajectory_controller) or broadcast information (such as joint_state_broadcaster). This makes it simple for users to design robot-agnostic controllers that can be easily deployed and tested on the robot.
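Since the controllers follow the standard ros2_control interfaces, a trajectory can be sent to a joint_trajectory_controller with the standard control_msgs action. In this sketch, the controller name (arm_controller) and the joint names are placeholders for whatever is configured on your robot:

    # Sketch: commanding a joint_trajectory_controller via its standard
    # FollowJointTrajectory action. Controller and joint names are placeholders.
    import rclpy
    from rclpy.action import ActionClient
    from rclpy.node import Node
    from builtin_interfaces.msg import Duration
    from control_msgs.action import FollowJointTrajectory
    from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint

    rclpy.init()
    node = Node("jt_demo")
    client = ActionClient(node, FollowJointTrajectory,
                          "/arm_controller/follow_joint_trajectory")
    client.wait_for_server()

    goal = FollowJointTrajectory.Goal()
    goal.trajectory = JointTrajectory()
    goal.trajectory.joint_names = ["arm_1_joint", "arm_2_joint"]  # placeholders
    point = JointTrajectoryPoint()
    point.positions = [0.5, -0.3]            # target joint angles (rad)
    point.time_from_start = Duration(sec=2)  # reach them in 2 s
    goal.trajectory.points = [point]

    future = client.send_goal_async(goal)
    rclpy.spin_until_future_complete(node, future)
    rclpy.shutdown()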
MoveIt2 Integration
The robot has full MoveIt2 integration. MoveIt2 is a robotics manipulation platform that provides a simple and standardized way to interact with the robot's planning, collision-avoidance and manipulation capabilities.
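A minimal sketch with the MoveIt 2 Python API (moveit_py); the planning-group name ("arm") and the named configuration ("home") are assumptions, and moveit_py must be launched with the robot's MoveIt configuration loaded:

    # Sketch: plan and execute a motion with moveit_py.
    # ASSUMPTIONS: group and configuration names are placeholders; the
    # robot's MoveIt parameters must already be loaded (e.g. via launch).
    from moveit.planning import MoveItPy

    moveit = MoveItPy(node_name="moveit_demo")
    arm = moveit.get_planning_component("arm")

    arm.set_start_state_to_current_state()
    arm.set_goal_state(configuration_name="home")  # a named joint configuration

    plan_result = arm.plan()
    if plan_result:
        moveit.execute(plan_result.trajectory, controllers=[])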
PlayMotion2
PlayMotion2 provides an easy interface to create and execute complex pre-defined motions in a safe way.
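For example, a pre-defined motion can be triggered through the /play_motion2 action; the motion name used here ("wave") is a placeholder for one of the motions actually installed on your robot:

    # Sketch: triggering a pre-defined motion with play_motion2.
    # ASSUMPTION: "wave" is a placeholder motion name.
    import rclpy
    from rclpy.action import ActionClient
    from rclpy.node import Node
    from play_motion2_msgs.action import PlayMotion2

    rclpy.init()
    node = Node("play_motion_demo")
    client = ActionClient(node, PlayMotion2, "/play_motion2")
    client.wait_for_server()

    goal = PlayMotion2.Goal()
    goal.motion_name = "wave"   # placeholder motion
    goal.skip_planning = False  # let play_motion2 plan a safe approach

    future = client.send_goal_async(goal)
    rclpy.spin_until_future_complete(node, future)
    rclpy.shutdown()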
Gravity compensation
In Gravity Compensation mode the arms are controlled in effort to counteract the effects of gravity, while still allowing the arms to be moved by external forces, such as a human guiding them.
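A sketch of how such a mode switch could be done with the standard controller_manager services; the controller names here are placeholders (query /controller_manager/list_controllers for the real ones), and the actual way to enable gravity compensation on your robot may differ:

    # Sketch: switching the arm to a gravity-compensation controller
    # through the standard controller_manager services.
    # ASSUMPTION: both controller names below are placeholders.
    import rclpy
    from rclpy.node import Node
    from controller_manager_msgs.srv import SwitchController

    rclpy.init()
    node = Node("gravity_comp_demo")
    client = node.create_client(SwitchController,
                                "/controller_manager/switch_controller")
    client.wait_for_service()

    req = SwitchController.Request()
    req.activate_controllers = ["gravity_compensation_controller"]  # placeholder
    req.deactivate_controllers = ["arm_controller"]                 # placeholder
    req.strictness = SwitchController.Request.BEST_EFFORT

    future = client.call_async(req)
    rclpy.spin_until_future_complete(node, future)
    rclpy.shutdown()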
Perception
gesture recognition: basic hand gestures
gaze tracking (“who is looking at what?”)
Live display of human radar in rqt
A live, top-down “radar” view of the humans detected around the robot, displayed in rqt.
Multi-body 2D/3D skeleton tracking
Body/face matcher and probabilistic fusion
Automatically associate faces with detected body “skeletons”, to provide a more complete human model.
Learn more!
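A minimal sketch of consuming the result from Python, assuming the ROS4HRI (REP-155) topic layout, where each tracked person exposes the face and body ids that the matcher has associated with it; verify the topic names on your robot:

    # Sketch: following tracked persons and their matched face/body ids.
    # ASSUMPTION: REP-155 topic layout (/humans/persons/tracked plus
    # per-person /humans/persons/<id>/face_id and .../body_id sub-topics).
    import rclpy
    from rclpy.node import Node
    from hri_msgs.msg import IdsList

    class PersonWatcher(Node):
        def __init__(self):
            super().__init__("person_watcher")
            self.create_subscription(IdsList, "/humans/persons/tracked",
                                     self.on_persons, 10)

        def on_persons(self, msg):
            # each id also exposes face_id/body_id sub-topics, filled in
            # by the body/face matcher
            self.get_logger().info(f"tracked persons: {msg.ids}")

    rclpy.init()
    rclpy.spin(PersonWatcher())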
Adaptive far faces detection
We have introduced a new, fast and adaptive face detector. It can detect and track faces far from the robot (up to 10 m), and automatically switches to a much more accurate face-detection algorithm when a person is close to the robot (about 2 m), providing accurate head and gaze estimation at close distances.
Learn more!
Detection and modelling of user intents using the RASA chatbot
Automatically recognise user commands, and expose them as intents to which applications can react.
Learn more!
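As a sketch, an application can subscribe to the recognised intents as follows; the /intents topic and the hri_actions_msgs/Intent field names follow the ROS4HRI conventions and may differ on your robot:

    # Sketch: reacting to recognised user intents.
    # ASSUMPTION: /intents topic and Intent field names are taken from the
    # ROS4HRI conventions and may differ on your robot.
    import rclpy
    from rclpy.node import Node
    from hri_actions_msgs.msg import Intent

    class IntentWatcher(Node):
        def __init__(self):
            super().__init__("intent_watcher")
            self.create_subscription(Intent, "/intents", self.on_intent, 10)

        def on_intent(self, msg):
            # msg.intent names the recognised intent; msg.data carries its
            # parameters as a JSON string
            self.get_logger().info(f"intent: {msg.intent} data: {msg.data}")

    rclpy.init()
    rclpy.spin(IntentWatcher())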
ROS tools for HRI: rviz plugins and rqt human_radar
PAL SDK includes several tools to better visualize the people interacting with the robot: camera overlays with the detected faces and skeletons, 3D models of the humans integrated in RViz, and a new custom rqt plugin that displays the humans around the robot in a top-down view.
Reasoning/AI
Symbolic knowledge base (ontology)
A knowledge base (RDF/OWL ontology) that makes it easy to store and reason about symbolic knowledge. You can run first-order-logic inferences and construct complex queries using the SPARQL language. The knowledge base also supports several mental models: you can represent what the robot knows, and what the robot knows about other people’s knowledge.
Learn more!
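A small sketch with the KnowledgeCore Python client; the API shown (KB, the += operator, pattern queries) is from the open-source KnowledgeCore project and is an assumption here, as the client shipped with your robot may differ:

    # Sketch: storing and querying symbolic knowledge.
    # ASSUMPTION: this follows the KnowledgeCore Python API; verify against
    # the knowledge-base documentation of your robot.
    from knowledge_core.api import KB

    kb = KB()

    # add symbolic statements (RDF-like triples)
    kb += ["ari rdf:type Robot", "user1 likes ari"]

    # pattern query with an unbound variable: which robots do we know about?
    print(kb["?robot rdf:type Robot"])  # illustrative; returns the matching bindings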