PAL SDK 23.12 Highlights

Developing applications

  • 🌟 New documentation centre for TIAGo

    PAL’s TIAGo robot is joining ARI in the PAL SDK Documentation centre: a new, beautiful hub for documentation and tutorials about PAL’s flagship robots!

  • 🌟 Noetic-compatible simulation of ARI

    ARI simulation is now fully supported on ROS Noetic. The installation guide is available on the ROS wiki.

  • [👩‍🔧 TECHNOLOGY PREVIEW] Visual programming

    PAL SDK 23.12 introduces a new experimental visual programming mode to quickly develop interactive behaviours for your robot. This first release is currently only available for the TIAGo robot.

    🎓 Learn more!

Expressive interactions

  • 🌟 Custom overlays for ARI eyes and TIAGoPro face

    You can now display images or play animations in the background or foreground of ARI and TIAGoPro faces. This can be controlled at runtime via a ROS action (see the sketch after this list).

    🎓 Learn more!

  • 🌟 Faster and smoother eye animations on ARI and TIAGoPro, thanks to OpenGL acceleration

    The robots’ eye and face animations are now rendered with OpenGL acceleration, making them faster and smoother.
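
As an illustration, a minimal ROS 1 action client for triggering an overlay at runtime is sketched below. The package, action type, action name and goal fields (pal_overlay_msgs, PlayOverlayAction, /robot_face/overlay, media_path, layer) are hypothetical placeholders rather than PAL’s confirmed interface; refer to the linked documentation for the actual names.

    #!/usr/bin/env python3
    # Sketch: command a face overlay at runtime via a ROS action.
    # All overlay-specific names below are HYPOTHETICAL placeholders.
    import actionlib
    import rospy
    from pal_overlay_msgs.msg import PlayOverlayAction, PlayOverlayGoal  # hypothetical package

    rospy.init_node("overlay_example")

    # Hypothetical action name; the real one is given in the overlay documentation.
    client = actionlib.SimpleActionClient("/robot_face/overlay", PlayOverlayAction)
    client.wait_for_server()

    goal = PlayOverlayGoal()
    goal.media_path = "/home/pal/overlays/confetti.gif"  # image or animation to display
    goal.layer = PlayOverlayGoal.BACKGROUND  # draw behind the eyes; FOREGROUND draws on top
    client.send_goal(goal)
    client.wait_for_result()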

Mobile manipulation

  • 🌟 Advanced grasping pipeline

    A new, complete pipeline to pick and place objects with the TIAGo robot.

    🎓 Learn more!

Navigation

  • [👩‍🔧 TECHNOLOGY PREVIEW] move_base_flex for 2D navigation

    A new navigation interface based on move_base_flex. It is fully compatible with the existing move_base API and provides a much improved navigation experience. Currently only available on ARI. A sketch of the preserved interface follows.
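
Because the classic API is preserved, existing move_base clients should work unchanged. A minimal sketch, assuming the default move_base action name and a map frame:

    #!/usr/bin/env python3
    # Send a 2D navigation goal through the classic move_base action API,
    # which the move_base_flex-based stack remains compatible with.
    import actionlib
    import rospy
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    rospy.init_node("navigation_goal_example")

    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 1.0     # one metre along the map x axis
    goal.target_pose.pose.orientation.w = 1.0  # identity orientation

    client.send_goal(goal)
    client.wait_for_result()
    rospy.loginfo("Navigation finished with state %d", client.get_state())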

Other

  • 🌟 ARI Guessing game with verbal interactions

    A fun game to play with ARI: in a few questions, ARI can guess what animal you are thinking of!

Knowledge and reasoning

  • [👩‍🔧 TECHNOLOGY PREVIEW] New live display of knowledge base

    A brand new interactive display of the knowledge base, where all the facts known to the robot can easily be explored.

Social perception

  • 🌟 Body/face matcher

    Automatically associates detected faces with body ‘skeletons’ to provide a more complete model of each person (see the first sketch after this list).

    🎓 Learn more!

  • [👩‍🔧 TECHNOLOGY PREVIEW] New algorithm to automatically detect user engagement

    Our new hri_engagement node tracks how likely each person is to be actively engaged with the robot, based on the Visual Social Engagement metric (see the second sketch after this list).

  • [👩‍🔧 TECHNOLOGY PREVIEW] Adaptive far-face detection

    We have introduced a new fast, adaptive face detector. It can detect and track faces far from the robot (up to 10 m) and automatically switches to a much more accurate face detection algorithm when a person is close to the robot (about 2 m), providing accurate head and gaze estimation at close distances.

    🎓 Learn more!
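
For the body/face matcher, the resulting associations surface at the person level. The sketch below follows the ROS4HRI (REP-155) topic conventions that PAL’s people-perception stack builds on; the exact topic layout on your robot may differ, so treat the names as assumptions.

    #!/usr/bin/env python3
    # Sketch: list tracked persons and the face/body currently matched to
    # each of them, following ROS4HRI (REP-155) topic conventions.
    import rospy
    from hri_msgs.msg import IdsList
    from std_msgs.msg import String

    def on_tracked_persons(msg):
        for person_id in msg.ids:
            prefix = "/humans/persons/%s" % person_id
            try:
                # face_id/body_id are latched topics holding the current match
                face = rospy.wait_for_message(prefix + "/face_id", String, timeout=1.0)
                body = rospy.wait_for_message(prefix + "/body_id", String, timeout=1.0)
                rospy.loginfo("person %s: face=%s, body=%s",
                              person_id, face.data or "none", body.data or "none")
            except rospy.ROSException:
                rospy.loginfo("person %s: no association published yet", person_id)

    rospy.init_node("person_association_monitor")
    rospy.Subscriber("/humans/persons/tracked", IdsList, on_tracked_persons)
    rospy.spin()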
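
Similarly, hri_engagement exposes a per-person engagement_status topic. A sketch, again assuming REP-155 topic names and the hri_msgs/EngagementLevel message:

    #!/usr/bin/env python3
    # Sketch: monitor the engagement level published for each tracked person.
    import rospy
    from hri_msgs.msg import EngagementLevel, IdsList

    LEVELS = {
        EngagementLevel.UNKNOWN: "unknown",
        EngagementLevel.DISENGAGED: "disengaged",
        EngagementLevel.ENGAGING: "engaging",
        EngagementLevel.ENGAGED: "engaged",
        EngagementLevel.DISENGAGING: "disengaging",
    }
    subscribers = {}

    def on_engagement(msg, person_id):
        rospy.loginfo("person %s is %s", person_id, LEVELS.get(msg.level, "?"))

    def on_tracked_persons(msg):
        # subscribe to the engagement status of newly tracked persons
        for person_id in msg.ids:
            if person_id not in subscribers:
                subscribers[person_id] = rospy.Subscriber(
                    "/humans/persons/%s/engagement_status" % person_id,
                    EngagementLevel, on_engagement, callback_args=person_id)

    rospy.init_node("engagement_monitor")
    rospy.Subscriber("/humans/persons/tracked", IdsList, on_tracked_persons)
    rospy.spin()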

Speech and language processing

  • [👩‍🔧 TECHNOLOGY PREVIEW] Live subtitles on ARI screen

    Display live captions of what the robot hears and says on its screen (currently only available on ARI).

Touchscreen

  • [👩‍🔧 TECHNOLOGY PREVIEW] Touchscreen projects

    A new Projects infrastructure that makes touchscreen page management easier.