ARI SDK 23.1 Highlights#

Developing applications#

  • 🌟 ROS Noetic / Ubuntu 20.04

  • 🌟 New Developer Center

    A new, more complete documentation center for ARI

    🎓 Learn more!

  • [👩‍🔧 TECHNOLOGY PREVIEW] New workflow for app development on Linux

    Write apps for your robot on Linux, and easily distribute them as self-contained packages.

    🎓 Learn more!

  • [👩‍🔧 TECHNOLOGY PREVIEW] Data logging to CSV

    Save custom data directly to CSV files.
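
    The PAL data-logging pipeline takes care of writing the files on the robot; purely as a sketch of the pattern (the topic-like field names below are made up for illustration), logging dict-shaped samples with Python's standard `csv` module looks like this:

```python
import csv
import io

def log_rows_to_csv(rows, fileobj):
    """Write dict rows (e.g. timestamped sensor readings) as CSV.

    `rows` is any iterable of dicts sharing the same keys; the keys of
    the first row become the CSV header.
    """
    rows = iter(rows)
    try:
        first = next(rows)
    except StopIteration:
        return  # nothing to log
    writer = csv.DictWriter(fileobj, fieldnames=list(first.keys()))
    writer.writeheader()
    writer.writerow(first)
    for row in rows:
        writer.writerow(row)

# Usage: log two hypothetical battery readings to an in-memory buffer.
buf = io.StringIO()
log_rows_to_csv(
    [{"stamp": 0.0, "battery_pct": 87}, {"stamp": 1.0, "battery_pct": 86}],
    buf,
)
```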

  • [👩‍🔧 TECHNOLOGY PREVIEW] Easy creation of Python applications with `pal_app`

    Use the `pal_app` command-line tool to quickly create a Python behaviour skeleton that can then be customised for your application.

    🎓 Learn more!

  • [👩‍🔧 TECHNOLOGY PREVIEW] User intents

    Automatically recognise user commands and expose them as intents to which applications can react.

    🎓 Learn more!
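
    The SDK performs the intent recognition for you; to make the idea concrete, here is a toy, stdlib-only sketch (the patterns, intent names, and slot names are invented for illustration) of turning free-form commands into structured intents:

```python
import re
from dataclasses import dataclass, field

@dataclass
class Intent:
    name: str
    slots: dict = field(default_factory=dict)

# Hypothetical patterns; the real recognition pipeline is the robot's.
_PATTERNS = [
    (re.compile(r"\bgo to (?:the )?(?P<place>\w+)\b", re.I), "move_to"),
    (re.compile(r"\bsay (?P<text>.+)", re.I), "say"),
]

def recognise_intent(utterance):
    """Return the first matching Intent, or None for unrecognised input."""
    for pattern, name in _PATTERNS:
        m = pattern.search(utterance)
        if m:
            return Intent(name, m.groupdict())
    return None
```

An application would then react to `Intent("move_to", ...)` rather than to the raw transcript.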

Expressive interactions#

  • 🌟 Expressive eye generation

    🎓 Learn more!

  • 🌟 Gaze manager for neck-head-eye coordination

ARI hardware#

  • 🌟 Control of LED effects

  • 🌟 Single merged audio channel on `/audio`
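
    As a rough illustration of what merging channels means (the robot performs the real downmix before publishing on `/audio`), averaging interleaved integer samples into a single channel can be sketched as:

```python
def downmix_to_mono(samples, channels=2):
    """Average interleaved multi-channel samples into one channel.

    A toy illustration only: `samples` is a flat list of interleaved
    integer samples, e.g. [L0, R0, L1, R1, ...] for stereo.
    """
    if len(samples) % channels:
        raise ValueError("sample count must be a multiple of the channel count")
    return [
        sum(samples[i:i + channels]) // channels
        for i in range(0, len(samples), channels)
    ]
```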

Robot management#

  • [👩‍🔧 TECHNOLOGY PREVIEW] Internationalisation support

    PAL robots support multiple languages, and you can switch from one language to another at any time.

    🎓 Learn more!
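
    A minimal sketch of locale-aware message lookup with fallback from a full locale (e.g. en_US) to its base language — the catalogue contents and keys are hypothetical, and the robot ships its own translation mechanism:

```python
# Hypothetical message catalogues, keyed by locale.
CATALOGUES = {
    "en_US": {"greeting": "Hi there!"},
    "en":    {"greeting": "Hello!", "farewell": "Goodbye!"},
    "es":    {"greeting": "¡Hola!", "farewell": "¡Adiós!"},
}

def translate(key, locale):
    """Look up `key` for `locale`, falling back e.g. from en_US to en."""
    for candidate in (locale, locale.split("_")[0]):
        message = CATALOGUES.get(candidate, {}).get(key)
        if message is not None:
            return message
    raise KeyError(f"{key!r} not translated for {locale!r}")
```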

Gestures and motions#

  • 🌟 Basic idle behaviours

  • 🌟 Library of pre-recorded social gestures

  • 🌟 Autonomous navigation

    The robot can autonomously navigate in your environment, and offers several tools to facilitate the creation of maps and points of interest.

    🎓 Learn more!

  • 🌟 “Welcome to ARI” demo, showcasing the main features of the robot

    A friendly demo of what your robot can do, which launches automatically at the robot’s first start-up.

Knowledge and reasoning#

  • 🌟 Semantic knowledge base (ontology)

    A knowledge base (RDF/OWL ontology) that makes it easy to store and reason about symbolic knowledge. You can run first-order-logic inferences and construct complex queries using the SPARQL language. The knowledge base also supports several mental models: you can represent what the robot knows, and what the robot knows about other people’s knowledge.

    🎓 Learn more!
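
    The knowledge base is accessed through the robot's own API; purely to illustrate the kind of reasoning it enables, here is a toy fixpoint computation over RDF-style (subject, predicate, object) triples, with made-up facts, that derives the statements implied by a transitive property:

```python
def infer_transitive(facts, predicate):
    """Saturate `facts` under transitivity of `predicate`.

    Repeatedly applies (a, p, b) and (b, p, c) => (a, p, c)
    until nothing new can be derived (a fixpoint).
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        derived = {
            (a, predicate, c)
            for (a, p1, b) in facts if p1 == predicate
            for (b2, p2, c) in facts if p2 == predicate and b2 == b
        }
        new = derived - facts
        if new:
            facts |= new
            changed = True
    return facts

# Hypothetical facts: the kitchen is part of the flat, the flat part
# of the building; transitivity implies kitchen is part of the building.
kb = {
    ("kitchen", "isPartOf", "flat"),
    ("flat", "isPartOf", "building"),
}
kb = infer_transitive(kb, "isPartOf")
```

The real knowledge base goes much further, with full OWL reasoning and SPARQL queries.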

Social perception#

  • 🌟 Face-skeleton matching and fusion

  • 🌟 Gaze tracking (based on head pose)

  • 🌟 ROS tools for HRI: rqt human_radar

  • 🌟 ROS tools for HRI: RViz plugin

  • 🌟 Single-body 2D/3D skeleton tracking

  • [👩‍🔧 TECHNOLOGY PREVIEW] Automatic engagement detection, single user

Speech and language processing#

  • 🌟 ASR supports Catalan

  • 🌟 Voice localisation

  • 🌟 Automatic speech recognition (single voice)

  • 🌟 “Chit chat” chatbot available in English

  • 🌟 “Chit chat” chatbot available in Spanish

  • 🌟 Voice synthesis (text-to-speech, TTS)

  • 🌟 Automatic subtitles

  • 🌟 “Chit chat” chatbot available in Catalan

  • 🌟 “Chit chat” chatbot available in French

  • 🌟 “Chit chat” chatbot available in Italian

  • 🌟 Full locale support (e.g. en_US) for the chatbots

  • [👩‍🔧 TECHNOLOGY PREVIEW] Language center

  • [👩‍🔧 TECHNOLOGY PREVIEW] Wake-up word detection

    Wake-up keywords can optionally be configured to start or stop speech processing on the robot.

    🎓 Learn more!
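
    The robot's detection works on the audio stream itself; as a conceptual sketch only (the keywords are hypothetical and matching here is done on already-transcribed text), a gate that starts and stops downstream speech processing might look like:

```python
class WakeWordGate:
    """Toy gate toggling downstream speech processing on keywords."""

    def __init__(self, start_word="hey ari", stop_word="goodbye ari"):
        self.start_word = start_word
        self.stop_word = stop_word
        self.active = False

    def feed(self, transcript):
        """Update the gate; return the transcript only while active."""
        text = transcript.lower()
        if self.start_word in text:
            self.active = True
        elif self.stop_word in text:
            self.active = False
            return None
        return transcript if self.active else None

gate = WakeWordGate()
```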

  • 🌟 Remote display of the robot’s touchscreen

    Display the exact content of the robot’s touchscreen on a remote computer, using VNC.