ARI SDK 23.1 Highlights

Developing applications

  • 🌟 ROS noetic/Ubuntu 20.04

  • 🌟 New Developer Center

    A new, more complete documentation center for ARI


  • [👩‍🔧 TECHNOLOGY PREVIEW] New workflow for app development on Linux

    Write apps for ARI on Linux, and easily distribute them as self-contained packages.


  • [👩‍🔧 TECHNOLOGY PREVIEW] Data logging to CSV

  • [👩‍🔧 TECHNOLOGY PREVIEW] Easy creation of Python applications with ``pal_app``

    Use the pal_create_app command-line tool to quickly create a Python behaviour skeleton, which you can then adapt for your application.


  • [👩‍🔧 TECHNOLOGY PREVIEW] User intents

    Automatically recognise user commands and expose them as intents to which applications can react.

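The CSV data-logging preview above is configured on the robot itself, and these notes do not show its API. As a purely illustrative sketch of the underlying idea, writing timestamped readings as CSV rows, here is a standard-library Python example; the field names and values are invented for the illustration:

```python
import csv
import io

def log_rows(stream, rows, fieldnames):
    """Write dict rows (e.g. timestamped sensor readings) as CSV."""
    writer = csv.DictWriter(stream, fieldnames=fieldnames)
    writer.writeheader()
    for row in rows:
        writer.writerow(row)

# In practice the stream would be a file on the robot; StringIO keeps
# the sketch self-contained.
buf = io.StringIO()
log_rows(
    buf,
    [{"stamp": "2023-01-01T00:00:00Z", "topic": "/audio", "value": 0.8}],
    ["stamp", "topic", "value"],
)
print(buf.getvalue())
```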
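The user-intents preview above exposes recognised commands as intents that applications react to. The actual ARI interface is not detailed in these notes; the sketch below only illustrates the general dispatch pattern, and every name in it (INTENT_RULES, on_intent, the intent identifiers) is hypothetical:

```python
# Hypothetical, minimal intent dispatcher: utterances are matched to
# named intents, and applications register callbacks per intent.

INTENT_RULES = {
    "move": "intent_move_to",
    "go to": "intent_move_to",
    "hello": "intent_greet",
}

def recognise_intent(utterance):
    """Return the first matching intent name, or None."""
    text = utterance.lower()
    for keyword, intent in INTENT_RULES.items():
        if keyword in text:
            return intent
    return None

handlers = {}

def on_intent(name):
    """Decorator registering a callback for an intent."""
    def register(fn):
        handlers[name] = fn
        return fn
    return register

@on_intent("intent_greet")
def greet():
    return "Hello!"

intent = recognise_intent("Hello ARI")
if intent in handlers:
    print(handlers[intent]())  # prints "Hello!"
```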

Expressive interactions

  • 🌟 Expressive eye generation

  • 🌟 Gaze manager for neck-head-eye coordination

ARI hardware

  • 🌟 Control of LED effects

  • 🌟 Single merged audio channel on `/audio`

Robot management

  • [👩‍🔧 TECHNOLOGY PREVIEW] Internationalisation support

    ARI supports multiple languages, and you can switch between them at any time.


Gestures and motions

  • 🌟 Basic idle behaviours

  • 🌟 Library of pre-recorded social gestures

Navigation

  • 🌟 Autonomous navigation

    ARI can autonomously navigate your environment, and several tools are provided to facilitate the creation of maps and points of interest.


Other

  • 🌟 “Welcome to ARI” demo, showcasing the main features of the robot

    A friendly demo of what ARI can do, which launches automatically at the robot’s first start-up.

Knowledge and reasoning

  • 🌟 Semantic knowledge base (ontology)

    A knowledge base (RDF/OWL ontology) that makes it easy to store and reason about symbolic knowledge. It also supports several mental models: you can represent both what the robot knows and what the robot knows about other people’s knowledge.

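The knowledge base above is accessed through ARI’s own API, which these notes do not detail. The plain-Python sketch below only illustrates the two ideas mentioned: triple-based (subject, predicate, object) storage, and a separate mental model per agent. All names are invented for the illustration:

```python
# Minimal triple-store sketch (not ARI's actual knowledge base API):
# facts are (subject, predicate, object) triples, and each "model"
# (the robot's own knowledge, or its model of a person's knowledge)
# is a separate set of triples.

from collections import defaultdict

class KnowledgeBase:
    def __init__(self):
        self.models = defaultdict(set)  # model name -> set of triples

    def add(self, triple, model="robot"):
        self.models[model].add(triple)

    def query(self, pattern, model="robot"):
        """Match triples against a pattern; None is a wildcard."""
        return [t for t in self.models[model]
                if all(p is None or p == v for p, v in zip(pattern, t))]

kb = KnowledgeBase()
kb.add(("ari", "rdf:type", "Robot"))
kb.add(("cup", "isOn", "table"))
# What the robot believes *Alice* knows:
kb.add(("cup", "isOn", "table"), model="alice")

print(kb.query((None, "isOn", None)))                # [('cup', 'isOn', 'table')]
print(kb.query(("ari", None, None), model="alice"))  # []
```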

Social capabilities

  • 🌟 Face-skeleton matching and fusion

  • 🌟 Gaze tracking (based on head pose)

  • 🌟 ROS tools for HRI: rqt human_radar

  • 🌟 ROS tools for HRI: rviz plugin

  • 🌟 Single-body 2D/3D skeleton tracking

  • [👩‍🔧 TECHNOLOGY PREVIEW] Automatic engagement detection, single user

Speech and language processing

  • 🌟 Automatic speech recognition (single voice)

  • 🌟 “Chit chat” chatbot available in English

  • 🌟 “Chit chat” chatbot available in Spanish

  • 🌟 Voice synthesis (text-to-speech, TTS)

  • 🌟 Automatic subtitles

  • [👩‍🔧 TECHNOLOGY PREVIEW] Wake-up word detection

    Wake-up keywords can optionally be configured to start or stop speech processing on the robot.

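Wake-word detection runs on, and is configured on, the robot itself. The sketch below merely illustrates the gating behaviour described above (speech processing starts on a wake keyword and stops on a stop keyword); the class name and default keywords are invented:

```python
# Conceptual sketch only, not ARI's wake-word API: a simple gate that
# toggles speech processing on wake/stop keywords found in transcripts.

class WakeWordGate:
    def __init__(self, wake_word="hey ari", stop_word="stop listening"):
        self.wake_word = wake_word
        self.stop_word = stop_word
        self.active = False

    def feed(self, transcript):
        """Update the gate from a transcript chunk; return whether
        speech processing should currently run."""
        text = transcript.lower()
        if self.stop_word in text:
            self.active = False
        elif self.wake_word in text:
            self.active = True
        return self.active

gate = WakeWordGate()
print(gate.feed("hey ari, what's the weather?"))  # True
print(gate.feed("blah blah"))                     # True (still active)
print(gate.feed("ok stop listening"))             # False
```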

Touchscreen

  • 🌟 Remote display of the robot’s touchscreen

    The exact content of the robot’s touchscreen can be displayed on a remote computer using VNC.