PAL OS modules

PAL OS modules are the software services that are automatically started on your robot when you turn it on.

They implement all the functionalities of the robot.

Important

  • You can check the full list of modules installed on your robot, as well as their status, either from the Web user interface or by using the following command:

    $ pal module list
    
  • You can also check the log of a specific module using:

    $ pal module log <module_id> cat
    

    where <module_id> is the name of the module you want to check (see list below).

  • To enable (or disable) auto-start of a module, use (see the combined example after this list):

    $ pal module enable <module_id>
    $ pal module disable <module_id>
    
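A typical troubleshooting session might combine these commands as in the sketch below (chatbot_rasa is only an example identifier taken from the list of modules further down; substitute the module you are investigating):

    $ pal module list                   # check which modules are installed and their status
    $ pal module log chatbot_rasa cat   # print the log of the RASA chatbot module
    $ pal module disable chatbot_rasa   # stop it from auto-starting on the next boot
    $ pal module enable chatbot_rasa    # re-enable auto-start once the issue is fixed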

Modules by capability

⚙️ Robot hardware

😄 Interaction

  • asr_vosk (documentation)

    Starts the speech-to-text service

  • attention_manager (documentation)

    Starts the attention manager. This module implements, for example, the robot’s face-tracking functionality.

  • audio_capture (documentation)

    Starts the standard ROS audio capture pipeline.

  • audio_play (documentation)

    Starts the standard ROS audio playback pipeline.

  • chatbot_rasa (documentation)

    Starts a RASA-based chatbot. The chatbot implements general chit-chat capability, as well as some action recognition. See How-to: RASA chatbot for details.

  • communication_hub (documentation)

    Starts PAL’s communication hub, in charge of routing incoming speech between the chatbots and the TTS engine. See 💬 Communication for details.

  • expressive_eyes (documentation)

    Starts the procedural face manager that generates facial expressions, controls the gaze direction and generates lip-sync motions (where relevant).

  • gaze_manager (documentation)

    Starts the node in charge of combining eye motion and neck motion to generate smooth gazing behaviour.

  • hri_body_detect (documentation)

    Starts the 3D body and skeleton detector.

  • hri_emotion_recognizer (documentation)

    Starts the facial expression recognizer.

  • hri_engagement (documentation)

    Starts the human engagement estimator.

  • hri_face_body_matcher (documentation)

    Starts the node in charge of matching detected faces with detected bodies.

  • hri_face_detect (documentation)

    Starts the face detector and 3D head pose estimator.

  • hri_face_identification (documentation)

    Starts the face recognition node.

  • hri_person_manager (documentation)

    Starts the probabilistic person manager, in charge of ‘aggregating’ person features (faces, bodies, voices) into full persons.

  • hri_visualization (documentation)

    Starts the node that generates a video stream with Human-Robot Interaction-related overlays.

  • soft_wakeup_word (documentation)

    Starts the node that spots the wake-up keyword(s).

  • task_emotion_mirror (documentation)

    One of ARI’s standard tasks, implementing an ‘emotion copy-cat’ activity.

  • task_self_presentation_ari (documentation)

    One of ARI’s standard tasks, implementing a self-presentation activity.

  • tts_engine (documentation)

    Starts the text-to-speech engine.

  • welcome_app_ari (documentation)

    The ‘welcome’ application, displayed on the ARI screen at start-up.
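
The module identifiers above are the <module_id> values expected by the pal module commands shown earlier. For instance, assuming these modules are installed on your robot, a quick way to inspect the speech pipeline is to print their logs:

    $ pal module log asr_vosk cat           # speech-to-text service
    $ pal module log communication_hub cat  # speech routing between chatbots and TTS
    $ pal module log tts_engine cat         # text-to-speech engine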

🛠 Robot management

🦾 Manipulation

👋 Gestures and motions

🧭 Navigation

💡 Knowledge and reasoning

🖥️ User interfaces

Alphabetic index

See also