
PAL Interaction simulator#

The interaction simulator is a hybrid simulation tool within ROS4HRI that runs a substantial part of PAL's interaction pipeline, including components like hri_face_detect, hri_person_manager, communication_hub, expressive_eyes, and knowledge_core. It replaces some elements with simulated counterparts: a chat interface stands in for the ASR/TTS tools, and virtual objects can be dragged and dropped instead of being detected.

Interactive Simulator

Running the simulator#

To execute the simulator, run the following command in your development container or from PAL’s public tutorials Docker image:

ros2 launch interaction_sim simulator.launch.py

The interaction simulator starts several nodes, including the interaction pipeline components listed above (hri_face_detect, hri_person_manager, communication_hub, expressive_eyes and knowledge_core).

The simulator’s display is based on ROS’s RQt, with two custom plugins:

  • rqt_human_radar (documentation): visualizes detected people around the robot, and enables adding virtual objects and people.

  • rqt_chat: simulates chat interactions with the robot. Messages sent are published to the topic /humans/voices/anonymous_speaker/speech, and responses are sent to the /tts_engine/tts action.

The figure below gives a complete overview of the architecture.

Software diagram of the interaction simulator

People Perception#

Outputs the image captured by the USB camera, highlighting detected faces and bodies with bounding boxes. Each person is assigned a unique ID, and detected emotions are shown as emojis next to the corresponding bounding boxes.

Testing Instructions

  • Position yourself in front of the camera.

  • Ensure your face is detected and assigned an ID (the sketch after these instructions shows how to list the tracked IDs programmatically).

  • Change facial expressions (e.g., smile, surprise) and observe corresponding emoji updates.
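To check programmatically that your face is detected, you can list the tracked face IDs. Below is a minimal Python sketch, assuming the standard ROS4HRI topic /humans/faces/tracked of type hri_msgs/msg/IdsList (published by hri_face_detect); the node name is arbitrary.

import rclpy
from rclpy.node import Node
from hri_msgs.msg import IdsList


class FaceListener(Node):
    def __init__(self):
        super().__init__('face_listener')
        # /humans/faces/tracked carries the list of currently tracked face IDs
        self.create_subscription(IdsList, '/humans/faces/tracked', self.on_faces, 10)

    def on_faces(self, msg):
        self.get_logger().info(f'Tracked faces: {list(msg.ids)}')


rclpy.init()
rclpy.spin(FaceListener())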

Happy face detection#

Surprised face detection#

Human Radar#

The Human Radar component displays (and optionally simulates) humans in the vicinity of the robot. It provides spatial data, including each person's distance and angle relative to the robot.

RQT Human Radar

Testing Instructions

  • Place yourself in front of the camera and ensure detection.

  • Move side-to-side or closer/further away and observe the changes in the radar (see the sketch below).
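The same spatial information can be read from TF. A minimal sketch, assuming the ROS4HRI face_<id> frames are broadcast and that base_link is the robot's reference frame; both names, and the placeholder ID face_abcde, should be replaced with the ones used on your setup (an ID listed on /humans/faces/tracked).

import math
import rclpy
from rclpy.node import Node
from rclpy.time import Time
from tf2_ros import Buffer, TransformListener


class RadarCheck(Node):
    def __init__(self):
        super().__init__('radar_check')
        self.buffer = Buffer()
        self.listener = TransformListener(self.buffer, self)
        self.create_timer(1.0, self.tick)

    def tick(self):
        try:
            # 'base_link' and 'face_abcde' are placeholders for your robot frame and a tracked face ID
            t = self.buffer.lookup_transform('base_link', 'face_abcde', Time())
        except Exception as e:
            self.get_logger().warn(f'no transform yet: {e}')
            return
        x, y = t.transform.translation.x, t.transform.translation.y
        self.get_logger().info(
            f'distance: {math.hypot(x, y):.2f} m, bearing: {math.degrees(math.atan2(y, x)):.1f} deg')


rclpy.init()
rclpy.spin(RadarCheck())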

Human radar and robot gaze updates (1)#

Human radar and robot gaze updates (2)#

Simulating objects and interaction with the knowledge base#

You can also add virtual objects or people by pressing Settings ‣ Enable objects simulation ‣ Done.

Enable adding virtual objects

Right-click and add one of the available objects, such as a book, phone or apple, or add a simulated human.

Adding objects and humans

The object is dropped at the designated location. The green zone is the robot's field of view, and the blue zones are the humans' fields of view (with the darker orange being the detected real user, and the lighter orange being a virtual human that was just added). Once you add an object, it is automatically added to the robot's knowledge base.

You can drag and drop the object inside or outside of the robot’s field of view to automatically update the visibility status in the knowledge base.

Example of RDF triples added to the knowledge base:#
- sim_person_vplsp rdf:type Human
- myself sees sim_person_vplsp
- book_rqeiw rdf:type Book
- myself sees book_rqeiw
- apple_zwzik rdf:type dbr:Apple
- myself sees apple_zwzik
- cup_yeahs rdf:type Cup
- myself sees cup_yeahs
- sim_person_vplsp sees cup_yeahs

Adding objects to the knowledge base

You can then query the knowledge base to list what the robot currently sees (i.e., the objects or people in its field of view). In the figure above, the robot sees an apple, a cup, a book and two people, while sim_person_vplsp only sees a cup:

> ros2 service call /kb/query kb_msgs/srv/Query "patterns: ['myself sees ?var']"

requester: making request: kb_msgs.srv.Query_Request(patterns=['myself sees ?var'], vars=[], models=[])

response:
kb_msgs.srv.Query_Response(success=True, json='[{"var": "sim_person_vplsp"}, {"var": "person_lkhgx"}, {"var": "cup_yeahs"}, {"var": "apple_zwzik"}, {"var": "book_rqeiw"}]', error_msg='')
> ros2 service call /kb/query kb_msgs/srv/Query "patterns: ['sim_person_vplsp sees ?var']"

requester: making request: kb_msgs.srv.Query_Request(patterns=['sim_person_vplsp sees ?var'], vars=[], models=[])

response:
kb_msgs.srv.Query_Response(success=True, json='[{"var": "cup_yeahs"}]', error_msg='')

Note

If you add an object outside the field of view of the robot or of a simulated human, it is added to the knowledge base as an object, but not as an object seen by any entity. In the example above, cup_yeahs is in the field of view of the simulated human and is therefore also reported as seen by sim_person_vplsp, while apple_zwzik is only reported as seen by myself (the robot).
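The same query can be made from code. Here is a minimal rclpy sketch, using the /kb/query service and the kb_msgs/srv/Query interface shown above (only the patterns field is filled; vars and models keep their defaults):

import rclpy
from rclpy.node import Node
from kb_msgs.srv import Query


class KbClient(Node):
    def __init__(self):
        super().__init__('kb_client')
        self.client = self.create_client(Query, '/kb/query')

    def seen_by_robot(self):
        self.client.wait_for_service()
        request = Query.Request()
        request.patterns = ['myself sees ?var']  # same pattern as the CLI example
        future = self.client.call_async(request)
        rclpy.spin_until_future_complete(self, future)
        return future.result().json  # JSON-encoded list of bindings for ?var


rclpy.init()
print(KbClient().seen_by_robot())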

Robot face#

Displays the robot's expressive face (e.g., the TIAGo Pro face) and tracks detected people using the attention_manager.

Testing Instructions

  • Publish desired expressions directly via ROS on /robot_face/expression. See the available expressions in Expression.msg (a minimal Python publisher is sketched after this list).

    ros2 topic pub /robot_face/expression hri_msgs/msg/Expression "expression: sad"

  • Move side to side and, as the positions of the detected humans change in the radar, observe how the robot adjusts its gaze to look at you (see the example below).
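A minimal Python equivalent of the command above, assuming the expression field of hri_msgs/msg/Expression accepts the expression name as a string (as in the CLI example):

import time
import rclpy
from rclpy.node import Node
from hri_msgs.msg import Expression


class FaceCommander(Node):
    def __init__(self):
        super().__init__('face_commander')
        self.pub = self.create_publisher(Expression, '/robot_face/expression', 10)

    def set_expression(self, name):
        msg = Expression()
        msg.expression = name  # e.g. 'sad'; see Expression.msg for the available values
        self.pub.publish(msg)


rclpy.init()
node = FaceCommander()
time.sleep(1.0)  # give discovery a moment so the face node is matched before publishing
node.set_expression('sad')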

Chat interface#

The rqt_chat plugin simulates a person speaking by publishing on the /humans/voices/anonymous_speaker/speech topic (see /humans/voices/*/speech) whenever the user types in the chat, and simulates a Text-to-Speech (TTS) engine by listening on the /say action and displaying the response back in the interface.
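To receive what the simulated user types from your own node, you can subscribe to the speech topic. A minimal sketch, assuming the ROS4HRI hri_msgs/msg/LiveSpeech message type, whose final field carries the complete utterance:

import rclpy
from rclpy.node import Node
from hri_msgs.msg import LiveSpeech


class ChatListener(Node):
    def __init__(self):
        super().__init__('chat_listener')
        self.create_subscription(LiveSpeech, '/humans/voices/anonymous_speaker/speech', self.on_speech, 10)

    def on_speech(self, msg):
        if msg.final:  # ignore incremental (partial) results
            self.get_logger().info(f'user said: {msg.final}')


rclpy.init()
rclpy.spin(ChatListener())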

Aside from the simulation environment, you need to run the dialogue engine separately. Right now, it supports:

See Create an application with rpk on how to create a new application with LLM integration.

See also#