💬 Communication

Your robot can recognize speech, handle dialogues, and synthesize speech in several languages. It is fully compliant with the ROS4HRI standard (REP-155).

In its default configuration, the entire speech pipeline runs on-board; no cloud-based services are used (and consequently, no Internet connection is required).
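As a minimal sketch of how an application could consume the on-board speech recognition results, the snippet below subscribes to the ROS4HRI voice topics shown in the figure further down. It assumes a ROS 2 system with rclpy and the hri_msgs package providing the IdsList and LiveSpeech messages defined by REP-155; adapt topic names and message fields to your robot's actual interfaces.

```python
# Minimal sketch (assumptions: ROS 2 with rclpy, and the ROS4HRI hri_msgs
# package providing IdsList and LiveSpeech as defined by REP-155).
import rclpy
from rclpy.node import Node
from hri_msgs.msg import IdsList, LiveSpeech


class SpeechListener(Node):
    """Listen to the on-board ASR output published under /humans/voices/."""

    def __init__(self):
        super().__init__('speech_listener')
        self._speech_subs = {}
        # /humans/voices/tracked lists the ids of the currently detected voices.
        self.create_subscription(
            IdsList, '/humans/voices/tracked', self.on_tracked_voices, 10)

    def on_tracked_voices(self, msg: IdsList):
        for voice_id in msg.ids:
            if voice_id not in self._speech_subs:
                # Each voice exposes its recognized speech on a per-id topic.
                topic = f'/humans/voices/{voice_id}/speech'
                self._speech_subs[voice_id] = self.create_subscription(
                    LiveSpeech, topic,
                    lambda speech, vid=voice_id: self.on_speech(vid, speech),
                    10)

    def on_speech(self, voice_id: str, speech: LiveSpeech):
        # The `final` field holds the completed utterance (assumed per REP-155);
        # `incremental` carries partial results while the person is speaking.
        if speech.final:
            self.get_logger().info(f'[{voice_id}] said: {speech.final}')


def main():
    rclpy.init()
    rclpy.spin(SpeechListener())


if __name__ == '__main__':
    main()
```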

The following figure provides a complete picture of the communication pipeline on the robot, and of how it interacts with components of other subsystems (clicking on a box redirects to the relevant page).

Figure: overview of the communication pipeline. Audio captured by the reSpeaker microphone (/audio_in/raw) is transcribed by the on-board ASR (vosk) and published as ROS4HRI voice speech (/humans/voices/*/speech); the communication hub maps this input (including a soft wake-up word) to the active dialogues and queries a chatbot engine (chatbot_rasa or chatbot_ollama) for responses and user intents. A semantic state aggregator combines the 3D environment (reMap), people perception (ROS4HRI), the robot state and other semantic knowledge sources into the knowledge base (KnowledgeCore). Outputs include speech (/tts_engine/say), gestures and expressions (/robot_face/expressions), closed captions (/communication_hub/closed_captions) and user intents (.../intents [hri_action_msgs/Intent.msg]); the chat, ask and say skills (/chat, /ask, /say) expose dialogue management to the mission controller.

The main components are:

- the communication hub, which maps incoming speech to the active dialogues and orchestrates the rest of the pipeline
- the ASR engine (vosk), which transcribes audio from the reSpeaker microphone
- the chatbot engine (chatbot_rasa or chatbot_ollama), which generates responses and extracts user intents
- the text-to-speech engine (/tts_engine/say)
- the chat, ask and say skills, which expose dialogue management to the mission controller
- the knowledge base (KnowledgeCore) and the semantic state aggregator, which provide semantic context to the dialogues
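For instance, an application or the mission controller can trigger the say skill through its ROS action interface. The sketch below is only illustrative: the /say action name is taken from the figure above, while the package and action type (my_skills_msgs/Say) and the goal field are hypothetical placeholders; see the References section for the actual skill interfaces shipped with your robot.

```python
# Hypothetical sketch of triggering the say skill from an application node.
# Only the /say action name comes from the figure; my_skills_msgs, the Say
# action type and its goal field are placeholders to be replaced with the
# action definition actually installed on the robot.
import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node

from my_skills_msgs.action import Say  # placeholder, not the real package name


class SayClient(Node):
    def __init__(self):
        super().__init__('say_client')
        # The say skill is exposed as an action server on /say.
        self._client = ActionClient(self, Say, '/say')

    def say(self, text: str):
        goal = Say.Goal()
        goal.input = text  # assumed goal field carrying the sentence to utter
        self._client.wait_for_server()
        return self._client.send_goal_async(goal)


def main():
    rclpy.init()
    node = SayClient()
    node.say('Hello! How can I help you?')
    rclpy.spin_once(node, timeout_sec=2.0)


if __name__ == '__main__':
    main()
```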

The communication subsystem is also closely integrated with the internationalization manager, as many of the components above are language-dependent.

How-to

Tutorials

FAQ

References