Frequently Asked Questions#

💬 Communication#

❓ How to change the robot’s language?

You can set the default language of your robot from the Web user interface. You can also set the /pal/language parameter to change the currently active language. See internationalization for details.

❓ What languages are supported by my robot?

You can access the list of languages currently available on your robot from the Web user interface. Alternatively, you can read the ROS parameter tts/supported_languages. If you wish to add additional languages, contact us.

❓ How to change the ‘wake-up’ (or ‘sleep’) keywords?

You can easily set a custom regular expression as wake-up/sleep keyword. See How to change the wake-up/sleep keywords? for details.
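To illustrate how a wake-up keyword expressed as a regular expression behaves, here is a minimal sketch in plain Python. The pattern below is illustrative only, not the robot's default keyword, and the matching is done on an ordinary string rather than on the robot's speech pipeline:

```python
import re

# Hypothetical wake-up pattern: matches "hey ari" or "hi ari",
# case-insensitively, anywhere in the transcribed speech.
wake_up_re = re.compile(r"\b(hey|hi)\s+ari\b", re.IGNORECASE)

def is_wake_up(transcript: str) -> bool:
    """Return True if the transcript contains the wake-up keyword."""
    return wake_up_re.search(transcript) is not None

print(is_wake_up("Hey ARI, what time is it?"))  # True
print(is_wake_up("goodbye"))                    # False
```

Anchors such as \b and the case-insensitive flag help avoid accidental triggers on words that merely contain the keyword.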

❓ What is the default wake-up word?

Check Wake-up word detector for the default wake-up/sleep keywords.

📜 Developing applications#

❓ Can I compile my nodes in a ROS Docker container, and run them on the robot?

Yes, Docker is installed on your robot and you can run any image you wish.

Once you have the Docker image in your repository, pull it on your robot. For instance:

$ ssh pal@<robot>-Xc
$ docker login registry.gitlab.com
$ docker pull registry.gitlab.com/myuser/mydocker

You can then run the container as you would from your laptop/PC.

❓ How do I communicate between containers?

All communication between containers is done through regular ROS msgs, topics, actions and services that have been defined for each module. All containers will be using ROS, and will be able to see and directly communicate with other containers (and with the Nvidia Jetson if you are using it).

Make sure, however, that you start the Docker image with the --net host option, so that the container can see the ROS core.
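For example, starting a container on the robot could look like the following sketch (the image name reuses the placeholder from the pull example above):

```shell
# Run the pulled image with host networking so the container
# shares the robot's network stack and can reach the ROS core.
docker run -it --net host registry.gitlab.com/myuser/mydocker bash

# From inside the container, you should then be able to list
# the robot's topics as usual, e.g. with `rostopic list`.
```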

Links to ROS examples, depending on whether you use C++ or Python:

C++: http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28c%2B%2B%29

Python: http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28python%29

It is recommended to complete all the ROS tutorials available in the coding language you use.

❓ What method should I use to debug inside a container?

The recommended approach is to access the container (with e.g. docker exec -it <container name> bash) and use gdb or similar tools from inside it, as if it were a standard computer.

❓ How can I automatically launch an application at start up?

Applications can be automatically launched at start-up. See this documentation section.

❓ How to access the log file of a module running on the robot?

You can easily access a module’s log via the robot Web user interface. If you connect to the robot over SSH, you can also use pal module log <name of the module> to display the log file.

😄 Expressive interactions#

❓ How to enable/disable the eyes blinking?

You can enable or disable blinking at any time by setting the general.is_blinking parameter of the /robot_face/expressive_eyes node. For instance, to stop the blinking:

ros2 param set /robot_face/expressive_eyes general.is_blinking false

🏁 Getting started#

❓ Where do I find the serial number of my robot?

You can find the serial number on the robot’s identification sticker. See Robot identification for details.

⚙️ Robots hardware#

❓ How to change frame rate of fisheye cameras?

The fisheye cameras are connected to the Nvidia Jetson and are running on a docker (1 per camera). In order to make them visible to the robot’s main PC and externally we have a ROS bridge, corresponding to the following launch file: https://github.com/pal-robotics/ari_navigation/blob/melodic-devel/ari_rgbd_sensors/launch/elp_cameras_relay.launch

As you can see, we remap the input topics from the Jetson and also specify the frame rate, in this case 5 Hz. You can pull this package, adjust the frame rate and deploy it on the robot (the package would then be in deployed_ws/share/ari_rgbd_sensors), or alternatively adjust the permanent partition of the robot (/opt/pal/ferrum/share/ari_rgbd_sensors).

<node name="fisheye_rear_relay" type="throttle" pkg="topic_tools" args="messages /rear_fisheye_camera/image_raw/compressed  5 /rear_camera/fisheye/image_raw/compressed" />
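For instance, to throttle the rear camera down to 2 Hz instead of 5 Hz, you would only change the rate argument of that same node entry (a sketch based on the line above):

```xml
<!-- Throttle the rear fisheye stream to 2 Hz instead of 5 Hz -->
<node name="fisheye_rear_relay" type="throttle" pkg="topic_tools"
      args="messages /rear_fisheye_camera/image_raw/compressed 2 /rear_camera/fisheye/image_raw/compressed" />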

❓ What do the LEDs' colors/patterns mean?

Check the LEDs API page for explanations about the meaning of the different LEDs.

❓ My NVidia Jetson accelerator is not responding to ping, what can I do?

❓ I’m unable to see the messages published in a topic or an action from within the Jetson

Please refer to Using the Nvidia Jetson accelerator to check the best practices to approach this issue.

🛠 Robot management#

❓ What would be a typical usage length before the batteries run out of power?

ARI has a battery life of approximately 8-12 hours, and needs about 5 hours to be fully charged. Note that the autonomy is significantly reduced if the robot needs to move frequently.

❓ Can we change the robot’s name once set?

In principle no; however, if this is a feature you would be interested in, the team at PAL Robotics could look into it. Contact support.

❓ Can we work on the robot while the red emergency button is pressed, for example, for remote work?

Yes, you will be able to connect to its cameras and work with perception, for example. However, your robot won't produce any speech output, nor will it move its joints (arms, head or base).

We suggest, however, always releasing the emergency button when working with the robot.

❓ How can I find the IP address of my robot?

By default, all robots are configured with the static IP address 10.68.0.1. You can reach the robot by either specifying its IP address or its hostname <robot>-0c.

See Network configuration for details.

❓ What networking ports are used/required by the robot?

Check the data-management page.

❓ How can I see the list, and stop/start the robot’s background services?

The running robot background services (also called startups) are listed (and can be started or stopped) in the Web user interface. You can also use command-line tools or even the ROS API.
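From an SSH session, the pal command-line tool can be used for this; note that only pal module log appears elsewhere in this FAQ, so the exact subcommand names below are assumptions and may differ between software versions:

```shell
# List the available modules/startups (subcommand name is an assumption)
pal module list

# Stop and restart a given module (subcommand names are assumptions)
pal module stop <name of the module>
pal module start <name of the module>
```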

❓ How to update or upgrade my robot?

See system-upgrade.

❓ Can I add a new ROS workspace to the robot environment?

In cases where the workspace resolution process needs to be changed, the file /usr/bin/init_pal_env.sh can be modified to adapt the environment of the startup process.

👋 Gestures and motions#

❓ Is there some kind of ‘self collision avoidance’ check so we can’t make the robot ‘hurt’ itself with its arms?

Your robot features a whole body motion controller that protects against self-collisions when automatically generating gestures like pointing. See whole_body_motion_control for details.

However, if you create your own pre-recorded motions using e.g. the motion builder, you are responsible for avoiding self-collisions as the robot will ‘blindly’ execute the pre-recorded motion.

❓ How can I control the robot’s head orientation with absolute values?

You can publish a target position for the robot to look at on the /look_at topic.

See controlling_gaze for details and examples.
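As a command-line sketch, and assuming the /look_at topic accepts a geometry_msgs/msg/PointStamped (the message type and frame name here are assumptions; check controlling_gaze for the actual interface), a one-shot publication could look like:

```shell
# Ask the robot to look at a point ~1 m ahead and 1.5 m up.
# Message type and frame_id are assumptions, not verified.
ros2 topic pub --once /look_at geometry_msgs/msg/PointStamped \
  "{header: {frame_id: base_link}, point: {x: 1.0, y: 0.0, z: 1.5}}"
```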

❓ Is it possible to control the arm/head joints in velocity?

The Dynamixel motors do have a velocity control mode; however, it is not implemented in ActuatorsMgr, as its lowest layer does not support it.

❓ How can I stop the default idle motion of the arms?

The default idle motion of the arms can be stopped from the web-user-interface startup panel, by stopping the interaction_profile service.

❓ How can I stop the default gaze behaviour?

You can call the /attention_manager/set_policy ROS service with the policy DISABLED. See attention-management for details.
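A command-line sketch of such a call follows; the service type shown is a guess for illustration only (check attention-management for the real one):

```shell
# Disable the default gaze behaviour.
# 'pal_interaction_msgs/srv/SetPolicy' is a hypothetical service type.
ros2 service call /attention_manager/set_policy \
  pal_interaction_msgs/srv/SetPolicy "{policy: 'DISABLED'}"
```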

❓ How can I control the gaze direction of the robot?

You can publish a target position for the robot to look at on the /look_at topic, or select an attention policy via the /attention_manager/set_policy ROS service.

See controlling_gaze and attention-management for details.

🖥️ Touchscreen#

❓ How can I create ARI touch-screen content so that I can display my application's information, e.g. ROS image topics, text, custom messages?

You can use ARI’s rrlib.js and pallib.js modules, which enable calling ARI and ROS functions from JavaScript. See the robot’s developer manual for more information.

❓ How do I get the default touch-screen back?

Edit .pal/www/webapps/page_cfg/config.json to point to any of the json files in .pal/www/webapps/pages/. The default web is default_page.json:

{"default_page": "default_page.json"}

❓ How does ARI’s touchscreen work?

Your robot’s touchscreen displays a Chrome webpage. Specifically it has a topic called [🤔 ROS2?] /web/go_to where you specify the URL browser link you want it to go to.

It uses Chrome’s Selenium driver: https://chromedriver.chromium.org/getting-started

See touchscreen_mgr for details.
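Assuming /web/go_to carries a plain string (std_msgs/msg/String is an assumption here, not verified against the robot's interface), pointing the touchscreen browser at a URL could look like:

```shell
# Ask the touchscreen browser to navigate to the given URL.
# The message type std_msgs/msg/String is an assumption.
ros2 topic pub --once /web/go_to std_msgs/msg/String "{data: 'http://example.com'}"
```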

❓ How can I visualize what is displayed on the robot’s screen?

ARI’s touchscreen is running the Chrome browser in ‘kiosk’ mode. You can reproduce this setup on your own machine by launching Chrome with the parameter --kiosk. ARI is using Chrome version 63.0.3239.132-1 with no specific extra plugins or software enabled.

In addition, you can connect to http://<robot>-0c:11011/webapps/pages/pal_touch_web_display/ with your browser, so that you can load the same interface that is shown on the chest touchscreen and have it interact with the rest of the robot's infrastructure.