Frequently Asked Questions#
💬 Communication#
❓ How to change the robot’s language?
You can set the default language of your robot from the Web user interface. You can also set the /pal/language parameter to change the currently active language. See Internationalisation and language support for details.
❓ How to change the ‘wake-up’ (or ‘sleep’) keywords?
You can easily set a custom regular expression as wake-up/sleep keyword. See How to change the wake-up/sleep keywords? for details.
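Since the keyword is a regular expression, you can prototype and test it offline with plain Python before configuring it on the robot. The pattern below is purely illustrative (it is not the robot's actual default):

```python
import re

# Hypothetical wake-up pattern: matches "hey ari", "hi ari" or "hello ari",
# case-insensitively. The actual keyword is configurable on the robot.
wake_up_pattern = re.compile(r"\b(hey|hi|hello)\s+ari\b", re.IGNORECASE)

for utterance in ["Hey ARI, what time is it?", "good morning"]:
    matched = wake_up_pattern.search(utterance) is not None
    print(f"{utterance!r}: wake-up={matched}")
```

Validating the expression this way avoids deploying a pattern that never (or always) triggers.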
❓ What is the default wake-up word?
Check Wake-up word detector for the default wake-up/sleep keywords.
❓ What languages are supported by my robot?
You can access the list of languages currently available on your robot from the Web user interface. Alternatively, you can read the ROS parameter tts/supported_languages. If you wish to add additional languages, contact us.
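For instance, from a terminal with the robot's ROS environment sourced, the parameter can be read directly (a sketch; the exact output format depends on your robot's configuration):

```shell
# Print the languages currently available for text-to-speech
rosparam get /tts/supported_languages
```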
📜 Developing applications#
❓ Can I compile my nodes in a ROS Docker container, and run them on the robot?
Yes, Docker is installed on your robot and you can run any image you wish.
Once you have the Docker image in your repository, pull it on your robot. For instance:

```shell
ssh pal@<robot>-Xc
docker login registry.gitlab.com
docker pull registry.gitlab.com/myuser/mydocker
```

You can then run the container as you would from your laptop/PC.
❓ What method should I use to debug inside a container?
We recommend accessing the container (e.g. with `docker exec -it <container name> bash`) and using `gdb` or similar tools as normal from inside the container, as if it were a standard computer.
❓ How do I communicate between containers?
All communication between containers is done through regular ROS msgs, topics, actions and services that have been defined for each module. All containers will be using ROS, and will be able to see and directly communicate with other containers (and with the Nvidia Jetson if you are using it).
Make sure however that you are starting the Docker image with the `--net host` option, so that the container sees the ROS core.

Links to ROS examples, depending on whether you use C++ or Python:
C++: http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28c%2B%2B%29
Python: http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28python%29
It is recommended to complete all the ROS tutorials available in the coding language you use.
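As a sketch, reusing the image name pulled above (which is only an example), the container can be started with host networking so it shares ROS discovery with the robot:

```shell
# Run the container on the host network so it can reach the ROS core
docker run -it --net host registry.gitlab.com/myuser/mydocker bash
# Inside the container, the robot's topics should now be visible:
rostopic list
```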
❓ How can I automatically launch an application at start up?
Applications can be automatically launched at start-up. See this documentation section.
❓ Can I connect to multiple robots at the same time using the Cyclone DDS configuration?
This is not supported for the time being.
❓ Why do the topics take time to appear?
On unreliable networks, the P2P discovery process may take some time because the robot has many nodes performing discovery simultaneously.
❓ Why can I not visualize the camera in real-time?
This depends heavily on the speed of your infrastructure, but take into account that Wi-Fi is an unreliable medium and the camera driver is publishing at 30Hz. The throughput can be improved by setting up your image subscription with the Best Effort QoS policy, which prevents a lot of resending and ACKs.
You should NEVER subscribe to high-throughput topics using a Reliable QoS policy, as it can potentially flood your network. In that regard, note that the standard `usb_cam` webcam driver wrongly uses the `RELIABLE` QoS policy. We recommend using `gscam` with `use_sensor_data_qos:=True` instead if you want to stream your webcam to a ROS 2 network.
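For a quick check from the command line, recent ROS 2 releases let you request a Best Effort subscription directly when echoing a topic (the topic name below is illustrative; substitute your camera's image topic):

```shell
# Subscribe with Best Effort reliability to avoid retransmissions and ACKs over Wi-Fi
ros2 topic echo --qos-reliability best_effort /head_front_camera/image_raw
```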
❓ Why can’t I open RViz or run terminator from my container?
By default Docker containers do not have access to the X server (required by RViz or terminator). To allow this, run `xhost +` on your host machine before starting your Docker container.
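A common sketch for GUI support follows (the image name is the illustrative one used earlier; note that `xhost +` disables X access control entirely, so the narrower `xhost +local:docker` is preferable when it works for you):

```shell
# Allow local containers to talk to the X server
xhost +local:docker
# Forward the display and the X socket into the container
docker run -it --net host \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    registry.gitlab.com/myuser/mydocker bash
# ... then launch RViz or terminator from inside the container
```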
🏁 Getting started#
❓ Where do I find the serial number of my robot?
You can find the serial number on the robot’s identification sticker. See Robot identification for details.
⚙️ Robots hardware#
❓ How can I record audio data using ROS?
Your robot publishes the audio on the /audio_in/raw topic. You can use rosbag to record any kind of ROS topic: http://wiki.ros.org/rosbag. Namely, enter the robot and, in the desired directory:

```shell
ssh pal@<robot>-0c
rosbag record -O audio.bag /audio_in/raw
```

where `audio.bag` is the name of the bag that will be stored in the directory where the command is run.
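Afterwards you can inspect and replay the recording with the standard rosbag tools:

```shell
# Show a summary of the recorded topics, message counts and duration
rosbag info audio.bag
# Replay the recorded audio messages on their original topic
rosbag play audio.bag
```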
❓ How to change the frame rate of the fisheye cameras?
The fisheye cameras are connected to the Nvidia Jetson and each runs in a Docker container (one per camera). In order to make them visible to the robot's main PC and externally, we have a ROS bridge, corresponding to the following launch file: https://github.com/pal-robotics/ari_navigation/blob/melodic-devel/ari_rgbd_sensors/launch/elp_cameras_relay.launch

As you can see there, we remap the input topics from the Jetson and also specify the frame rate, in this case 5 Hz:

```xml
<node name="fisheye_rear_relay" type="throttle" pkg="topic_tools" args="messages /rear_fisheye_camera/image_raw/compressed 5 /rear_camera/fisheye/image_raw/compressed" />
```

You can pull this package, adjust the frame rate and deploy it on the robot (the package would then be in `deployed_ws/share/ari_rgbd_sensors`), or alternatively adjust the permanent partition of the robot (`/opt/pal/ferrum/share/ari_rgbd_sensors`).
❓ My NVidia Jetson accelerator is not responding to ping, what can I do?
Please refer to Using the Nvidia Jetson accelerator.
❓ I’m unable to see the messages published in a topic or an action from within the Jetson
Please refer to Using the Nvidia Jetson accelerator to check the best practices to approach this issue.
❓ What do the LED colors/patterns mean?
Check the LEDs API page for explanations about the meaning of the different LEDs.
🛠 Robot management#
❓ Can I add a new ROS workspace to the robot environment?
In cases where the workspace resolution process needs to be changed, the file `/usr/bin/init_pal_env.sh` can be modified to adapt the environment of the startup process.
❓ What would be a typical usage length before the batteries run out of power?
ARI has a battery life of approximately 8-12 hours, and needs about 5 hours to be fully charged. Note that the autonomy is significantly reduced if the robot needs to move frequently.
❓ How can I find the IP address of my robot?
The IP address of a robot depends on the network mode that is configured. Please refer to Troubleshooting for instructions to retrieve the IP address.
On voice-enabled robots, you can also simply ask the robot: “what is your IP address?” See Network configuration for details.
❓ What networking ports are used/required by the robot?
Check the Security page.
❓ Can we change the robot’s name once set?
In theory no; however, if this is a feature you would be interested in, the team at PAL Robotics could look into it. Contact support.
❓ How can I list, stop and start the robot’s background services (i.e. modules)?
The running robot background services (also called modules) are listed (and can be started/stopped) in the Web user interface. You can also use command-line tools or the ROS API.
❓ How to update or upgrade my robot?
See System upgrade.
❓ Can we work on the robot while the red emergency button is pressed, for example, for remote work?
Yes, you will be able to connect to its cameras and work with perception, for example. However, your robot won’t produce any speech output, nor will it move its joints (arms, head or base).
We nevertheless suggest always keeping the emergency button unpressed when working with the robot.
❓ The robot does not respond to gamepad commands.
There may be several reasons why the robot is not responding. A troubleshooting guide can be found in the gamepad section.
👋 Gestures and motions#
❓ How can I control the robot’s head orientation with absolute values?
You can publish a target position for the robot to look at on the /look_at topic.
See How to set the gaze direction? for details and examples.
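As a hypothetical sketch, assuming /look_at accepts a geometry_msgs/PointStamped target expressed in one of the robot's frames (verify the actual message type on your robot first), a target can be published from the command line:

```shell
# Check the actual message type of the topic before publishing
rostopic info /look_at
# Publish a point roughly 1 m in front of the robot, slightly above head height
rostopic pub /look_at geometry_msgs/PointStamped \
  '{header: {frame_id: base_link}, point: {x: 1.0, y: 0.0, z: 1.5}}' -1
```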
❓ How can I control the gaze direction of the robot?
See How to set the gaze direction? for details and examples.
❓ Is there some kind of ‘self collision avoidance’ check so we can’t make the robot ‘hurt’ itself with its arms?
Your robot features a whole body motion controller that protects against self-collisions when automatically generating gestures like pointing. See 🚧 Introduction to whole body motion control – when to use it for details.
However, if you create your own pre-recorded motions using e.g. the motion builder, you are responsible for avoiding self-collisions as the robot will ‘blindly’ execute the pre-recorded motion.
❓ How can I stop the default gaze behaviour?
You can call the /attention_manager/set_policy ROS service with the policy `DISABLED`. See Controlling the attention and gaze of the robot for details.
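A command-line sketch follows; the request field name `policy` is a guess, so inspect the service type on your robot before calling it:

```shell
# Inspect the service's request/response definition first
rosservice type /attention_manager/set_policy
# Then disable the attention policy (field name assumed here)
rosservice call /attention_manager/set_policy "policy: 'DISABLED'"
```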
❓ How can I stop the default idle motion of the arms?
The default idle motion of the arms can be stopped from the Web User Interface startup panel, by stopping the `interaction_profile` service.
❓ Is it possible to control the arm/head joints in velocity?
The Dynamixel motors do have a velocity control mode; however, it is not implemented in ActuatorsMgr, as its lowest layer does not support it.
🖥️ User interfaces#
❓ How can I visualize what is displayed on the robot’s screen?
ARI’s touchscreen runs the Chrome browser in ‘kiosk’ mode. You can reproduce this setup on your own machine by launching Chrome with the `--kiosk` parameter. ARI uses Chrome version 63.0.3239.132-1 with no specific extra plugins or software enabled.

In addition, you can open `http://<robot>-0c:11011/webapps/pages/pal_touch_web_display/` in your browser to load the same web application that is displayed on the chest touchscreen, and have it interact with the rest of the robot infrastructure.
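For instance, on a Linux machine with Chrome installed, the touchscreen setup can be reproduced locally (the Chrome binary name varies by distribution):

```shell
# Full-screen kiosk pointed at the robot's web display page
google-chrome --kiosk http://<robot>-0c:11011/webapps/pages/pal_touch_web_display/
```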