Frequently Asked Questions#
Developing applications#
❓ How to wrap my code in a ROS Noetic container and upload it to the robot
First of all, create a Dockerfile with the dependencies that you need, using ros:noetic as the base image. Once you have a Dockerfile, build your image:

```shell
docker build -t <IMAGE NAME> .
```

Run the container and add the changes/packages that you need. Test that it works properly, e.g. using an ARI docker for simulation if needed.
Push the image to a docker registry accessible from the robot. For instance:

```shell
$ docker login registry.gitlab.com
$ docker push registry.gitlab.com/myuser/mydocker
```

Once the image is in your registry, pull it from ARI:

```shell
$ ssh pal@ari-Xc
$ docker login registry.gitlab.com
$ docker pull registry.gitlab.com/myuser/mydocker
```

You can then run the container as you would from your laptop/PC.
❓ How do I communicate between containers?
All communication between containers is done through regular ROS msgs, topics, actions and services that have been defined for each module. All containers will be using ROS, and will be able to see and directly communicate with other containers (and with the Nvidia Jetson if you are using it).
Make sure, however, that you start the docker image with the `--net host` option, so that the container sees the ROS core.
Links to ROS examples, depending on whether you use C++ or Python:
C++: http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28c%2B%2B%29
Python: http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28python%29
It is recommended to complete all the ROS tutorials available in the coding language you use.
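As a quick sanity check of the setup above — assuming a roscore is running and both containers were started with `--net host` — you can exchange messages between two containers using the standard ROS command-line tools (the `/chatter` topic name here is just an illustration):

```shell
# In container A: publish a string at 1 Hz on an example topic
rostopic pub -r 1 /chatter std_msgs/String "data: 'hello from container A'"

# In container B: print every message received on that topic
rostopic echo /chatter
```

If the messages appear in container B, the containers share the ROS network correctly.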
❓ What method should I use to debug inside a container?
Accessing the container (e.g. with `docker exec -it <container name> bash`) and using `gdb` or similar tools as normal from inside the container, as if it were a standard computer, is recommended.
❓ How can I automatically launch an application at start up?
Applications can be automatically launched at start-up. See this documentation section.
ARI hardware#
❓ How can I record audio data using ROS?
Right now ARI publishes the audio on the following topic: /audio. You can use rosbag to record any kind of ROS topic: http://wiki.ros.org/rosbag. Namely, enter the robot and, in the desired directory:

```shell
ssh pal@ari-Xc
rosbag record -O audio.bag /audio
```

where `audio.bag` is the name of the bag that will be stored in the directory where the command is run.
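Once recorded, the bag can be inspected and replayed on any machine with ROS installed, using the standard rosbag tools:

```shell
rosbag info audio.bag   # list duration, topics and message counts
rosbag play audio.bag   # republish the recorded /audio messages
```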
❓ How to change frame rate of fisheye cameras?
The fisheye cameras are connected to the Nvidia Jetson and run in a docker container (one per camera). In order to make them visible to ARI’s main PC and externally, we have a ROS bridge, corresponding to the following launch file: https://github.com/pal-robotics/ari_navigation/blob/melodic-devel/ari_rgbd_sensors/launch/elp_cameras_relay.launch
As you can see there, we remap the input topics from the Jetson and also specify the frame rate, in this case 5 Hz:

```xml
<node name="fisheye_rear_relay" type="throttle" pkg="topic_tools" args="messages /rear_fisheye_camera/image_raw/compressed 5 /rear_camera/fisheye/image_raw/compressed" />
```

You can pull this package, adjust the frame rate and deploy it on the robot (the package would then be in `deployed_ws/share/ari_rgbd_sensors`), or alternatively adjust the permanent partition of the robot (`/opt/pal/ferrum/share/ari_rgbd_sensors`).
❓ What do the LED colors/patterns mean?
Check the Meaning of ARI LEDs colors and patterns page for explanations of the different LEDs.
Robot management#
❓ What would be a typical usage length before the batteries run out of power?
ARI has a battery life of approximately 8-12 hours, and needs about 5 hours to be fully charged. Note that the autonomy is significantly reduced if the robot needs to move frequently.
❓ Can we change ARI’s name once set?
In theory, no. However, if this is a feature you would be interested in, the team at PAL Robotics could look into it; contact support.
❓ Can we work on the robot while the red emergency button is pressed, for example, for remote work?
Yes, you will be able to connect to its cameras and work with perception, for example. However, ARI won’t produce any speech output, nor will it move its joints (arms, head or base).
We nevertheless suggest always having the emergency button unpressed when working with ARI.
❓ How can I find the IP address of ARI?
By default, all ARIs are configured with the static IP address `10.68.0.1`. You can connect to the robot by either specifying ARI’s IP address or its hostname `ari-SNc`. See ARI networking for details.
❓ Which networking ports are used/required by ARI?
Check the Data management, security, data privacy on ARI page.
❓ How to change ARI’s language?
Refer to the 🚧 Internationalisation and language support page.
❓ What languages are supported by ARI?
ARI supports English and Spanish by default. It might support additional languages depending on your purchase options. Check 🚧 Internationalisation and language support for details.
Gestures and motions#
❓ Is there some kind of ‘self collision avoidance’ check so we can’t make the robot ‘hurt’ itself with its arms?
ARI features a whole body motion controller that protects against self-collisions when automatically generating gestures like pointing. See whole_body_motion_control for details.
However, if you create your own pre-recorded motions using e.g. the motion builder, you are responsible for avoiding self-collisions as the robot will ‘blindly’ execute the pre-recorded motion.
❓ How can I control ARI’s head orientation with absolute values?
You can publish a target position for the robot to look at on the /look_at topic.
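As an illustration, and assuming the topic takes a `geometry_msgs/PointStamped` target expressed in a robot frame such as `base_link` (verify the actual message type on your software version first), a publish could look like:

```shell
# Confirm the message type the topic actually expects
rostopic info /look_at

# Hypothetical example: make ARI look at a point 1 m ahead, 1.5 m up
rostopic pub --once /look_at geometry_msgs/PointStamped \
  "{header: {frame_id: 'base_link'}, point: {x: 1.0, y: 0.0, z: 1.5}}"
```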
❓ Is it possible to control the arm/head joints in velocity?
The Dynamixel motors do have a velocity control mode; however, it is not implemented in ActuatorsMgr, as its lowest layer does not support it.
Speech and language processing#
❓ How to change the ‘wake-up’ (or ‘sleep’) keywords?
You can easily set a custom regular expression as wake-up/sleep keyword. See How to change the wake-up/sleep keywords? for details.
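The configuration itself is covered in the linked page; purely as an illustration of the kind of regular expression involved (this pattern is made up, not ARI’s actual default), a wake-up expression matching a few phrasings could look like:

```shell
# Hypothetical wake-up pattern: "hey", "hi" or "hello" followed by "ari"
pattern='^(hey|hi|hello)[, ]+ari'
echo "Hey ARI" | grep -qiE "$pattern" && echo matched   # prints "matched"
```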
Touchscreen#
❓ How can I create ARI touch-screen content so that I can output my application’s information, e.g. ROS image topics, text, custom messages?
You can use ARI’s `rrlib.js` and `pallib.js` modules, which enable calling ARI and ROS functions from JavaScript. See the robot’s developer manual for more information.
❓ How do I get the default touch-screen back?
Edit `.pal/www/webapps/page_cfg/config.json` to point to any of the json files in `.pal/www/webapps/pages/`. The default web page is `default_page.json`:

```json
{"default_page": "default_page.json"}
```
❓ How does ARI’s touchscreen work?
ARI’s touchscreen displays a Chrome webpage. Specifically, it has a topic called /web/go_to where you specify the URL you want the browser to go to.
It uses Chrome’s Selenium driver: https://chromedriver.chromium.org/getting-started
See using-the-touchscreen for details.
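As an illustration — the actual message type may differ, so check it first — sending the on-board browser to a page could look like:

```shell
# Confirm the message type the topic expects
rostopic info /web/go_to

# Hypothetical example, assuming a simple string payload carries the URL
rostopic pub --once /web/go_to std_msgs/String "data: 'http://example.com'"
```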
❓ How can I visualize what is displayed on the robot’s screen?
ARI’s touchscreen runs the Chrome browser in ‘kiosk’ mode. You can reproduce this setup on your own machine by launching Chrome with the `--kiosk` parameter. ARI uses Chrome version 63.0.3239.132-1 with no specific extra plugins or software enabled.
In addition, you can connect to http://ari-Xc:11011/webapps/pages/pal_touch_web_display/ with your browser to load the same infrastructure that is shown on the chest touchscreen and have it interact with the rest of the robot infrastructure.
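For example, on a Linux machine with Chrome installed (the binary name varies by distribution; `google-chrome` is assumed here):

```shell
# Mirror the robot's kiosk setup locally, pointing at the robot's web display
google-chrome --kiosk http://ari-Xc:11011/webapps/pages/pal_touch_web_display/
```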