Interaction skills¶
This page lists the system skills (i.e., skills installed by default) related to 😄 Interaction. You can use these skills directly in your tasks and mission controller.
ask_human_for_help¶
Interface: /skill/ask_human_for_help
Message type: interaction_skills/action/AskHumanForHelp
Ask a human for help
Input parameters¶
question_to_human (string, required): The question to ask the human.
person_id (string array): The preferred person IDs to ask for help. These humans are prioritized when asking for help. If left empty, all tracked humans are considered.
Quick snippets¶
$ ros2 action send_goal /skill/ask_human_for_help interaction_skills/action/AskHumanForHelp # then press Tab to complete the message prototype
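For example, using the question_to_human parameter documented above (press Tab to confirm the exact prototype):
$ ros2 action send_goal /skill/ask_human_for_help interaction_skills/action/AskHumanForHelp "{question_to_human: 'Could you open the door for me?'}"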
How to use in your code¶
See code samples for the corresponding /skill/ask_human_for_help: code samples
To be added soon
You can call this skill from QML using the following code snippet. See Building a touchscreen interface via ROS and QML to learn more.
import Ros 2.0
// ...
AskHumanForHelpSkill {
    id: mySkill
    onResult: {
        console.log("Skill result: " + result);
    }
    onFeedback: {
        console.log("Skill feedback: " + feedback);
    }
}
// Call the skill
mySkill.ask_human_for_help(question_to_human);
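Official Python samples are still to be added; in the meantime, a minimal rclpy sketch along the following lines should work. The interaction_skills.action import path follows standard ROS 2 interface conventions, and the goal fields match the parameters documented above; verify the exact prototype with ros2 interface show interaction_skills/action/AskHumanForHelp.

import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node

# Assumed import path, following standard ROS 2 interface conventions
from interaction_skills.action import AskHumanForHelp

rclpy.init()
node = Node('ask_human_client')
client = ActionClient(node, AskHumanForHelp, '/skill/ask_human_for_help')

goal = AskHumanForHelp.Goal()
goal.question_to_human = 'Could you open the door for me?'
# person_id left empty: all tracked humans are considered

client.wait_for_server()
future = client.send_goal_async(goal)
rclpy.spin_until_future_complete(node, future)
rclpy.shutdown()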
do_led_effect¶
Interface: /skill/do_led_effect
Message type: interaction_skills/action/DoLedEffect
Perform light effects using the robot’s LEDs.
Input parameters¶
groups (string array, default: []): The LED groups to use for the effect (e.g., ear_leds, back_leds). An empty list means the effect is applied to all LED groups.
effect (string, default: solid_color): The selected LED effect, one of:
solid_color: applies up to two solid colors side-by-side to the specified LED groups, with the primary color applied to the first partition of the LED group and the secondary color applied to the second partition.
rainbow: lights the LED groups with a rainbow effect that moves through the LED group cyclically.
fade: fades between two colors cyclically. It starts with the primary color, fades to the secondary color over the first partition, then fades back to the primary color over the second partition.
blink: blinks the LED groups between two colors. It applies a solid primary color during the first partition of a cycle, then a solid secondary color during the second partition.
flow: displays a loading-like effect: the partition ratio of LEDs colored with the primary color moves through the LED group at a constant speed, traversing the entire LED group in one cycle, while the secondary color fills the rest of the LED group.
duration (float, default: 0.0): Total duration of the effect before the action completes. A zero or negative value means the effect continues indefinitely until the action is canceled.
color (interaction_skills/msg/LedColor): The primary color for the effect. Ignored for the rainbow effect.
secondary_color (interaction_skills/msg/LedColor): The secondary color for the effect. Ignored for the rainbow effect.
cycle (float, default: 1.0): Duration in seconds of a single cycle of the effect. Ignored for the solid_color effect.
partition (float, default: 1.0): For effects with two phases in a cycle (or two spatially separated partitions, as in solid_color), the proportion of the first one, in the range [0., 1.]. Ignored for the rainbow effect.
Quick snippets¶
$ ros2 action send_goal /skill/do_led_effect interaction_skills/action/DoLedEffect # then press Tab to complete the message prototype
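For example, a five-second rainbow effect across all LED groups (rainbow ignores the color parameters):
$ ros2 action send_goal /skill/do_led_effect interaction_skills/action/DoLedEffect "{effect: 'rainbow', duration: 5.0}"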
How to use in your code¶
See code samples for the corresponding /skill/do_led_effect: code samples
To be added soon
You can call this skill from QML using the following code snippet. See Building a touchscreen interface via ROS and QML to learn more.
import Ros 2.0
// ...
DoLedEffectSkill {
    id: mySkill
    onResult: {
        console.log("Skill result: " + result);
    }
    onFeedback: {
        console.log("Skill feedback: " + feedback);
    }
}
// Call the skill
mySkill.do_led_effect();
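As for ask_human_for_help above, a hedged rclpy sketch; the import path is an assumption and the goal fields follow the parameters documented above.

import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from interaction_skills.action import DoLedEffect  # assumed import path

rclpy.init()
node = Node('led_effect_client')
client = ActionClient(node, DoLedEffect, '/skill/do_led_effect')

goal = DoLedEffect.Goal()
goal.effect = 'rainbow'  # color and secondary_color are ignored for rainbow
goal.duration = 5.0      # complete the action after five seconds
goal.groups = []         # empty: apply to all LED groups

client.wait_for_server()
future = client.send_goal_async(goal)
rclpy.spin_until_future_complete(node, future)
rclpy.shutdown()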
look_for_human¶
Interface: /skill/look_for_human
Message type: interaction_skills/action/LookFor
Search and localize specific humans
Input parameters¶
patterns (string array, default: []): List of RDF patterns that identify the human to look for. Exactly one variable is expected in the patterns. If left empty, all visible humans are returned.
Quick snippets¶
$ ros2 action send_goal /skill/look_for_human interaction_skills/action/LookFor # then press Tab to complete the message prototype
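For example, with an empty patterns list, all visible humans are returned:
$ ros2 action send_goal /skill/look_for_human interaction_skills/action/LookFor "{patterns: []}"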
How to use in your code¶
See code samples for the corresponding /skill/look_for_human: code samples
To be added soon
You can call this skill from QML using the following code snippet. See Building a touchscreen interface via ROS and QML to learn more.
import Ros 2.0
// ...
LookForHumanSkill {
    id: mySkill
    onResult: {
        console.log("Skill result: " + result);
    }
    onFeedback: {
        console.log("Skill feedback: " + feedback);
    }
}
// Call the skill
mySkill.look_for_human();
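A hedged rclpy sketch, under the same assumptions as the earlier Python examples (import path and goal fields inferred from the documented parameters):

import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from interaction_skills.action import LookFor  # assumed import path

rclpy.init()
node = Node('look_for_human_client')
client = ActionClient(node, LookFor, '/skill/look_for_human')

goal = LookFor.Goal()
goal.patterns = []  # empty: return all visible humans

client.wait_for_server()
future = client.send_goal_async(goal)
rclpy.spin_until_future_complete(node, future)
rclpy.shutdown()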
look_for_object¶
Interface: /skill/look_for_object
Message type: interaction_skills/action/LookFor
Search and localize specific objects
Input parameters¶
patterns (string array, default: []): List of RDF patterns that identify the objects to look for. Exactly one variable is expected in the patterns.
Quick snippets¶
$ ros2 action send_goal /skill/look_for_object interaction_skills/action/LookFor # then press Tab to complete the message prototype
How to use in your code¶
See code samples for the corresponding /skill/look_for_object: code samples
To be added soon
You can call this skill from QML using the following code snippet. See Building a touchscreen interface via ROS and QML to learn more.
import Ros 2.0
// ...
LookForObjectSkill {
    id: mySkill
    onResult: {
        console.log("Skill result: " + result);
    }
    onFeedback: {
        console.log("Skill feedback: " + feedback);
    }
}
// Call the skill
mySkill.look_for_object();
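A hedged rclpy sketch, under the same assumptions as the earlier Python examples. The RDF pattern shown is purely illustrative; the vocabulary available depends on your robot's knowledge base.

import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from interaction_skills.action import LookFor  # assumed import path

rclpy.init()
node = Node('look_for_object_client')
client = ActionClient(node, LookFor, '/skill/look_for_object')

goal = LookFor.Goal()
# Hypothetical pattern with exactly one variable; adapt to your ontology
goal.patterns = ['?obj rdf:type Cup']

client.wait_for_server()
future = client.send_goal_async(goal)
rclpy.spin_until_future_complete(node, future)
rclpy.shutdown()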
set_expression¶
Interface: /skill/set_expression
Message type: interaction_skills/msg/SetExpression
Sets the expression of the robot. This might include changing the robot’s face, body posture, or other expressive features.
You can either set the expression by name, or specify the valence and arousal of the expression.
Input parameters¶
expression.expression (string): Name of the expression. See List of expressions for details. One of: neutral, angry, sad, happy, surprised, disgusted, scared, pleading, vulnerable, despaired, guilty, disappointed, embarrassed, horrified, skeptical, annoyed, furious, suspicious, rejected, bored, tired, asleep, confused, amazed, excited.
expression.valence (float): The desired valence of the expression, ranging from -1 (very negative) to 1 (very positive).
expression.arousal (float): The desired arousal of the expression, ranging from -1 (very calm) to 1 (very excited).
Quick snippets¶
$ ros2 topic pub /skill/set_expression interaction_skills/msg/SetExpression # then press Tab to complete the message prototype
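For example, to set a named expression (the nested field layout is inferred from the parameter names documented above; press Tab to confirm the exact prototype):
$ ros2 topic pub --once /skill/set_expression interaction_skills/msg/SetExpression "{expression: {expression: 'happy'}}"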
How to use in your code¶
See code samples for the corresponding /skill/set_expression: code samples
To be added soon
You can call this skill from QML using the following code snippet. See Building a touchscreen interface via ROS and QML to learn more.
import Ros 2.0
// ...
SetExpressionSkill {
    id: mySkill
}
// Call the skill
mySkill.set_expression();
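Since this skill is topic-based, a plain rclpy publisher is enough. A hedged sketch; the import path and the nested expression field are assumptions inferred from the documented parameter names.

import rclpy
from rclpy.node import Node
from interaction_skills.msg import SetExpression  # assumed import path

rclpy.init()
node = Node('expression_setter')
pub = node.create_publisher(SetExpression, '/skill/set_expression', 1)

msg = SetExpression()
msg.expression.expression = 'happy'  # set by name; valence/arousal left unset

pub.publish(msg)
rclpy.spin_once(node, timeout_sec=0.5)  # give the middleware time to deliver
rclpy.shutdown()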
look_at¶
Interface: /skill/look_at
Message type: interaction_skills/action/LookAt
Defines the gazing direction of the robot. This skill can be used either to look at a specific point in space (a ROS tf frame) or to set a generic gaze policy, such as looking at people around the robot.
Using the glance policy, you can also briefly look at a specific point in space before returning to the previous gaze policy.
Input parameters¶
policy (string): One of the policies below, or an empty string (in that case, the robot looks at the point in space specified with the target parameter). Available policies:
random: randomly look around, with short fixations.
social: look around for faces, with fixations on detected faces.
glance: glance at a specific point in space, then return to the previous gaze policy. target must be specified.
auto: automatic gaze policy, implementation-dependent. On PAL interactive robots, equivalent to social.
reset: reset the gaze of the robot to looking straight ahead.
target (geometry_msgs/msg/PointStamped): tf frame to track. Ignored if the policy parameter is set to one of the predefined policies.
Quick snippets¶
$ ros2 action send_goal /skill/look_at interaction_skills/action/LookAt # then press Tab to complete the message prototype
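For example, to enable the social gaze policy:
$ ros2 action send_goal /skill/look_at interaction_skills/action/LookAt "{policy: 'social'}"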
How to use in your code¶
See code samples for the corresponding /skill/look_at: code samples
To be added soon
You can call this skill from QML using the following code snippet. See Building a touchscreen interface via ROS and QML to learn more.
import Ros 2.0
// ...
LookAtSkill {
    id: mySkill
    onResult: {
        console.log("Skill result: " + result);
    }
    onFeedback: {
        console.log("Skill feedback: " + feedback);
    }
}
// Call the skill
mySkill.look_at();
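A hedged rclpy sketch, under the same assumptions as the earlier Python examples. The target frame shown is hypothetical; any tf frame published on your robot can be used.

import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from interaction_skills.action import LookAt  # assumed import path

rclpy.init()
node = Node('look_at_client')
client = ActionClient(node, LookAt, '/skill/look_at')

goal = LookAt.Goal()
goal.policy = 'glance'                     # briefly look, then resume the previous policy
goal.target.header.frame_id = 'base_link'  # hypothetical frame; use any tf frame
goal.target.point.x = 1.0                  # 1 m in front of the frame origin

client.wait_for_server()
future = client.send_goal_async(goal)
rclpy.spin_until_future_complete(node, future)
rclpy.shutdown()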