...
| Action | Description |
| --- | --- |
| Animations | These actions currently only work on Nao/Pepper robots. Use the computer-robot JAR to emulate the expected responses to these actions locally if needed. |
| | Stops breathing animations on a chain ([Body, Legs, Arms, LArm, RArm, or Head]). Example: |
| | Starts breathing animations on a chain ([Body, Legs, Arms, LArm, RArm, or Head]). Example: |
| | Performs the given animation. See http://doc.aldebaran.com/2-5/naoqi/motion/alanimationplayer-advanced.html for the available animations per robot. Example: |
| | Makes the robot go to the given posture at the given speed (0-100; defaults to 100% when left out). See http://doc.aldebaran.com/2-5/family/pepper_technical/postures_pep.html#pepper-postures and http://doc.aldebaran.com/2-8/family/nao_technical/postures_naov6.html#naov6-postures for the available postures on the Pepper and the Nao respectively. Example: |
| | See the related action. |
| | Plays the XML file at the given path (e.g. exported from Choregraphe), optionally with an emotion that will be used to modify the given animation. The emotion can be one of: [fear, mad, supersad, alarmed, tense, afraid, angry, annoyed, distressed, frustrated, miserable, sad, gloomy, depressed, bored, droopy, tired, sleepy, aroused, astonished, excited, delighted, happy, pleased, glad, serene, content, atease, satisfied, relaxed, calm]. Example: |
| | Makes the robot go into rest mode. This is the inverse of the wake-up action below. |
| | Sets the colour of the robot's ear LEDs. The colour can be either a predefined single colour (white, red, green, blue, yellow, magenta, or cyan), a predefined combination (rainbow or greenyellow), or an RGB colour given in hexadecimal: 0x00RRGGBB. On the Pepper, the ear LEDs can only display shades of blue. Example: |
| | Sets the colour of the robot's eye LEDs. The colour can be either a predefined single colour (white, red, green, blue, yellow, magenta, or cyan), a predefined combination (rainbow or greenyellow), or an RGB colour given in hexadecimal: 0x00RRGGBB. Example: |
| | Sets the colour of the LEDs on top of the robot's head. This is only possible on the Nao. The colour can be either a predefined single colour (white, red, green, blue, yellow, magenta, or cyan), a predefined combination (rainbow or greenyellow), or an RGB colour given in hexadecimal: 0x00RRGGBB. Example: |
| | Sets the 'idle' mode of the robot. This can be either 'true' (look straight ahead but slightly upwards) or 'straight' (look exactly straight ahead). Example: |
| | Sets the given list of LEDs to the given colours over the given duration (in milliseconds; the default of 0 means instantly). Example: |
| | Disables the 'idle' mode of the robot. Its head will then move in the robot's autonomous mode, which is the default behaviour. |
| | Sets the stiffness of one or more of the robot's joints ([Head, RArm, LArm, RLeg, LLeg] on the Nao and [Head, RArm, LArm, Leg, Wheels] on the Pepper). The stiffness can be between 0 and 100 (i.e. 0% to 100%), and the duration of the change is given in milliseconds (1000ms, i.e. 1 second, by default when left out). Example: |
| | Starts, on the given group of LEDs ([eyes, chest, feet, all]), an animation of the given type ([rotate, blink, alternate]) using the given colours at the given speed (in milliseconds). Example: |
| | Starts recording the robot's motion on the given joints or joint chains. See http://doc.aldebaran.com/2-8/family/nao_technical/bodyparts_naov6.html#nao-chains for the Nao and http://doc.aldebaran.com/2-5/family/pepper_technical/bodyparts_pep.html for the Pepper. The position of each joint will be recorded the given number of times per second (5 times per second by default when left out). Example: |
| | Cancels any ongoing LED animation. |
| | Stops any ongoing motion recording. |
| | Makes the robot turn to the left. Optionally, if the parameter is set to 'true', this will be a small turn. |
| | Makes the robot turn to the right. Optionally, if the parameter is set to 'true', this will be a small turn. |
| | Makes the (Pepper) robot turn the given number of degrees (-360 to 360). |
| | Gets the robot out of rest mode. This is the inverse of the rest action. |
| Audiovisual | These actions work on any supported audio device (a robot, laptop, tablet, etc.). |
| | Clears any audio that was preloaded on an audio device. |
| | Prevents the Dialogflow service from sending the audio it processes to the client (sending is not done by default). |
| | Makes the Dialogflow service send the audio of each fragment to the client. |
| | Preloads the given audio file (either a local file or a remote URL) on the audio device. This prevents the audio device from having to download the file when it is played later. Once the audio file is preloaded, the result is an identifier that can be used to play it. Example: |
| | Directly plays the given audio file (either a local file or a remote URL) on the audio device. Example: |
| | Plays the preloaded audio file associated with the given identifier. Example: |
| | Uses text-to-speech to make the audio device play the given text. The exact result depends on the device that is used. Example: |
| | The same as the action above. Example: |
| | Sets the language to be used by the audio device's text-to-speech engine and Dialogflow's speech-to-text engine. Example: |
| | For influencing the text-to-speech engine parameters on the Nao/Pepper. See http://doc.aldebaran.com/2-5/naoqi/audio/altexttospeech-api.html#ALTextToSpeechProxy::setParameter__ssCR.floatCR for more details. Example: Note: this does not always seem to work fully on the Nao. Some workarounds: in the say actions, the text-to-speech output can be shaped using tags (see http://doc.aldebaran.com/2-5/naoqi/audio/altexttospeech-tuto.html#using-tags-for-voice-tuning); on the robot itself, the default settings can be changed by updating the voiceSettings.xml file, e.g. `<Setting name="defaultVoiceSpeed" description="Voice speed" value="150.0"/>`. |
| | Opens the selected microphone and starts streaming the audio, e.g. to Dialogflow, either until the timeout (in seconds, possibly with decimals) has been reached or until the listening is stopped. Note that when using Dialogflow, by default, you can do this at most 1000 times per 24 hours (see 'Standard Edition - Audio' at https://cloud.google.com/dialogflow/quotas). Example: |
| | Opens the selected camera and starts streaming the video, e.g. to a face recognition or emotion detection service, either until the timeout (in seconds, possibly with decimals) has been reached or until the watching is stopped. Example: |
| | See |
| | See |
| | See |
| | Instructs the face recognition and/or people detection service to send the next camera image. |
| Browser | These actions are for connected browsers, e.g. the Pepper's tablet. The webserver service is always required for these. |
| | Renders the given HTML code in the body of the page on the connected browser. For more information, see Tablets/Phones/Browsers. |
| Google Assistant | These actions are specifically for a connected Google Assistant device (which must be connected through the computer-google-assistant JAR). The dialogflow service is not required in order to receive intents from a Dialogflow webhook (connected to the Google Assistant). |
| | Shows and says the given text on the Google Assistant. This must be a response to some (webhook) intent. Through the second argument, a list of strings can be passed that the assistant will show to the end-user as possible responses. Example: |
| | The same as the action above. Example: |
| | The same as the action above. Example: |
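
Several of the LED actions above accept an RGB colour in the 0x00RRGGBB hexadecimal format. As a minimal sketch of how such a value can be composed from 8-bit components (the `rgb_colour` helper is hypothetical, not part of the framework):

```python
def rgb_colour(red: int, green: int, blue: int) -> str:
    """Pack 8-bit RGB components into the 0x00RRGGBB hexadecimal
    format described for the LED colour actions (hypothetical helper)."""
    for name, value in (("red", red), ("green", green), ("blue", blue)):
        if not 0 <= value <= 255:
            raise ValueError(f"{name} must be in the range 0-255, got {value}")
    # The leading byte is always 00; the remaining bytes are R, G, B.
    return f"0x00{red:02X}{green:02X}{blue:02X}"

print(rgb_colour(255, 165, 0))  # → 0x00FFA500 (orange)
```

Note that on the Pepper's ear LEDs only the blue component has a visible effect, as stated above.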
...
| Percept | Description |
| --- | --- |
| | The text from a button that was pressed in the browser. |
| | A new audio language has been requested, possibly by something external such as the browser. Example: |
| | A new recorded audio file is available. Example: |
| | The current percentage of battery charge left in the robot. Example: |
| | Sent by the coronachecker service once a valid Dutch CoronaCheck QR code has been recognised in the video stream. |
| | Provides information about the selected devices when the agent starts. Example: |
| | An emotion was detected in the given image by the emotion detection service. Example: |
| | Either an event related to some action above, i.e. one of [BreathingDisabled, BreathingEnabled, ClearLoadedAudioDone, EarColourDone, EarColourStarted, EyeColourDone, EyeColourStarted, GestureDone, GestureStarted, HeadColourDone, HeadColourStarted, LanguageChanged, ListeningDone, ListeningStarted, MemoryEntryStored, PlayAudioDone, PlayAudioStarted, PlayMotionDone, PlayMotionStarted, RecordMotionStarted, SetIdle, SetNonIdle, SetSpeechParamDone, TextDone, TextStarted, TurnDone, TurnStarted, UserDataSet, WatchingDone, WatchingStarted], an event related to one of the robot's sensors, i.e. one of [BackBumperPressed, FrontTactilTouched, HandLeftBackTouched, HandLeftLeftTouched, HandLeftRightTouched, HandRightBackTouched, HandRightLeftTouched, HandRightRightTouched, LeftBumperPressed, MiddleTactilTouched, RearTactilTouched, RightBumperPressed] (see http://doc.aldebaran.com/2-5/family/robots/contact-sensors_robot.html), or an event originating from one of the services, i.e. one of [EmotionDetectionDone, EmotionDetectionStarted, FaceRecognitionDone, FaceRecognitionStarted, IntentDetectionDone, IntentDetectionStarted, MemoryEntryStored, PeopleDetectionDone, PeopleDetectionStarted, UserDataSet]. Example: |
| | A face was recognised by the face recognition service. The identifier is a unique number for the given face, starting from 0. The percept will be sent continuously as long as the camera is open and the face is recognised. Example: |
| | One or more of the robot's devices are (too) hot. Example: |
| | An intent with the given name was detected by the Dialogflow service, possibly under the current context. Example: |
| | The rest mode of the robot has changed: it is either resting (awake=false) or it woke up (awake=true). Example: |
| | The robot is plugged in (charging=true) or not (charging=false). Example: |
| | The audio given in the latest audio action. Example: |
| | A response to an earlier action. Example: |
| | The result of an earlier action. Example: |
| | Sent when the people detection service has detected someone; the X and Y coordinates represent the (estimated) centre of the person's face in the image. The percepts will be sent continuously as long as the camera is open and someone is detected. |
| | A new picture file is available. Example: |
| | The robot has taken the given posture. Example: |
| | Sent by the sentiment analysis service. Example: |
| | A quick, direct indication of the text spoken by an end-user while intent detection is running; the final text is given in the resulting intent percept. Example: |
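
The event percept above always carries one of a fixed set of names (e.g. TextStarted, TextDone, ListeningDone). As an illustrative sketch only — the `event(Name)` string form and the dispatcher below are assumptions, not the framework's actual API — such events could be routed to callbacks like this:

```python
# Hypothetical sketch: routing the event percepts listed above to callbacks.
# The "event(Name)" percept syntax and this registry are assumptions for
# illustration; they are not part of the framework itself.
from typing import Callable, Dict

handlers: Dict[str, Callable[[], None]] = {}

def on(event_name: str, callback: Callable[[], None]) -> None:
    """Register a callback for an event name such as 'TextDone'."""
    handlers[event_name] = callback

def dispatch(percept: str) -> bool:
    """Handle a percept of the assumed form 'event(Name)'.

    Returns True if a registered handler was invoked, False otherwise."""
    if percept.startswith("event(") and percept.endswith(")"):
        name = percept[len("event("):-1]
        if name in handlers:
            handlers[name]()
            return True
    return False

log = []
on("TextDone", lambda: log.append("speech finished"))
dispatch("event(TextDone)")
print(log)  # → ['speech finished']
```

Pairing each Started event with its Done counterpart (e.g. ListeningStarted/ListeningDone) in this style is a simple way to wait for an action to complete before sending the next one.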