...
Init Parameters
Parameter | Description |
---|---|
| The IP address of the server running the CBSR cloud (or the Docker version). |
| The (absolute or relative) path to the Dialogflow authentication JSON file. Example: Warning: never push such an authentication file to a public repository. |
| The name of the Dialogflow agent. Example: |
| The language the Dialogflow agent should use. |
| The (absolute or relative) path to the Google TTS authentication JSON file. This file can coincide with the Dialogflow authentication file. Example: Warning: never push such an authentication file to a public repository. |
| The voice Google Text-to-Speech should use. Can be chosen from https://cloud.google.com/text-to-speech/docs/voices . |
| The text-to-speech voice of the agent. Can be chosen between |
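Because the parameter identifiers in the table above are not shown in this rendering, the following Python sketch only illustrates what an initialisation configuration based on these descriptions might look like; every key name and value is a hypothetical placeholder, not a documented identifier.

```python
# Hypothetical sketch of an initialisation configuration; none of these key
# names are documented parameter identifiers -- they only mirror the
# descriptions in the table above.
config = {
    "server_ip": "192.168.0.10",                    # IP of the machine running the CBSR cloud (or the Docker version)
    "dialogflow_key_file": "keys/dialogflow.json",  # Dialogflow authentication JSON (never commit this file)
    "dialogflow_agent": "my-agent",                 # name of the Dialogflow agent
    "dialogflow_language": "en-US",                 # language the Dialogflow agent should use
    "google_tts_key_file": "keys/google_tts.json",  # Google TTS authentication JSON (may be the same file)
    "google_tts_voice": "en-US-Standard-C",         # voice from https://cloud.google.com/text-to-speech/docs/voices
}
```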
Actions
In principle, all actions are durative: an agent does not wait for an action to actually complete, but should instead use the corresponding events to determine when it has started and finished, as illustrated in the sketch below.
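The following minimal Python sketch illustrates this pattern. The `connector` object, its `send_action` method, and the callback registration are hypothetical stand-ins for whatever client API is used; only the event name `TextDone` is taken from the Events table further below.

```python
import queue

# Sketch of the durative-action pattern (hypothetical client API).
events = queue.Queue()

def on_event(name):
    """Callback that the (hypothetical) connector invokes for every incoming event."""
    events.put(name)

def say_and_wait(connector, text):
    connector.send_action("say", text)   # returns immediately; the action runs asynchronously
    while events.get() != "TextDone":    # block until the completion event arrives
        pass
```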
Action | Description | ||
---|---|---|---|
Animations | These actions currently only work on Nao/Pepper robots. Use the computer-robot JAR to emulate the expected responses to these actions locally if needed. | ||
| Starts/Stops breathing animations on the robot, based on the truth value of the parameter (1 enables breathing, 0 disables it) A Example: | ||
| Performs the given animation. See http://doc.aldebaran.com/2-5/naoqi/motion/alanimationplayer-advanced.html for the available animations per robot. A Example: | ||
| Make the robot go to the given posture at the given speed (0-100; the default is 100% speed when it is left out). See http://doc.aldebaran.com/2-5/family/pepper_technical/postures_pep.html#pepper-postures and http://doc.aldebaran.com/2-8/family/nao_technical/postures_naov6.html#naov6-postures for the available postures on Peppers and Naos respectively. A Example: | ||
See the | A The method can also play the XML file at the given path (e.g. exported from Choregraphe), optionally with an emotion that will be used to modify the given animation. The emotion can be one of: [fear, mad, supersad, alarmed, tense, afraid, angry, annoyed, distressed, frustrated, miserable, sad, gloomy, depressed, bored, droopy, tired, sleepy, aroused, astonished, excited, delighted, happy, pleased, glad, serene, content, atease, satisfied, relaxed, calm]. A Example: | ||
| Make the robot go into rest mode. This is the inverse of the wake-up action. An | ||
| Set the colour of the robot's ear LEDs. The Colour can be either a predefined single colour (white, red, green, blue, yellow, magenta, or cyan), a predefined combination (rainbow or greenyellow), or an RGB-colour given in hexadecimal: 0x00RRGGBB. On the Pepper, the ear LEDs can only be various shades of blue. An Example: | ||
| Set the colour of the robot's eye LEDs. The Colour can be either a predefined single colour (white, red, green, blue, yellow, magenta, or cyan), a predefined combination (rainbow or greenyellow), or an RGB-colour given in hexadecimal: 0x00RRGGBB. An Example: | ||
| Set the colour of the robot's LEDs on the top of its head. This is only possible on the Nao. The Colour can be either a predefined single colour (white, red, green, blue, yellow, magenta, or cyan), a predefined combination (rainbow or greenyellow), or an RGB-colour given in hexadecimal: 0x00RRGGBB. A Example: | ||
| Set the 'idle' mode of the robot. This can be either 'true' (look straight ahead but slightly upwards) or 'straight' (look exactly straight ahead). A Example: | ||
| Sets the given list of LEDs to the given colours in the given duration (in milliseconds, the default of 0 means instantly). A Example: | ||
| Disable the 'idle' mode of the robot. This means its head will move in the robot's autonomous mode, which is the default behaviour. A | ||
| Sets the stiffness of one or more of the robot's joints ([Head, RArm, LArm, RLeg, LLeg] on the Nao and [Head, RArm, LArm, Leg, Wheels] on the Pepper). The stiffness can be between 0 and 100 (i.e. 0% to 100%), and the duration of the change is given in milliseconds (1000ms, i.e. 1 second by default if left out). A Example: | ||
| On the given group of LEDs ([eyes, chest, feet, all]), starts an animation of the given type ([rotate, blink, alternate]) using the given colors at the given speed (in milliseconds). A Example: | ||
| Start recording the robot's motion on the given joints or joint chains. See http://doc.aldebaran.com/2-8/family/nao_technical/bodyparts_naov6.html#nao-chains for the Nao and http://doc.aldebaran.com/2-5/family/pepper_technical/bodyparts_pep.html for the Pepper. The position of each joint will be recorded the given number of times per second (5 times per second by default if left out). A Example: | ||
| Cancels any ongoing LED animation. | ||
| Stops any ongoing motion recording. A | ||
| Make the (Pepper) robot turn the given number of degrees (-360 to 360). A | ||
| Get the robot out of rest mode. This is the inverse of the rest action. An | ||
Audiovisual | These actions work on any supported audio device (a robot, laptop, tablet, etc.). | ||
| Clear any audio that was preloaded on an audio device (using the A | ||
| Prevent the Dialogflow service from sending the audio it processes to the client (which is not done by default; see | ||
| Make the Dialogflow service send the audio of each fragment to the client (see the | ||
| Preload the given audio file (which can be either a local file or a remote url) on the audio device. This prevents the audio device from having to download the file when calling The result (once the audio file is preloaded) is a Example: | ||
| Directly play the given audio file (which can be either a local file or a remote url) on the audio device. A Example: | ||
| Play the preloaded audio file associated with the given A Example: | ||
| Use text-to-speech to make the audio device play the given text. The exact result depends on the device that is used. A Example: | ||
| The same as A Example: | ||
| Set the language to be used by the audio device's text-to-speech engine and Dialogflow's speech-to-text engine. By default, if a A Example: | ||
| Opens up the selected microphone and starts streaming the audio, e.g. to Dialogflow, either until the Timeout (in seconds, possibly with decimals) has been reached or a A Note that when using Dialogflow, by default, you can do this at most 1000 times per 24 hours (see 'Standard Edition - Audio' at https://cloud.google.com/dialogflow/quotas). Example: | ||
| Opens up the selected camera and starts streaming the video, e.g. to a face recognition or emotion detection service, either until the Timeout (in seconds, possibly with decimals) has been reached or a A Example:
Output via callback after | ||
| See play_motion. Instead of a JSON string input of the motion it expects a string with a path to a JSON file containing the motion. | ||
| Uses Dialogflow to recognise a person’s speech. | ||
| Uses Dialogflow to recognise short answers (e.g. yes/no) in a person’s speech. This is useful because Dialogflow often listens for a longer time to make sure that a person has stopped talking; in the case of short answers, this causes a somewhat unnatural pause. | ||
| Records audio for the given number of seconds (duration). The location of the recorded audio is returned via the callback function. | ||
| Use Google’s TTS service to play the given text. The voice and the language are set at initialisation time. A Example: | ||
| Set various speech parameters. These parameters can be Example: | ||
| Subscribes to the results of a vision type. This is needed in order to receive the results of that vision type (e.g. the coordinates of a person in the image) via the callback. | ||
| Unsubscribes from the vision type. | ||
| Subscribes to the results of a type of events. | ||
| Unsubscribes from a type of events. | ||
| Make the robot wait for a certain amount of time. | ||
| See | ||
| See | ||
| See | ||
| Instructs the face recognition and/or people detection service to send the next camera image (see | ||
Browser | These are actions for connected browsers, e.g. the Pepper's tablet. The webserver service is always required for this. | ||
| Render the given HTML code in the body of the page on the connected browser. For more information, see Tablets/Phones/Browsers. |
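As a rough illustration of how several of the actions above could be combined, here is a hedged Python sketch. The `robot` wrapper object and all of its method names are hypothetical; only the argument values (posture name, speed range, LED colours, listening timeout) follow the descriptions in the table.

```python
# Illustrative only: the 'robot' wrapper and its method names are hypothetical;
# the argument values follow the action descriptions in the table above.
def greet(robot):
    robot.wake_up()                                  # leave rest mode
    robot.go_to_posture("Stand", 80)                 # posture at 80% of maximum speed (0-100)
    robot.set_eye_colour("green")                    # predefined colour or 0x00RRGGBB
    robot.start_led_animation("eyes", "blink", ["blue", "white"], 500)  # blink every 500 ms
    robot.say("Hello! Nice to meet you.")            # text-to-speech on the audio device
    robot.start_listening(5.0)                       # stream microphone audio for at most 5 seconds
```

In a real agent, each of these calls would be followed by waiting for the corresponding Started/Done event, as sketched under Actions above.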
...
Event | Description |
---|---|
| A new audio language has been requested, possibly by something external like the browser (see Example: |
| A new recorded audio file is available; the filename is always Example: |
| The current percentage of battery charge left in the robot. Example: |
| Sent by the |
| An emotion was detected in the given image by the emotion detection service (see Example: |
| Either an event related to some action above, i.e. one of [BreathingDisabled, BreathingEnabled, ClearLoadedAudioDone, EarColourDone, EarColourStarted, EyeColourDone, EyeColourStarted, GestureDone, GestureStarted, HeadColourDone, HeadColourStarted, LanguageChanged, ListeningDone, ListeningStarted, MemoryEntryStored, PlayAudioDone, PlayAudioStarted, PlayMotionDone, PlayMotionStarted, RecordMotionStarted, SetIdle, SetNonIdle, SetSpeechParamDone, TextDone, TextStarted, TurnDone, TurnStarted, UserDataSet, WatchingDone, WatchingStarted], an event related to one of the robot's sensors, i.e. one of [BackBumperPressed, FrontTactilTouched, HandLeftBackTouched, HandLeftLeftTouched, HandLeftRightTouched, HandRightBackTouched, HandRightLeftTouched, HandRightRightTouched, LeftBumperPressed, MiddleTactilTouched, RearTactilTouched, RightBumperPressed] (see http://doc.aldebaran.com/2-5/family/robots/contact-sensors_robot.html ), or an event originating from one of the services, i.e. one of [EmotionDetectionDone, EmotionDetectionStarted, FaceRecognitionDone, FaceRecognitionStarted, IntentDetectionDone, IntentDetectionStarted, MemoryEntryStored, PeopleDetectionDone, PeopleDetectionStarted, UserDataSet]. Example: |
| A face was recognised by the face recognition service. The identifier is a unique number for the given face, starting from 0. The percept will be sent continuously as long as the camera is open and the face is recognised. Example: |
| One or more devices of the robot are (too) hot. Example: |
| An Intent with the given name was detected by the Dialogflow service, possibly under the current context (see Note that when an Example: |
| The rest mode of the robot has changed: it is either resting (awake=false) or awake (awake=true). See the Example: |
| The robot is either plugged in (charging=true) or not (charging=false). Example: |
| The result of a Example: |
| Sent when the people detection service detects someone; the X and Y coordinates represent the (estimated) center of the person’s face in the image. The percepts will be sent continuously as long as the camera is open and someone is detected. |
| A new picture file is available; the filename is always Example: |
| The robot has taken the posture; see the Example: |
| Sent by the sentiment analysis service for each Example: |
| A quick direct indication of the text spoken by an end-user whilst the intent detection is running; the final text given in the Example: |
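To show how an agent might react to such events, here is a hedged Python sketch. The dispatch function and payload field names are hypothetical; `TextDone` appears in the documented event list above, while the other event labels are stand-ins for table rows whose names are not shown in this rendering.

```python
# Hypothetical event dispatch; payload field names are illustrative only.
def handle_event(name, payload):
    if name == "IntentDetected":          # stand-in label for the Dialogflow intent event
        print("Intent:", payload.get("intent"), "text:", payload.get("text"))
    elif name == "FaceRecognized":        # stand-in label for the face recognition percept
        print("Recognised face", payload.get("identifier"))
    elif name == "BatteryCharge":         # stand-in label for the battery percentage event
        if payload.get("percentage", 100) < 20:
            print("Battery low; please charge the robot")
    elif name == "TextDone":              # documented completion event for text-to-speech
        print("The robot finished speaking")
```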
...