The CBSR EIS environment always launches a single entity of type 'robot'. Upon launching a MAS2G, a dialog will first pop up requesting the user's log-in (see https://socialrobotics.atlassian.net/wiki/spaces/CBSR). A second dialog will then pop up requesting the devices (of that user) that should be used. Only after both dialogs have been completed will the agent start executing.
Parameter | Description |
---|---|
| | The IP address of the server running the CBSR cloud (or the Docker version). |
| | The (absolute or relative) path to the Dialogflow authentication JSON file. Example: Warning: never push such an authentication file to a public repository. |
| | The name of the Dialogflow agent. Example: |
| | The language the Dialogflow agent should use. |
| | If provided, a webhook for use with Dialogflow (Google Assistant) will be started at <name>.serveo.net. You can use this URL as the 'Fulfillment > Webhook' URL in Dialogflow. |
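For reference, a MAS2G file wiring these parameters into the environment might look like the sketch below. This is a minimal sketch only: the jar name and the init keys (server, flowkey, flowagent, flowlang) are hypothetical placeholders for the (unnamed) parameters in the table above, and the exact launch-policy syntax depends on your GOAL version.

```
% Minimal MAS2G sketch for the CBSR environment; all names are placeholders.
environment {
	env = "cbsr.jar". % hypothetical environment jar name
	init = [
		server = "127.0.0.1",                    % IP of the CBSR server
		flowkey = "keys/my-dialogflow-key.json", % Dialogflow authentication JSON
		flowagent = "my-dialogflow-agent",       % Dialogflow agent name
		flowlang = "en-US"                       % Dialogflow language
	].
}

agent robotAgent {
	use "robot.mod2g" as main. % hypothetical agent module file
}

launchpolicy {
	% The environment always launches one entity of type 'robot'.
	when type = robot launch robotAgent.
}
```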
In principle, all actions are durative: an agent does not wait for an action's actual completion. Instead, the corresponding events (see the percept table below) must be used to track when an action has started or finished.
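As a sketch of this pattern, the GOAL module below performs a say action and then waits for the corresponding TextDone event percept. The say action name is referenced elsewhere in this table (see the speech-parameter row), but its argument form here is an assumption; the TextDone event name is taken from the event list further down.

```
% Sketch: perform a durative action and wait for its completion event.
module sayAndWait {
	% Start speaking (assumed say/1 action) and remember that we are waiting.
	if bel( not(waitingFor('TextDone')) )
		then say("Hello there!") + insert( waitingFor('TextDone') ).

	% Clear the flag once the corresponding event percept arrives.
	if bel( waitingFor('TextDone') ), percept( event('TextDone') )
		then delete( waitingFor('TextDone') ).
}
```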
Action | Description |
---|---|
Animations | These actions currently only work on Nao/Pepper robots. |
| | Stops breathing animations on a chain: [Body, Legs, Arms, LArm, RArm or Head]. A BreathingDisabled event is sent once the breathing has stopped. Example: |
| | Starts breathing animations on a chain: [Body, Legs, Arms, LArm, RArm or Head]. A BreathingEnabled event is sent once the breathing has started. Example: |
| | Performs the given animation. See http://doc.aldebaran.com/2-5/naoqi/motion/alanimationplayer-advanced.html for the available animations per robot. A GestureStarted event is sent when the animation starts and a GestureDone event once it has completed. Example: |
| | Make the robot go to the given posture at the given speed (0-100; defaults to 100% when left out). See http://doc.aldebaran.com/2-5/family/pepper_technical/postures_pep.html#pepper-postures and http://doc.aldebaran.com/2-8/family/nao_technical/postures_naov6.html#naov6-postures for the available postures on Peppers and Naos respectively. The resulting posture is reported through a percept (see the percept table below). Example: |
| | See the A |
| | Plays the XML file at the given path (e.g. exported from Choregraphe), optionally with an emotion that will be used to modify the given animation. The emotion can be one of: [fear, mad, supersad, alarmed, tense, afraid, angry, annoyed, distressed, frustrated, miserable, sad, gloomy, depressed, bored, droopy, tired, sleepy, aroused, astonished, excited, delighted, happy, pleased, glad, serene, content, atease, satisfied, relaxed, calm]. A PlayMotionStarted event is sent when the motion starts and a PlayMotionDone event once it has finished. Example: |
| | Make the robot go into rest mode. This is the inverse of the wake-up action below. A percept signalling the awake status is sent when the rest mode changes (see the percept table below). |
| | Set the colour of the robot's ear LEDs. The Colour can be either a predefined single colour (white, red, green, blue, yellow, magenta, or cyan), a predefined combination (rainbow or greenyellow), or an RGB colour given in hexadecimal: 0x00RRGGBB. On the Pepper, the ear LEDs can only show various shades of blue. An EarColourStarted event is sent when the change starts and an EarColourDone event once it has completed. Example: |
| | Set the colour of the robot's eye LEDs. The Colour can be either a predefined single colour (white, red, green, blue, yellow, magenta, or cyan), a predefined combination (rainbow or greenyellow), or an RGB colour given in hexadecimal: 0x00RRGGBB. An EyeColourStarted event is sent when the change starts and an EyeColourDone event once it has completed. Example: |
| | Set the colour of the LEDs on the top of the robot's head. This is only possible on the Nao. The Colour can be either a predefined single colour (white, red, green, blue, yellow, magenta, or cyan), a predefined combination (rainbow or greenyellow), or an RGB colour given in hexadecimal: 0x00RRGGBB. A HeadColourStarted event is sent when the change starts and a HeadColourDone event once it has completed. Example: |
| | Set the 'idle' mode of the robot. This can be either 'true' (look straight ahead but slightly upwards) or 'straight' (look exactly straight ahead). A SetIdle event is sent once the mode has been set. Example: |
| | Disable the 'idle' mode of the robot. This means its head will move in the robot's autonomous mode, which is the default behaviour. A SetNonIdle event is sent once the mode has been disabled. |
| | Sets the stiffness of one or more of the robot's joints ([Head, RArm, LArm, RLeg, LLeg] on the Nao and [Head, RArm, LArm, Leg, Wheels] on the Pepper). The stiffness can be between 0 and 100 (i.e. 0% to 100%), and the duration of the change is given in milliseconds (1000ms, i.e. one second, by default when left out). Example: |
| | Start recording the robot's motion on the given joints or joint chains. See http://doc.aldebaran.com/2-8/family/nao_technical/bodyparts_naov6.html#nao-chains for the Nao and http://doc.aldebaran.com/2-5/family/pepper_technical/bodyparts_pep.html for the Pepper. The position of each joint will be recorded the given number of times per second (5 times per second by default when left out). A RecordMotionStarted event is sent when the recording starts. Example: |
| | Stops any ongoing motion recording; the result is delivered through a percept (see the percept table below). |
| | Make the robot turn to the left. Optionally, if the parameter is set to 'true', this will be a small turn. A TurnStarted event is sent when the turn starts and a TurnDone event once it has completed. |
| | Make the robot turn to the right. Optionally, if the parameter is set to 'true', this will be a small turn. A TurnStarted event is sent when the turn starts and a TurnDone event once it has completed. |
| | Get the robot out of rest mode. This is the inverse of the rest action above. A percept signalling the awake status is sent when the rest mode changes (see the percept table below). |
Audiovisual | These actions work on any supported audio device (a robot, laptop, tablet, etc.). |
| | Clear any audio that was preloaded on an audio device (using the preload action below). A ClearLoadedAudioDone event is sent once the audio has been cleared. |
| | Prevent the Dialogflow service from sending the audio it processes to the client (which is not done by default; see the next action). |
| | Make the Dialogflow service send the audio of each fragment to the client (see the recorded-audio percept in the table below). |
| | Preload the given audio file (which can be either a local file or a remote URL) on the audio device. This prevents the audio device from having to download the file when it is played later. The result (once the audio file is preloaded) is a percept carrying an identifier that can be passed to the play action below. Example: |
| | Directly play the given audio file (which can be either a local file or a remote URL) on the audio device. A PlayAudioStarted event is sent when the playback starts and a PlayAudioDone event once it has finished. Example: |
| | Play the preloaded audio file associated with the given identifier (see the preload action above). A PlayAudioStarted event is sent when the playback starts and a PlayAudioDone event once it has finished. Example: |
| say | Use text-to-speech to make the audio device play the given text. The exact result depends on the device that is used. A TextStarted event is sent when the speech starts and a TextDone event once it has finished. Example: |
| sayAnimated | The same as say, but on a robot the spoken text is accompanied by matching animations. A TextStarted event is sent when the speech starts and a TextDone event once it has finished. Example: |
| | Set the language to be used by the audio device's text-to-speech engine and Dialogflow's speech-to-text engine. By default, the language given in the MAS2G init parameters is used (see the parameter table above). A LanguageChanged event is sent once the language has been set. Example: |
| | For influencing the text-to-speech engine parameters on the Nao/Pepper. See http://doc.aldebaran.com/2-5/naoqi/audio/altexttospeech-api.html#ALTextToSpeechProxy::setParameter__ssCR.floatCR for more details. A SetSpeechParamDone event is sent once the parameter has been set. Example: Note: this does not always seem to fully work on Naos. Some workarounds: (1) in the say/sayAnimated actions, the text-to-speech output can be shaped using tags, see http://doc.aldebaran.com/2-5/naoqi/audio/altexttospeech-tuto.html#using-tags-for-voice-tuning; (2) on the robot itself, the default settings can be changed by updating the voiceSettings.xml file, e.g. <Setting name="defaultVoiceSpeed" description="Voice speed" value="150.0"/>. |
| | Opens up the selected microphone and starts streaming the audio, e.g. to Dialogflow, either until the Timeout (in seconds, possibly with decimals) has been reached or until a stop action (see below) is performed. A ListeningStarted event is sent when the microphone opens and a ListeningDone event once it closes. Note that when using Dialogflow, by default, you can do this at most 1000 times per 24 hours (see 'Standard Edition - Audio' at https://cloud.google.com/dialogflow/quotas). Example: |
| | Opens up the selected camera and starts streaming the video, e.g. to a face recognition or emotion detection service, either until the Timeout (in seconds, possibly with decimals) has been reached or until a stop action (see below) is performed. A WatchingStarted event is sent when the camera opens and a WatchingDone event once it closes. Example: |
| | See |
| | See |
| | See |
| | Instructs the face recognition and/or people detection service to send the next camera image (see the camera action above) to the connected client(s); the image arrives as a picture percept (see the percept table below). |
Memory | These actions are specifically for the robot-memory service. |
| | Adds the given Data for the given User (by their identifier). The given User needs to exist first (see setUserData), the CounterKey is used to keep track of the number of entries in a specific category, and the Data is expected to be either a plain string, a plain list, or a list in the format [key1=value1,key2=value2,...]. A MemoryEntryStored event is sent once the entry has been stored. Example: |
| | Retrieves the data stored for the given User (by their identifier) at the given Key. If any data is present, it is returned through a percept (see the percept table below). Example: |
| | Creates a new session for the given User (by their identifier). This is currently used for post-analysis of the user data only. Example: |
| setUserData | Sets the given (string or numeric) Data identified by the given Key for the given User (by their identifier). If the User does not exist, they are created. If something is already stored at the given Key, it is overwritten. A UserDataSet event is sent once the data has been stored. Example: |
Tablet | These are actions for touchscreen devices specifically, e.g. the Pepper's tablet. The webserver service is always required for this. |
| | Render the given HTML code in the body of the page on the tablet. By default, the Bootstrap framework is loaded (including jQuery), and can thus be used to style elements. Any <button> element will automatically send its contents as a percept when pressed (see the button percept in the table below). |
| | Renders the image at the given URL full-screen on the tablet. |
| | Plays the video at the given URL full-screen on the tablet. |
| | Opens the webpage at the given URL full-screen on the tablet. |
| | Closes anything that was opened by us on the tablet. |
| | This needs to be called before any other tablet action. It ensures a connection between the tablet and the client is established. The GOAL agent will block immediately after this action has been called until such a connection has been established. |
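Putting the tablet actions together, a GOAL sketch could render a question with Bootstrap buttons and pick up the resulting button percept. This is a sketch only: renderHtml/1 and answer/1 are hypothetical placeholder names for the (unnamed) HTML-rendering action above and the button percept below.

```
% Sketch: ask a question on the tablet and handle the button press.
module askQuestion {
	% renderHtml/1 is a hypothetical name for the HTML-rendering action.
	if true then renderHtml("<h3>Do you like robots?</h3><button class='btn btn-primary'>Yes</button><button class='btn btn-secondary'>No</button>").
}

module handleAnswer {
	% answer/1 is a hypothetical name for the button-press percept.
	forall percept( answer(Text) ) do insert( lastAnswer(Text) ).
}
```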
Percept | Description |
---|---|
| | The text from a button that was pressed on the tablet (see the tablet actions above). |
| | A new audio language has been requested, possibly by something external like the tablet (see the language action above). Example: |
| | A new recorded audio file is available; the filename is always Example: |
| | The current percentage of battery charge left in the robot. Example: |
| | An emotion was detected in the given image by the emotion detection service (see the camera action above). Example: |
| | Either an event related to one of the actions above, i.e. one of [BreathingDisabled, BreathingEnabled, ClearLoadedAudioDone, EarColourDone, EarColourStarted, EyeColourDone, EyeColourStarted, GestureDone, GestureStarted, HeadColourDone, HeadColourStarted, LanguageChanged, ListeningDone, ListeningStarted, MemoryEntryStored, PlayAudioDone, PlayAudioStarted, PlayMotionDone, PlayMotionStarted, RecordMotionStarted, SetIdle, SetNonIdle, SetSpeechParamDone, TextDone, TextStarted, TurnDone, TurnStarted, UserDataSet, WatchingDone, WatchingStarted]; an event related to one of the robot's sensors, i.e. one of [BackBumperPressed, FrontTactilTouched, HandLeftBackTouched, HandLeftLeftTouched, HandLeftRightTouched, HandRightBackTouched, HandRightLeftTouched, HandRightRightTouched, LeftBumperPressed, MiddleTactilTouched, RearTactilTouched, RightBumperPressed] (see http://doc.aldebaran.com/2-5/family/robots/contact-sensors_robot.html); or an event originating from one of the services, i.e. one of [EmotionDetectionDone, EmotionDetectionStarted, FaceRecognitionDone, FaceRecognitionStarted, IntentDetectionDone, IntentDetectionStarted, MemoryEntryStored, PeopleDetectionDone, PeopleDetectionStarted, UserDataSet]. Example: |
| | A face was recognised by the face recognition service. The identifier is a unique number for the given face, starting from 0. The percept will be sent continuously as long as the camera is open and the face is recognised. Example: |
| | One or more devices of the robot are (too) hot. Example: |
| | An Intent with the given name was detected by the Dialogflow service, possibly under the current context (see the listening action above). Note that when an Example: |
| | The rest mode of the robot has changed: it is either resting (awake=false) or it woke up (awake=true). See the rest and wake-up actions above. Example: |
| | The robot is plugged in (charging=true) or not (charging=false). Example: |
| | The audio given in the latest preload action has been loaded; the percept carries the identifier to use when playing it (see the audio actions above). Example: |
| | A response to a user-data retrieval request (see the memory actions above). Example: |
| | The result of a motion recording (see the motion-recording actions above). Example: |
| | Sent when the people detection service detected someone. The percept will be sent continuously as long as the camera is open and a person is detected. |
| | A new picture file is available; the filename is always Example: |
| | The robot has taken the posture; see the posture action above. Example: |
| | A number indicating the current average stiffness of the robot's body (0: less than 0.05 average; 1: between 0.05 and 0.95 average; 2: above 0.95 average). See the stiffness action above. Example: |
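Finally, a common GOAL pattern is an event module that lifts such percepts into the belief base so that ordinary rules can react to them. The sketch below assumes hypothetical percept forms (event/1, batteryCharge/1) for illustration; only the event names passed inside event/1 come from the table above.

```
% Sketch of a GOAL event module lifting percepts into the belief base.
% The percept forms event/1 and batteryCharge/1 are hypothetical placeholders.
module eventModule {
	% Keep a record of every action/sensor/service event that comes in.
	forall percept( event(E) ) do insert( event(E) ).

	% Track the most recent battery percentage, replacing the old value.
	forall bel( batteryCharge(Old) ), percept( batteryCharge(New) )
		do delete( batteryCharge(Old) ) + insert( batteryCharge(New) ).
}
```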