The CBSR EIS-environment connector, of which the latest version can be found here, will always launch one entity of type 'robot'. Upon launching a MAS2G, a dialog will pop up requesting the devices that should be used. Only after selecting the devices will the agent (see e.g. https://goalapl.atlassian.net/wiki) start executing.
Init Parameters
Parameter | Description |
---|---|
 | The IP address of the server running the CBSR cloud (or the Docker version). |
 | The (absolute or relative) path to the Dialogflow authentication JSON file. Warning: never push such an authentication file to a public repository. |
 | The name of the Dialogflow agent. |
flowLang | The language the Dialogflow agent should use. |
 | The voice Google Text-to-Speech should use. Can be chosen from https://cloud.google.com/text-to-speech/docs/voices. |
 | The Text-to-Speech voice of the agent. |
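For reference, the environment section of a MAS2G might supply these parameters as follows. This is a minimal sketch only: the JAR name and all init keys except flowLang are hypothetical placeholders (the actual parameter names were not preserved in the table above; flowLang is the only key confirmed elsewhere on this page, under setLanguage), and the exact MAS2G dialect depends on your GOAL version.

```
environment {
	% Hypothetical JAR name: use the connector JAR you downloaded.
	env = "cbsr-eis.jar".
	init = [
		server = "192.168.0.1",              % hypothetical key: CBSR cloud (or Docker) IP
		flowKeyFile = "dialogflow-key.json", % hypothetical key: Dialogflow auth JSON
		flowAgent = "my-agent",              % hypothetical key: Dialogflow agent name
		flowLang = "en-US"                   % confirmed key (see setLanguage below)
	].
}
```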
Actions
In principle, all actions are durative: an agent does not wait for their actual completion, but should use the corresponding events instead. A sketch of this pattern follows below.
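To illustrate the event-driven pattern, the GOAL-style rules below start a gesture (from the Animations table that follows) and block a second one until the done-event arrives. The busy belief bookkeeping is hypothetical, and the sketch assumes action events arrive wrapped in the event percept described under Percepts.

```
main module {
	program {
		% Start the animation; the action returns immediately (durative).
		if bel( not(busy) ) then gesture('animations/Stand/Gestures/Hey_1') + insert(busy).
	}
}

event module {
	program {
		% Completion is reported through an event percept, not by the action itself.
		if bel( percept(event('GestureDone')) ) then delete(busy).
	}
}
```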
Animations

These actions currently only work on Nao/Pepper robots. Use the computer-robot JAR to emulate the expected responses to these actions locally if needed.

Action | Description |
---|---|
disableBreathing(BodyPart) | Stops breathing animations on a chain: [Body, Legs, Arms, LArm, RArm or Head]. A BreathingDisabled event will be sent when this is done. Example: disableBreathing('Body'). |
enableBreathing(BodyPart) | Starts breathing animations on a chain: [Body, Legs, Arms, LArm, RArm or Head]. A BreathingEnabled event will be sent when this is done. Example: enableBreathing('Arms'). |
gesture(Animation) | Performs the given animation. See http://doc.aldebaran.com/2-5/naoqi/motion/alanimationplayer-advanced.html for the available animations per robot. A GestureStarted and GestureDone event will be sent before the animation plays and after it finishes respectively. Example: gesture('animations/Stand/Gestures/Hey_1'). |
goToPosture(Posture,Speed=100) | Make the robot go to the given posture at the given speed (0-100; default when left out is 100% speed). See http://doc.aldebaran.com/2-5/family/pepper_technical/postures_pep.html#pepper-postures and http://doc.aldebaran.com/2-8/family/nao_technical/postures_naov6.html#naov6-postures for the available postures on Peppers and Naos respectively. A posture percept will be sent reflecting any change in posture. Example: goToPosture('Stand', 50). |
playMotion(RecordedMotion) | See the startMotionRecording action for more information; the data that comes out of such a recording can be fed into this action. A PlayMotionStarted and PlayMotionDone event will be sent before the recording is performed and after it has finished respectively. |
playMotionFile(Path,Emotion='') | Plays the XML file at the given path (e.g. from Choregraphe), optionally with an emotion that will be used to modify the given animation. The emotion can be one of: [fear, mad, supersad, alarmed, tense, afraid, angry, annoyed, distressed, frustrated, miserable, sad, gloomy, depressed, bored, droopy, tired, sleepy, aroused, astonished, excited, delighted, happy, pleased, glad, serene, content, atease, satisfied, relaxed, calm]. A PlayMotionStarted and PlayMotionDone event will be sent before the motion is performed and after it has finished respectively. |
rest | Make the robot go into rest mode. This is the inverse of the wakeUp action. An isAwake percept will be sent reflecting any change in the state of the robot. |
setEarColour(Colour) | Set the colour of the robot's ear LEDs. The Colour can be either a predefined single colour (white, red, green, blue, yellow, magenta, or cyan), a predefined combination (rainbow or greenyellow), or an RGB-colour given in hexadecimal: 0x00RRGGBB. On the Pepper, the ear LEDs can only be various shades of blue. An EarColourStarted and EarColourDone event will be sent before the ear LEDs change colour and after they have changed colour respectively. Example: setEarColour('0x0000000A'). |
setEyeColour(Colour) | Set the colour of the robot's eye LEDs. The Colour can be either a predefined single colour (white, red, green, blue, yellow, magenta, or cyan), a predefined combination (rainbow or greenyellow), or an RGB-colour given in hexadecimal: 0x00RRGGBB. An EyeColourStarted and EyeColourDone event will be sent before the eye LEDs change colour and after they have changed colour respectively. |
setHeadColour(Colour) | Set the colour of the robot's LEDs on the top of its head. This is only possible on the Nao. The Colour can be either a predefined single colour (white, red, green, blue, yellow, magenta, or cyan), a predefined combination (rainbow or greenyellow), or an RGB-colour given in hexadecimal: 0x00RRGGBB. A HeadColourStarted and HeadColourDone event will be sent before the head LEDs change colour and after they have changed colour respectively. |
setIdle(Mode) | Set the 'idle' mode of the robot. This can be either 'true' (look straight ahead but slightly upwards) or 'straight' (look exactly straight ahead). A SetIdle event will be sent when the robot went into an idle mode ('true' or 'straight'). |
 | Sets the given list of LEDs to the given colours in the given duration (in milliseconds; the default of 0 means instantly). |
setNonIdle | Disable the 'idle' mode of the robot. This means its head will move in the robot's autonomous mode, which is the default behaviour. A SetNonIdle event will be sent as a result. |
setStiffness(Joints,Stiffness,Duration=1000) | Sets the stiffness of one or more of the robot's joints ([Head, RArm, LArm, RLeg, LLeg] on the Nao and [Head, RArm, LArm, Leg, Wheels] on the Pepper). The stiffness can be between 0 and 100 (i.e. 0% to 100%), and the duration of the change is given in milliseconds (1000ms, i.e. 1 second, by default if left out). Events will be sent when the robot starts changing the stiffness and after it's done respectively, and a stiffness percept will be sent reflecting any change in the stiffness of the robot. Example: setStiffness(['LArm', 'RArm'], 100). |
 | On the given group of LEDs ([eyes, chest, feet, all]), starts an animation of the given type ([rotate, blink, alternate]) using the given colors at the given speed (in milliseconds). |
startMotionRecording(JointList,Framerate=5) | Start recording the robot's motion on the given joints or joint chains. See http://doc.aldebaran.com/2-8/family/nao_technical/bodyparts_naov6.html#nao-chains for the Nao and http://doc.aldebaran.com/2-5/family/pepper_technical/bodyparts_pep.html for the Pepper. The position of each joint will be recorded the given number of times per second (5 times per second by default if left out). A RecordMotionStarted event will be sent when the recording starts. A record-and-replay sketch is given below this table. |
 | Cancels any ongoing LED animation. |
stopMotionRecording | Stops any ongoing motion recording. A robot_motion_recording percept will be sent as a result. |
 | Make the (Pepper) robot turn the given number of degrees (-360 to 360). |
turnLeft(Small=false) | Make the robot turn to the left. Optionally, if the parameter is set to 'true', this will be a small turn. A TurnStarted and TurnDone event will be sent when the robot starts turning and after it's done respectively. |
turnRight(Small=false) | Make the robot turn to the right. Optionally, if the parameter is set to 'true', this will be a small turn. A TurnStarted and TurnDone event will be sent when the robot starts turning and after it's done respectively. |
wakeUp | Get the robot out of rest mode. This is the inverse of the rest action. An isAwake percept will be sent reflecting any change in the state of the robot. |
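Putting the recording actions together, a record-then-replay flow could look like the GOAL-style sketch below. The belief names (demoRequested, recording, demoOver) and the exact argument shape of the robot_motion_recording percept are assumptions.

```
% Start recording both arms at 10 frames per second.
if bel( demoRequested, not(recording) )
	then startMotionRecording(['LArm', 'RArm'], 10) + insert(recording).
% Stop recording once the demo is over; this triggers the percept below.
if bel( recording, demoOver )
	then stopMotionRecording + delete(recording).
% The recorded data can be fed back into playMotion as-is.
if bel( percept(robot_motion_recording(Recording)) )
	then playMotion(Recording).
```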
Audiovisual

These actions work on any supported audio device (a robot, laptop, tablet, etc.).

Action | Description |
---|---|
clearLoadedAudio | Clear any audio that was preloaded on an audio device (using the loadAudio action). A ClearLoadedAudioDone event will be sent once this has completed. |
disableRecording | Prevent the Dialogflow service from sending the audio it processes to the client (which is not done by default; see enableRecording). |
enableRecording | Make the Dialogflow service send the audio of each fragment to the client (see the audioRecording percept as well). The audio is normalised to make it more suitable for playback. |
loadAudio(Path) | Preload the given audio file (which can be either a local file or a remote URL) on the audio device. This prevents the audio device from having to download the file when calling playAudio. The result (once the audio file is preloaded) is a loadedAudioID percept, which gives you an identifier to use in the playLoadedAudio action. Example: loadAudio('12-00-00.wav'). |
playAudio(Path) | Directly play the given audio file (which can be either a local file or a remote URL) on the audio device. A PlayAudioStarted and PlayAudioDone event will be sent when the audio starts playing (i.e. after it has been loaded on the audio device) and when it has finished playing respectively. Example: playAudio('12-00-00.wav'). |
playLoadedAudio(Identifier) | Play the preloaded audio file associated with the given loadedAudioID on the audio device. A PlayAudioStarted and PlayAudioDone event will be sent when the audio starts playing and when it has finished playing respectively. Example: playLoadedAudio(1). |
say(Text) | Use text-to-speech to make the audio device play the given text. The exact result depends on the device that is used. A TextStarted and TextDone event will be sent when the device starts playing the text and after it has completed this respectively. Example: say('Hello, world!'). |
sayAnimated(Text) | The same as say(Text), but on a Nao/Pepper the robot will automatically add some animations whilst saying this text. A TextStarted and TextDone event will be sent when the audio device starts playing the text and after it has completed this respectively. Example: sayAnimated('Hello. Goodbye!'). |
setLanguage(LanguageKey) | Set the language to be used by the audio device's text-to-speech engine and Dialogflow's speech-to-text engine. By default, if a flowLang is given in the init parameters, this language will be set accordingly; otherwise it is 'nl-NL'. A LanguageChanged event will be sent when the text-to-speech engine has switched language (Dialogflow will use the given language for the next detection). Example: setLanguage('en-US'). |
setSpeechParam(Param,Value) | For influencing the text-to-speech engine parameters on the Nao/Pepper. See http://doc.aldebaran.com/2-5/naoqi/audio/altexttospeech-api.html#ALTextToSpeechProxy::setParameter__ssCR.floatCR for more details. A SetSpeechParamDone event will be sent once the parameter has been applied. Example: setSpeechParam('speed', 85). Note: this does not always seem to fully work on Naos. Some workarounds: (1) in the say/sayAnimated actions, the text-to-speech output can be shaped using tags (see http://doc.aldebaran.com/2-5/naoqi/audio/altexttospeech-tuto.html#using-tags-for-voice-tuning), e.g. say('\\rspd=150\\Hello I\'m talking faster now'); (2) on the robot itself, the default settings can be changed by updating the voiceSettings.xml file, e.g. <Setting name="defaultVoiceSpeed" description="Voice speed" value="150.0"/>. The Dutch voiceSettings file path on the Nao is /var/persistent/home/nao/.local/share/PackageManager/apps/robot-language-dutch/share/tts/acapela/DUN/voiceSettings.xml; the English one is /var/persistent/home/nao/.local/share/PackageManager/apps/robot-language-english/share/tts/nuance/en_US/voiceSettings.xml. |
startListening(Timeout,Context='') | Opens up the selected microphone and starts streaming the audio, e.g. to Dialogflow, either until the Timeout (in seconds, possibly with decimals) has been reached or a stopListening action is called. If the given Timeout is 0, the microphone will remain open until stopListening is called. The optional Context in this action is fed to Dialogflow (see https://cloud.google.com/dialogflow/docs/contexts-input-output#input_contexts). A ListeningStarted and ListeningDone event will be sent when the microphone is opened and when it has closed again respectively. There are, however, also specific events from services that depend on the microphone (see the event percept for more information). Note that when using Dialogflow, by default, you can do this at most 1000 times per 24 hours (see 'Standard Edition - Audio' at https://cloud.google.com/dialogflow/quotas). A question-answer sketch is given below this table. Example: startListening(2.5, 'answer_yesno'). |
startWatching(Timeout) | Opens up the selected camera and starts streaming the video, e.g. to a face recognition or emotion detection service, either until the Timeout (in seconds, possibly with decimals) has been reached or a stopWatching action is called. If the given Timeout is 0, the camera will remain open until stopWatching is called. A WatchingStarted and WatchingDone event will be sent when the camera is opened and when it has closed again respectively. There are, however, also specific events from services that depend on the camera (see the event percept for more information). Example: startWatching(0). |
stopListening | See startListening; force-closes the microphone (if it was open). |
stopTalking | See say and sayAnimated; aborts the current text-to-speech being executed (if any). |
stopWatching | See startWatching; force-closes the camera (if it was open). |
takePicture | Instructs the face recognition and/or people detection service to send the next camera image (see startWatching) to the client. See the picture percept for more information. |
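A typical question-answer exchange chains these actions through their events: say the question, open the microphone only once TextDone arrives, and then wait for Dialogflow's result. A GOAL-style sketch follows; the belief bookkeeping and the exact shape of the intent percept are assumptions.

```
% Ask the question; the action returns immediately.
if bel( question(Q), not(asked) ) then say(Q) + insert(asked).
% Only open the microphone once the text-to-speech has finished.
if bel( asked, percept(event('TextDone')) )
	then startListening(2.5, 'answer_yesno').
% Dialogflow's result comes in as a percept (shape assumed here).
forall bel( percept(intent(Intent)) ) do insert(lastIntent(Intent)).
```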
Memory

These actions are specifically for the robot-memory service.

Action | Description |
---|---|
addMemoryEntry(UserID,CounterKey,Data) | Adds the given Data for the given User (by his/her identifier). The given User needs to exist first (see setUserData), the CounterKey is used to keep track of the number of entries according to a specific category, and finally the Data is expected to be either a plain string, a plain list, or a list in the format [key1=value1,key2=value2,...]. A MemoryEntryStored event will be sent when the given information has been stored. Example: addMemoryEntry('someone', 'nonsense', 'blablabla'). |
getUserData(UserID,Key) | Retrieves the data stored for the given User (by his/her identifier) at the given Key. If any data is present, a memoryData percept will be sent. A store-and-retrieve sketch is given below this table. Example: getUserData('someone', 'something'). |
getUserSession(UserID) | Creates a new session for the given User (by his/her identifier). This is currently used for post-analysis of the user data only. Example: getUserSession('someone'). |
setUserData(UserID,Key,Data) | Sets the given (string or numeric) Data identified by the given Key for the given User (by his/her identifier). If the User does not exist, he/she is created. If something is already stored at the given Key, it is overwritten. A UserDataSet event will be sent when the given information has been stored. Example: setUserData('someone', 'something', 'Hello, world!'). |
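The store-then-retrieve cycle, sketched in GOAL-style rules. The belief names are hypothetical, and the exact argument shape of the memoryData percept is an assumption.

```
% Create/update the user and store a value under the key 'something'.
if bel( not(stored) )
	then setUserData('someone', 'something', 'Hello, world!') + insert(stored).
% Once storage is confirmed, ask the data back.
if bel( percept(event('UserDataSet')) )
	then getUserData('someone', 'something').
% The retrieved data arrives as a memoryData percept.
forall bel( percept(memoryData(Data)) ) do insert(retrieved(Data)).
```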
Browser

These are actions for connected browsers, e.g. the Pepper's tablet. The webserver service is always required for this.

Action | Description |
---|---|
renderPage(Html) | Render the given HTML code in the body of the page on the connected browser. By default, the Bootstrap framework is loaded (including jQuery), and can thus be used to style elements. Any <button> element will automatically send its contents (see the answer percept). In addition, giving one of the following classes to an element (e.g. a div) has a special effect. chatbox: shows a text-type input, from which the input is sent to the Dialogflow service upon submission. english_flag: shows an English flag, which when clicked upon will send the setLanguage command 'en-US'. listening_icon: shows a listening icon (in the form of a microphone), which shows a user when the microphone is open. speech_text: shows a live transcript of the text currently recognised by the Dialogflow service (see the transcript percept as well). vu_logo: shows a VU logo. Tip: custom images (i.e. that don't have a public URL) can be embedded using Base64 encoding. For more information, see Tablets/Phones/Browsers. A usage sketch is given below this table. |
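For instance, a minimal yes/no page could be rendered as below (GOAL-style sketch; the belief names are hypothetical). Any <button> contents come back through the answer percept, so no extra wiring is needed.

```
% Render a simple question with two Bootstrap-styled buttons.
if bel( askConsent, not(pageShown) )
	then renderPage('<h1>Shall we start?</h1><button class="btn btn-primary">Yes</button><button class="btn btn-secondary">No</button>') + insert(pageShown).
% The pressed button's contents arrive as an answer percept.
forall bel( percept(answer(Answer)) ) do insert(userAnswer(Answer)).
```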
Google Assistant

These are actions specifically for connected Google Assistant devices (which must be connected through the computer-google-assistant JAR). The Dialogflow service is not required to be running in order to receive intents from a Dialogflow webhook (connected to the Google Assistant).

Action | Description |
---|---|
assistantShow(Text,Suggestions=[]) | Show and say the given text on the Google Assistant. This must be a response to some (webhook) intent. Through the second argument a list of strings can be passed that the assistant will show to the end-user as possible responses. A ShownOnAssistant event will be sent when the device shows the text. A usage sketch is given below this table. |
assistantShowCard(Text,ImgUrl,Suggestions=[]) | The same as assistantShow(Text), but with an added image card (ImgUrl). A ShownOnAssistant event will be sent when the device shows the card (and thus starts saying the text too). |
assistantPlayMedia(Text,AudioName,AudioUrl,Suggestions) | The same as assistantShow(Text), but with an added audio file to play (and a mandatory name for it). AudioUrl must be an HTTPS link to an MP3 file. Through the final argument a list of strings must be passed (it is not optional here) that the assistant will show to the end-user as possible responses. A ShownOnAssistant event will be sent when the device shows the media (and thus starts saying the text too, only after which the audio will be played). Example: assistantPlayMedia('Listen to this!', 'Demo MP3', 'https://www.soundhelix.com/examples/mp3/SoundHelix-Song-1.mp3', ['Stop']). |
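For example, replying to a webhook intent with suggestion chips might look like this (GOAL-style sketch; the intent percept's name and shape are assumptions, see the Percepts table below).

```
% Respond to a detected 'welcome' intent with text plus two suggestion chips.
if bel( percept(intent(welcome)) )
	then assistantShow('Hi! What would you like to hear?', ['Some music', 'Nothing']).
```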
Percepts
Percept | Description |
---|---|
answer(Answer) | The text from a button which was pressed in the browser (see renderPage). |
audioLanguage(LanguageKey) | A new audio language has been requested, possibly by something external like the browser (see setLanguage and renderPage). Example: audioLanguage(en-US). |
audioRecording(Filename) | A new recorded audio file is available; the filename is always hh-mm-ss.wav and the file is stored in the same folder as the currently running MAS2G (see enableRecording and startListening). Can be fed into one of the PlayAudio actions. Example: audioRecording('12-00-00.wav'). |
batteryCharge(Charge) | The current percentage of battery charge left in the robot. Example: batteryCharge(100). |
 | Sent by the coronachecker service once some valid Dutch CoronaCheck QR code has been recognised in the video stream. |
 | Provides information about the selected devices when the agent starts. |
 | An emotion was detected in the given image by the emotion detection service (see startWatching). |
event(Event) | Either an event related to some action above, i.e. one of [BreathingDisabled, BreathingEnabled, ClearLoadedAudioDone, EarColourDone, EarColourStarted, EyeColourDone, EyeColourStarted, GestureDone, GestureStarted, HeadColourDone, HeadColourStarted, LanguageChanged, ListeningDone, ListeningStarted, MemoryEntryStored, PlayAudioDone, PlayAudioStarted, PlayMotionDone, PlayMotionStarted, RecordMotionStarted, SetIdle, SetNonIdle, SetSpeechParamDone, TextDone, TextStarted, TurnDone, TurnStarted, UserDataSet, WatchingDone, WatchingStarted], an event related to one of the robot's sensors, i.e. one of [BackBumperPressed, FrontTactilTouched, HandLeftBackTouched, HandLeftLeftTouched, HandLeftRightTouched, HandRightBackTouched, HandRightLeftTouched, HandRightRightTouched, LeftBumperPressed, MiddleTactilTouched, RearTactilTouched, RightBumperPressed] (see http://doc.aldebaran.com/2-5/family/robots/contact-sensors_robot.html), or an event originating from one of the services, i.e. one of [EmotionDetectionDone, EmotionDetectionStarted, FaceRecognitionDone, FaceRecognitionStarted, IntentDetectionDone, IntentDetectionStarted, MemoryEntryStored, PeopleDetectionDone, PeopleDetectionStarted, UserDataSet]. |
 | A face was recognised by the face recognition service. The identifier is a unique number for the given face, starting from 0. The percept will be sent continuously as long as the camera is open and the face is recognised. |
 | One or more devices of the robot are (too) hot. |
intent | An Intent with the given name was detected by the Dialogflow service, possibly under the current context (see startListening). |
isAwake(Awake) | The rest-mode of the robot has changed; it's either resting (awake=false) or it woke up (awake=true). See the rest and wakeUp actions. |
 | The robot is plugged-in (charging=true) or not (charging=false). |
loadedAudioID(Identifier) | The audio given in the latest loadAudio action has been preloaded; the given identifier can be used in the playLoadedAudio action. |
memoryData | A response to a getUserData action, containing the data stored for the given user at the given key. |
 | The result of a getUserSession action. |
personDetected | Sent when the people detection service detects someone; the X and Y coordinates represent the (estimated) center of the person's face in the image. The percepts will be sent continuously as long as the camera is open and someone is detected. |
picture(Filename) | A new picture file is available; the filename is always hh-mm-ss.jpg and the file is stored in the same folder as the currently running MAS2G (see startWatching and takePicture). Can be fed into the renderPage action (e.g. converted to Base64). Example: picture('12-00-00.jpg'). |
posture(Posture) | The robot has taken the given posture; see the goToPosture action. Example: posture('Stand'). |
 | Sent by the sentiment analysis service. |
stiffness(Stiffness) | A number indicating the current average stiffness of the robot's body. 0: less than 0.05 average; 1: between 0.05 and 0.95 average; 2: above 0.95 average. See the setStiffness action. |
transcript | A quick direct indication of the text spoken by an end-user whilst the intent detection is running; the final text is given in the intent percept. |
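Most of these percepts are most conveniently turned into beliefs in the agent's event module. A closing GOAL-style sketch; the belief predicate names and the transcript percept's argument are hypothetical.

```
event module {
	program {
		% Keep a single batteryCharge/1 belief in sync with the latest percept.
		forall bel( percept(batteryCharge(New)), batteryCharge(Old) )
			do delete(batteryCharge(Old)) + insert(batteryCharge(New)).
		% Track whether the robot is awake (see the rest/wakeUp actions).
		forall bel( percept(isAwake(Awake)), not(awake(Awake)) )
			do insert(awake(Awake)).
		% Record the live transcript while intent detection is running.
		forall bel( percept(transcript(Text)) ) do insert(heard(Text)).
	}
}
```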