
Init Parameters

Parameter

Description

server

The IP address of the server running the CBSR cloud (or the Docker version).

'localhost' by default.

dialogflow_key_file

The (absolute or relative) path to the Dialogflow authentication JSON file.

Example: 'exampleagent-d8916-45d4143bb490.json'

Warning: never push such an authentication file to a public repository.

dialogflow_agent_id

The name of the Dialogflow agent.

Example: 'exampleagent-d8916'

dialogflow_language

The language the Dialogflow agent should use.

'nl-NL' by default.

tts_key_file

The (absolute or relative) path to the Google TTS authentication JSON file. This can be the same file as the Dialogflow key file.

Example: 'exampleagent-d8916-45d4143bb490.json'

Warning: never push such an authentication file to a public repository.

tts_voice

The voice Google Text-to-Speech should use; see https://cloud.google.com/text-to-speech/docs/voices for the available voices.

'nl-NL-Standard-A' by default.

play_tts_voice

The text-to-speech voice the agent should use: either 'google' (Google Text-to-Speech) or 'robot' (the robot's built-in voice). When playing through a computer speaker, 'google' is always used.

'robot' by default.
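
For illustration, a minimal sketch of how these parameters might be passed when constructing a connector from Python. The module and class names and the constructor signature are assumptions; only the parameter names and defaults come from the table above.

Code Block
        # A sketch only: module/class names and the signature are assumptions;
        # the keyword names mirror the init parameters documented above.
        from social_interaction_cloud.basic_connector import BasicSICConnector

        sic = BasicSICConnector(
            server='localhost',
            dialogflow_key_file='exampleagent-d8916-45d4143bb490.json',
            dialogflow_agent_id='exampleagent-d8916',
            dialogflow_language='nl-NL',
            tts_key_file='exampleagent-d8916-45d4143bb490.json',
            tts_voice='nl-NL-Standard-A',
            play_tts_voice='robot')
        sic.start()  # assumed entry point that connects to the server

The connector object sic is reused in the sketches further down this page.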

Actions

In principle, all actions are durative: an agent does not wait for their actual completion. The corresponding events must be used instead to determine when an action has finished.

Action

Description

Animations

These actions currently only work on Nao/Pepper robots. Use the computer-robot JAR to emulate the expected responses to these actions locally if needed.

set_breathing(bool)

Starts or stops breathing animations on the robot, based on the truth value of the parameter (1 enables breathing, 0 disables it).

A BreathingEnabled or BreathingDisabled event will be sent when this is done.

Example: set_breathing(1).

do_gesture(Animation)

Performs the given animation. See http://doc.aldebaran.com/2-5/naoqi/motion/alanimationplayer-advanced.html for the available animations per robot.

A GestureStarted and GestureDone event will be sent before the animation plays and after it finishes respectively.

Example: do_gesture('animations/Stand/Gestures/Hey_1').

go_to_posture(Posture,Speed=100)

Make the robot go into the given posture at the given speed (0-100; the default when it is left out is 100% speed). See http://doc.aldebaran.com/2-5/family/pepper_technical/postures_pep.html#pepper-postures and http://doc.aldebaran.com/2-8/family/nao_technical/postures_naov6.html#naov6-postures for the available postures on the Pepper and Nao respectively.

An on_posture_changed percept will be sent reflecting any change in posture.

Example: go_to_posture('Stand', 50).

play_motion(RecordedMotion)

See the start_record_motion action for more information; the data that comes out of such a recording can be fed into this action as a JSON string (the format is given at the end of this section).

The action can also play an XML file at the given path (e.g. exported from Choregraphe), optionally with an emotion that will be used to modify the given animation.

The emotion can be one of: [fear, mad, supersad, alarmed, tense, afraid, angry, annoyed, distressed, frustrated, miserable, sad, gloomy, depressed, bored, droopy, tired, sleepy, aroused, astonished, excited, delighted, happy, pleased, glad, serene, content, atease, satisfied, relaxed, calm].

A PlayMotionStarted and PlayMotionDone event will be sent before the motion is performed and after it has finished respectively.

Example: play_motion('Animation.xml').

rest

Make the robot go into rest mode. This is the inverse of the wake_up action.

An is_awake percept will be sent reflecting any change in the state of the robot.

set_ear_colour(Colour)

Set the colour of the robot's ear LEDs. The Colour can be either a predefined single colour (white, red, green, blue, yellow, magenta, or cyan), a predefined combination (rainbow or greenyellow), or an RGB-colour given in hexadecimal: 0x00RRGGBB. On the Pepper, the ear LEDs can only be various shades of blue.

An EarColourStarted and EarColourDone event will be sent before the ear LEDs change colour and after they have changed colour respectively.

Example: set_ear_colour('0x0000000A').

set_eye_colour(Colour)

Set the colour of the robot's eye LEDs. The Colour can be either a predefined single colour (white, red, green, blue, yellow, magenta, or cyan), a predefined combination (rainbow or greenyellow), or an RGB-colour given in hexadecimal: 0x00RRGGBB.

An EyeColourStarted and EyeColourDone event will be sent before the eye LEDs change colour and after they have changed colour respectively.

Example: set_eye_colour('rainbow').

set_head_colour(Colour)

Set the colour of the robot's LEDs on the top of its head. This is only possible on the Nao. The Colour can be either a predefined single colour (white, red, green, blue, yellow, magenta, or cyan), a predefined combination (rainbow or greenyellow), or an RGB-colour given in hexadecimal: 0x00RRGGBB.

A HeadColourStarted and HeadColourDone event will be sent before the head LEDs change colour and after they have changed colour respectively.

Example: set_head_colour('red').

set_idle()

Set the 'idle' mode of the robot. This can be either 'true' (look straight ahead but slightly upwards) or 'straight' (look exactly straight ahead).

A SetIdle event will be sent when the robot went into an idle mode ('true' or 'straight').

Example: set_idle().

set_led_color(LedList,ColorList,Duration=0)

Sets the given list of LEDs to the given colours in the given duration (in milliseconds, the default of 0 means instantly).

A LedColorStarted and LedColorDone event will be sent when the color change has started and after it's done respectively.

Example: set_led_color(['LeftFaceLeds','RightFaceLeds'],['yellow','orange']).

set_non_idle

Disable the 'idle' mode of the robot. This means its head will move in the robot's autonomous mode, which is the default behaviour. 

A SetNonIdle event will be sent when the robot is back into autonomous head movement.

set_stiffness(JointList,Stiffness,Duration=1000)

Sets the stiffness of one or more of the robot's joints ([Head, RArm, LArm, RLeg, LLeg] on the Nao and [Head, RArm, LArm, Leg, Wheels] on the Pepper). The stiffness can be between 0 and 100 (i.e. 0% to 100%), and the duration of the change is given in milliseconds (1000ms, i.e. 1 second by default if left out).

A SetStiffnessStarted and SetStiffnessDone event will be sent when the robot starts changing the stiffness and after it's done respectively.

Example: set_stiffness(['LArm', 'RArm'], 100).

start_led_animation(LedGroup,AnimType,ColorList,Speed)

On the given group of LEDs ([eyes, chest, feet, all]), starts an animation of the given type ([rotate, blink, alternate]) using the given colors at the given speed (in milliseconds).

A LedAnimationStarted and LedAnimationDone event will be sent when the animation has started and after it's done respectively.

Example: start_led_animation('eyes', 'blink', ['red', 'blue'], 100).

start_record_motion(JointList,Framerate=5)

Start recording the robot's motion on the given joints or joint chains. See http://doc.aldebaran.com/2-8/family/nao_technical/bodyparts_naov6.html#nao-chains for the Nao and http://doc.aldebaran.com/2-5/family/pepper_technical/bodyparts_pep.html for the Pepper. The position of each joint will be recorded the given number of times per second (5 times per second by default if left out).

A RecordMotionStarted event will be sent once the motion recording starts.

Example: start_record_motion(['Head']).

stop_led_animation

Cancels any ongoing LED animation.

stop_motion_recording

Stops any ongoing motion recording. An on_robot_motion_recording percept will be sent as a result.

turn(Degrees)

Make the (Pepper) robot turn the given number of degrees (-360 to 360).

A TurnStarted and TurnDone event will be sent when the robot starts turning and after it's done respectively.

wake_up

Get the robot out of rest mode. This is the inverse of the rest action.

An is_awake percept will be sent reflecting any change in the state of the robot.


The RecordedMotion argument of play_motion is a JSON string in the following format:

Code Block
        {'robot': <'nao'/'pepper'>, 'compress_factor_angles': <int>, 'compress_factor_times': <int>,
        'motion': {'joint1': {'angles': [...], 'times': [...]}, 'joint2': {...}}}

The recording that is returned via callback after stop_motion_recording (which requires start_record_motion) can be used as input for play_motion.

play_motion_file(file_path)

See play_motion. Instead of a JSON string describing the motion, it expects a string with a path to a JSON file containing the motion.
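
To illustrate the record-and-replay cycle, a sketch assuming the connector object sic from the initialisation sketch above; where exactly the result callback is registered is an assumption, but the action names and the JSON format come from this page.

Code Block
        recorded = {}

        def on_recording(motion):
            # Assumed callback: receives the recorded motion as a JSON
            # string in the format shown above.
            recorded['motion'] = motion

        sic.start_record_motion(['Head'])        # start recording the head chain
        sic.wait(5, lambda: None)                # record for five seconds
        sic.stop_motion_recording(on_recording)  # assumed: callback passed here

        # Replay the recording directly, or persist it for play_motion_file
        # (simplified: in practice, wait for the callback before replaying).
        sic.play_motion(recorded['motion'])
        with open('my_motion.json', 'w') as f:
            f.write(recorded['motion'])
        sic.play_motion_file('my_motion.json')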


Audiovisual

These actions work on any supported audio device (a robot, laptop, tablet, etc.).

speech_recognition(context, max_duration, callback, with_sentiment)

Uses Dialogflow to recognise a person’s speech.

context: Google's Dialogflow context label (str) - needs to be defined in the Dialogflow Agent

max_duration: maximum time to listen in seconds (int)

callback: callback function that will be called when a result (or fail) becomes available

with_sentiment: use the sentiment analysis service to analyse the received text (bool)

speech_recognition_shortcut(context, shortcuts, max_duration, callback, with_sentiment)

Uses Dialogflow to recognise short answers (e.g.: yes/no) in a person’s speech. This is useful because Dialogflow often listens for a longer time to make sure that a person has stopped talking. In the case of short answers, this causes a somewhat unnatural pause.

context: Google's Dialogflow context label (str) - needs to be defined in the Dialogflow Agent

shortcuts: list of short answers to look for in the user’s reply

max_duration: maximum time to listen in seconds (int)

callback: callback function that will be called when a result (or fail) becomes available

with_sentiment: use the sentiment analysis service to analyse the received text (bool)
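
A sketch of how these two actions might be called, again assuming the connector object sic from the initialisation sketch; the shape of the result passed to the callback is an assumption.

Code Block
        def on_speech(result):
            # Assumed payload: the recognition result, or an indication of failure.
            print('recognised:', result)

        # Listen for up to 10 seconds under the 'answer_yesno' Dialogflow context.
        sic.speech_recognition('answer_yesno', 10, on_speech, False)

        # Same, but return as soon as one of the short answers is heard.
        sic.speech_recognition_shortcut('answer_yesno', ['yes', 'no'], 10, on_speech, False)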

clear_loaded_audio

Clear any audio that was preloaded on an audio device (using the load_audio action).

A ClearLoadedAudioDone event will be sent once this has completed.

record_audio(duration, callback)

Records audio for the given number of seconds. The location of the resulting audio file is returned via the callback function.

duration: number of seconds of audio that will be recorded (int)

callback: callback function that will be called when the audio is recorded.
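
For example (a sketch with the same assumed connector object sic):

Code Block
        def on_recorded(audio_path):
            # The callback receives the location of the recorded audio,
            # which can then be fed into play_audio.
            sic.play_audio(audio_path)

        sic.record_audio(5, on_recorded)  # record five seconds, then play it back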

stop_recording

Prevent the Dialogflow service from sending the audio it processes to the client (which is not done by default; see start_recording).

start_recording

Make the Dialogflow service send the audio of each fragment to the client (see the on_audio_recording percept as well). The audio is normalised to make it more suitable for playback.

load_audio(Path)

Preload the given audio file (which can be either a local file or a remote URL) on the audio device. This prevents the audio device from having to download the file when the audio is played later.

The result (once the audio file is preloaded) is a loadedAudioID percept, which gives you an identifier to use in the play_loaded_audio action.

Example: load_audio('12-00-00.wav').

play_audio(Path)

Directly play the given audio file (which can be either a local file or a remote URL) on the audio device.

A PlayAudioStarted and PlayAudioDone event will be sent when the audio starts playing (i.e. after it has been loaded on the audio device) and when it has finished playing respectively.

Example: play_audio('12-00-00.wav').

play_loaded_audio(Identifier)

Play the preloaded audio file associated with the given loadedAudioID on the audio device.

A PlayAudioStarted and PlayAudioDone event will be sent when the audio starts playing and when it has finished playing respectively.

Example: play_loaded_audio(1).

say_text_to_speech(Text)

Use Google's TTS service to play the given text. The voice and the language used are set at initialisation time (see the tts_voice init parameter).

A TextStarted and TextDone event will be sent when the device starts playing the text and after it has completed this respectively.

Example: say_text_to_speech('Hello, world!').

say(Text)

Use text-to-speech to make the audio device play the given text. The exact result depends on the device that is used.

A TextStarted and TextDone event will be sent when the device starts playing the text and after it has completed this respectively.

Example: say('Hello, world!').

say_animated(Text)

The same as say(Text), but on a Nao/Pepper the robot will automatically add some animations whilst saying this text.

A TextStarted and TextDone event will be sent when the audio device starts playing the text and after it has completed this respectively.

Example: say_animated('Hello. Goodbye!').

set_speech_param(param, value)

Set various speech parameters. These parameters can be pitchShift, doubleVoice, doubleVoiceLevel, doubleVoiceTimeShift, or speed. Check http://doc.aldebaran.com/2-8/naoqi/audio/altexttospeech-api.html#ALTextToSpeechProxy::setParameter__ssCR.floatCR for the values these parameters can have.

Example: set_speech_param('speed', 200) to make the robot speak twice as fast.

set_language(LanguageKey)

Set the language to be used by the audio device's text-to-speech engine and Dialogflow's speech-to-text engine. If dialogflow_language is given in the init parameters, the language is set accordingly; otherwise it is 'nl-NL' by default.

A LanguageChanged event will be sent when the text-to-speech engine has switched language (Dialogflow will use the given language for the next detection).

Example: set_language('en-US').

start_listening(Timeout,Context='')

Opens up the selected microphone and starts streaming the audio, e.g. to Dialogflow, either until the Timeout (in seconds, possibly with decimals) has been reached or a stop_listening action is called. If the given Timeout is 0, the microphone will remain open until stop_listening is called. The optional Context in this action is fed to Dialogflow (see https://cloud.google.com/dialogflow/docs/contexts-input-output#input_contexts).

A ListeningStarted and ListeningDone event will be sent when the microphone is opened and when it has closed again respectively. There are, however, also specific events from services that depend on the microphone (see the event percept for more information).

Note that when using Dialogflow, by default, you can do this at most 1000 times per 24 hours (see 'Standard Edition - Audio' at https://cloud.google.com/dialogflow/quotas).

Example: start_listening(2.5, 'answer_yesno').

start_looking(Timeout)

Opens up the selected camera and starts streaming the video, e.g. to a face recognition or emotion detection service, either until the Timeout (in seconds, possibly with decimals) has been reached or a stop_looking action is called. If the given Timeout is 0, the camera will remain open until stop_looking is called.

A WatchingStarted and WatchingDone event will be sent when the camera is opened and when it has closed again respectively. There are, however, also specific events from services that depend on the camera (see the event percept for more information).

Example: start_looking(0).

subscribe_vision_listener(vision_type, callback, continuous)

Subscribes to the results of a type of vision. This function is needed in order to get the results of the vision type (e.g.: coordinates of a person in the image) via the callback.

vision_type: type of vision to subscribe to. Can be any of the types FACE, PEOPLE, EMOTION, CORONA

callback: callback function that will be called with the vision result

continuous: whether the service continuously returns vision results, or stops after the first one. Blocking if not continuous (bool)

unsubscribe_vision_listener(vision_type)

Unsubscribes from the type of vision.

vision_type: type of vision to unsubscribe from. Can be any of the types FACE, PEOPLE, EMOTION, CORONA
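
A sketch of subscribing to face-recognition results with the assumed connector object sic; passing the vision type as a plain string is an assumption.

Code Block
        def on_face(face_id):
            # Called with each vision result while the camera is open
            # (see the on_face_recognized percept below).
            print('recognised face', face_id)

        sic.subscribe_vision_listener('FACE', on_face, True)  # continuous results
        sic.start_looking(10)                                 # stream the camera for 10 seconds
        sic.unsubscribe_vision_listener('FACE')               # later: stop receiving results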

subscribe_event_listener(event, callback, continuous)

Subscribes to events of a given type.

event: type of event to subscribe to (str)

callback: callback function that will be called with the event

continuous: whether the service continuously returns events, or stops after the first one. Blocking if not continuous (bool)

unsubscribe_event_listener(event)

Unsubscribes from a type of events.

event: type of event to unsubscribe from
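
For example, a sketch that waits for the TextDone event (see the on_event percept below) to sequence two say actions, with the same assumed connector object sic:

Code Block
        def after_text(event):
            # Called once the text-to-speech has finished.
            sic.say('Goodbye!')

        sic.subscribe_event_listener('TextDone', after_text, False)  # fire once
        sic.say('Hello, world!')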

wait(duration, callback)

Make the robot wait for a certain time

duration: time for which to wait (int)

callback: callback function that will be called when the robot finishes waiting

stop_listening

See start_listening; force-closes the microphone (if it was open).

stop_talking

See say and say_animated; aborts the current text-to-speech being executed (if any).

stop_looking

See start_looking; force-closes the camera (if it was open).

take_picture

Instructs the face recognition and/or people detection service to send the next camera image (see start_looking) to the client. See the on_picture percept for more information.

Browser

These are actions for connected browsers, e.g. the Pepper's tablet. The webserver service is always required for this.

browser_show(Html)

Render the given HTML code in the body of the page on the connected browser. For more information, see Tablets/Phones/Browsers.
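
Example (illustrative; any HTML body content should work here): browser_show('<h1>Hello, world!</h1>').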

...

Events

Event

Description

on_audio_language(LanguageKey)

A new audio language has been requested, possibly by something external like the browser (see set_language and renderPage).

Example: on_audio_language('en-US').

on_audio_recording(Filename)

A new recorded audio file is available; the filename is always hh-mm-ss.wav and stored in the same folder as the currently running MAS2G (see start_recording and start_listening). Can be fed into one of the audio playback actions (e.g. play_audio).

Example: on_audio_recording('12-00-00.wav').

on_battery_charge_change(Charge)

The current percentage of battery charge left in the robot.

Example: on_battery_charge_change(100).

on_corona_check_passed

Sent by the coronachecker service once some valid Dutch CoronaCheck QR code has been recognised in the video stream.

on_emotion_detected(Emotion)

An emotion was detected in the given image by the emotion detection service (see start_looking).

Example: on_emotion_detected('happy').

on_event(Event)

Either an event related to some action above, i.e. one of [BreathingDisabled, BreathingEnabled, ClearLoadedAudioDone, EarColourDone, EarColourStarted, EyeColourDone, EyeColourStarted, GestureDone, GestureStarted, HeadColourDone, HeadColourStarted, LanguageChanged, ListeningDone, ListeningStarted, MemoryEntryStored, PlayAudioDone, PlayAudioStarted, PlayMotionDone, PlayMotionStarted, RecordMotionStarted, SetIdle, SetNonIdle, SetSpeechParamDone, TextDone, TextStarted, TurnDone, TurnStarted, UserDataSet, WatchingDone, WatchingStarted], an event related to one of the robot's sensors, i.e. one of [BackBumperPressed, FrontTactilTouched, HandLeftBackTouched, HandLeftLeftTouched, HandLeftRightTouched, HandRightBackTouched, HandRightLeftTouched, HandRightRightTouched, LeftBumperPressed, MiddleTactilTouched, RearTactilTouched, RightBumperPressed] (see http://doc.aldebaran.com/2-5/family/robots/contact-sensors_robot.html), or an event originating from one of the services, i.e. one of [EmotionDetectionDone, EmotionDetectionStarted, FaceRecognitionDone, FaceRecognitionStarted, IntentDetectionDone, IntentDetectionStarted, MemoryEntryStored, PeopleDetectionDone, PeopleDetectionStarted, UserDataSet].

Example: on_event('TextDone').

on_face_recognized(Identifier)

A face was recognised by the face recognition service. The identifier is a unique number for the given face, starting from 0. The percept will be sent continuously as long as the camera is open and the face is recognised.

Example: on_face_recognized(12).

on_hot_device_detected(DeviceList)

One or more devices of the robot are (too) hot.

Example: on_hot_device_detected(['LLeg','RLeg']).

on_audio_intent(Intent,Params,Confidence,Text,Source)

An Intent with the given name was detected by the Dialogflow service, possibly under the current context (see start_listening). Note that the intent can be an empty string as well, meaning no intent was matched (but some speech was still processed into text). The Params are an optional key-value list of all recognised entities in the intent. The Confidence is indicated between 0 (no intent detected) and 100 (completely sure about the match). The Text is the raw text that was generated by Dialogflow from the speech input. Finally, the Source is one of ['audio', 'chat', 'webhook'].

Note that when an IntentDetectionDone event is sent, but no intent percept has arrived at that time, no (recognisable) speech was found by Dialogflow.

Example: on_audio_intent('answer_yesno', [], 100, "Yes", 'audio').

is_awake(Awake)

The rest mode of the robot has changed: it is either resting (Awake=false) or it woke up (Awake=true). See the wake_up and rest actions.

Example: is_awake(true).

is_charging(Charging)

The robot is either plugged in (Charging=true) or not (Charging=false).

Example: is_charging(true).

on_robot_motion_recording(Recording)

The result of a start_record_motion action, which can be fed into play_motion.

Example: on_robot_motion_recording([…]).

on_person_detected(X,Y)

Sent when the people detection service detects someone; the X and Y coordinates represent the (estimated) center of the person’s face in the image. The percepts will be sent continuously as long as the camera is open and someone is detected.

on_picture(Filename)

A new picture file is available; the filename is always hh-mm-ss.jpg and stored in the same folder as the currently running MAS2G (see start_looking and take_picture). Can be fed into the renderPage action (e.g. converted to base64).

Example: on_picture('12-00-00.jpg').

on_posture_changed(Posture)

The robot has taken the given posture; see the go_to_posture action.

Example: on_posture_changed('Stand').

on_text_sentiment(Sentiment)

Sent by the sentiment analysis service for each transcript; either positive or negative.

Example: on_text_sentiment('positive').

on_text_transcript(Text)

A quick, direct indication of the text spoken by an end-user whilst intent detection is running; the final text given in the intent percept might differ from the (multiple) transcripts that may be received first!

Example: on_text_transcript("Hey").

...