Dialogflow

The dialogflow service makes the Google Dialogflow platform available within your application (for either microphone or browser input; see https://socialrobotics.atlassian.net/wiki/spaces/CBSR/pages/1470267449). Dialogflow is used to translate human speech into intents (intent recognition). In other words, it not only (tries to) convert an audio stream into readable text, it also maps that text to an intent, possibly with some additional parameters. For example, an audio stream can be transcribed to the string "I am 15 years old", which is in turn converted to the intent 'answer_age' with the parameter 'age=15'.
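To make the text-to-intent step concrete, here is a minimal toy sketch of the mapping Dialogflow performs. This is not the real Dialogflow model (which is trained from your agent's training phrases); it only illustrates the shape of the result, and the function name and result keys are hypothetical:

```python
import re

def recognise_intent(transcript: str) -> dict:
    """Toy illustration of intent recognition: map a transcript to an
    intent name plus extracted parameters, as Dialogflow would.
    (Hypothetical helper; the real mapping is done by your trained agent.)"""
    match = re.search(r"\bI am (\d+) years old\b", transcript, re.IGNORECASE)
    if match:
        return {"intent": "answer_age", "parameters": {"age": int(match.group(1))}}
    # No known intent matched this transcript.
    return {"intent": None, "parameters": {}}

print(recognise_intent("I am 15 years old"))
# {'intent': 'answer_age', 'parameters': {'age': 15}}
```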

In order to create a Dialogflow agent, visit https://dialogflow.cloud.google.com and log in with a Google account of choice. Use the 'Create Agent' button at the top left to start your first project. For our framework to be able to communicate with this agent, the project ID and a keyfile are required. Press the settings icon next to your agent's name at the top left to see the Project ID. Click on the Project ID itself to open the associated Google Cloud project, and then, to obtain the corresponding JSON keyfile, follow the steps given here.

The main items of interest are the Intents and the Entities. An intent is something you want to recognise from an end-user; here we will show you an example of an intent that is aimed at recognising someone’s name.

When creating an intent, you can name it anything you like; we go with 'answer_name' here. Below 'Action and parameters', you should give the name of the intent that will actually be used in your program; here, we also make that 'answer_name'.

Moreover, it is useful to set a context for the intent. A context is set by the requester to indicate that we only want to recognise this specific intent, and not another one. Usually, in a social robotics application, the kind of answer we want to get is known. We match the name of the (input) context with the name of the intent, and thus make it 'answer_name' as well. By default, Dialogflow keeps the context alive for 5 responses; change the 5 (at the output context) to a 0 to prevent this.

Now we arrive at the most important aspect of the intent: the training phrases. Here you give the kinds of input strings you would expect; from these, Dialogflow learns the model it will eventually use. You can turn a part of a phrase into a parameter by double-clicking on the relevant word and selecting the appropriate entity from the list. It will then automatically appear below 'Action and parameters' as well; the 'parameter name' given there will be passed in the result (we use 'name' here). The system has many built-in entities (like 'given name'), but you can define your own entities as well (even by importing CSV files). Our complete intent example thus looks like this (note: using sys.given-name is usually preferred):
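When the 'answer_name' intent matches, the result carries the intent's name and the 'name' parameter described above. As a hedged sketch, the snippet below extracts these from a query result shaped like Dialogflow's REST `queryResult` field (`intent.displayName` and `parameters`); the `sample` payload and the helper function are hypothetical:

```python
def extract_name(query_result: dict):
    """Pull the 'name' parameter out of a Dialogflow-style query result.
    Returns None when a different (or no) intent was recognised."""
    intent = query_result.get("intent", {}).get("displayName")
    if intent != "answer_name":
        return None
    return query_result.get("parameters", {}).get("name")

# Hypothetical response for the utterance "My name is Alice".
sample = {
    "queryText": "My name is Alice",
    "intent": {"displayName": "answer_name"},
    "parameters": {"name": "Alice"},
}
print(extract_name(sample))
# Alice
```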

In order to use this intent in an application, we need to set the language, the project ID (agent name), and the keyfile. See speech_recognition_example.py (in the Python API) for an example of the above intent used in a Python application. Make sure to set your own agent name and keyfile (path)!
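The three settings can be sketched as follows. All concrete values here are placeholders you must replace with your own; the `GOOGLE_APPLICATION_CREDENTIALS` environment variable is the standard way Google's client libraries locate a JSON keyfile, and a Dialogflow session is addressed as `projects/<project-id>/agent/sessions/<session-id>`:

```python
import os

# Placeholder values -- substitute your own agent's settings.
DIALOGFLOW_LANGUAGE = "en-US"
DIALOGFLOW_PROJECT_ID = "my-agent-id"         # the Project ID from the agent settings
DIALOGFLOW_KEYFILE = "/path/to/keyfile.json"  # the JSON keyfile downloaded from Google Cloud

# Google's client libraries read the keyfile location from this variable.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = DIALOGFLOW_KEYFILE

# How a Dialogflow session for this agent is addressed.
session_path = f"projects/{DIALOGFLOW_PROJECT_ID}/agent/sessions/demo-session"
print(session_path)
# projects/my-agent-id/agent/sessions/demo-session
```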

Note: there is a rare bug where Dialogflow suddenly responds only with 'UNAUTHENTICATED' errors. Restarting Docker and/or your entire machine seems to be the only way to resolve this.