We have taken you by the hand thus far and walked you through, step by step, the code you were asked to produce. We will still provide useful information for you to learn more about developing conversational agents, but we will now change gears a bit and leave more for you to figure out yourself. Remember that you can find useful information and links on the Project Background Knowledge page and on previous pages.
Dialogflow
Conversational patterns set expectations about what actors participating in a conversation will do, but users often will not quite meet these expectations and make moves that do not fit into the active pattern. The conversational agent will need to be able to handle such “unexpected” moves from users. “Unexpected” here means that these moves do not fit into the currently active conversational pattern, not that such moves should not be expected from users at all. Two intents that we should expect to match with what a user could say are an appreciation intent and an intent to check what the agent is capable of. You should add the following intents to your Dialogflow agent, making sure that you use the intent labels specified below:
- Add an `appreciation` intent to match user expressions of appreciation or gratitude. The intent should, for example, match with a "thank you" phrase.
- Add a `checkCapability` intent that enables a user to inquire about what your agent can do. The intent should, for example, match with a phrase such as "What can you do?".
**Note:** As before, it is up to you to add sufficiently many training phrases to cover the range of different phrases a user could use for expressing an intent.
Fallback intent
Another "unexpected" intent does not have its origin with the user but rather results from misunderstandings that arise due to speech recognition problems. If Dialogflow is not able to transcribe what a user says and classify it as a known intent (one of the intents created for your Dialogflow agent), it will classify the user's input as the default fallback intent. In other words, the default fallback intent is matched when your agent does not recognize an end-user expression. Check out https://cloud.google.com/dialogflow/es/docs/intents-default#fallback. You do not need to add a fallback intent, as it is already available when you create your Dialogflow agent.
**Info:** When you inspect the default fallback intent in your agent, you will see the action name that is associated with this intent.
The default fallback intent is a special intent for several reasons. It is matched if Dialogflow cannot match user input with any of the other intents known by the agent. But Dialogflow will also match with this intent if user input matches with training phrases that you provide for the fallback intent. You can add training phrases to the fallback intent that act as negative examples to make sure these phrases are not matched with any other intent. There may be cases where end-user expressions have a slight resemblance to your training phrases, but you do not want these expressions to match any normal intents.
Phrases that are completely unrelated to the topic of our recipe recommendation agent will already be classified as the fallback intent, since they do not even vaguely resemble any of the training phrases of the intents you create, so it is not necessary to add such phrases as negative examples. Instead, it is more useful to think of phrases a user might plausibly say that are similar to some of your agent's training phrases but that should nevertheless not be matched with any of your agent's intents.

Coming up with such phrases is not easy: the cooking domain is very extensive, and it is not completely clear what a recipe recommendation agent should be able to understand and/or handle. Perhaps it is best to include, as a design decision, some phrases that the agent clearly will not be able to handle (e.g., because of the limitations of its database). An example that comes to mind is a user saying something like "I don't want anything that has a lot of calories". Such a request will not be easy to handle because the calorie information available for recipes in the database is very limited at best. Arguably, however, even if the agent cannot handle the request very well, it should at least be able to understand it and provide an appropriate response, for example an apology that it is unable to process a request like this. Perhaps a better example would be a user saying "I am interested in rabbits" in the sense of "I have an interest in rabbits". This statement is somewhat similar to expressing a preference about a recipe ("I'd like a recipe with rabbit"), but its meaning is quite unrelated to requests about recipes. In any case, it is up to you to make up your mind about what kind of training phrases should be used for the fallback intent.
...
**Info:** The user can cause an
Prolog and Patterns
Repair
When a conversational agent does not understand a user, it needs the conversational competence to deal with the situation. Here, we will make an important distinction between two types of misunderstanding. In the first case, the agent is not able to match what a user says with an intent, and the Dialogflow agent matches the fallback intent. The second case is quite different: here the Dialogflow agent can make sense of what a user says and matches it with one of the intents created for that agent, but the conversational agent has trouble handling the intent within the conversational context as defined by the active conversational pattern. In other words, the intent does not fit into that pattern, and the question is how the agent should deal with this. We again provide two (somewhat special) repair patterns for the agent to deal with each of these situations. In both of these situations, the agent should make clear to the user that there is a problem, in such a way that the ongoing conversation can still continue.
Responding to a fallback with a paraphrase request
An example of a user expression that will not be recognized by our Dialogflow agent is the following:
...
Don't forget to add textual responses in the `responses.pl` file for the `paraphraseRequest` intent. You can use the Responses section of the fallback intent in your Dialogflow agent for inspiration.
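As a sketch, entries for this intent could look as follows, assuming your `responses.pl` stores responses as `text/3` facts with an identifier, an intent, and a response string (the identifiers `p1` and `p2` are made up; adapt the shape to the conventions you have been using):

```prolog
% Hypothetical response entries for the paraphraseRequest intent.
% The agent can pick one of these when it fails to understand the user.
text(p1, paraphraseRequest, "Sorry, I didn't quite get that. Could you rephrase?").
text(p2, paraphraseRequest, "I'm not sure I understood. Could you say that in a different way?").
```

Providing several alternative phrasings lets the agent vary its replies, which sounds less robotic when the user has to be asked to rephrase more than once.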
Responding to an out-of-context user intent
Now suppose that the user said something that Dialogflow can match with one of the agent’s intents but that intent does not fit into the active conversational pattern. An example of handling a situation like that is the following:
...
It will also require you to add quite a bit more code for handling the other patterns that we have (though you might also figure out that doing that work does not make sense for all possible pattern contexts and out-of-context intents; in a greeting context, for example, your agent might simply not care that much and define `text(c10, contextMismatch(_), "")`). Moreover, you should not forget to update your code for generating responses for the `contextMismatch(Intent)` intent when you add more patterns and intents to your Dialogflow agent. The pay-off, however, will be that your conversational agent will be much clearer, more useful, and more appreciated by its users.
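To illustrate the kind of per-context choices involved, responses could be declared along these lines, reusing the `text/3` format from the greeting example above (the context identifier `c20` and the wording are made up for illustration):

```prolog
% In a greeting context (c10) the agent silently ignores the mismatch.
text(c10, contextMismatch(_), "").
% In a hypothetical recipe-selection context (c20) it explains the problem,
% regardless of which out-of-context intent was matched.
text(c20, contextMismatch(_), "I understood you, but that does not fit here. Let's first finish choosing a recipe.").
```

The design question for each context is whether a mismatch is worth interrupting the user for, or whether staying silent and keeping the active pattern going serves the conversation better.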
Appreciation
A simple example of a pattern where a user first expresses their appreciation and the agent receives this well is the following:
...
In Moore and Arar's taxonomy, this classifies as a b42 sequence closer appreciation pattern. Implement this pattern in `patterns.pl`. You should use the intent labels `appreciation` and `appreciationReceipt`. Add phrases the agent can use for expressing the receipt of the user's appreciation in the `responses.pl` file.
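Exactly how a pattern is declared depends on the representation you have been using in `patterns.pl`; as a sketch, if patterns are declared as a list consisting of the taxonomy id followed by actor-intent pairs, the appreciation pattern could look like:

```prolog
% b42: sequence closer appreciation (Moore and Arar).
% The user expresses appreciation and the agent acknowledges receiving it.
pattern([b42, [user, appreciation], [agent, appreciationReceipt]]).
```

This is a two-turn pattern: once the agent has produced its `appreciationReceipt` move, the pattern is closed and the conversation can return to whatever pattern was active before.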
Checking capabilities
When a user wants to know what your agent can do for them, i.e., check what capabilities it has, the agent should be able to provide an appropriate reply. The key challenge here is to fill in the ___ in the example below: what would be a good response to such a general request for information from a user? The capability check should give a user enough guidance to understand how to talk to the agent or, even better, ideally also to ask more specific questions about its capabilities, for example, "Tell me more about the recipe features you know about" (cf. Moore and Arar, 2019).
...
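As with the other intents, the reply itself goes into `responses.pl`. A minimal, made-up sketch, again assuming the `text/3` format with a hypothetical identifier, could be:

```prolog
% cap1 is a hypothetical identifier; adapt the wording to the capabilities
% your agent actually has, so the response does not overpromise.
text(cap1, checkCapability, "I can recommend recipes. You can, for example, ask me for recipes with specific ingredients or from a particular cuisine.").
```

Mentioning a couple of concrete example requests in the response is usually more helpful than an abstract list of features, because it shows the user phrases that will actually work.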
**Info:** A similar design choice for specifying the response applies to the
Visuals
You can update the visuals based on what you think will help the user the most. Think about how you can support the implemented capability visually.
Test it Out
Before we added the repair patterns above, the agent got stuck when it misunderstood something and was not able to proceed with the conversation. With the changes you have made now, you should be able to say something random; the agent should indicate that it does not understand, and you should be able to continue with the ongoing, active pattern.
...