We have taken you by the hand thus far and walked you through the code you were asked to produce step-by-step. We will still be providing useful information for you to learn more about developing conversational agents, but now we will change gears a bit and leave more for you to figure out yourself, too. Remember that you can find useful information and links on the Project Background Knowledge page.
Dialogflow
Conversational patterns set expectations about what actors participating in a conversation will do, but users often will not quite meet these expectations and make moves that do not fit into the active pattern. The conversational agent will need to be able to handle such “unexpected” moves from users. “Unexpected” here means that these moves do not fit into the currently active conversational pattern, not that such moves should not be expected from users. Two intents that we should actually expect to match with what a user could say are an appreciation intent and an intent for checking what the agent is capable of. You should add the following intents to your Dialogflow agent, making sure that you use the intent labels specified below:
...
Throughout the project, you should keep checking the validation page for issues and update the fallback intent by adding negative examples when they come to mind (e.g., when you add more training phrases for other intents).
Prolog and Patterns
Repair
When a conversational agent does not understand a user, it needs the conversational competence to deal with the situation. Here, we distinguish between two important types of misunderstanding. In the first case, the agent is not able to match what a user says with an intent, and the Dialogflow agent matches with a fallback intent. The second case is quite different: here the Dialogflow agent is able to make sense of what a user says and matches it with one of the intents created for that agent, but the conversational agent has trouble handling the intent within the conversational context as defined by the active conversational pattern. In other words, the intent does not fit into that pattern, and the question is how the agent should deal with this. We provide two repair mechanisms, again using (somewhat special) patterns, for the agent to deal with each of these situations.
...
We call the second move of the user an out of context move. The agent could expect many intents to fit here, but a greeting such as the user expression in the example is out of context and does not fit. In principle, there can be many intents that simply do not fit the context of the conversation. The agent should respond to such out of context intents with a response that indicates there is a mismatch with the context (at least from the agent’s perspective). We therefore call the last agent move a contextMismatch. The pattern that we want to implement here as a repair mechanism consists of the out of context user intent as first move and the contextMismatch as second move. The identifier we use for this pattern is b13.
This pattern is somewhat different from other patterns: for the first [Actor, Intent] pair we only know that the Actor must be instantiated with user, but there is no specific intent we can use to instantiate the Intent parameter. Because it should be possible to instantiate this parameter with, in principle, any intent, we keep it as a variable and do not instantiate it at all. To complicate things a bit further, we want to enable the agent to shape its response specifically based on the Intent that is out of context, so we pass this parameter on to our contextMismatch move. Providing a generic type of response as in the example is unsatisfying given that the conversational agent has so much more information it can use to shape its response. To make this possible, we simply add the parameter to the intent label and use contextMismatch(Intent) for the second agent move.
Add a Prolog rule for the b13 pattern to the patterns.pl file using Intent and contextMismatch(Intent) as the intent labels for the first and second dialog move.
Make a contextMismatch response in responses.pl.
Paraphrase requests are determined depending on the context of the conversation (the pattern that is currently active). Under the paraphrase request section in responses.pl, you should create text/3 rules, which specify the active pattern, the agent response name, and what the agent should say. Fill all of these in.
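As an illustration, a context-specific paraphrase request might look like the sketch below. The paraphraseRequest response label and the wording are only assumptions for this example; use the label your own agent defines and tailor the text to each top-level pattern on your agenda.
Code Block
% Hypothetical paraphrase request for the a50recipeSelect context; the
% paraphraseRequest label is assumed here and may differ in your agent.
text(a50recipeSelect, paraphraseRequest,
    "I did not quite get that. Could you rephrase it, for example by mentioning an ingredient or cuisine you would like?").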
...
You should make sure that the Intent is not equal to the default fallback intent by adding this as a condition to the rule.
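Putting this together, the b13 rule could look roughly like the sketch below. This assumes that patterns are encoded as pattern/1 facts whose list starts with the pattern identifier followed by the [Actor, Intent] pairs, and that the fallback intent is labelled defaultFallback; adapt both to the encoding and labels actually used in your patterns.pl.
Code Block
% Out of context repair (b13): any user intent that does not fit the active
% pattern is answered with a contextMismatch move that carries that intent.
pattern([b13, [user, Intent], [agent, contextMismatch(Intent)]]) :-
    % Exclude the fallback intent; that case is handled by the other repair mechanism.
    Intent \= defaultFallback.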
Info
We have already hinted at the special status of the default fallback intent.
As usual, we need to add a response for the agent for the contextMismatch(Intent) intent. A simple approach would be to respond with the generic “I am not sure what that means in this context”. You can do this by adding a simple text/2 fact for the intent to the responses.pl file.
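For concreteness, such a generic fact could look like the sketch below (assuming, as for the other responses in responses.pl, that text/2 maps an intent label to the text the agent should say):
Code Block
% Generic, context-independent response to any out of context intent.
text(contextMismatch(_), "I am not sure what that means in this context.").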
The downside of responding with such a generic reply is that it leaves the user wondering what is wrong. It would be better to tailor the response to the specific out of context intent, and to make use of the currently active pattern at top-level for providing more conversational context. If we take all of that into account, we can shape a response that expresses more specifically what we would have liked the user to do. Instead of the text/2 predicate that we have used thus far to add responses for agent intent labels, it is also possible to use a text/3 predicate for creating differentiated responses that take conversational context into account. The idea, then, is to add rules with text(PatternID, contextMismatch(Intent), Txt) as head to the responses.pl file, where we instantiate PatternID with top-level patterns (the ones we added to the agent's agenda). We provide one example to illustrate the approach for the conversational context a50recipeSelect:
Code Block
text(a50recipeSelect, contextMismatch(Intent), Txt) :-
    recipesFiltered(Recipes), length(Recipes, L), L>0,
    convertIntent(Intent, IntentString),
    string_concat("I'm not sure I got what you said. ", IntentString, Txt1),
    string_concat(Txt1, ", but I was expecting you to add or remove recipe preferences.", Txt).
The idea of this rule is to combine a number of things: (1) to apologise for not quite getting what the user said (the agent being “confused”), (2) to acknowledge that the agent could make something out of what the user said (by paraphrasing the intent that was matched), and (3) to express an expectation of what the user should have done (to make the agent understand). In an a50recipeSelect context we expect users to contribute to the recipe selection process by indicating their recipe preferences. We used this expectation to design the final part of this formula for responding to out of context intents. This provides a generic recipe for responding to an out of context intent that can be implemented for any combination of context (pattern ID) and out of context intent.
It does require a lot more work, of course. For one thing, we need to map all intents that could be out of context intents to expressions that we can use in our definition. We provided some of that work for you below for the intents that we have seen thus far.
Code Block
%%% Converting intent to response strings
convertIntent(appreciation, "You were expressing an appreciation").
convertIntent(checkCapability, "You asked what I can do for you").
convertIntent(greeting, "You were saying Hi").
convertIntent(requestRecommendation, "You were asking me to pick a recipe for you").
convertIntent(recipeRequest, "You were asking for a specific recipe").
It will also require you to add quite a bit more code for handling the other patterns that we have (though you might also figure out that doing this work does not make sense for all possible combinations of pattern context and out of context intent; in a greeting context, for example, your agent might simply not care that much and define text(c10, contextMismatch(_), "")). Moreover, you should not forget to update your code for generating responses for the contextMismatch(Intent) intent when you add more patterns and intents to your Dialogflow agent. The pay-off, however, will be that your conversational agent will be much clearer to, more useful for, and more appreciated by its users.
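To illustrate how this extends to other contexts, the sketch below spells out the “don’t care” clause for the greeting context mentioned above, together with one more clause that follows the same recipe. The a50recipeConfirm identifier and the wording are purely illustrative assumptions; substitute the top-level pattern identifiers that are actually on your agent’s agenda.
Code Block
% In a greeting context (c10) we simply ignore the mismatch.
text(c10, contextMismatch(_), "").

% Hypothetical clause for another top-level context; replace a50recipeConfirm
% with a pattern identifier your agenda actually uses.
text(a50recipeConfirm, contextMismatch(Intent), Txt) :-
    convertIntent(Intent, IntentString),
    string_concat("Sorry, I got a bit lost there. ", IntentString, Txt1),
    string_concat(Txt1, ", but I was expecting you to confirm or reject the recipe I suggested.", Txt).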
Appreciation
A simple example of a pattern where a user first expresses their appreciation and the agent receives this well is the following:
...
In Moore and Arar’s taxonomy, this classifies as a c30 pattern for a general capability check. Implement this pattern in the patterns.pl file. You should use the intent labels checkCapability and describeCapability. Define the agent’s response in the responses.pl file.
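A minimal sketch of what this could look like is shown below, again assuming the pattern/1 and text/2 encodings used above; adjust it to the representation and wording of your own agent.
Code Block
% General capability check (c30): the user asks what the agent can do,
% and the agent describes its capabilities.
pattern([c30, [user, checkCapability], [agent, describeCapability]]).

% Response for the describeCapability move; adapt the wording to your agent.
text(describeCapability, "I can help you find a recipe by filtering on things such as ingredients and cuisine.").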
Visuals
There is nothing we ask you to do here for this capability. It’s up to you.
...