
We have taken you by the hand thus far and walked you through, step by step, the code you were asked to produce. We will still provide useful information for you to learn more about developing conversational agents, but now we will change gears a bit and leave more for you to figure out yourself, too. Remember that you can find useful information and links on the Project Background Knowledge page.

Dialogflow

Conversational patterns set expectations about what actors participating in a conversation will do, but users often will not quite meet these expectations and make moves that do not fit into the active pattern. The conversational agent will need to be able to handle such “unexpected” moves from users. “Unexpected” here means that these moves do not fit into the currently active conversational pattern, not that such moves should not be expected from users at all. Two intents that we should actually expect to match with what a user could say are an appreciation intent and an intent for checking what the agent is capable of. You should add the following intents to your Dialogflow agent, making sure that you use the intent labels specified below:

...

Another “unexpected” intent does not have its origin with the user, but rather is a result of misunderstandings that arise due to speech recognition problems. If, in our case, Dialogflow is not able to transcribe what a user says and classify it as a known intent (one of the intents created for your Dialogflow agent), it will classify what the user says as a default fallback intent. Or, in other words, the default fallback intent is matched when your agent does not recognize an end-user expression. Check out https://cloud.google.com/dialogflow/es/docs/intents-default#fallback. You do not need to add a fallback intent, as it is already available when you create your Dialogflow agent.

...

Because phrases that are completely unrelated to the topic of our recipe recommendation agent will be classified as a fallback intent when they do not even vaguely resemble any of the training phrases that you add to the intents you create, it is not necessary to add such phrases as negative examples. Instead, it is more useful to think of phrases that a user might say that are similar to some of the training phrases used for your Dialogflow agent but that should not be matched with any of your agent’s intents. It is, however, not that easy to come up with such phrases, as the cooking domain is very extensive and it is not completely clear what a recipe recommendation agent should be able to understand and/or handle.

Perhaps it is best to include, as a design decision, some phrases that the agent clearly will not be able to handle (e.g., because of the limitations of its database). An example that comes to mind is a user saying something like “I don't want anything that has a lot of calories”. Such a request will not be easy to handle because the information available about calories for the recipes in the database is very limited at best. Arguably, however, even if the agent is not able to handle the request very well, it should at least be able to understand it (the Dialogflow agent should be able to make sense of it) and provide an appropriate response, for example, an apology that it is unable to process a request like this. Perhaps a better example would be a user saying “I am interested in rabbits” in the sense of “I have an interest in rabbits”. This statement is somewhat similar to expressing a preference about a recipe (“I’d like a recipe with rabbit”), but its meaning is quite unrelated to requests about recipes. In any case, it is up to you to make up your mind about what kind of training phrases should be used for the fallback intent.

Tip

When talking about the similarity of (training) phrases, other issues might come to mind. Using training phrases for different intents, for example, that are quite similar will confuse your Dialogflow agent. To avoid such issues, Dialogflow provides a feature called https://cloud.google.com/dialogflow/es/docs/agents-validation. You should enable this validation feature for your Dialogflow agent and, when you have done this, check out the agent validation page. You will see at least one intent issue warning: "There are no negative examples in the agent. Please add examples into 'Default Fallback Intent' intent."

Throughout the project, you should keep checking the validation page for issues and update the fallback intent by adding negative examples when they come to mind (e.g., when you add more training phrases for other intents).

Prolog and Patterns

Repair

When a conversational agent does not understand a user, it needs to have the conversational competence to deal with the situation. Here, we will make two important distinctions related to the type of misunderstanding. The first case is that the agent is not able to match what a user says with an intent, and the Dialogflow agent matches with a fallback intent. The second case is quite different. Here the Dialogflow agent can make sense of what a user says and matches it with one of the intents created for that agent. The conversational agent, however, has trouble handling the intent within the conversational context as defined by the active conversational pattern. In other words, the intent does not fit into that pattern, and the question is how the agent should deal with this. We provide two repair mechanisms, again using (somewhat special) patterns, for the agent to deal with each of these situations.

...

An example of a user expression that will not be recognized by our Dialogflow agent is the following:

U: Have you read The Hobbit?

A: What do you mean?

...

Don’t forget to add textual responses in the responses.pl for the paraphraseRequest intent. You can use the Responses section of the fallback intent in your Dialogflow agent for inspiration.
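As a sketch, such responses could look as follows, assuming the text/2 predicate is used in responses.pl the same way as for your other intents and that multiple facts for the same intent allow the agent to vary its replies (the phrasings below are just placeholders; adapt them to your own agent's voice):

```prolog
% Hypothetical text/2 facts for the paraphraseRequest intent in responses.pl.
% Several alternatives let the agent avoid repeating itself.
text(paraphraseRequest, "What do you mean?").
text(paraphraseRequest, "Sorry, I didn't quite catch that. Could you rephrase it?").
text(paraphraseRequest, "I'm not sure I understood you. Can you say that differently?").
```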

Responding to an out-of-context user intent

Now suppose that the user said something that Dialogflow can match with one of the agent’s intents but that intent does not fit into the active conversational pattern. An example of handling a situation like that is the following:

...

We call the second move of the user an out-of-context move. The agent could expect many intents to fit here, but a greeting such as the user expression in the example is out of context and does not fit. In principle, there can be many intents that simply do not fit the context of the conversation. The agent should respond to such out-of-context intents with a response that indicates there is a mismatch with the context (at least from the agent’s perspective). We therefore call the last agent move a contextMismatch. The pattern that we want to implement here as a repair mechanism consists of the out-of-context user intent as the first move and the contextMismatch as the second move. The identifier we use for this pattern is b13. This pattern is somewhat different from other patterns, as we only know for the first [Actor, Intent] pair that the Actor must be instantiated with user, but there is no specific intent we can use to instantiate the Intent parameter. For this reason, as it should be possible to instantiate this parameter with, in principle, any intent, we keep this variable and do not instantiate it at all. To complicate things even further, to enable the agent to shape its response specifically based on the Intent that is out of context, we want to pass this parameter on to our contextMismatch move. Providing a generic type of response as in the example is very unsatisfying given that the conversational agent has so much more information it can use to shape its response. To make this possible, we simply add the parameter to the intent label and use contextMismatch(Intent) for the second agent move.

  • Add a Prolog rule for the b13 pattern to the patterns.pl file using Intent and contextMismatch(Intent) as the intent labels for the first and second dialog move. You should make sure that the Intent is not equal to the default fallback intent by adding this as a condition to the rule.
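A minimal sketch of what such a rule could look like is shown below. It assumes that patterns are represented as pattern/1 facts with the pattern identifier as the first list element followed by [Actor, Intent] pairs, and that the fallback intent label is defaultFallback; check your own patterns.pl and Dialogflow agent for the actual representation and label and adapt accordingly:

```prolog
% Sketch of the b13 repair pattern: any user intent other than the fallback
% intent, answered by a contextMismatch move that carries that intent along.
pattern([b13, [user, Intent], [agent, contextMismatch(Intent)]]) :-
    Intent \= defaultFallback.
```

Note that Intent is deliberately left as a variable so the rule matches any recognized intent, while the \=/2 condition excludes the fallback intent, which is handled by the other repair mechanism.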

Info

We have already hinted at the special status of the b13 pattern. In a sense, it is a “catch-all” pattern that matches any intent as the first move and does not represent a specific common conversational pattern. Because of its generic form, the agent could apply this pattern to any user expression, but that would not be very useful. The application of this special pattern is therefore regulated differently by the dialog manager in the updateSession.mod2g module. When you inspect this file, you will see that the b13 pattern is used as the last option for processing a (user) intent. This last option corresponds to the case where a recognized intent is an out-of-context intent.

As usual, we need to add a response for the agent for the contextMismatch(Intent). A simple approach would be to respond with the generic “I am not sure what that means in this context”. You can do this by adding a simple text/2 fact for that intent to the responses.pl file.
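Such a generic fact could, for example, look like this (assuming the text/2 conventions already used in your responses.pl):

```prolog
% Generic response for any out-of-context intent; the anonymous variable
% ignores which intent was actually matched.
text(contextMismatch(_), "I am not sure what that means in this context.").
```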

The downside of responding with such a generic reply is that it leaves the user wondering what is wrong. It would be better to tailor the response to the specific out-of-context intent and to make use of the currently active top-level pattern to provide more conversational context. If we take all that into account, we can shape a response that expresses more specifically what we would have liked the user to do. Instead of the text/2 predicate that we have used thus far to add responses for agent intent labels, it is also possible to use a text/3 predicate for creating differentiated responses that take conversational context into account. The idea is thus to add rules with text(PatternID, contextMismatch(Intent), Txt) as head to the responses.pl file, where we instantiate PatternID with top-level patterns (the ones we added to the agent's agenda). We provide one example to illustrate the approach for the conversational context a50recipeSelect:

...

The idea of this rule is to combine a number of things: (1) to apologize for not quite getting what the user said (the agent being “confused”), (2) to acknowledge that the agent could make something out of what the user said (by paraphrasing the intent that was matched), and (3) to express an expectation of what the user should have done (to make the agent understand). In an a50recipeSelect context, we expect users to contribute to the recipe selection process by indicating their recipe preferences. We used this expectation to design a specific response for the end of this formula for responding to out-of-context intents. This provides a generic recipe for responding to an out-of-context intent that can be implemented for any combination of context (pattern ID) and out-of-context intent.

It does require a lot more work, of course. For one thing, we need to map all intents that could be out-of-context intents to expressions that we can use in our definition. We provided some of that work for you below for the intents that we have seen thus far.

...

You will also still need to add quite a lot of code for handling the other patterns that we have (though you might also conclude that doing that work does not make sense for all possible pattern contexts and out-of-context intents; in a greeting context, for example, your agent might simply not care that much and define text(c10, contextMismatch(_), "")). Moreover, you should not forget to update your code for generating responses for the contextMismatch(Intent) intent when you add more patterns and intents to your Dialogflow agent. The pay-off, however, will be that your conversational agent will be much clearer to, more useful for, and more appreciated by its users.

...

A simple example of a pattern where a user first expresses their appreciation and the agent receives this well is the following:

...

In Moore and Arar’s taxonomy, this classifies as a b42 sequence closer appreciation pattern. Implement this pattern in patterns.pl. You should use the intent labels: appreciation and appreciationReceipt. Add phrases the agent can use for expressing the receipt of the user’s appreciation in the responses.pl file.
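A sketch of what this could look like is shown below, again assuming patterns are represented as pattern/1 facts with the pattern identifier first, followed by [Actor, Intent] pairs, and that responses use text/2 (verify against the conventions in your own patterns.pl and responses.pl; the phrasings are placeholders):

```prolog
% Sketch of the b42 sequence closer: user appreciation, agent receipt.
pattern([b42, [user, appreciation], [agent, appreciationReceipt]]).

% Example text/2 responses for the receipt move in responses.pl.
text(appreciationReceipt, "You're welcome!").
text(appreciationReceipt, "Glad I could help.").
```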

...

When a user wants to know what your agent can do for them, i.e., check what capabilities it has, the agent should be able to provide an appropriate reply. The key challenge here is to fill in the ___ in the example below. What would be a good response to such a general request for information from a user? The capability check should give a user enough guidance to understand how to talk to the agent or, ideally, also to ask more specific questions about its capabilities, for example, “Tell me more about the recipe features you know about” (cf. Moore and Arar, 2019).

...

In Moore and Arar’s taxonomy, this classifies as a c30 pattern for a general capability check. Implement this pattern in the patterns.pl file. You should use the intent labels checkCapability and describeCapability. Define the agent’s response in the responses.pl file.
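Under the same assumptions as for the other pattern sketches (pattern/1 facts with the identifier first, [Actor, Intent] pairs after, and text/2 responses; check your own files for the actual representation), this could be sketched as follows, with a deliberately short, placeholder capability description:

```prolog
% Sketch of the c30 general capability check pattern.
pattern([c30, [user, checkCapability], [agent, describeCapability]]).

% A short, conversational capability description (placeholder text).
text(describeCapability,
    "I can help you find a recipe. You can tell me what you are looking for, for example a cuisine or an ingredient, and ask me about the recipe features I know about.").
```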

Visuals

There is nothing we ask you to do here for this capability. It’s up to you.

Test it Out

Say random stuff like “the sky is blue” at each step and see how your agent responds, whether it freezes, and whether you are allowed to continue.

...

Info

A similar design choice for specifying the response applies to the describeCapability intent as to the contextMismatch(Intent) intent, although for somewhat different reasons. As Moore and Arar (2019) also argue, a long presentation of the agent’s capabilities does not work in practice in a conversational interaction. In other words, specifying a long text for the agent’s response is not very suitable for a conversational agent. A more conversational approach would be to refine the response and take into account the context in which a user asks what the agent can do. You could thus also consider using the text/3 predicate, which allows specifying context, and design responses that differ for each of these contexts when a user asks for help.
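For example, a context-sensitive help response could be sketched as follows, assuming the text/3 variant of the responses predicate and the a50recipeSelect pattern identifier used earlier in this page (adapt the pattern IDs and phrasing to your own agent):

```prolog
% Context-specific capability description for the recipe selection context.
text(a50recipeSelect, describeCapability,
    "We are currently selecting a recipe. You can tell me your preferences, for example a cuisine, an ingredient, or how much time you want to spend cooking.").
```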

Visuals

You can update the visuals based on what you think will help the user the most. Think about how you can support the implemented capability visually.

Test it Out

Before we added the repair patterns above, the agent got stuck if it misunderstood something and was not able to proceed with the conversation. With the changes made now, you should be able to say something random, the agent should indicate that it does not understand, and you should be able to continue with the ongoing, active pattern.

We do not provide a detailed testing script here anymore but leave it up to you to systematically test your agent. You could say random stuff like “the sky is blue” at any step during the conversational interaction and evaluate whether your agent responds and you can continue afterward with the conversation.

Info

All done?

Proceed with Capability 5: Filter Recipes by https://socialrobotics.atlassian.net/wiki/spaces/PM2/pages/2216001572/Designing+and+Developing+Your+Agent#Agent-Capability-5%3A-Filter-Recipes-by-Ingredients.