

We have taken you by the hand thus far and walked you through the code you were asked to produce step-by-step. We will still be providing useful information for you to learn more about developing conversational agents, but now we will change gears a bit and leave more for you to figure out yourself, too. Remember that you can find useful information and links on the Project Background Knowledge page.

Dialogflow

Conversational patterns set expectations about what actors participating in a conversation will do, but users often will not quite meet these expectations and make moves that do not fit into the active pattern. The conversational agent will need to be able to handle such “unexpected” moves from users. “Unexpected” here means that these moves do not fit into the currently active conversational pattern, not that such moves should not be expected from users at all. Two intents that we should expect to match with what a user could say are an appreciation intent and an intent to check what the agent is capable of. You should add the following intents to your Dialogflow agent, making sure that you use the intent labels specified below:

  1. Add an appreciation intent to match user expressions of appreciation or gratitude. The intent should, for example, match with a “thank you” phrase.

  2. Add a checkCapability intent that enables a user to inquire about what your agent can do. The intent should, for example, match with a phrase such as “What can you do?”.
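For illustration, training phrases such as the following could be used (these are merely suggestions to get you started; you should come up with your own, larger set of variations):

  • appreciation: “thanks”, “thank you”, “great, thanks a lot”
  • checkCapability: “What can you do?”, “How can you help me?”, “What are you capable of?”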

Note


As before, it is up to you to add sufficiently many training phrases that will cover the range of different phrases a user could use for expressing an intent.

Fallback intent

Another “unexpected” intent does not have its origin with the user but rather results from misunderstandings that arise due to speech recognition problems. If, in our case, Dialogflow is not able to transcribe what a user says and classify it as a known intent (one of the intents created for your Dialogflow agent), it will classify what the user says as a default fallback intent. In other words, the default fallback intent is matched when your agent does not recognize an end-user expression; see https://cloud.google.com/dialogflow/es/docs/intents-default#fallback. You do not need to add a fallback intent, as it is already available when you create your Dialogflow agent.

Info

When you inspect the default fallback intent in your agent, you will see that the action name associated with this intent is input.unknown. That will also be the intent label that the MARBEL agent receives when a default fallback intent is matched with user input.

The default fallback intent is a special intent for several reasons. It is matched if Dialogflow cannot match user input with any of the other intents known by the agent. But Dialogflow will also match this intent if user input matches training phrases that you provide for the fallback intent. You can add training phrases to the fallback intent that act as negative examples, to make sure these phrases are not matched with any other intent. There may be cases where end-user expressions bear a slight resemblance to your training phrases, but you do not want these expressions to match any normal intents.

Because phrases that are completely unrelated to the topic of our recipe recommendation agent will be classified as a fallback intent anyway when they do not even vaguely resemble any of the training phrases that you add to the intents you create, it is not necessary to add such phrases as negative examples. Instead, it is more useful to think of phrases that a user might say that are similar to some of the training phrases used for your Dialogflow agent but that should not be matched with any of your agent’s intents. It is, however, not that easy to come up with such phrases, as the cooking domain is very extensive and it is not completely clear what a recipe recommendation agent should be able to understand and/or handle. Perhaps it is best to include, as a design decision, some phrases that the agent clearly will not be able to handle (e.g., because of the limitations of its database). An example that comes to mind is a user saying something like “I don't want anything that has a lot of calories”. It will not be easy to handle such a request because the information about calories that is available for the recipes in the database is very limited at best. However, arguably, even if the agent is not able to handle the request very well, it should at least be able to understand it (the Dialogflow agent should be able to make sense of it) and provide an appropriate response, for example, an apology that it is unable to process a request like this. Perhaps a better example would be a user saying “I am interested in rabbits” in the sense of “I have an interest in rabbits”. These statements are somewhat similar to expressing a preference about a recipe (“I’d like a recipe with rabbit”), but their meaning is quite unrelated to requests about recipes. In any case, it’s up to you to make up your mind about what kind of training phrases should be used for the fallback intent.

Tip

When talking about the similarity of (training) phrases, other issues might come to mind. For example, using training phrases for different intents that are quite similar will confuse your Dialogflow agent. To avoid such issues, Dialogflow provides a feature called agent validation (https://cloud.google.com/dialogflow/es/docs/agents-validation). You should enable this validation feature for your Dialogflow agent and, when you have done this, check out the agent validation page. You will see at least one intent issue warning: "There are no negative examples in the agent. Please add examples into 'Default Fallback Intent' intent."

Throughout the project, you should keep checking the validation page for issues and update the fallback intent by adding negative examples when they come to mind (e.g. when you add more training phrases for other intents).

Prolog and Patterns

Repair

When a conversational agent does not understand a user, it needs to have the conversational competence to deal with the situation. Here, we will make an important distinction between two types of misunderstanding. The first case is that the agent is not able to match what a user says with an intent, and the Dialogflow agent matches with a fallback intent. The second case is quite different. Here, the Dialogflow agent can make sense of what a user says and matches it with one of the intents created for that agent. The conversational agent, however, has trouble handling the intent within the conversational context as defined by the active conversational pattern. In other words, the intent does not fit into that pattern, and the question is how the agent should deal with this. We provide two repair mechanisms, again using (somewhat special) patterns, for the agent to deal with each of these situations.

Responding to a fallback with a paraphrase request

An example of a user expression that will not be recognized by our Dialogflow agent is the following:

U: Have you read The Hobbit?

A: What do you mean?

In this case, the user expression clearly does not match any of the intents that we created for our Dialogflow agent, and the agent will not understand it (a fallback intent will be received from the Dialogflow agent). In Moore and Arar’s (2019) taxonomy, this is a b12 pattern. In this pattern, the agent responds to a misunderstanding by asking the user to rephrase, i.e., it makes a paraphraseRequest move. Retrieve the intent label that you should use for the first user move by inspecting the Action section of the default fallback intent in your Dialogflow agent (and make sure this label is of type atom).

  • Add a b12 pattern to the patterns.pl file.

Don’t forget to add textual responses in the responses.pl file for the paraphraseRequest intent. You can use the Responses section of the fallback intent in your Dialogflow agent for inspiration.
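To illustrate, here is a minimal sketch of what this could look like. It assumes the list-based pattern/1 representation used for the other patterns in your patterns.pl file (a pattern ID followed by [Actor, Intent] pairs) and assumes that the intent label you retrieved from the Action section is input.unknown; adapt both to what you actually find in your own files.

Code Block
% In patterns.pl: sketch of the b12 repair pattern (assumed representation:
% a list with the pattern ID followed by [Actor, Intent] pairs).
% The label is quoted so that 'input.unknown' is a single atom.
pattern([b12, [user, 'input.unknown'], [agent, paraphraseRequest]]).

% In responses.pl: an example paraphrase request (phrasing is up to you).
text(paraphraseRequest, "What do you mean? Could you rephrase that?").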

Responding to an out-of-context user intent

Now suppose that the user said something that Dialogflow can match with one of the agent’s intents but that intent does not fit into the active conversational pattern. An example of handling a situation like that is the following:

A: What recipe would you like to cook?

U: Hey there.

A: I am not sure what that means in this context.

We call the second move of the user an out-of-context move. The agent could expect many intents to fit here, but a greeting such as the user expression in the example is out of context and does not fit. In principle, there can be many intents that simply do not fit the context of the conversation. The agent should respond to such out-of-context intents with a response that indicates there is a mismatch with the context (at least from the agent’s perspective). We therefore call the last agent move a contextMismatch. The pattern that we want to implement here as a repair mechanism consists of the out-of-context user intent as the first move and the contextMismatch as the second move. The identifier we use for this pattern is b13. This pattern is somewhat different from other patterns: for the first [Actor, Intent] pair we only know that the Actor must be instantiated with user, but there is no specific intent we can use to instantiate the Intent parameter. Since it should be possible to instantiate this parameter with, in principle, any intent, we keep it as a variable and do not instantiate it at all. To complicate things even further, to enable the agent to shape its response specifically based on the Intent that is out of context, we want to pass this parameter on to our contextMismatch move. Providing a generic type of response as in the example is very unsatisfying given that the conversational agent has so much more information it can use to shape its response. To make this possible, we simply add the parameter to the intent label and use contextMismatch(Intent) for the second agent move.

  • Add a Prolog rule for the b13 pattern to the patterns.pl file using Intent and contextMismatch(Intent) as the intent labels for the first and second dialog moves.
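A minimal sketch of such a rule, again assuming the list-based pattern/1 representation, is shown below. Note that Intent is deliberately left uninstantiated and is shared between both moves:

Code Block
% In patterns.pl: catch-all repair pattern for out-of-context intents.
% Intent remains a variable so it matches any user intent, and the same
% variable is passed on to the agent's contextMismatch response.
pattern([b13, [user, Intent], [agent, contextMismatch(Intent)]]).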

Info

We have already hinted at the special status of the b13 pattern. In a sense, it is a “catch-all” pattern that matches with any intent as the first move and does not represent a specific common conversational pattern. Because of its generic form, the agent could apply this pattern to any user expression, but that would not be very useful. The application of this special pattern therefore is regulated differently by the dialog manager in the updateSession.mod2g module. When you inspect this file, you will see that the b13 pattern is used as the last option for processing a (user) intent. This last option corresponds to the case where an intent that is recognized is an out-of-context intent.

As usual, we need to add a response for the agent’s contextMismatch(Intent) move. A simple approach would be to respond with the generic “I am not sure what that means in this context”. You can do this by adding a simple text/2 fact in the responses.pl file for that intent.
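Such a fact could look as follows (the exact wording is up to you):

Code Block
% In responses.pl: generic response for any out-of-context intent.
text(contextMismatch(_Intent), "I am not sure what that means in this context.").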

The downside of responding with such a generic reply is that it leaves the user wondering what is wrong. It would be better to tailor the response to the specific out-of-context intent and to make use of the currently active top-level pattern to provide more conversational context. If we take all that into account, we can shape a response that expresses more specifically what we would have liked the user to do. Instead of the text/2 predicate that we have used thus far to add responses for agent intent labels, it is also possible to use a text/3 predicate for creating differentiated responses that take conversational context into account. The idea thus is to add rules to the responses.pl file with text(PatternID, contextMismatch(Intent), Txt) as head, where we instantiate PatternID with top-level patterns (the ones we added to the agent's agenda). We provide one example to illustrate the approach for the conversational context a50recipeSelect:

Code Block
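% Out-of-context response for the a50recipeSelect context, used when at
% least one recipe still matches the user's recipe preferences (L > 0).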
text(a50recipeSelect, contextMismatch(Intent), Txt) :-
	recipesFiltered(Recipes), length(Recipes, L), L>0,
	convertIntent(Intent, IntentString),
	string_concat("I'm not sure I got what you said. ", IntentString, Txt1),
	string_concat(Txt1, ", but I was expecting you to add or remove recipe preferences.", Txt).

The idea of this rule is to combine several things: (1) to apologize for not quite getting what the user said (the agent being “confused”), (2) to acknowledge that the agent could make something out of what the user said (by paraphrasing the intent that was matched), and (3) to express an expectation of what the user should have done (to make the agent understand). In an a50recipeSelect context, we expect users to contribute to the recipe selection process by indicating their recipe preferences. We used this expectation to design the specific response at the end of this formula for responding to out-of-context intents. This provides a generic recipe for responding to an out-of-context intent that can be implemented for any combination of context (pattern ID) and out-of-context intent.

It does require a lot more work, of course. For one thing, we need to map all intents that could be out-of-context intents to expressions that we can use in our definition. We provided some of that work for you below for the intents that we have seen thus far.

Code Block
%%% Converting intent to response strings
convertIntent(appreciation, "You were expressing an appreciation").
convertIntent(checkCapability, "You asked what I can do for you").
convertIntent(greeting, "You were saying Hi").
convertIntent(requestRecommendation, "You were asking me to pick a recipe for you").
convertIntent(recipeRequest, "You were asking for a specific recipe").

It will also require you to still add a lot more code for handling the other patterns that we have (though you might also figure out that doing that work does not make sense for all possible pattern contexts and out-of-context intents; in a greeting context, for example, your agent might simply not care that much and define text(c10, contextMismatch(_), "")). Moreover, you should not forget to update your code for generating responses for the contextMismatch(Intent) intent when you add more patterns and intents to your Dialogflow agent. The pay-off, however, will be that your conversational agent will be much clearer, more useful, and more appreciated by its users.

Appreciation 

A simple example of a pattern where a user first expresses their appreciation and the agent receives this well is the following:

U: Thanks

A: You're welcome.

In Moore and Arar’s (2019) taxonomy, this classifies as a b42 sequence closer (appreciation) pattern. Implement this pattern in patterns.pl. You should use the intent labels appreciation and appreciationReceipt. Add phrases the agent can use for expressing the receipt of the user’s appreciation in the responses.pl file.
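A minimal sketch, under the same assumptions about the pattern representation as before (the response phrasing is only an example):

Code Block
% In patterns.pl: b42 sequence closer for expressions of appreciation.
pattern([b42, [user, appreciation], [agent, appreciationReceipt]]).

% In responses.pl: one way the agent could receive the appreciation.
text(appreciationReceipt, "You're welcome!").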

Checking capabilities

When a user wants to know what your agent can do for it, i.e., check what capabilities it has, the agent should be able to provide an appropriate reply. The key challenge here is to fill in the ___ in the example below. What would be a good response to such a general request for information from a user? The capability check should give a user enough guidance to understand how to talk to the agent or, ideally, even to ask more specific questions about its capabilities, for example, “Tell me more about the recipe features you know about” (cf. Moore and Arar, 2019).

Example:

U: What can you do?

A: At the moment I can ____. 

In Moore and Arar’s (2019) taxonomy, this classifies as a c30 pattern for a general capability check. Implement this pattern in the patterns.pl file. You should use the intent labels checkCapability (for the user move) and describeCapability (for the agent move). Define the agent’s response in the responses.pl file.
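A minimal sketch, again assuming the list-based pattern/1 representation; the response text is only a placeholder that you should replace with a description of what your agent can actually do:

Code Block
% In patterns.pl: c30 pattern for a general capability check.
pattern([c30, [user, checkCapability], [agent, describeCapability]]).

% In responses.pl: placeholder capability description; replace with your own.
text(describeCapability, "At the moment I can help you find a recipe that matches your preferences.").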

Info

A similar design choice for specifying the response applies to describeCapability as to the contextMismatch(Intent) intent, although for somewhat different reasons. As Moore and Arar (2019) also argue, a long presentation of the agent’s capabilities does not work in practice in a conversational interaction. In other words, specifying a long text for the agent’s response is not very suitable for a conversational agent. A more conversational approach would be to refine the response and take into account the context in which a user asks what the agent can do. You could thus also consider using the text/3 predicate, which allows specifying context, and design responses that differ for each of these contexts when a user asks for help.
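For example, a context-specific variant for the recipe selection context could look like the following sketch (a50recipeSelect is the pattern ID used earlier; the wording is a placeholder):

Code Block
% In responses.pl: context-specific capability description for the
% recipe selection context (pattern ID a50recipeSelect).
text(a50recipeSelect, describeCapability, "Right now I can help you narrow down the list of recipes; for example, you can ask me to filter recipes by ingredient.").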

Visuals

You can update the visuals based on what you think will help the user the most. Think about how you can support the implemented capability visually.

Test it Out

Before we added the repair patterns above, the agent got stuck if it misunderstood something and was not able to proceed with the conversation. With the changes made now, you should be able to say something random, the agent should indicate that it does not understand, and you should be able to continue with the ongoing, active pattern.

We do not provide a detailed testing script here anymore but leave it up to you to systematically test your agent. You could say random stuff like “the sky is blue” at any step during the conversational interaction and evaluate whether your agent responds and whether you can continue with the conversation afterward.

Info

All done?

Proceed with https://socialrobotics.atlassian.net/wiki/spaces/PM2/pages/2216001572/Designing+and+Developing+Your+Agent#Agent-Capability-5%3A-Filter-Recipes-by-Ingredients.