
We have taken you by the hand thus far and walked you through the code you were asked to produce step-by-step. We will still be providing useful information for you to learn more about developing conversational agents, but now we will change gears a bit and leave more for you to figure out yourself, too. Remember that you can find useful information and links on the Project Background Knowledge page.

Dialogflow

Conversational patterns set expectations about what actors participating in a conversation will do, but users often will not quite meet these expectations and make moves that do not fit into the active pattern. The conversational agent will need to be able to handle such “unexpected” moves from users. “Unexpected” here means that these moves do not fit into the currently active conversational pattern, not that such moves should not be expected from users. Two intents that we should actually expect to match with what a user could say are an appreciation intent and an intent for checking what the agent is capable of. You should add the following intents to your Dialogflow agent, making sure that you use the intent labels specified below:

  1. Add an appreciation intent to match user expressions of appreciation or gratitude. The intent should, for example, match with a “thank you” phrase.

  2. Add a checkCapability intent that enables a user to inquire about what your agent can do. The intent should, for example, match with a phrase such as “What can you do?”.

As before, it is up to you to add sufficiently many training phrases that will cover the range of different phrases a user could use for expressing an intent.

Fallback intent

Another “unexpected” intent does not have its origin with the user, but rather results from misunderstandings due to, for example, speech recognition problems. If, in our case, Dialogflow is not able to transcribe what a user says and classify it as a known intent (one of the intents created for your Dialogflow agent), it will classify the user's input as a default fallback intent. In other words, the default fallback intent is matched when your agent does not recognize an end-user expression; see https://cloud.google.com/dialogflow/es/docs/intents-default#fallback. You do not need to add a fallback intent, as it is already available when you create your Dialogflow agent.

When you inspect the default fallback intent in your agent, you will see that the action name associated with this intent is input.unknown. That will also be the intent label that the MARBEL agent receives when a default fallback intent is matched with user input.

The default fallback intent is a special intent for several reasons. It is matched if Dialogflow cannot match user input with any of the other intents known by the agent. But Dialogflow will also match with this intent if user input matches with training phrases that you provide for the fallback intent. You can add training phrases to the fallback intent that act as negative examples to make sure these phrases are not matched with any other intent. There may be cases where end-user expressions have a slight resemblance to your training phrases, but you do not want these expressions to match any normal intents.

Phrases that are completely unrelated to the topic of our recipe recommendation agent will be classified as a fallback intent anyway, because they do not even vaguely resemble any of the training phrases of the intents you create; it is therefore not necessary to add such phrases as negative examples. Instead, it is more useful to think of phrases that a user might plausibly say that are similar to some of your agent's training phrases but should not be matched with any of your agent's intents.

Coming up with such phrases is not that easy: the cooking domain is very extensive, and it is not completely clear what a recipe recommendation agent should be able to understand and/or handle. Perhaps it is best to include, as a design decision, some phrases that the agent clearly will not be able to handle (e.g., because of limitations of its database). An example that comes to mind is a user saying something like “I don't want anything that has a lot of calories”. Such a request is hard to handle because the calorie information available for recipes in the database is very limited at best. However, even if the agent cannot handle the request very well, it should arguably still be able to understand it (the Dialogflow agent should be able to make sense of it) and provide an appropriate response, for example an apology that it is unable to process a request like this. Perhaps a better example is a user saying “I am interested in rabbits” in the sense of “I have an interest in rabbits”. This statement is somewhat similar to expressing a preference about a recipe (“I'd like a recipe with rabbit”), but its meaning is quite unrelated to requests about recipes. In any case, it is up to you to make up your mind about what kind of training phrases should be used for the fallback intent.

When talking about the similarity of (training) phrases, another issue comes to mind: using quite similar training phrases for different intents will confuse your Dialogflow agent. To help you catch such issues, Dialogflow provides an agent validation feature (see https://cloud.google.com/dialogflow/es/docs/agents-validation). You should enable this validation feature for your Dialogflow agent; once you have done so, check the agent validation page. You will see at least one intent issue warning: "There are no negative examples in the agent. Please add examples into 'Default Fallback Intent' intent."

Throughout the project, you should keep checking the validation page for issues and update the fallback intent by adding negative examples when they come to mind (e.g., when you add more training phrases for other intents).

Prolog and Patterns

B Patterns for Repair and Conversation Enrichment

Good handling of repair is vital to making your agent robust to misunderstandings. There are a few common ways in which misunderstanding occurs that need to be addressed by proper repair patterns.

b12 The agent does not understand the user's utterance

Example:

U: Have you read the Hobbit?

A: what do you mean?

Your user has just said something odd or something that does not match the current situation, so your Dialogflow agent cannot identify the user's intention and falls back to the default fallback intent. In this pattern, your agent can try to come to an understanding by asking for a rephrasing with a ‘paraphraseRequest’. Add a b12 pattern using the defaultFallback intent and the paraphraseRequest response.

In responses.pl, paraphrase requests are determined by the context of the conversation (the pattern that is currently active). Under the paraphrase request section in responses.pl, you should create ‘text/3’ facts, which specify the active pattern, the agent response name, and what the agent should say. Fill all of these in.

% Intent: paraphrase request

text(c10, paraphraseRequest, ""). % we don't care exactly what the user said; we got some response

text(a50recipeSelect, paraphraseRequest, " ") :- % this clause should be used
	% if the number of filtered recipes is greater than 0

text(a50recipeSelect, paraphraseRequest, " ") :- % this clause should be used
	% if the number of filtered recipes is 0

text(a50recipeConfirm, paraphraseRequest, "").

text(c43, paraphraseRequest, "").
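
As an illustration of how the guarded clauses could be completed, here is a sketch that assumes a hypothetical helper predicate recipesFiltered/1 yielding the number of recipes that currently match the user's filters; your agent will likely use a differently named predicate for this, and the response wording is only a suggestion:

```
% Sketch only: recipesFiltered/1 is a hypothetical helper that yields the
% number of recipes currently matching the user's filters.
text(a50recipeSelect, paraphraseRequest,
	"Sorry, I didn't catch that. Could you rephrase your recipe preference?") :-
	recipesFiltered(N), N > 0.
text(a50recipeSelect, paraphraseRequest,
	"Sorry, I didn't catch that, and no recipes match your requests so far. Could you try something else?") :-
	recipesFiltered(0).
```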

b13 Out of Context

The user said something that Dialogflow can match to one of your intents, but your agent is not at a stage in the conversation where that intent is appropriate.

Example:

A: what recipe would you like to cook?

U: goodbye

A: not sure what that means in this context.

Think about what feature in Prolog you could use to represent that some intent was identified, regardless of which particular intent it is. The agent should respond with a ‘contextMismatch’ response.

Make a contextMismatch response in responses.pl.
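
As a hint: in Prolog, a variable (including the anonymous variable _) unifies with any term, so an intent slot written as a variable matches any recognized intent. The text fact for the response could, for example, look like this (the wording is just a suggestion):

```
% responses.pl: generic response for the contextMismatch move
text(contextMismatch, "I'm not sure what that means in this context.").
```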

b42 Appreciation 

Example:

U: Thanks

A: You're welcome.

Your user says thank you and shows some ‘appreciation’ and your agent should respond appropriately with an ‘appreciationReceipt’.

In patterns.pl, specify the b42 pattern. Furthermore, go into responses.pl and create a corresponding text fact.
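
The corresponding text fact in responses.pl could, for example, look as follows (the exact wording is up to you):

```
% responses.pl: response for the appreciationReceipt move
text(appreciationReceipt, "You're welcome!").
```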

C Pattern for Checking Capabilities

c30 General Capability Check

Your user wants to know what your agent does. Create a pattern for this. 

Example:

U: what can you do?

A: At the moment I can …. 

You should use the intents/predicates 'checkCapability' (for the user) and 'describeCapability' (for the agent).

In responses.pl, find text(describeCapability, ""). and fill in an appropriate response.
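
For example (the wording and the capabilities you list are up to you, and should match what your agent actually supports):

```
% responses.pl: describe what the agent can do; adapt this to your agent's actual capabilities
text(describeCapability, "At the moment I can recommend recipes: tell me your preferences and I will suggest recipes that match them.").
```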

Visuals

There is nothing we ask you to do here for this capability; it's up to you.

Test it Out

Say random things like “the sky is blue” at all steps of the conversation and check how your agent responds: does it freeze, or are you allowed to continue?

Before adding the repair patterns, the agent froze when it did not understand and could not proceed. Now you should be able to say something random; the agent will say that it does not understand, and you can then continue the pattern.
