
So far, we have taken you by the hand and walked you through the code you were asked to produce step by step. We will still provide useful information to help you learn more about developing conversational agents, but now we will change gears a bit and leave more for you to figure out yourself. Remember that you can find useful information and links on the Project Background Knowledge page.

Dialogflow

Conversational patterns set expectations about what actors participating in a conversation will do, but users often will not quite meet these expectations and make moves that do not fit into the active pattern. The conversational agent will need to be able to handle such “unexpected” moves from users. “Unexpected” here means that these moves do not fit into the currently active conversational pattern, not that such moves should not be expected from users at all. Two intents that we should expect users to express are an appreciation intent and an intent to check what the agent is capable of. You should add the following intents to your Dialogflow agent, making sure that you use the intent labels specified below:

...

Another “unexpected” intent does not have its origin with the user but rather is a result of misunderstandings that arise due to speech recognition problems. If, in our case, Dialogflow is not able to transcribe what a user says and classify it as a known intent (one of the intents created for your Dialogflow agent), it will classify what the user says as the default fallback intent. In other words, the default fallback intent is matched when your agent does not recognize an end-user expression. Check out https://cloud.google.com/dialogflow/es/docs/intents-default#fallback. You do not need to add a fallback intent, as it is already available when you create your Dialogflow agent.

...

Throughout the project, you should keep checking the validation page for issues and update the fallback intent by adding negative examples when they come to mind (e.g. when you add more training phrases for other intents).

Prolog and Patterns

Repair

When a conversational agent does not understand a user, it needs the conversational competence to deal with the situation. Here, we make an important distinction between two types of misunderstanding. In the first case, the agent is not able to match what a user says with an intent, and the Dialogflow agent matches the fallback intent. The second case is quite different: the Dialogflow agent can make sense of what the user says and matches it with one of the intents created for that agent, but the conversational agent has trouble handling the intent within the conversational context as defined by the active conversational pattern. In other words, the intent does not fit into that pattern, and the question is how the agent should deal with this. We provide two repair mechanisms, again using (somewhat special) patterns, for the agent to deal with each of these situations.

...

  • Add a Prolog rule for the b13 pattern to the patterns.pl file using Intent and contextMismatch(Intent) as the intent labels for the first and second dialog moves. You should make sure that the Intent is not equal to the default fallback intent by adding this as a condition to the rule. A hedged sketch of what such a rule might look like is shown below.
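
As a rough illustration only, here is a minimal sketch of such a rule. The list-based pattern/1 representation and the defaultFallback label used below are assumptions; adapt them to the conventions already used in your own patterns.pl file.

    % Hedged sketch of a b13 repair rule: a user intent followed by an agent
    % move that signals the intent does not fit the active pattern.
    pattern([b13, [user, Intent], [agent, contextMismatch(Intent)]]) :-
        % Do not apply this catch-all rule to the default fallback intent,
        % which is handled by a separate repair pattern.
        Intent \= defaultFallback.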

Info

We have already hinted at the special status of the b13 pattern. In a sense, it is a “catch-all” pattern that matches any intent as the first move and does not represent a specific common conversational pattern. Because of its generic form, the agent could apply this pattern to any user expression, but that would not be very useful. The application of this special pattern is therefore regulated differently by the dialog manager in the updateSession.mod2g module. When you inspect this file, you will see that the b13 pattern is used as the last option for processing a (user) intent. This last option corresponds to the case where a recognized intent is an out-of-context intent.

...

Info

A similar design choice for specifying the response applies to the describeCapability intent as to the contextMismatch(Intent) intent, although for somewhat different reasons. As Moore and Arar (2019) also argue, a long presentation of the agent’s capabilities does not work in practice in a conversational interaction. In other words, specifying a long text for the agent’s response is not very suitable for a conversational agent. A more conversational approach is to refine the response and take into account the context in which a user asks what the agent can do. You could thus also consider using the text/3 predicate, which allows you to specify a context, and design responses that differ for each of these contexts when a user asks for help.
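
As a hedged illustration only: the sketch below assumes that text/3 takes a context identifier, an intent label, and a response text, in that order, and the context names used here are hypothetical. Check the argument order and the context identifiers actually used in your own response definitions before adopting this.

    % Hedged sketch of context-specific capability descriptions; the argument
    % order (Context, Intent, Text) and the context names are assumptions.
    text(greetingContext, describeCapability, "I can help you with this task; just tell me what you would like to do and we will take it from there.").
    text(taskContext, describeCapability, "At this point you can ask me to adjust your current choices, go back a step, or start over.").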

Visuals

There is nothing we ask you to do here for this capability. It is up to you: you can update the visuals based on what you think will help the user the most. Think about how you can support the implemented capability visually.

Test it Out

Before we added the repair patterns above, the agent got stuck when it misunderstood something and was not able to proceed with the conversation. With the changes you have made now, you should be able to say something random, the agent should indicate that it does not understand, and you should be able to continue with the ongoing, active pattern.

...