...
Info: The information provided here should be sufficient for you to complete the project. For those of you who are interested and want to learn more about conversational patterns and the related coding scheme that we use here (e.g. C4, etc.), see: Moore, R. J., Arar, R. (2019), Conversational UX Design: A Practitioner's Guide to the Natural Conversation Framework. ACM.
Team Contribution
During this week, as a complete team you will select three recipes for the conversational agent from cookingrecipes.pl, which the subteam pairs for recipe selection, visual support, and recipe instruction will then work out further. In addition, you will set up the initial version of the cooking assistant by creating a Dialogflow agent (see here).
You should organize your team and decide who will work on the separate subtasks of Recipe Selection, Visual Support, and Recipe Instruction. In this week, the Recipe Selection team will focus on the start of the conversation, enabling the assistant to greet the user and to have the user select one of the three recipes, while the Recipe Instruction team will focus on the subsequent instruction of the three recipes and will enable a capability check by the user. The Visual Support team will incorporate visuals to enrich the conversation. More details about the separate tasks of the three subteam pairs are given below.
Recipe Selection
This section entails the tasks to be completed by the Recipe Selection team. In week 1, it mainly concerns the steps that allow the user to select a recipe from the knowledge base provided as cookingrecipes.pl. Although recipes can be found on many websites, the recipes used for this course are one-pot recipes that can be made in a linear, step-by-step fashion. Simpler recipes of this kind have been extracted from the Tasty website and are available in the file cooking.pl. Although not yet the focus for this week, you can get some ideas on how to implement the recipe selection task here in the Cooking Assistant space.
Opening the conversation by greeting
The agent should open the conversation with a greeting and by sharing its name (self-identify). To achieve this, add the corresponding C1 opening patterns in the patterns.pl file and choose a name for your chatbot (insert an agentName/1 fact into the agent's belief base in the init module). Check out the patterns.pl file, as it contains additional instruction details. An opening pattern should be selected when the session history is still empty. Make sure that the agent self-identifies by the corresponding pattern only when an agentName/1 fact has been instantiated. You should also introduce a greeting intent in your Dialogflow agent to allow the user to respond to the agent's greeting. Click here for instructions on how to create an intent in your Dialogflow agent.
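For example, the name fact could look as follows, together with a purely illustrative sketch of a C1 opening pattern (the pattern format shown here is a guess; follow the format documented in the comments of patterns.pl):

```prolog
% Insert an agentName/1 fact into the belief base in the init module.
% The name itself is an example; pick your own.
agentName('ChefBot').

% Purely illustrative C1 opening sketch: the agent greets and
% self-identifies, then the user greets back. The real pattern syntax
% is documented in the comments of patterns.pl.
pattern([c10, [agent, greeting], [agent, selfIdentification], [user, greeting]]).
```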
...
The initial version of the GOAL agent will immediately start instructing a pasta recipe, while it should only start instructing a recipe after a user request. In the coming weeks you will enable the user to choose a recipe by specifying particular recipe features. For now, we will assume that the user knows about the different recipes to pick from, and directly mentions the name of a recipe of choice. The capability check, allowing the user to ask what the agent can do, may also be used to have the agent mention the available recipes. You may also, for now, have the agent suggest particular recipes to the user, upon request or as part of the greeting sequence.
The first thing to do is to make sure the agent is able to recognize recipe names. To recognize which recipe the user is talking about, create an entity recipe in your Dialogflow agent and add the (shorthand) names of the recipes from cookingrecipes.pl (possibly with different variations of the recipe names). Also, add an intent recipeRequest to your Dialogflow agent and integrate these recipe names as entities. Now add a pattern a50recipeName to the patterns.pl file of your GOAL agent. Again, additional instructions are given in the comments of the file. As a final step, add the pattern to the agent's agenda. Make sure the agent's memory is initially empty, and check if the pattern is performed by testing your agent.
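If you prefer to prepare the entity values in bulk, the Dialogflow entity editor also accepts entries as JSON (via its raw mode), where each entry has a value and its synonyms. A minimal sketch (the recipe names and synonyms below are examples, not the actual contents of cookingrecipes.pl):

```json
[
  {"value": "pasta aglio e olio", "synonyms": ["pasta aglio e olio", "aglio e olio", "pasta aglio"]},
  {"value": "tomato soup", "synonyms": ["tomato soup", "soup with tomatoes"]}
]
```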
...
A recipe can have multiple features, like its name, ingredients, taste, country of origin, and cooking techniques. Your task here is to think of the three features you want to use to select a recipe (to be incorporated in weeks 2 and 3).
The best way to do this is role-playing, where one person is the cooking assistant and the other person is the user. The user inquires into the recipes to choose from the 60 available ones by asking for characteristics (like recipes that are a main dish, or contain fish). Note that this does not necessarily have to result in one of the three recipes that were chosen to work out further. You may check the Cooking Assistant space on Confluence for an elaboration of possible recipe features and how they are used in traditional websites to facilitate recipe selection.
Repeat these role-playing conversations a couple of times, and document well how the conversation was conducted. This will not only give inspiration for the recipe features to include in your project, but may also reveal some typical conversation patterns to be used in the design of the agent (in patterns.pl) in the subsequent weeks.
Recipe Instruction
This section lists the tasks to be completed by the Recipe Instruction team. The main focus is to work with instructions related to a selected recipe.
Enable Instructions of Recipes
The initial version of the cooking assistant spits out all the recipe steps at once. This should be changed so that the user can indicate that he or she has completed a step, which at a later point should also allow the user to ask queries about a step. To make this happen, we will make a small change to the given a30recipeStep pattern by adding a conversational step (user intent) to it. For this:
Add an intent recipeContinuer to Dialogflow that is able to recognize phrases such as "move on to the next step", "continue", simply "next", or other similar phrases that the user might use to indicate a step has been completed.
After adding this intent to your Dialogflow agent, add the step to the a30recipeStep pattern in the GOAL agent code. Your agent will now wait for the user's response before it continues with the next recipe step.
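As a hypothetical sketch of the change (the actual pattern representation is documented in the comments of patterns.pl and may differ):

```prolog
% Hypothetical sketch: after the agent utters a recipe step, the pattern
% expects the user's recipeContinuer intent before the next step is given.
% Consult patterns.pl for the actual pattern format used in this project.
pattern([a30recipeStep, [agent, recipeStep], [user, recipeContinuer]]).
```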
Enable Capability Check
The agent should be able to respond to user requests for more information about what it can do. To make this happen, add a C3 pattern to the patterns.pl file. In response to a capability check, the agent should indicate what it can do and, for example, make clear which recipes it can instruct by mentioning some of them (typically the three you have chosen). As can be seen for the a30recipeStep pattern in patterns.pl, a pattern is triggered by a user intent (there, recipeContinuer); you need to implement such an intent for the capability check in your Dialogflow agent to make this work. See this page for more information.
Enable End of Recipe
The initial version of the cooking assistant does not indicate that the final step of a recipe has been reached. It does already have a finalStep/3 predicate that is used as a marker for the final step. Modify the text generated for the finalStep intent in the text.pl file, using the built-in string_concat/3 predicate, to indicate to the user that this is the last step of the recipe.
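The built-in string_concat/3 predicate concatenates two strings. A minimal sketch of how the final-step text could be assembled (the finalStepText/2 rule and its arguments are hypothetical illustrations, not the actual rule shape in text.pl):

```prolog
% Illustrative only: prepend a closing phrase to the step's text using
% the built-in string_concat/3. The actual rule in text.pl may differ.
finalStepText(StepText, Text) :-
    string_concat("This is the last step of the recipe: ", StepText, Text).

% Example query:
% ?- finalStepText("enjoy your meal!", T).
% T = "This is the last step of the recipe: enjoy your meal!".
```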
...
The agent should close the conversation with a farewell or a last topic check. To this end, add the corresponding C4 patterns to the patterns.pl file. A closing pattern should be selected when all patterns in the agenda have been performed. Make sure that the farewell pattern is only performed when the lastTopicCheck/0 behavioral parameter is not set.
Visual Support
The assignment related to visual support is about how visuals can support choosing a recipe or improve the understanding of a recipe instruction.
Visuals to display the chosen recipe
...
You should design the visuals with respect to the recipe name and the instructions. This should enrich the information communicated to the user through text and images displayed on the screen.
...
A first implementation is to display the name and an image related to each of the three chosen recipes
...
Repeat the steps for the recipe instructions as well. For further details, see Week 2.
Example Conversation
The first version of your chatbot should be able to conduct a conversation with the user, similar to the example shown in the figure below.
...
, using HTML. In the initial GOAL code, the pasta recipe is displayed as an image in the initialization of the agent (see dialog_init.mod2g), whenever the agent says something (see dialog_generation.mod2g), and whenever the agent is waiting for the user to say something (see dialog_update.mod2g). This is obviously not a very useful way to do this. Make the PastaAglioPage rule in html.pl more generic by displaying the recipe of choice:
Find images for the recipes that you have chosen to work out further.
Define img cards as was done for PastaAglio.
Define a rule by which the page is rendered based on the currently chosen recipe.
For recipes for which you have not added an image, you may display the name of the recipe.
Make sure that the statements in dialog_generation.mod2g, dialog_init.mod2g, and dialog_update.mod2g are updated as well, so that they will not always render the PastaAglio image.
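As a rough sketch of the kind of generic rule to aim for (recipePage/2 and recipeImage/2 are hypothetical names invented for illustration; use the rule and img-card helpers actually present in html.pl):

```prolog
% Illustrative sketch: render a page for whichever recipe is currently
% selected, falling back to a text-only page when no image is available.
% recipeImage/2 is a hypothetical fact mapping a recipe to an image URL.
recipePage(Recipe, Html) :-
    recipeImage(Recipe, ImgUrl),
    !,
    format(string(Html), "<h1>~w</h1><img src=\"~w\">", [Recipe, ImgUrl]).
recipePage(Recipe, Html) :-
    format(string(Html), "<h1>~w</h1>", [Recipe]).
```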
Week 1 Deliverable
For your Week-1 submission, you need to submit a .doc file
...
The introduction of your team and your conversational agent.
...
weekly progress report in .doc format (See Guidelines).