
Welcome to the bot evaluation section!

The 2023: End Report contains a section asking how you tested and evaluated your bot. Throughout this course, you should continuously hold conversations with your bot and thoroughly analyze its functionality. Whenever you improve it or add features, re-test your conversational agent. We recommend that each team member also tests the parts built by the other team members.

Use your bot extensively (at least 10 different conversations per person). Test that everything works, i.e. make sure you trigger all possibilities: patterns, pages, filters, intents, etc.
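Besides manual conversations, you can also script a batch of test utterances against your agent. The sketch below is a minimal, hypothetical example assuming a Dialogflow ES agent and the official google-cloud-dialogflow Python client; the project ID, utterances, and expected intent names are placeholders you would replace with your own (for a CX agent the client library and calls differ).

```python
# Minimal sketch: send test utterances to a Dialogflow ES agent and check
# which intent each one triggers. Assumes the google-cloud-dialogflow
# client library is installed and GOOGLE_APPLICATION_CREDENTIALS is set.
# The project ID, utterances, and expected intent names are placeholders.
import uuid

from google.cloud import dialogflow

PROJECT_ID = "your-gcp-project-id"  # placeholder

# Each test case pairs an utterance with the intent you expect it to trigger.
TEST_CASES = [
    ("hello", "Default Welcome Intent"),
    ("what can you do", "capabilities"),  # hypothetical intent name
]


def detect_intent(text: str, session_id: str, language_code: str = "en"):
    """Send one text query to the agent and return the query result."""
    client = dialogflow.SessionsClient()
    session = client.session_path(PROJECT_ID, session_id)
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code=language_code)
    )
    response = client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    return response.query_result


if __name__ == "__main__":
    session_id = str(uuid.uuid4())  # one session for the whole test run
    for utterance, expected_intent in TEST_CASES:
        result = detect_intent(utterance, session_id)
        matched = result.intent.display_name
        status = "OK" if matched == expected_intent else "MISMATCH"
        print(f"[{status}] '{utterance}' -> {matched} "
              f"(confidence {result.intent_detection_confidence:.2f})")
```

Queries sent through the API should also show up in the agent's History and Training pages, so scripted runs feed into the same no-match analysis described below.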

Then, analyze no-match conversations in Training. Go to the Training page in Dialogflow and use the filter at the top left to select conversations; this video explains how to do this in more detail: Use Dialogflow Analytics & Training to Improve Your Chatbot (2021). Since we use SIC, the Analytics part of the video does not apply.
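If you also want to spot no-matches outside the Dialogflow console (for example while driving the bot through SIC), you can check the fallback flag on each query result. This is a small, illustrative addition to the sketch above; detect_intent refers to the helper defined there, and the utterances are placeholders.

```python
# Flag utterances that fall through to a fallback intent (a "no-match"),
# so they can be reviewed or added as training phrases.
no_matches = []
for utterance in ["book me a table", "asdfgh"]:  # placeholder utterances
    result = detect_intent(utterance, session_id="no-match-check")
    if result.intent.is_fallback:
        no_matches.append(utterance)

print("Utterances to review or add as training phrases:")
for utterance in no_matches:
    print(" -", utterance)
```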

Things to keep in mind during testing, which should also be covered in the Testing section of your End Report:

  • Capabilities of your bot

  • What do you think is most important to test in this phase?

  • Test setup

  • What an example conversation should look like

  • What did you test, and how?

  • No-match conversation analysis

  • How did your tests go, good and bad? (Focus on the bad and why it went wrong.)

  • How could the problems be fixed? Will you improve the bot before hand-in, or is that not feasible?

  • During use, what kind of extensions do you think could be useful or improve performance?

    • How could the bot be extended even further?
