Welcome to the bot evaluation system testing section!

In the 2023: End Report you will see a section asking how you tested and evaluated your bot.

System testing comprises a set of checks that you, as developers of the agent, carry out to identify how well the agent and its components operate when exposed to diverse input. Do this continuously while developing your bot, simply by engaging in conversations with the agent and analyzing its functionality. Use your bot a lot (at least 10 different conversations per person), and vary the conversations you have. Try to trigger each pattern, page, intent, and filter to check that everything works. As you improve the bot and add features, keep re-testing it. We also recommend that each team member tests the parts built by the other sections of the team.

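One lightweight way to keep track of this re-testing is a coverage checklist that records which components each test conversation triggered. Below is a minimal, framework-agnostic sketch in Python; the component names (greeting, collect_toppings, and so on) are hypothetical placeholders, so swap in your own bot's actual patterns, pages, intents, and filters.

```python
from collections import defaultdict

# Everything your bot defines; testing is complete when each item
# has been triggered at least once. All names below are hypothetical
# examples, not part of any specific framework.
COMPONENTS = {
    "intent": {"greeting", "order_pizza", "goodbye"},
    "page": {"start", "collect_toppings", "confirm_order"},
    "pattern": {"yes_no", "repeat_request"},
    "filter": {"profanity", "off_topic"},
}

triggered = defaultdict(set)

def record(kind: str, name: str) -> None:
    """Log that a component fired during a test conversation."""
    if name not in COMPONENTS.get(kind, set()):
        print(f"warning: unknown {kind} '{name}'")
    triggered[kind].add(name)

def coverage_report() -> None:
    """Print per-kind coverage and any components never triggered."""
    for kind, names in COMPONENTS.items():
        missing = names - triggered[kind]
        covered = len(names) - len(missing)
        print(f"{kind}: {covered}/{len(names)} covered, "
              f"missing: {sorted(missing) if missing else 'none'}")

# Example: after one test conversation, record what fired.
record("intent", "greeting")
record("page", "start")
coverage_report()
```

After each test conversation, call record() for everything that fired; coverage_report() then shows exactly which patterns, pages, intents, and filters still need a test.
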
Part of your end report will cover how you did this (e.g., whether you know how to trigger each intent, page, pattern, and filter).

...