4 DO’s and 3 DON’Ts for Chatbot Testing Strategies

A quick summary of 7 important DO's and DON'Ts for designing a chatbot testing strategy. We keep seeing teams ignore these actually rather simple rules.

As we say in German, Rome was not built in a day, and the same applies to your chatbot training data. A robust chatbot is built through multiple iterations of training and testing cycles and through ongoing monitoring and performance tuning: CODE, TEST, DEPLOY, REPEAT

Without measuring performance with real user conversations, you will never know if your chatbot is really working for your users.

Most teams are tempted to use 100% of the available data for training. Do not do this. If you test on the same data you trained on, you will never learn whether your model generalizes. A rule of thumb is to use 80% of the data for training and 20% for testing.
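As a minimal sketch of such a split, assuming your labeled utterances live in a CSV file with hypothetical "text" and "intent" columns:

```python
# Minimal sketch of an 80/20 train/test split for NLU training data.
# Assumes a CSV with hypothetical columns "text" and "intent".
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.read_csv("utterances.csv")  # hypothetical file name

train_df, test_df = train_test_split(
    data,
    test_size=0.2,            # hold out 20% for testing
    stratify=data["intent"],  # keep the intent distribution similar in both sets
    random_state=42,          # reproducible split
)

train_df.to_csv("train.csv", index=False)  # feed this set to NLU training
test_df.to_csv("test.csv", index=False)    # keep this set for evaluation only
```

Stratifying on the intent label keeps rare intents represented on both sides of the split.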

Only if the amount of available data is very small should you fall back to k-fold cross-validation to get some insight into the quality of your data.
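A minimal k-fold sketch under the same assumptions, with train_nlu and evaluate_nlu as hypothetical placeholders for your own training and evaluation routines:

```python
# Minimal k-fold cross-validation sketch for small NLU datasets.
# train_nlu() and evaluate_nlu() are hypothetical placeholders for your pipeline.
import pandas as pd
from sklearn.model_selection import StratifiedKFold

data = pd.read_csv("utterances.csv")  # hypothetical file name
kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

scores = []
for fold, (train_idx, test_idx) in enumerate(kfold.split(data["text"], data["intent"])):
    train_df, test_df = data.iloc[train_idx], data.iloc[test_idx]
    model = train_nlu(train_df)              # hypothetical: train on 4 of the 5 folds
    accuracy = evaluate_nlu(model, test_df)  # hypothetical: evaluate on the held-out fold
    scores.append(accuracy)
    print(f"Fold {fold + 1}: intent accuracy {accuracy:.2%}")

print(f"Mean accuracy over {len(scores)} folds: {sum(scores) / len(scores):.2%}")
```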

Again the 80/20 rule: 20% of the work goes into the comfort zone, 80% goes into testing and bug fixing. 20% of your users will follow the happy path, 80% will break out of it. Prepare for this.

Automated regression testing excels at finding defects you already know can happen. It will not help you find defects you do not know about. Spend some time on exploratory (i.e. manual) testing: try to push your chatbot to its limits and beyond.

❌ DON’T: ignore the need to re-test after training

You can never know what effect adding training data on one end of your fine-tuned NLU model will have on the other end until you try it out. Run a full regression test of your NLU model every single time you make changes.
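One way to keep that cheap is to pin known utterance/intent pairs in an automated suite and re-run it after every change. Here is a minimal sketch with pytest, where predict_intent and the nlu_client module are hypothetical placeholders for whatever wraps your NLU model or its API:

```python
# Minimal NLU regression suite sketch (pytest).
# predict_intent() and the nlu_client module are hypothetical placeholders.
import pytest

from nlu_client import predict_intent

# Known utterance -> expected intent pairs; extend this list as the bot grows.
REGRESSION_CASES = [
    ("I want to open a new account", "open_account"),
    ("What are your opening hours?", "opening_hours"),
    ("Please cancel my subscription", "cancel_subscription"),
]

@pytest.mark.parametrize("utterance,expected_intent", REGRESSION_CASES)
def test_intent_regression(utterance, expected_intent):
    # Run the full list every single time the model or its training data changes.
    assert predict_intent(utterance) == expected_intent
```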

One of the most human-like behaviours is to scroll up the conversation history in the chatbot window and resume from a previous step. Most chatbots out there will fail this challenge if not prepared for it.

Here are suggestions to address the DO’s and DON’Ts.

Testing is a crucial part of the development process. There is no such thing as a single testing phase when bringing a chatbot to life. Testing has to be part of the team’s daily business, just like coding, design and monitoring.

[Figure: the continuous testing cycle]

For chatbots, as for software products in general, there is more to testing than the unit tests written by programmers.

Botium Test Project Types
  • Regression Testing – Identify flaws in the conversation flow before going to production
  • NLP Testing – Improve your chatbot's understanding
  • E2E Testing – Verify the end-user experience
  • Voice Testing – Understand your users on voice channels
  • Performance Testing – Ensure your chatbot stays responsive under high load
  • Security Testing – Make your chatbot secure
  • Monitoring – Get notified when problems arise

Without the right tools you will be lost. Botium Box prepares you for the challenge of putting these test types in place and integrating them into your chatbot development lifecycle.

Get your free Botium Box Mini instance here

This content was originally published HERE
