Why trust?
Trust is an important aspect of any interaction. In this article, we present the core factors that affect the trust your users place in your chatbot, along with tips on how to design chatbots your customers trust, and love to use.
There are many factors that impact the trust we have in the systems we interact with. In robotics, we often look to the Hancock et al. [2011] classification. It presents three impact categories: the robot (or system), the people, and the environment. Out of these, the robot's performance has been shown to be the most important aspect to consider when optimising for trust. It is also the one thing easily within our control!
Another study, focused specifically on chatbots, found that the factors that impact trust concern either the chatbot itself or the service context and environment surrounding it [Følstad et al., 2018].
So, what does this mean? Well, research shows that the performance of our chatbots has the greatest impact on users’ trust. Now, how do we optimise it?
How to build trustworthy chatbots?
Trust can be broken when the abilities of the chatbot do not meet the expectations of the user. We therefore approach trust optimisation in a hierarchical manner.
- Manage expectations: First, the chatbot needs to ensure that the expectations people have towards it match its abilities. This is why chatbots often give examples or guidance in their first message to users. The chatbot should also be able to inform users of its abilities when prompted.
- Answer first: Next, once the dialogue begins, it is important that the chatbot is actually able to answer the users’ questions. This ability has improved rapidly since the release of ChatGPT, and, when implemented correctly, modern language models can help minimise the chatbot's fallback rate, the share of messages it fails to answer.
- Solve problems: Once the dialogue is solid, the fun begins. We can now focus on solving small and repetitive tasks for the users (see the sketch after this list).
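To make this hierarchy concrete, here is a minimal sketch of a bot that follows all three steps. Everything in it, from the intent keywords to the `track_package` helper, is hypothetical and for illustration only; it is not how Kindly's platform works.

```python
# A minimal sketch of the three-step hierarchy above, in plain Python.
# Intent names, replies, and the track_package helper are all hypothetical.

WELCOME = (
    "Hi! I'm a support bot. I can answer questions about shipping, "
    "returns, and opening hours, and I can track your package. "
    "What can I help you with?"
)  # 1. Manage expectations: state the bot's abilities up front.

INTENT_ANSWERS = {  # 2. Answer first: cover the questions users actually ask.
    "shipping": "Standard shipping takes 2-4 business days.",
    "returns": "You can return any item within 30 days.",
    "hours": "Our support team is available weekdays 08:00-16:00.",
}

def track_package(order_id: str) -> str:
    """3. Solve problems: a small, repetitive task the bot can own.
    (Stub; a real bot would call an order-tracking API here.)"""
    return f"Order {order_id} is on its way and should arrive tomorrow."

def handle_message(text: str) -> str:
    lowered = text.lower()
    if "track" in lowered:
        return track_package(order_id="12345")  # hypothetical order lookup
    for intent, answer in INTENT_ANSWERS.items():
        if intent in lowered:
            return answer
    # Fallback: restate abilities instead of a bare "I don't understand",
    # so even a failed answer still manages expectations.
    return "I'm not sure about that one. " + WELCOME

print(WELCOME)
print(handle_message("Can you track my package?"))
```

Note the design choice in the fallback: rather than a dead end, the bot repeats what it can do, so a failure still feeds back into expectation management.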
Once these needs are met, we can start to play around with any other factors that might improve the trust people have in chatbots.
How do we approach this at Kindly?
At Kindly, many of our chatbots have already come far in the help they give users.
1. We have always strived to manage and exceed users' expectations. One way to manage expectations is to give users a heads-up before they start chatting away, and to keep system transparency in mind.
2. Kindly GPT helps people constantly, both customers and employees. With more advanced solutions rolling out, some of our chatbots now answer questions about specific products through smart webhooks, all while keeping the fallback rate low! You can read more about Kindly GPT here: https://www.kindly.ai/product-page/kindly-gpt.
3. Many of our chatbots are smart enough to actively solve people's problems. Our chatbots are already helping with tasks such as tracking packages for customers of Kicks and ending rides with Voi. The latter proved its worth when Voi experienced a sudden outage: the bot still allowed users to end their rides, letting the support team continue their day stress-free. (A simplified sketch of such a task webhook follows this list.)
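To illustrate how a task like package tracking might be wired up through a webhook, here is a simplified sketch using Flask. The route, payload shape, and `lookup_order` helper are assumptions made for this example and do not reflect Kindly's actual webhook API.

```python
# A simplified sketch of a task-solving chatbot webhook, using Flask.
# The route, payload fields, and lookup_order helper are hypothetical;
# they do not reflect Kindly's actual webhook contract.
from flask import Flask, jsonify, request

app = Flask(__name__)

def lookup_order(order_id: str) -> dict:
    """Stub for a call to the shop's order-tracking system."""
    return {"order_id": order_id, "status": "in transit", "eta": "tomorrow"}

@app.post("/webhooks/track-package")
def track_package():
    # The chatbot platform POSTs the details it collected from the user.
    payload = request.get_json(force=True)
    order = lookup_order(payload.get("order_id", ""))
    # The reply text is rendered back to the user in the chat window.
    return jsonify({
        "reply": f"Order {order['order_id']} is {order['status']} "
                 f"and should arrive {order['eta']}."
    })

if __name__ == "__main__":
    app.run(port=5000)
```

The point of the pattern is that the bot owns the whole task end to end: it gathers the order number in dialogue, calls the backend, and reports the result, with no human agent in the loop.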
In this article, we have given you a short insight into three key aspects to consider when designing trustworthy chatbots: expectation management, answer optimisation, and implemented functionality.
We strive to make our chatbots better every single day.
This more than anything underlines the core mission of Kindly:
Creating the world’s most loved AI chatbot
About the author:
Birthe is a PhD candidate currently writing up her thesis on trust in human-robot interaction, with a focus on system failure, repair and transparency. All her published work can be found here.
References
Følstad, A., Nordheim, C. B., & Bjørkli, C. A. (2018). What makes users trust a chatbot for customer service? An exploratory interview study. In Internet Science: 5th International Conference, INSCI 2018, St. Petersburg, Russia, October 24–26, 2018, Proceedings 5 (pp. 194-208). Springer International Publishing.
Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y., De Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517-527.