Trust and security are at the center of most discussions when organizations modernize with new technologies like AI, IoT and ML. Leaders know that to optimize the return on their technology investments, they must earn stakeholder trust. Chatbots are a great way to gain operational efficiency, boost customer experience and modernize your workplace, but before you deploy, consider these guidelines to ensure you design a bot that your users can trust.
In 2016, Gartner predicted that “conversational AI-first” would supersede “cloud-first, mobile-first” as the most important high-level imperative for enterprise architecture and technology innovation leaders over the next decade, and that prediction is being realized. Conversational agents are well past the hype cycle and have proliferated in the last few years owing, among several other factors, to the maturity of chatbot platforms like the Microsoft Bot Framework. Customer expectations have rocketed along with them. Rapid advances in Natural Language Processing (NLP), such as the Language Understanding (LUIS) service, allow applications to understand what a person wants even when it is expressed in their own words, making chatbot conversations feel more natural. But whatever the type of chatbot - social engagement, workflow automation, information discovery, productivity or decision support - customers expect bots to engage in light-hearted conversation, and failing to do so can come across as monotonous and boring. In an era where content is king, context is queen. A bot that has a personality, is contextually and socially aware, and also serves its primary use case earns a greater level of trust from its users. Thankfully, innovation leaders can fulfill this modern need for chatbots that respond to common small talk in a consistent tone, thanks to tools like Project Personality Chat.
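As a minimal sketch of what intent recognition looks like inside a Bot Framework bot, the snippet below uses the botbuilder-ai LuisRecognizer. The app ID, key, endpoint and the 70% confidence threshold are placeholders, not prescriptions.

```typescript
import { LuisRecognizer } from "botbuilder-ai";
import { TurnContext } from "botbuilder";

// Placeholder credentials -- supply your own LUIS app ID, key and endpoint.
const recognizer = new LuisRecognizer({
  applicationId: "<your-luis-app-id>",
  endpointKey: "<your-luis-key>",
  endpoint: "https://<your-region>.api.cognitive.microsoft.com",
});

async function routeUtterance(context: TurnContext): Promise<void> {
  const result = await recognizer.recognize(context);
  // Fall back to "None" when the top intent scores below 70% confidence.
  const intent = LuisRecognizer.topIntent(result, "None", 0.7);
  await context.sendActivity(
    intent === "None"
      ? "Sorry, I didn't quite get that. Could you rephrase?"
      : `It sounds like you want help with: ${intent}`
  );
}
```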
But with great power comes great responsibility. The design and implementation of conversational agents must be evaluated for risks and potential harm, which can range from misunderstanding a user’s intent to engaging in contentious topics. From our experience in this space, we can vouch that the successful adoption of such conversational systems depends on more than the technology used, the data source powering the bot and the conversational experience: how much the user “trusts” the bot is a major driver. And this trust is built on the factors covered by Microsoft’s AI principles, such as transparency, reliability, safety, fairness, diversity and privacy. With this understanding of what the modern chatbot user wants, combined with the tools now available to account for those factors when building conversational agents, responsible AI chatbots are no longer a dream.
Building Transparent Bots
The bot is not here to win an imitation game, so there’s no need to fool your users into thinking they are talking with a human when they aren’t. Instead, carefully design your bot to reveal its identity without undermining either the user’s trust or the conversational experience. A bot can be personalized to represent your brand’s unique voice without being too personal (read: chatty) with the user or soliciting personal information. Text to Speech services can even enable chatbots to talk back to the user, converting text to audio in near real time with a choice of over 75 default voices, or with custom voice models that give your brand a unique, recognizable voice tuned to specific recordings. And as these speech synthesis capabilities become more sophisticated, it will be even more important to reinforce transparency in bot design to avoid a breach of trust with your user base.
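For illustration, here is a small sketch of speech synthesis with the Azure Speech SDK for JavaScript; the key, region and voice name are placeholder values you would swap for your own.

```typescript
import * as speechSdk from "microsoft-cognitiveservices-speech-sdk";

// Placeholder key/region -- use your own Speech resource credentials.
const config = speechSdk.SpeechConfig.fromSubscription("<speech-key>", "<region>");
config.speechSynthesisVoiceName = "en-US-JennyNeural"; // one of the stock voices

function speak(text: string): void {
  const synthesizer = new speechSdk.SpeechSynthesizer(config);
  synthesizer.speakTextAsync(
    text,
    result => {
      if (result.reason === speechSdk.ResultReason.SynthesizingAudioCompleted) {
        console.log("Audio synthesized for:", text);
      }
      synthesizer.close();
    },
    error => {
      console.error(error);
      synthesizer.close();
    }
  );
}
```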
To design with transparency, think of it like this: your chatbot needs a job description. Reveal the purpose of the bot upfront. Set user expectations for the bot’s capabilities. Use the bot’s help card or welcome card effectively. Most importantly, understand the bot’s limitations and reveal not only what the bot can do but also what it cannot, enabling handoff to a human when it can’t.
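A minimal sketch of such a “job description” welcome message, using the Bot Framework SDK’s ActivityHandler; the bot name and capabilities shown here are hypothetical.

```typescript
import { ActivityHandler, MessageFactory } from "botbuilder";

export class TransparentBot extends ActivityHandler {
  constructor() {
    super();
    // Greet new users with the bot's "job description": what it is,
    // what it can do, and what it cannot.
    this.onMembersAdded(async (context, next) => {
      for (const member of context.activity.membersAdded ?? []) {
        if (member.id !== context.activity.recipient.id) {
          await context.sendActivity(MessageFactory.text(
            "Hi, I'm TravelBot, an automated assistant (not a human!). " +
            "I can help you search flights and hotels and check booking status. " +
            "I can't change payment details -- type 'agent' any time to reach a person."
          ));
        }
      }
      await next();
    });
  }
}
```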
Building Reliable Bots
The ability to reach a human when the bot cannot accommodate the user’s query is vital. Human handoff can be triggered by a simple command like “I want to talk to a human” or by live tracking of sentiment scores on the user’s utterances. This is particularly important for chatbots with consequential use cases, such as in healthcare, where people must stay involved to provide judgment, expertise and empathy.
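One possible shape for such a trigger is sketched below: an explicit phrase check plus a sentiment threshold. The getSentiment wrapper (for example, around Azure Text Analytics) and the 0.3 floor are illustrative assumptions, not fixed recommendations.

```typescript
import { TurnContext } from "botbuilder";

const HANDOFF_PHRASES = ["talk to a human", "speak to an agent", "real person"];
const SENTIMENT_FLOOR = 0.3; // escalate below this score (0 = negative, 1 = positive)

// `getSentiment` is a hypothetical wrapper around a sentiment API.
async function shouldHandOff(
  context: TurnContext,
  getSentiment: (text: string) => Promise<number>
): Promise<boolean> {
  const text = (context.activity.text ?? "").toLowerCase();
  if (HANDOFF_PHRASES.some(phrase => text.includes(phrase))) {
    return true; // explicit request for a human
  }
  return (await getSentiment(text)) < SENTIMENT_FLOOR; // frustration detected
}
```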
In either case, it is helpful to implement a feedback mechanism so the user can rate the bot’s responses. And however much we design for reliability, it’s worthwhile to decide on reliability metrics. The closer the bot’s error rate is to zero, the better, but in practice, factors like the domain and the user base will determine what is achievable, so it pays to set an acceptable error rate. This will ultimately tie into the success criteria for the bot.
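A lightweight feedback prompt might look like this sketch using the Bot Framework’s suggested actions; how you log and aggregate the responses against your acceptable error rate is up to your metrics pipeline.

```typescript
import { TurnContext, MessageFactory } from "botbuilder";

// Ask for lightweight feedback after an answer; the replies feed the
// bot's error-rate and success-criteria metrics.
async function askForFeedback(context: TurnContext): Promise<void> {
  await context.sendActivity(
    MessageFactory.suggestedActions(
      ["👍 Helpful", "👎 Not helpful"],
      "Did that answer your question?"
    )
  );
}
```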
Building Traceable Bots
Because anything that can go wrong, will go wrong. More importantly, you cannot improve what you cannot measure. Measurements can range from chatbot performance to user satisfaction gauged through sentiment analysis. Sentiment analysis is all the more reliable here because humans inherently treat bots differently and are less likely to mask their frustrations with a bot than with a human. The ability to track, trace and automatically report on these metrics can be easily enabled by Bot Analytics, an extension of Application Insights.
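As a sketch, wiring Application Insights telemetry into a Bot Framework adapter can be as simple as the snippet below; the instrumentation key is a placeholder, and logging of personal information is deliberately left off.

```typescript
import { BotFrameworkAdapter, TelemetryLoggerMiddleware } from "botbuilder";
import { ApplicationInsightsTelemetryClient } from "botbuilder-applicationinsights";

// Placeholder -- use your own App Insights instrumentation key.
const telemetryClient = new ApplicationInsightsTelemetryClient("<instrumentation-key>");

const adapter = new BotFrameworkAdapter({
  appId: process.env.MicrosoftAppId,
  appPassword: process.env.MicrosoftAppPassword,
});

// Log every incoming/outgoing activity so Bot Analytics can chart
// traffic and user satisfaction; keep PII logging off.
adapter.use(new TelemetryLoggerMiddleware(telemetryClient, /* logPersonalInformation */ false));
```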
Building Accessible Bots
Build inclusive and accessible bots that recognize exclusion, learn from diversity and solve for ability constraints. Ensure that the bot can be used by people with disabilities just as effectively as by people without. Doing so will help users who require color contrast, rely on screen readers, navigate the UI using only a keyboard, and more. Microsoft’s Inclusive Design toolkit provides a great framework for integrating accessibility into your design considerations.
Building Respectful Bots
Acknowledge the limitations of your bot, and make sure it sticks to what it is designed to do. For example, a travel bot should avoid engaging in sensitive topics such as race, gender, religion and politics. Restrict the bot’s conversations to its scope so it does not entertain open-ended conversations that would demand a much larger investment or expose the bot to social risks.
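A deliberately naive sketch of such a scope guardrail is below; a production bot would rely on an intent classifier or a moderation service rather than a keyword list, and the topics and deflection text are only examples.

```typescript
// Illustrative guardrail: deflect utterances that touch topics outside
// a hypothetical travel bot's scope. Keyword matching is simplistic;
// real systems would classify intent instead.
const OFF_LIMITS = ["politics", "religion", "election", "race", "gender"];

function deflectIfOutOfScope(utterance: string): string | null {
  const text = utterance.toLowerCase();
  if (OFF_LIMITS.some(topic => text.includes(topic))) {
    return "That's outside what I can help with. I'm best at flights and hotels -- what trip can I help you plan?";
  }
  return null; // in scope; continue the normal dialog
}
```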
Enforce a code of conduct that prohibits the bot from engaging in hate speech, bullying and the like. Speech to Text converts spoken audio to text (and can be tailored to specific vocabularies or speaking styles), which means voice input needs the same scrutiny as typed input; hence the need to moderate the content flowing both to and from the bot. Content Moderation services include offensive text classifiers and can help protect your bot and its users from abuse.
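As an illustration, a bot might screen each message with the Content Moderator text API before acting on it; the sketch below simplifies the response handling, and the key and region are placeholders.

```typescript
// Minimal sketch of screening a message with the Content Moderator
// text API before the bot processes or echoes it.
async function isOffensive(text: string): Promise<boolean> {
  const response = await fetch(
    "https://<region>.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessText/Screen?classify=True",
    {
      method: "POST",
      headers: {
        "Content-Type": "text/plain",
        "Ocp-Apim-Subscription-Key": "<content-moderator-key>",
      },
      body: text,
    }
  );
  const result = await response.json();
  // The classification flags probable offensive content and whether
  // a human review is recommended.
  return result.Classification?.ReviewRecommended === true;
}
```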
Translator services use machine translation, powered by machine learning, to translate large amounts of text into any supported language. While these better enable the bot to serve culturally diverse locales, it remains important to be respectful of cultural norms.
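Here is a minimal sketch of calling the Translator Text v3 REST API from a bot; the subscription key and region are placeholders, and error handling is omitted for brevity.

```typescript
// Translate a bot reply into the user's language, e.g. translate("Hello!", "es").
async function translate(text: string, toLang: string): Promise<string> {
  const response = await fetch(
    `https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=${toLang}`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": "<translator-key>",
        "Ocp-Apim-Subscription-Region": "<region>",
      },
      body: JSON.stringify([{ Text: text }]),
    }
  );
  const result = await response.json();
  return result[0].translations[0].text;
}
```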
Building Secure Bots
The QnA Maker service helps create a question-and-answer repository from semi-structured content such as FAQ (Frequently Asked Questions) documents, URLs and product manuals. But you should ensure that the training data is cleaned: scrub it for grammar and make sure it does not contain any personally identifiable information (PII). Chatbot deployments in Azure should be secured so that attacks by malicious agents on the environments hosting the bot’s data and services are thwarted. This includes securing the APIs: public-facing bot APIs should be reviewed and locked down to protect against malicious usage.
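For example, a simple pre-ingestion scrub might redact obvious PII patterns before content is loaded into the knowledge base; the regexes below are illustrative only and are no substitute for a proper PII detection service.

```typescript
// Redact common PII patterns from FAQ content before it is ingested
// into a QnA Maker knowledge base. These patterns are simplistic
// examples, not production-grade detection.
const PII_PATTERNS: Array<[RegExp, string]> = [
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL]"],
  [/\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b/g, "[PHONE]"],
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"],
];

function scrubPii(text: string): string {
  return PII_PATTERNS.reduce(
    (cleaned, [pattern, token]) => cleaned.replace(pattern, token),
    text
  );
}
```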
Chatbots are unique in that their conversational interfaces can consume a great deal of consumer information, particularly personal information, if not monitored and regulated. Legal frameworks like GDPR demand respect for user privacy. But even beyond legal compliance and regulations, as an ethical principle, a responsible chatbot should not request or store any more information than it needs to serve its purpose. When in doubt, and in the event of a conflict, design your chatbot to give the user complete and secure access to the system, even at the cost of compromising the experience.
Building Fair Bots
Humans are innately biased, and it’s only natural that these biases are inherited by the systems we build. A conscious effort to include a diverse team and to monitor training data for bias is therefore important. Ensuring that the data consumed by the bot is reviewed and representative of a diverse audience increases the chances of fair results.
Based on what we have learned, both through our own cross-company work on conversational agents and by listening to our clients, we believe that powerful and delightful conversational experiences require the people who design and implement these agents to possess not just technical skill but also ethical acumen. We have a moral responsibility to ensure that our creations are fair and safe, reliable and transparent, respectful and accessible.
“Artificial Intelligence brings great opportunity, but also great responsibility. We’re at that stage with AI where the choices we make need to be grounded in principles and ethics – that’s the best way to ensure a future we all want.” – Microsoft CEO, Satya Nadella.
If you’d like to know more about best practices for building trustworthy bots, Valorem Reply’s team is here to help. Reach out to us at marketing@valorem.com to be connected to our chatbot experts.