
How to stop chatbots from upsetting your customers?

Dec 11, 2024 | 9 min read


Remember Tay? If you were actively tweeting in 2016, you probably heard about Microsoft’s experimental chatbot Tay, a moniker short for Thinking About You, going rogue on social media.

[Image: Screengrab of the Tay chatbot at launch, happy, positive, and excited. Caption: How it started]
[Image: Screengrab of Tay making sexist and racist comments within 24 hours of launch. Caption: How it ended]

Tay was an intricate and complex project, with many nuances that Microsoft’s AI engineers hadn’t considered before launching it. An essential part of being human is our ability to restrict thoughts from turning into actions. Consider this.

Do you sometimes have a very human urge to intensely cuddle a tiny, cute, fluffy kitten or puppy? Have you heard people say things like, “Awww, it’s so cute, I could eat it!”?

They’re not wrong. 

The phenomenon of positive emotions giving way to negative actions is called ‘cute aggression’. But most of us are in a state of emotional homeostasis – a happy middle spot that stops us from feeling and acting on those extreme emotions. 

Instilling this complicated behavioral attribute in AI is a challenge. 

And it’s not just Conversational AI bots that can alarm customers with their unpredictable responses. Simple rule-based systems can also adversely affect your customer’s experience if not programmed correctly. Implementing Conversational AI bots requires careful consideration to ensure that customer interactions are smooth and free from misunderstandings.

So let’s dive into ways your chatbot could be upsetting your customers and figure out ways to prevent it:


It won’t allow you to talk to an agent

Taken aback by us rooting for human agents right at the start? Aren’t chatbots supposed to reduce agent workload? They are.

Artificial Intelligence tools like Amazon’s Alexa and Apple’s Siri are doing a great job understanding human requests using machine learning and natural language processing. But most business bots may lack context when a customer reaches out in an emergency. 

[Image: A mock patient tests a medical chatbot programmed to offer emotional support]
[Image: A customer reaching out to PayPal’s customer support during an emergency]

Customers might also want human agents to get accurate answers to precise questions and don’t want to rely on a rule-based bot. 

Let’s imagine a use case where you’re severely allergic to peanuts. You want to know if the tacos you’re ordering from the take-out restaurant’s app contain peanuts. A human agent could check with the chef and get back to you. But the app’s bot won’t let you connect to a human agent in the first place.

How to fix this:

It’s okay for a chatbot not to know how to respond. In such scenarios, program your bot to apologize for not being able to process a request and offer to connect the customer to a human.

It’s similar to an agent transferring you to the finance department if you approach them with, let’s say, complex payment-related queries. There’s nothing wrong with not knowing.

Treat your bots like humans. It can help you add some perspective while mapping the journeys of potential users.
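The graceful-fallback idea above can be sketched in a few lines. This is a hypothetical illustration, not Hiver's API: the intent names and canned answers are made up, and a real bot would route the handoff through your help desk.

```python
# Hypothetical sketch: a bot that admits what it doesn't know and offers
# a human agent instead of stonewalling. KNOWN_INTENTS is illustrative.
KNOWN_INTENTS = {
    "order_status": "Your order is on its way!",
    "opening_hours": "We're open 9am-6pm, Monday to Friday.",
}

def respond(intent):
    """Answer known intents; otherwise apologize and offer a human."""
    if intent in KNOWN_INTENTS:
        return KNOWN_INTENTS[intent]
    return ("Sorry, I couldn't process that request. "
            "Would you like me to connect you to a human agent?")

print(respond("opening_hours"))
print(respond("peanut_allergy"))  # unknown intent: offer a human instead
```

The key design choice is that the unknown-intent branch never dead-ends: it always ends with an offer to escalate.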

It forces you to rephrase your questions

Imagine asking a bot what the weather will be like on the coming weekend.

You type, “What will the weather be like on Sunday?” Seems simple, right?

But your weather bot replies, “I’m sorry I didn’t get that.”

So you keep rephrasing your question by typing:

  • On Sunday, what will the weather be like?
  • Sunday’s weather: what can we expect?
  • Is Sunday going to be a sunny day or a rainy one?
  • What kind of weather can we expect on Sunday?
  • Sunday’s weather: what will it be like?

There could be ten more versions of this question. You’re infuriated at this point. All you wanted to know was Sunday’s weather.

In a last-ditch attempt, you type, “Send Sunday weather forecast.” Your bot finally predicts it to be partly cloudy. 

The trigger to get the answer was ‘weather forecast.’

How to fix this:

So, the first step is to forecast (no pun intended) what questions your target customers are likely to ask. 

Then take into account different versions of that question. Sit down with your Sales, Marketing, and CX teams and list all possible variations of the questions that might appear in the customer-bot journey. 

[Image: A poor chatbot experience: the user is unable to find the trigger keyword to get the answer she wants]

Identify the popular keywords your customers might use and feed them to your bot to avoid any rephrasing issues in the future.

Also, use options when you’re facing dead ends. For example, the chatbot software can relay the message, “I’m sorry, I didn’t get you. May I suggest the following options?” and then list out options close to what the user was trying to say. 

[Image: A poor chatbot experience from Sephora]

The issue with the image above is that Sephora should have proactively listed out the options for customers instead of having them figure out what to ask.

Proactively offering suggestions to customer queries is better than requiring the user to rephrase the same question. Manually typing out the same question in different ways is annoying.
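The keyword-plus-suggestions approach above can be sketched as follows. The trigger list and answers are hypothetical; in practice they would come from your brainstorming sessions with Sales, Marketing, and CX.

```python
# Hypothetical sketch: match any known trigger keyword anywhere in the
# message, and proactively suggest options rather than asking the user
# to keep rephrasing. TRIGGERS is an illustrative keyword-to-answer map.
TRIGGERS = {
    "weather forecast": "Sunday will be partly cloudy.",
    "weather": "Sunday will be partly cloudy.",
    "forecast": "Sunday will be partly cloudy.",
}

def handle(message):
    """Answer on any trigger keyword; otherwise list the options."""
    text = message.lower()
    for keyword, answer in TRIGGERS.items():
        if keyword in text:
            return answer
    options = ", ".join(sorted(set(TRIGGERS)))
    return ("I'm sorry, I didn't get you. May I suggest the following "
            "options? " + options)

print(handle("What will the weather be like on Sunday?"))
```

Because “weather” is matched as a substring of the whole message, all five rephrasings from the example above would hit the same answer on the first try.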

You’re stuck in a loop

A very common chatbot fail is getting stuck in a loop and being unable to carry the conversation forward. You should design and plot your chatbot’s flow like an easy, natural-sounding conversation, just like hanging out with a friend. Unless your friends are annoying like this particular bot.

[Image: A poor chatbot experience: the customer is stuck in a loop and unable to carry the conversation forward]

How to fix this:

There are ways to avoid this:

  • Brainstorm to map pain-points

Sit down with your Sales, Marketing, and Customer Service teams and actively brainstorm over the actual purpose of your chatbot.

Is it to generate leads? Should it be used as a self-service customer care portal? Is it to place an order? What’s the objective you’re trying to achieve with your chatbot? 

Identify all the possible customer pain points and think of simple decision trees that can stem from your IF/THEN queries. Filter the basic pain points solvable by bots from the complex ones that require human interaction. 

[Image: Designing a chatbot conversation using decision trees and IF/THEN queries]

Remember, your chatbot doesn’t have to take on difficult questions. It could answer simple ones like your address, working hours, or whether you’re open on weekends. It truly depends on the size of your business, the number of human agents you have on board, and the objective of your chatbot, among other factors.

  • Delve deep into your buyer personas

You’re looking to help actual people. Identify what queries your customer would ask when she visits your website. Interview your customers about the questions they had for your sales team or agents before they zeroed in on your product.

No two customer journeys are necessarily the same. Your customers might approach your product with different questions. One might visit your website to see if your CRM works for personnel management; another might want to know if your CRM has an automated voice call facility.

  • Let it redirect your user to other options

If nothing works, push your chatbot to always end the conversation with other helpful suggestions. Your bot can ask users to access your FAQ page, reach out to your team via email, or talk to an agent.
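The decision-tree approach described above can be sketched as a small lookup structure. The node names and canned answers are illustrative, assuming a flow where unknown or complex inputs escalate to a human:

```python
# Hypothetical sketch: a chatbot flow as a decision tree of IF/THEN
# branches. Each node either asks a question or gives a final answer;
# anything the tree can't route goes to a human agent.
TREE = {
    "start": {
        "ask": "What can I help with? (hours / address / something else)",
        "branches": {"hours": "hours", "address": "address"},
    },
    "hours": {"answer": "We're open 9am-6pm on weekdays."},
    "address": {"answer": "We're at 42 Example Street."},
    "human": {"answer": "Let me connect you to a human agent."},
}

def step(node, user_input=None):
    """Walk one step of the tree; unknown inputs escalate to a human."""
    current = TREE[node]
    if "answer" in current:
        return current["answer"]
    next_node = current["branches"].get(user_input, "human")
    return step(next_node)

print(step("start", "hours"))   # simple question the bot can handle
print(step("start", "refund"))  # complex/unmapped input -> human agent
```

This mirrors the filtering step above: simple pain points get leaf answers, and everything else falls through to the human branch rather than looping.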

It doesn’t identify sentiments

How would you usually respond when someone tells you their grandmother passed away? 

An ideal response would be something along the lines of “I’m truly sorry for your loss.” 

Even if you’re not offering condolences, you wouldn’t be chirpy about the sad news, right? 

If only we could find some way to explain the concept of death and mourning to this cheerful bot.

To be fair, it’s not a bot’s fault for sending inappropriate responses. AI needs a fair amount of data to be trained. And not just any kind of data. You want your system to recognize the VoC (Voice of Customer) – not yours, not your programmers’, nor of your subject matter experts.

How to fix this:

You must invest sufficient time (and then some!) in introducing your customer’s mannerisms and utterances to your customer service chatbot. Some of these include:

  • Semantic ambiguity

Train your chatbot for semantic ambiguities. Consider this sentence, “The professor’s appointment was shocking.” Does this mean the professor had an appointment with someone shocking? Or does it mean that appointing this professor was shocking? 

Another example could be, “I don’t use glasses.” Is this person referring to eyewear or glass tumblers? 

  • Informal conversations

The way we speak and the way we draft a formal email are entirely different. When users reach out to a chatbot, they’ll usually adopt an informal tone. Without being trained for context, your AI bot will not understand a sentence like, “Mabel adores her dog; Kelly does too.” Does Kelly love Mabel’s dog or her own?

[Image: A poor chatbot experience: the bot is unable to understand informal conversations]

Remember when you were a kid, and you’d ask your teacher’s permission to visit the loo? You’d say, “Miss, can I go to the loo?” and she’d snap back and say, “I don’t know Robert, can you?”. 

Since you were asking for her permission, the question she was looking for was, “Miss, may I go to the loo?”

The syntactical nature of the English language can be overwhelming, and getting a bot to be on the same page with you is challenging.

The only way to achieve these advancements is to introduce your bot to more data. And not just any data. You need to clean your data and classify it to be more accurate and relevant. If you are struggling to access customer data due to privacy policies, use a synthetic data generator to unlock vast amounts of synthetic text from customer conversations, even from voice transcripts!
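To make the sentiment point concrete, here is a deliberately naive sketch of a sentiment gate. The word list and replies are made up, and a real system would use a sentiment model trained on cleaned, classified customer conversations rather than a hand-written list:

```python
# Hypothetical sketch: suppress a cheery canned reply when the message
# contains distress vocabulary. A trained sentiment classifier would
# replace this word list in any real deployment.
NEGATIVE_PHRASES = {"passed away", "died", "loss", "funeral", "grieving"}

def pick_tone(message):
    """Choose an empathetic reply for distressed messages, upbeat otherwise."""
    text = message.lower()
    if any(phrase in text for phrase in NEGATIVE_PHRASES):
        return "I'm truly sorry for your loss. Take all the time you need."
    return "Great to hear from you! How can I help today?"

print(pick_tone("My grandmother passed away last week."))
print(pick_tone("Hi, quick question about my order."))
```

Even this crude gate would have spared the “cheerful bot” above; the data-quality work described in the previous paragraph is what turns the idea into something reliable.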

It’s stubborn

Imagine you’re selling a CRM solution. A potential customer has narrowed her options to your platform and your close competition. She visits your website. Before starting the chat, your virtual assistant asks her to type in her official email ID. She doesn’t want to give away her organizational mail ID yet. 

It rejects her Gmail ID and other personal-account IDs. It won’t allow her to ask questions about your CRM solution unless she discloses her official mail ID, nor does it let her connect to a live agent. She gives up and moves on to your competitor’s website, where information is presented to her instantly.

[Image: A poor chatbot experience: the chatbot refuses to move the conversation ahead unless the user answers its question]

Rigidity in chatbot automation makes for annoying customer experiences, blows up any chance of customer satisfaction, and decreases customer retention.

How to fix this:

Keep mandatory fields to a minimum wherever possible.

But if you really need the prospect’s official mail ID, as in the example above, specify the number of attempts the bot should make to present this question to the user.

If the prospect ignores the question after two attempts, then let her move ahead with the rest of the designed journey.  

A basic rule of thumb for chatbot user experience: it should allow everyone to access your brand and actively help those who are trying to use your product or service.
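The two-attempt limit suggested above can be sketched like this. The helper name and the “contains @” check are illustrative stand-ins for real email validation:

```python
# Hypothetical sketch: prompt for a work email at most MAX_ATTEMPTS
# times, then let the prospect continue with the journey anyway.
MAX_ATTEMPTS = 2

def collect_work_email(replies):
    """Return the first reply that looks like an email, or None after
    MAX_ATTEMPTS ignored prompts; the journey continues either way."""
    for reply in replies[:MAX_ATTEMPTS]:
        if "@" in reply:  # stand-in for real email validation
            return reply
    return None  # ignored twice: move ahead without the email

print(collect_work_email(["skip", "ana@example.com"]))
print(collect_work_email(["skip", "skip"]))  # gives up, journey continues
```

The point is the `None` branch: the bot records that the question went unanswered and moves on, instead of holding the conversation hostage.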

How do you know if your chatbot sucks?

Continuous monitoring of key metrics on your customer service software can help you track your chatbot’s performance. To avoid bot rot, here are some parameters that you need to be on the lookout for:

  • User satisfaction

Set up surveys at the end of the conversation. Ask questions like, “Was I able to solve your problem?” or “How would you rate your chat experience from 1 – 5?”

  • Total Users

This figure shows how many website visitors use your bot. Your bot application can give you this number. If it’s too low, consider using more engaging language. Don’t forget to check if your website widgets are interfering with your bot.

  • Chatbot accuracy

Relevant to businesses working with NLP-powered AI bots, this metric explores two questions:

1. Did your bot understand the user’s questions?

2. Did your bot accurately resolve your user’s question?

We can understand this from the number of fallback messages (“I’m sorry, can you rephrase that?”) triggered during conversations. If 100 out of 1,000 conversations had fallback messages, you can calculate the accuracy as follows:

1,000 – 100 = 900 (accurate conversations)

900 / 1,000 = 0.9, which converted into a percentage is 90%

So your chatbot’s accuracy is 90%.

  • Goal completion

This metric measures how successfully your bot resolved your customers’ queries.

  • Human takeover rate

How often do humans take over the conversation from your bot? If the rate is high, you should return to the drawing board and reconsider your bot’s flowchart.
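The accuracy arithmetic from the chatbot accuracy metric above can be wrapped in a small helper, using the same example numbers (100 fallback messages out of 1,000 conversations):

```python
# The fallback-based accuracy calculation from the example above.
def chatbot_accuracy(total_conversations, fallback_count):
    """Share of conversations that did not trigger a fallback message."""
    accurate = total_conversations - fallback_count
    return accurate / total_conversations

print(f"{chatbot_accuracy(1000, 100):.0%}")  # 90%
```

The same helper works for trend tracking: compute it per week and watch whether the fallback share shrinks as you feed the bot more keywords.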

Set healthy expectations

It’s important to note that high expectations from your bot will inevitably set you up for disappointment and bad experiences. 

Bots that sound robotic and rigid can be frustrating, but it helps if we also acknowledge how complicated humans are in the first place and how complex languages are to begin with. 

A more reasonable line of thought when training bots is to focus on their progress rather than strive for perfection.

Speaking of progress, are you looking to bolster your customer support strategy? Check out Hiver’s live chat feature that helps your team offer real-time, hassle-free support to website visitors.

Shobhana has been recognized as a 'Top Customer Support Voice' by LinkedIn. Her expertise lies in creating well-researched and actionable content for Customer Experience (CX) professionals. As an active member of popular CX communities such as CX Accelerator and Support Driven, she helps professionals evaluate tools for their support team and keeps a keen eye on emerging industry trends.
