EPISODE 19

Rob Dwyer on Why “This Call May Be Recorded” Was Always a Lie

About This Episode

Meet the Guest

Rob Dwyer has spent 15 years in customer experience across SaaS, BPO, and enterprise environments. He has led training and quality programs, managed agent performance across global operations, and worked directly with contact center teams on AI-driven conversation intelligence and QA systems.


Today, he is a Sr. Technical Account Manager at Level AI, where he works with enterprise teams implementing AI in their support operations. Before that, he was the VP of Customer Engagement at Happitu.


Rob is a three-time ICMI Top 25 Thought Leader and the host of Next in Queue, a podcast with more than 200 episodes covering CX, contact center operations, and the practical realities of AI adoption.

“This call may be recorded for training and quality purposes.” Most of those calls never get reviewed.

In this episode of Experience Matters, Sarah Caminiti, Head of Business Transformation at SupportNinja, and Rob Dwyer, Sr. Technical Account Manager at Level AI, talk through why spot-checking calls is no longer a defensible quality strategy, and what replaces it.

AI-powered QA gives teams coverage across every customer conversation. The shift is not just about visibility. It changes what that visibility is used for. Managers stop hunting for individual mistakes and start identifying patterns. Which behaviors do the strongest agents use consistently? Where is the knowledge base producing the wrong answer? Which customer signals keep appearing week after week?

Those questions were always worth asking. Now they are answerable.
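The shift Rob describes, from spot-checking single calls to surfacing per-agent behavior trends across every interaction, can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the agent names, rubric behaviors, and data shape are all hypothetical.

```python
# Hypothetical sketch: full-coverage QA turns individual call reviews into
# per-agent behavior trends. All names and data below are illustrative.
from collections import defaultdict

# Each record is one evaluated interaction: the agent plus pass/fail
# results for a few rubric behaviors, as a QA tool might emit them.
evaluations = [
    {"agent": "sam", "greeting": True,  "verified_identity": True,  "offered_next_step": False},
    {"agent": "sam", "greeting": True,  "verified_identity": False, "offered_next_step": False},
    {"agent": "ana", "greeting": True,  "verified_identity": True,  "offered_next_step": True},
    {"agent": "ana", "greeting": False, "verified_identity": True,  "offered_next_step": True},
]

def behavior_trends(records):
    """Aggregate pass rates per agent per behavior across ALL interactions."""
    # agent -> behavior -> [passes, total]
    totals = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for rec in records:
        for behavior, passed in rec.items():
            if behavior == "agent":
                continue
            cell = totals[rec["agent"]][behavior]
            cell[0] += int(passed)
            cell[1] += 1
    return {
        agent: {b: passes / total for b, (passes, total) in behaviors.items()}
        for agent, behaviors in totals.items()
    }

trends = behavior_trends(evaluations)
# A coach now sees patterns, not single calls: sam consistently misses
# "offered_next_step" across every evaluated interaction.
print(trends["sam"]["offered_next_step"])  # 0.0
```

The point of the sketch is the grouping, not the scoring: once every conversation is evaluated, the coaching question becomes "which behaviors are consistently missed?" rather than "what happened on this one call?"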

What Rob and Sarah cover in this episode:

  • QA rubrics and where they break down: Vague criteria do not help agents improve. AI forces teams to define exactly what good looks like, which turns out to be something most programs had never done clearly.
  • Coaching from patterns, not individual calls: Full coverage shifts the goal from reviewing specific interactions to identifying consistent behaviors across all of them. That is where real coaching happens.
  • Culture and how it shapes the way full coverage lands: If QA has always been used as a stick, more coverage just means more sticks. The tool does not change that; only leadership does.
  • Trust and accountability in customer-facing AI: When AI behaves unexpectedly, teams need to understand why. Without that visibility, trust cannot be built.

Listen to the full conversation on Spotify, Apple, or YouTube.

Episode Transcript

[00:00:05] Sarah Caminiti:
Hello everybody. I’m Sarah Caminiti, and I am so excited to be here today.

I know you’re probably wondering what I’m doing here, but I am your guest host today, and I could not have asked for a better person to be hanging out with.

The last time I saw this person, we were in the Upside Down. And now we get to come back to the right side up and talk about why experience matters on the Experience Matters podcast.

So, Rob Dwyer, welcome.

[00:00:43] Rob Dwyer:
It’s so nice to see you again. This is really just a pleasure.

And I understand you’re the Madonna of the podcast world, so that’s exciting too.

[00:00:51] Sarah Caminiti:
I mean, I have heard that a couple of times. Not by myself saying it. Definitely by others, for sure, who I have not paid.

But thanks for bringing that up. I really appreciate that, Rob.

[00:01:04] Rob Dwyer:
Yeah. You’re welcome.

[00:01:07] Sarah Caminiti:
Thanks. I knew I was going to have a friend with me today.

So, just to give a quick intro, because you are also the… I’m trying to think of a one-word-named male equivalent. Bono?

[00:01:19] Rob Dwyer:
Sure. Let’s.

[00:01:21] Sarah Caminiti:
Rob Dwyer is the Bono of podcasting, so I don’t really know if I even need to do an introduction. But I will, because I like talking about you.

For everybody tuning in, Rob has spent over 15 years in customer experience. He has seen it all across SaaS, enterprise, and BPO environments.

Today, he is at Level AI, helping contact centers use AI to level up on quality and performance. He is also a three-time ICMI-recognized CX thought leader and the host of one of my all-time favorite podcasts, Next in Queue.

So, it’s safe to say this isn’t his first rodeo. I know this to be true because I’ve gotten to hang out with him in recorded settings multiple times.

But fun fact, I am really excited that someone else set up this intro so this would be a surprise for me when I pulled up the doc. At one point in your career, Rob, you helped fix an issue on Snoop Dogg’s wireless phone account.

So, I need to start there before we get into AI and other things.

[00:02:21] Rob Dwyer:
Yeah. Let’s see. “Change of plan” was the case they gave me, I think. And it really isn’t nearly as exciting as it sounds.

But yeah, I supported one of the big wireless providers here in the States, and at one point I spoke with someone in the Broadus family. I was like, “Is that the Calvin Broadus?” And they were like, “Yes. Yes, it is.”

And you know what? It turns out that even celebrities just have regular cell phone plans like the rest of us. Sometimes they need to make changes. That’s what I got out of that.

[00:03:15] Sarah Caminiti:
Yeah, that’s a hard lesson learned. They don’t have some elite wireless program where you get a gold-package SIM card situation. They’re just with the rest of us folk.

[00:03:28] Rob Dwyer:
Yeah. It’s pretty basic.

And this was back in the day when unlimited texting was not necessarily a thing. You might have a plan with 200 text messages. Crazy to think about now.

[00:03:45] Sarah Caminiti:
That would get eaten up so fast.

[00:03:48] Rob Dwyer:
Yeah. It’s pretty amazing how quickly things have changed when it comes to cell phones.

[00:03:55] Sarah Caminiti:
If we actually pause and think about the evolution just within our 15 years in our careers, or even a little beyond that, I’m getting these flashes of my first cell phone.

It was a Nokia brick situation, but it was like a dance party because it had these lights on the side that were all flashing. It almost felt like an offshoot of Simon Says or Bop It because it had all these colors coming off it.

You could set your ringtone and then coordinate the colors, so it could be this whole disco. Which was great, because I would lose it so much.

But why did they get rid of that?

[00:04:43] Rob Dwyer:
I don’t know. Your first cell phone sounds way more exciting than mine.

Mine did not do any of that. It was also kind of a brick, with the hard antenna that stuck out, and it had a flip. But you didn’t talk into the flip. It was just the cover for the keys.

So it was a flimsy piece of plastic that covered the keys. And it could store 100 phone numbers. That was the primary selling point.

Yes. One hundred phone numbers you could put in there. If you were number 101, it was like, "Sorry, I don't have your number."

[00:05:28] Sarah Caminiti:
No. Or you would have to spend the next nine hours sifting through all 100 to figure out who that one person is who’s getting kicked.

[00:05:37] Rob Dwyer:
Yep. Who’s getting kicked?

I didn’t have that many friends, Sarah, so let’s just say I never used up all the phone numbers.

[00:05:45] Sarah Caminiti:
You had so many. People were paying you for access to your contact list.

[00:05:53] Rob Dwyer:
Sure.

[00:05:56] Sarah Caminiti:
But yeah, things change.

And when you’re in it, you don’t realize how rapid the evolution is. Even in the time we’ve known each other, the technology for recording a podcast has evolved drastically.

But think about our jobs. Think about how much our jobs, and the jobs that we work with outside of our actual jobs, have evolved because of technology, curiosity around technology, and companies finally thinking about how important great technology is within the CX space.

You’re not just needing to use a spreadsheet as your answer for everything anymore.

In your career, especially over the last couple of years, now that you’re working for an AI company, what do you think has been the biggest leap you’ve been seeing?

[00:07:28] Rob Dwyer:
I see the way AI really transforms how we can operate.

And I know people get sick of hearing about AI. There are complaints and concerns, and some of those are legitimate concerns about how AI is being used in general.

But in the CX world, when AI is applied correctly and judiciously, it can truly make a huge difference in the way people operate.

Coming from my background, I led training and quality in a contact center for a long time. I always think of it as a joke when you hear that recording: “This call is being recorded for training and quality purposes.”

Your chances of that call actually being listened to were slim to none.

Today, we are using those call recordings in ways we never would have in the past. AI can do things at scale that we never would have paid people to do.

That’s fascinating to me. Whether it’s analyzing voice of the customer at scale, doing quality assurance at scale, or helping drive better training outcomes at scale, there are so many unique applications that help people be more efficient in their jobs.

I use it myself to be more efficient in my job, and I love that.

[00:09:13] Sarah Caminiti:
I couldn’t agree more.

One of the things that has made me most excited, and honestly joyful, about the evolution of AI and the conversation around it is that the strategic work, the really difficult and thought-provoking work we’ve always been doing, is finally getting elevated.

Frontline teams are some of the most intelligent human beings a company will ever be lucky enough to have. And now their work is actually getting seen, appreciated, and acknowledged.

With the way AI can organize and share information, and coming from your previous background into this space, I mean, your company has the word AI in its name.

[00:10:04] Rob Dwyer:
It does.

[00:10:06] Sarah Caminiti:
There’s no getting around that.

Have you noticed a shift in what the customer experience is like in these different settings?

[00:10:27] Rob Dwyer:
Yeah. It depends.

It depends on how you’re utilizing AI. It depends on how you looked at customer experience to begin with, and whether or not it was something you really took seriously or paid a lot of attention to.

It’s important for people to recognize that AI tools are just that: tools to accomplish certain tasks.

They do not replace strategic thinking. They do not replace strategic initiatives. They do not replace culture or how we think about customers. AI will not do that for us. That’s up to the organization to develop and think about in a particular way.

The companies I’ve worked with that were already thinking about customer experience and constantly trying to dig into data to provide better outcomes are now moving at a much faster pace because they’re using AI to help them move faster.

Other companies, quite honestly, are probably not changing a whole lot. Even if they’re introducing AI tools, they’re not really using them to accomplish anything more than what they could have accomplished with non-AI tools.

So I think the biggest thing to consider is: what am I trying to accomplish, and why am I trying to accomplish it?

[00:12:17] Sarah Caminiti:
I’m so glad you called that out.

In my new role at SupportNinja, I get to go into companies and find ways for them to leverage AI in different ways, and educate them on how to do it the right way.

A company I’m working with right now hasn’t launched yet, so we’re doing all the prep work to get them ready. What you said is so true and so important.

The companies that care about customer experience and are thoughtful about customer experience carry that into how they use AI.

The work I'm doing right now is with a company that has never been a company before. They don't have their language or all these things defined yet, so those conversations are happening now.

A lot of that is because that’s what we’re going to use to effectively train the AI so that it does things the way they want it to, the way they need it to, speaks the way they want it to speak, and approaches things the way they want to approach things.

The preparation that goes into even thinking about introducing an AI tool, let alone managing, maintaining, and evolving with that AI tool, does not get discussed enough.

And as consumers, we see the effects. We feel the effects of poorly laid-out AI strategies. We physically have a reaction when it’s happening. Yet no one is teaching people in a way they want to listen to about how to do it the right way.

I feel like, inside our space, we’re very lucky. We are a group of people in the CX world who love to share. We are not gatekeepers. We want people to be successful.

Our entire purpose in our careers is to help. So helping others in the industry is one of our favorite things to do. I know it’s true for you, and it’s true for me.

We’re constantly talking about ways to make things better and talking about our struggles. But that doesn’t always seep out into the world.

So with your new role, what kind of conversations are you having around AI with your customers?

[00:15:02] Rob Dwyer:
When we talk to customers, they’re often already embracing AI because they’re our customers. The question becomes: what more can we do?

My company does more than the bare minimum when it comes to AI. We are a full CX stack of AI products.

For us, it’s about understanding your business needs, where your current challenges are, and whether we have tools that can help you overcome those challenges. It’s also about helping you think through some of the ways you maybe haven’t thought about yet.

As a vendor, whether you’re talking with existing customers or potential customers, it’s critical to talk through use cases where other people have seen success.

We have customers with the exact same setup. They have all the same products. But one customer is using them in a wildly different way than another customer.

So we talk about how our customers are achieving success using those tools.

Some are more focused on voice of the customer. Some use a product set we have called AI Workers. There are a number of different ones, but you talk to them in plain language to get data back from all the data you’ve been putting in.

For example, you can say, “I want to know the major pain points customers are expressing this week.” You can do that, and then you can hone it down to get very specific feedback.

But all of that requires conversations with customers about what’s going on in their world and what’s changing day to day.

Sometimes that involves different stakeholders coming and going within the company. People move on, sometimes by choice and sometimes not by choice. So you have to recognize, “Okay, who is this cohort I’m working with?”

Or it could be, “We’re ready to take the next step forward. We’ve been using A and B, but now we’re ready to embrace C. How do we do that?”

Most of the time, it’s about understanding problems.

The next piece that I feel is really important is setting realistic expectations for AI. People get scared and say, “AI is going to be wrong.”

But we never say, “Our people are going to be wrong, and I’m scared because my people are going to be wrong.” And they are. We’re people.

Yes, AI sometimes will not be 100% accurate, just like people are rarely 100% accurate.

I like to help people work through that fear and understand that it’s not about being 100% accurate. It’s about having good information so you can move forward in a positive direction.

It’s about helping customers achieve whatever they’re trying to achieve on their own terms. That’s what we’re trying to help customers do.

[00:18:45] Sarah Caminiti:
That makes a lot of sense. And that’s a really valid point about humans getting it wrong as well.

But I do think folks outside of our industry, especially those leveraging AI for customer-facing purposes, need to be constantly reminded of the need to monitor what's happening with their AI.

This is not a situation where, just because things can be wrong and then work themselves out, no one needs to pay attention.

I’m sure you see this as much as I do. The conversation is typically around headcount and reducing headcount. But the conversation should actually be about what that headcount should look like in an AI environment.

You may have to rethink how frontline workers spend their time or what their job title is. Holy cow, there are opportunities for career growth.

But someone has to be paying attention. Multiple people have to be paying attention.

If there is a change to the knowledge base, if product isn’t communicating that something is going on, if the website has a little tweak and that’s not reflected in documentation, all of it trickles back into knowledge and knowledge management.

Yes, AI can get it wrong. But the difference is, if AI gets it wrong and nobody catches it, the AI just learned. Then it will keep leaning into that because nobody corrected it.

One of the questions I always ask when I’m talking to new AI companies that reach out for demos or my opinion is: how is your company thinking about QA?

Because there are ups and downs, and often no one is paying attention. It just goes into the Upside Down, and it's never going to get caught.

But someone has to be paying attention.

When you’re evaluating tools for AI, or evaluating opportunities to enhance the AI experience for your customers, how are the conversations going around who is going to be maintaining this?

[00:21:40] Rob Dwyer:
You just touched on a lot of things.

It is critical that leaders approach AI thoughtfully. I think there’s this misconception that AI is going to reduce headcount. You mentioned that maybe it can, but a lot of the headcount reductions I see in the industry are just AI washing.

It’s not AI reducing headcount. It’s: we were going to reduce headcount, and now we’re going to say AI is the reason.

AI can help you accomplish a lot of things you wouldn’t have accomplished before without changing headcount at all. That could mean delivering a better customer experience, doing things faster, hitting SLAs you couldn’t hit before during volume spikes, or simply supporting industries that could never get enough people to begin with.

It’s not always, “We want to reduce headcount.” Sometimes it’s, “We never could get the headcount we needed, and now we’re able to support our customers better.”

So there’s that piece.

The quality and monitoring piece is really no different than with people. We don’t put agents through training, throw them onto the phones or chats or emails, and then say, “That’s it. We’re done.”

We pay attention to their performance. We monitor it for quality and training purposes.

We have to do the same thing with any customer-facing AI. We need to run it through quality just like we would with people.

And there are other things that go on top of that, because AI doesn’t have the same kind of context people have. When AI comes in, it’s like a little baby. It’s not a fully formed human, and it never will be.

So you’ll find edge cases where you go, “Oh, we need to guard against that.” And we can. But those edge cases will continue to pop up, just like edge cases pop up with humans.

You never expect humans to behave a certain way sometimes, and then they do. And you’re like, “Oh, we should probably do something about that.”

Quick story aside, Sarah.

[00:24:35] Sarah Caminiti:
Please.

[00:24:37] Rob Dwyer:
One time, I worked at a company that shall not be named, and an agent got so upset with a customer. They happened to have the customer’s credit card information, and they ordered pizzas delivered to the call center on that customer’s credit card.

Now, dumb, right? Just dumb.

We’re going to discover who did that very quickly, and very quickly, you’re not going to have a job anymore. Bad idea.

But it’s one of those things where you go, “I never would have imagined that a person would do that.”

So now, all of a sudden, we need to have a policy against that. Or we need to have procedures to safeguard against that.

We are still learning that with AI in a lot of ways, just like we’ve been learning with people in a lot of ways.

As customers behave differently, we’ll learn how to do things differently. They will present crazy edge cases to us now and then, and we’ll figure it out.

It is absolutely critical, to your point, that we are constantly monitoring.

There are other things we need to look at from an AI perspective that we don’t necessarily look at from a person perspective. Off the top of my head, latency is one.

We don’t worry about latency with people, although we kind of do. We look at things like dead air and response time in chats. We don’t call it latency, but that’s what it is.

From a technology standpoint, if you’re working with a voice bot, you absolutely want to monitor that as well.

There’s different terminology out there, but we need to look at the whole experience very similarly.

[00:26:39] Sarah Caminiti:
Yeah, it’s true.

Going back to QA, because I do think this is such an exciting piece of the AI jungle.

One thing we struggled with, and I'll keep this quick soapbox moment brief, is that as leaders, especially in call center environments, we were tasked with QA-ing two, maybe three calls in a given period. And that would be our gauge of performance.

I love the look on your face because I’m struggling to say those words out loud with any conviction. It’s ludicrous.

But with AI, you now have this opportunity to analyze more. Not only sentiment, which is different than QA in my perspective. Sentiment analysis could show negative sentiment because someone is angry about a refund, and that has nothing to do with the agent.

Those are two different things.

But now you get more context. You have the QA piece: how is this agent doing? How are they leveraging the knowledge base? How are they doing with speed and tone?

Then you have all this other stuff, especially in settings where there’s an observability piece. You’re connecting the dots between things like bad audio and what that does to the customer experience. Holy cow, yes, that makes so much sense.

There’s so much data available for a single interaction that you can actually get a fuller understanding of who this person is that you are supporting, celebrating, and inviting into your company to work with the customer.

How can we best support them? What’s good? What’s bad?

What have you been seeing in terms of overwhelm with this kind of data?

Data probably wouldn’t scare us too much. We’d probably be yearning for more. But if you’re going from being comfortable looking at two interactions to looking at thousands of interactions, how do you think people are feeling?

[00:29:27] Rob Dwyer:
I love this question.

There are a couple of things happening here.

If you run a really good quality program today, you have to be very specific about what you’re actually looking for. That was not the case before.

I went through this exercise with lots of QA rubrics, and you look at them and think, “What does this mean? This seems really subjective. I don’t know what this means, so I don’t know how agents know what this means.”

When you start asking AI to give you feedback, now all of a sudden it’s like, “No, no, no. We need to be really specific about what we’re talking about.”

We need to get very granular about what we’re looking for.

So today, if we’re using AI to help us with QA, we do a better job of helping agents understand what we actually want them to do. Maybe we didn’t do a very good job of that before.

That’s number one.

Number two, I think there’s a mistake in trying to look at every single interaction.

When you’re only doing two or three, then yes, that’s what you do. You focus on those two or three interactions and talk about those.

But if I’m evaluating 100% of your interactions, I don’t care about specific interactions unless there’s a very specific behavior that comes up in one of them that isn’t common across all interactions.

In general, the challenge before, as a coach managing a team of agents, was identifying on a regular basis: Sarah, you’re one of maybe 20 direct reports. I need to know what you’re good at, where your opportunities are, and what I need to prioritize with you to help you move the needle the fastest and farthest.

That’s a lot to ask of a person who is also dealing with time-off requests, someone down the aisle who hasn’t showered in three days and maybe has an odor issue, a dirty chair over here, somebody eating at their desk and getting crumbs on the keyboard over there.

You’re doing all these things, and then, by the way, you also have to be strategic and figure out what Sarah needs help with and what I can celebrate with her.

AI can help you do that in seconds. Before, it would take hours, and I still wasn’t getting a full picture. I was getting little peeks at your performance. And by peeks, I don’t mean “peak” as in high. I mean I was peeking in and seeing one moment.

When coaches stop focusing on the really granular, discrete interactions and start looking at broader trends of behavior, they can understand what to celebrate based on the last coaching session and where to move the needle next.

AI is great at helping you do that.

As long as we don’t overwhelm agents with information about every single interaction, and as a supervisor I’m not thinking about every single interaction but thinking about larger performance, we don’t have to worry as much about overwhelming people.

We can focus on what’s important instead of all the noise that potentially got in our way in the past.

[00:33:52] Sarah Caminiti:
It’s the noise.

With the right approach to leveraging AI-powered QA on your team, you'll be able to be strategic in ways you didn't give yourself enough credit for before.

If you have 20 direct reports, you’re a therapist, a babysitter, a cleaning crew, and you’re supposed to be strategic.

If you’re only taking a peek into someone’s performance, and you know that person is anxious and gets nervous every time these conversations come up, you don’t know if what you’re looking at is truly reflective of them.

So what do managers do in those situations? If I were in that position, I’d probably look at more. I’d keep going because I’d want to make sure this is as easy and true as possible for the person, because I care about their success.

But now you’ve spent four hours combing through calls. You’re probably getting close to the same grasp on things. Maybe there’s some randomness in there, but who knows? It’s still a needle in a haystack.

Four hours and five or six calls still isn’t giving you the full picture.

Now you have tools that can say, “Here are the trends.”

I love the word trends because I think it’s one of those anchor words when it comes to AI. It’s a pattern recognizer. That’s what it is.

Let it recognize patterns you don’t have the brain space or capacity to recognize because it would require you to clone yourself 15 times to do it.

But that mindset shift from “they’re only going to look at two” to “oh my God, they’re going to look at all of it” can feel huge.

How do you recommend leaders introduce this change so it doesn’t feel so Big Brother, and instead feels celebratory and helpful?

[00:36:30] Rob Dwyer:
I love that question.

Again, it’s all about the approach and how you culturally approach agents in your business.

If QA has traditionally been a stick, now it’s just going to be a lot of sticks. If it has been a carrot, then now you’ve got a whole salad. It becomes a complete meal instead of just the carrot, which sounds more appealing to most people.

So it boils down to culture and how you approach things.

Leadership needs to ensure, and when I say leadership, I mean all the way at the top because that’s where culture is set from, that everyone understands this is going to be used to help equip agents to be the best versions of themselves.

It’s not, “I’m trying to catch you doing something wrong.”

It’s, “I’m trying to catch you doing something right and identify what I can do to help you be even better going forward.”

That should be the mindset of coaches, supervisors, team leaders, and managers, regardless of what tools we’re using.

If that’s not the mindset, it doesn’t matter what tools you use. You’re going to constantly experience agent churn and turnover. You’ll constantly be fighting that battle instead of figuring out how to make experiences better for customers.

[00:38:18] Sarah Caminiti:
This is why I love having you around. That was such a good answer.

I have just one more question. I mean, I have 9,000 more questions, but I have to tone it down a little because I’d like Hiver to ask me back to guest host this podcast and not have it be 18 hours of just Rob and me, even though that would be a party and a half.

I want to talk about trust and judgment a little more and zoom out.

Hiver does these really great research reports, and one they just released found that nine out of 10 support leaders remain cautious about fully allowing AI to represent their brand in customer conversations.

We’ve talked about this in bits and pieces, but from your perspective, what is really behind that hesitation? And what needs to happen before leaders truly trust AI in customer-facing moments?

[00:39:08] Rob Dwyer:
I think what’s behind it is accountability, or the lack thereof.

At some point, when we hire people and put them through training, we trust them enough to go talk to our customers and represent our brand.

That training is probably not very long, and they probably know only a tenth of what they need to know to be truly effective at their jobs.

But we can also hold them accountable when they really mess up. If they intentionally provide the wrong information, we know we have someone to hold accountable.

And because we monitor everything, we know where they’re getting their information. We might be recording their screen, so we know which knowledge base article they used.

The scary thing for leaders when it comes to AI is that with a lot of systems, it feels like a black box. They don’t know how AI reached a conclusion or why it communicated in a certain way. There’s no real way to hold that AI accountable.

So it’s incumbent on technology vendors building in this space to provide that train-of-thought evidence and show why a particular virtual agent or bot, or whatever you call it in your business, did a certain thing.

If you can do that, it creates credibility. It helps leaders understand why something is behaving in a particular way.

If the AI grabbed the wrong knowledge base article because we didn’t update that knowledge base and forgot about it, now I know I need to fix that. That’s what I would have done with an agent.

If they grabbed old knowledge, we’d say, “Oh, that’s still there. We need to fix that.”

If we don’t provide that context around why, it feels like a black box. We don’t know why it did what it did, and then we get frustrated.

That frustration breeds distrust. That’s the key to solving it.

[00:42:04] Sarah Caminiti:
There are so many good points in there.

The one I want to make sure to highlight is that not all AI vendors are the same.

The ones that are building systems for people who actually understand CX, understand the importance of how things are organized, and understand how to figure out what’s going on, those are the ones worthy of the investment.

Those are the ones worthy of partnering with, learning from, and doing it the right way.

Be curious and mindful when you’re looking for AI vendors. Do you feel nervous integrating this into your tech stack? Trust your gut. Your gut is valuable.

It’s so important to know where these AI answers are coming from because AI can expose problems. It can be like a fresh pair of eyes when someone joins the company and you ask them to go through the knowledge base and point out what doesn’t make sense.

It’s telling you what doesn’t make sense. That can be overwhelming if you don’t have the headcount to maintain it, but then that’s worthy of a conversation.

If your AI tool is giving you a certain amount of value, but you don’t have the bandwidth to maintain the knowledge base in the way you need to, have that conversation.

Like we said at the beginning of the call, it’s so much more nuanced than we give it credit for and allow ourselves to say out loud.

So thank you for flagging that.

[00:43:58] Sarah Caminiti:
To sum up this conversation: careers need to evolve in the CX space.

The people on the front line are likely the most qualified to evolve into AI-focused roles because they understand customer experience. They understand what good customer experience looks like. They understand all the different ways customers ask questions.

Putting them in spaces where they get to train AI to work the right way is such a golden opportunity.

AI is not going anywhere. Strategy is not going anywhere. Culture is also not going to change just because you have new tools.

Tools are not going to fix things. Tools are going to amplify things, whether they’re good or bad.

Think about what your foundation is as you lean into all of these tools that are starting to pop up and change the way we approach things and help us be more strategic.

What is your foundation? Do you have the setup to continue nurturing that foundation as new tools come in? What are you trying to solve?

You don’t need to bring AI into everything.

Rob stated it beautifully in the beginning: you have to figure out what the problem is, and then find the tool for the problem.

Don’t “do AI.” That’s not even a sentence, but people really like to say it.

Is there anything else I missed, Rob, that we touched on today, aside from Snoop Dogg?

[00:45:41] Rob Dwyer:
I just want to thank you for making this happen.

It’s been too long since we’ve had the opportunity to chat. And thanks to Hiver for making this happen.

It’s very cool to be part of something new, always. I’m glad we got to do this together. It’s been fun.

[00:46:03] Sarah Caminiti:
Me too. We’ve got to schedule more time to hang.

Everyone always learns so much from you, Rob. You are such a treasure in this community, and I’m so grateful for your time, your friendship, your knowledge, and your guidance.

Thank you, thank you, thank you. I hope you have a great rest of your day.

[00:46:22] Rob Dwyer:
Thanks, Sarah.

[00:46:24] Sarah Caminiti:
Bye. Thanks for tuning in to Experience Matters.

 
