
Redefining Tech Support: Marti Clark Gives An Inside Look into Google’s IT Support Revolution

Listen on: iOS | Spotify | Google | Amazon

In this episode of Experience Matters, we delve into the world of IT operations and support with Marti Clark, a seasoned professional from Salesforce (ex-Google). Marti shares her experience managing the transition of 200,000 employees back to the Google office post-pandemic, highlighting the challenges and innovative solutions involved in the process. 

Marti’s journey from managing external customer interactions with AdWords to focusing on internal IT support showcases her adaptability and commitment to excellence in customer service, offering listeners a wealth of insights into the nuances of customer support in both external and internal contexts.

She also discusses the role of marketing when it comes to engaging customers and fostering loyalty. Clark dives deep into the concept of ‘marketing with a purpose,’ urging businesses to align their objectives with their customers’ values.

Key Takeaways From The Episode

  • The Intersection of Marketing and CX: Marti explores the critical role of integrating marketing strategies with customer experience initiatives to drive brand loyalty and customer satisfaction.
  • Leveraging Data for Personalization: She emphasizes the importance of using data to understand customer needs and preferences, enabling businesses to deliver personalized experiences that resonate with their audience.
  • Building a Customer-Centric Culture: Marti discusses the importance of fostering a customer-centric culture within organizations, highlighting how it can lead to improved customer loyalty and business success.


Marti Clark is a Senior Program Manager at Salesforce. Before that, she distinguished herself at Google for over 16 years spearheading several crucial projects. She led pivotal initiatives, notably orchestrating the return of 200,000 employees to the office, streamlining operations, and enhancing IT strategies to adapt to the changing landscape. Marti helped automate internal processes, significantly reducing ticket backlogs and improving operational efficiency. 

Her strategic insights were instrumental in optimizing Google’s asset management and integrating the aspect of user experience into long-term planning. Marti’s leadership in transitioning teams to remote work, optimizing IT support, and fostering a culture of inclusion and diversity underscores her expertise in stakeholder engagement, risk assessment, and cross-functional team leadership.

Marti Clark

Senior Program Manager at Salesforce (ex-Google)


Episode Transcript

Niraj Ranjan Rout: Our guest today is Marti Clark, who has been a Google Maven for the last 17 years. Marti started working at Google as an AdWords strategist, where she focused on the quality and customer experience for the AdWords service, which is key to Google. After doing that for roughly six years, Marti moved on to internal IT operations and support. Over a decade, she worked on very interesting and impactful initiatives. The most notable one is her work on getting more than 200,000 people back to the office after the pandemic ended. I’m really looking forward to talking to you, Marti, and learning from your experience. Please welcome Marti.

Marti Clark: Thank you so much. It’s such a pleasure to be here.

Niraj: Where are you based, Marti, and how are you doing today?

Marti: I am based in Ann Arbor, Michigan, and I’m doing pretty well today. We got our first snowfall.

Niraj: Oh, that’s great. I’m in Los Gatos, almost the southernmost end of the Bay Area, about an hour away from San Francisco. Let’s get started by talking about your work on getting over 200,000 people back into the office without any major IT incidents. How long did this take? How did you plan for this, and how did you work with the multiple agencies and parties who were joint stakeholders in this exercise?

Marti: It’s no easy feat to bring back that many people to the office after being out for about two years. Technology had aged, and new technology had come in that hadn’t quite interfaced with the office yet. The planning actually started in January of 2021, and we returned starting in April of 2022. It was over a year of collecting insights. We had small openings in some offices as risk levels got better, and we took that feedback. There were multiple agencies here that we worked with, including facilities, legal, supply chain support—they all kind of worked together and collaborated. That was probably why it was so successful, due to the strong collaboration and sharing of information.

Marti: One of the key things we did was make sure we were solving the right problems, because with no one in the office, we didn't really know what was going to break. So we had to dig in and predict, or figure it out based on the small sample sizes we had. I launched a program called RTO Insights, and what we did there was mix the operational metrics, using ML to detect sudden increases in volume, with user experience data from research that had been done across the organization, not just within IT. And then we took anecdotal feedback from our frontline techs who were helping people, as well as feedback from our CSAT survey and escalations, to see where there might be a potential problem that we could get ahead of.

Marti: One example is monitors. People think monitors are simple, but it turned out that a firmware update had prevented many of our Macs, which made up a good portion of the fleet, from being able to interact with those monitors. So by detecting that earlier, we were able to switch them out, have alternative solutions ready, update a lot of technology, and anticipate the problems people were going to have when they came back, ensuring they had the right information in front of them. We then continued to collect that information, because it wasn't just the first day; it really took about a month for people to truly settle into their new office surroundings. And so we kept that feedback coming in so that we could be proactive in fixing things before people came in.

Niraj: Makes sense. It essentially involved a lot of anticipating and predicting what might go wrong, what might go right, and what surprises might be thrown at you, and making sure you worked with everyone involved to mitigate as much as possible in advance, so that any surprises that still crept through the process didn't catch you when you least expected it.

Marti: Yes, I would love to say everything was anticipated, but for the first time in my entire career, we were able to foresee even the problems that came up. They weren’t major by any means, but maybe a director complaining about something or a shipment that was late because of supply chain issues. We knew those things were going to happen, so we had that information in place, and communication from tech all the way up to VP was really strong, ensuring that we understood what was coming and how to deal with it, because some problems just happen. You have to be okay with the fact that something’s going to break, something’s going to go wrong, but to have a mitigation plan, to have your team understand what they should be doing when that case happens is really important.

Niraj: We’d love to know more about the role that technology played in this entire exercise, in helping you anticipate what might go wrong and preparing to mitigate it in time.

Marti: Yes, that’s intriguing.

Marti: Google likes to build a lot of its own technology, so it wasn't a particular piece of software that's out on the market. But I think the biggest thing that helped us during that return to office was using machine learning to better understand and detect anomalies in our support data. Google's internal IT support team gets over a million issues a year. With that many, it's sometimes difficult to sift through them and pick out the one nugget that's going to go wrong, and machine learning allowed us to use anomaly detection to do that. We also leveraged our Quant UX team to understand at scale what was driving people's experience. We looked a lot at the amount of effort their technology was causing them and worked to target those types of things. We also used qualitative research to better understand how people interact with technology, because our entire workplace had changed; it was no longer Monday through Friday in the office.

Marti: Now it was, I'm at home some days, I'm in the office other days. What does that look like when something breaks? So we really gathered that information. And then I would say there was a lot of work ensuring that we were collecting the right data, that we were able to integrate it correctly and use it to forecast what our volumes might be like and where we might have bigger problems than elsewhere, and putting pop-up in-person support in place for a week so that we could help with that transition. And I think a huge part of all of this was being able to take that data, both operational and user behavior, and put it together to better anticipate.
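Marti doesn't name Google's internal tooling, so purely as an illustration of the anomaly-detection idea she describes, here is a minimal Python sketch that flags sudden increases in daily ticket volume with a rolling z-score. The column names, window, and threshold are assumptions for the example, not Google's actual pipeline.

```python
import pandas as pd

def flag_volume_anomalies(daily_counts: pd.Series,
                          window: int = 28,
                          z_threshold: float = 3.0) -> pd.DataFrame:
    """Flag days where ticket volume spikes well above its recent baseline."""
    # Baseline and spread come from the previous `window` days only.
    baseline = daily_counts.rolling(window, min_periods=window // 2).mean().shift(1)
    spread = daily_counts.rolling(window, min_periods=window // 2).std().shift(1)
    z = (daily_counts - baseline) / spread
    return pd.DataFrame({
        "count": daily_counts,
        "baseline": baseline,
        "z_score": z,
        "anomaly": z > z_threshold,  # sudden increases only, per the RTO use case
    })

# Hypothetical usage: daily counts of "monitor" tickets from a ticket export.
# tickets = pd.read_csv("tickets.csv", parse_dates=["created_at"])
# monitor = tickets[tickets["category"] == "monitor"]
# daily = monitor.set_index("created_at").resample("D").size()
# report = flag_volume_anomalies(daily)
# print(report[report["anomaly"]])
```

Run per issue category, the same idea would surface a spike in, say, monitor tickets even when overall support volume looks flat.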

Niraj: Makes sense. In my opinion, the only other exercise that might have rivaled this one in scale would be tens of thousands of people going remote immediately, at very short notice, when the pandemic hit all of us.

Marti: Yes, going remote was probably one of those pivotal moments in life that a lot of people will remember. In the US, the pandemic started in March, but for the world it actually started in December, when our offices in China began to shut down. So we started to see the problems that were going to happen because of the security Google has around its engineers: a lot of workstations have to stay on property. And so we started to see, oh no, if we're shutting down offices for a period of time, how do our engineers stay productive if their machine goes down or they somehow can't connect to it? We can't do all of that remotely. So we quickly set up a program where we had specific people poised to come on site and do low-task work. Then we realized there were actually some really hard, difficult things to handle there. So we set up a program where people could ask us to basically check their machine to see why they couldn't connect to it. And we had to set that up in about a week as offices started shutting down.

Marti: And so I think the biggest lesson from all of that was learning prioritization. There are 100 things you want to do when you set up an operation: ensuring you have good playbooks, that your frontline support knows what to do, that you're tracking what the issues are so you can better reduce them. But in that kind of timeframe, the most important thing was making sure we were able to get people on-site and that they were protected with their masks, gloves, and all of that, and that the user had a way to contact us and give us the information we needed. We also had to make sure these techs could even legally go on-site because governments were shutting things down, and we had to work with security in some regions because they were the only people allowed on the property. So we translated directions on how to turn a computer off or reseat a cord into about five or six different languages, because we needed people to get to these computers so that they could keep running. We're talking about the computers that support google.com, our ad revenue business, and all of the other big things that you see from Google. These engineers had to be able to keep those running.

Marti: Also, once we got the basics down and handled the highest priorities, I really worked with our techs, our frontline techs, to build out that process and program. I think one of the biggest mistakes people can make is thinking that they know everything and not involving the people who actually have to do the work. And doing that was extraordinary. They were able to simplify the process and better collect the information, and we ended up automating thousands of tickets because we started to see patterns and continued to evolve that. And what we found was, even as we transitioned back to the office, people still used this service because they were either fully remote or hybrid and weren't in that day. So we transitioned it into business as usual after that.

Niraj: Interesting, right?

Marti: Absolutely, it does.

Niraj: Right. And one takeaway for me here is that there's so much that happened in that short three-, four-, five-month window which led to learnings that are relevant beyond that point in time. There's so much that you might have built or learned that you and your team probably continue to use over time, which is very interesting to come out of that intense period we all went through.

Marti: Absolutely.

Marti: One of the most important things in that process was also knowing that you're not looking for the best solution, you're looking for the strongest solution you can provide. You're looking for what you can do in order to deliver your service. It might be a little clunky, and you have to be willing to go back and refine it as you go.

Niraj: Yeah, I think it essentially taught all of us new ways to do things, right?

Marti: Yes, absolutely.

Niraj: Another thing I was really looking forward to talking to you about is your shift from working primarily with external customers, when you were handling AdWords, to primarily handling internal customers when you moved to IT operations and support. How did that go? How are the contours of the problems different? Can you talk to us about the nuances of that?

Marti: I loved working with our external users. I met some of the most interesting people out there, from all kinds of different walks of life and businesses. But it was really hard when you had people who were being abusive or mean or just downright rude to your frontline techs, and there wasn't a lot you could do externally. We did build a tool and a process for abusive advertisers, but it was really frustrating sometimes. So moving into internal support, it was nice to know that there was someone you could reach out to should someone be rude or disrespectful. And for me, that was a huge shift, that there'd be consequences for how people interact. But for the most part, internally, one of the things that surprised me the most was that in any customer service organization, you typically see a U shape, right? You have your really satisfied people, your really negative people, and then you have the people in the middle, down low. Internally, it was a ski slope: 90% of people receiving tech support at Google were satisfied with their experience.

Marti: And at one point it was like two or three percent dissatisfied, with a chunk in the middle. And that was shocking to me. But it was also a problem, because while they loved the service, and it is a wonderful service, it was difficult for us to advocate for any kind of additional headcount, or for systems to be fixed, or for our technology to be improved, because we were providing such an amazing service to people. And so we started to think about, okay, how can we still let people express their love and admiration for our team, but still point out all the things that are broken, so that we can get that support? We also introduced a metric called customer effort score. This is something that has been around for quite a while but has still been slow to catch on. What it helped us do is see how much effort people were really spending trying to solve their IT issue, and that allowed us to better target specifically where we needed to improve things. And I think, again, it was a little easier internally because we could reach out to those people directly. We could talk to our colleagues and have that open conversation a little more easily than you can with an external service. There's still UX research done and such, but there's a lot more process needed for that. I'd love to talk a bit more about this.

Niraj: So when you implemented customer effort score, what did you learn right away, what actions could you take based on those learnings, and how did your journey go in terms of improving the score?

Marti: Yeah, so for customer effort, we started asking that alongside the satisfaction question, to see, okay, you're satisfied, but are things taking effort? And what we found was there was a gap in specific areas of the user journey, because we also asked that effort question for the five or six major steps in the user journey: discovering the problem, contacting us, escalating, those types of things. And what we found was that people were satisfied with our self-service, but it took them a lot of effort to use it. So on the surface it looked like our self-service was going great, our process was working, people were getting what they needed. But in reality, while it is better than some self-service, it was still causing them a lot of effort to solve their problems. So we actually invested a lot in improving and changing how that self-service content is created, so that it was more useful to people and common problems showed up more prominently. And that definitely helped. We did a regression analysis on the different pieces of the user journey to see which most influenced the overall effort, and found that it was primarily self-service and escalations. Escalations being any time I can't get the first person I'm talking to to solve my problem, and I have to talk to a manager or get passed on to another team or whatever else.

Marti: Those two pieces, when they go wrong, break the user journey the most for us. I will say Googlers internally are different in a lot of ways from what you'd see in the general market. And I think part of that is when you're doing external support, you're supporting a wide range of people with diverse backgrounds and businesses, versus when you're supporting people internally, the company, while it hires diverse people in terms of race and gender and socioeconomics and experience, they're all working at Google. They're all highly talented go-getters, so they're going to expect different things. And it was surprising to see, for me at least, that self-service was such a huge driver, because people wanted to solve things themselves. They loved our techs, they thought they were great, but at the end of the day they just wanted to hurry up and solve their problem. And that was one of the biggest takeaways from customer effort: people just want things fixed quickly. A lot of people think that quality is having a long interaction with someone or really diving into the issue, but at the end of the day, they just want to get back to work; they want to get their job done. And that might be a fundamental difference from doing external support.
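Marti doesn't spell out the exact analysis, but as an illustrative sketch of a regression like the one she mentions, the Python below ranks journey steps by how strongly per-step effort predicts overall effort. The step names, survey scale, CSV file, and use of scikit-learn are assumptions for the example.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical journey steps and survey columns; effort rated 1 (low) to 7 (high).
JOURNEY_STEPS = ["discover", "self_service", "contact", "resolution", "escalation"]

def rank_effort_drivers(responses: pd.DataFrame) -> pd.Series:
    """Rank journey steps by how strongly per-step effort predicts overall effort.

    `responses` has one row per survey respondent, with a column for each
    journey step plus an 'overall_effort' column.
    """
    X = StandardScaler().fit_transform(responses[JOURNEY_STEPS])  # put steps on one scale
    y = responses["overall_effort"]
    model = LinearRegression().fit(X, y)
    coefs = pd.Series(model.coef_, index=JOURNEY_STEPS)
    return coefs.sort_values(ascending=False)

# Usage sketch (file name is made up):
# drivers = rank_effort_drivers(pd.read_csv("effort_survey.csv"))
# print(drivers)  # largest coefficients = journey steps worth fixing first
```

With real survey data, the largest coefficients point to the steps worth targeting first, which in Marti's case turned out to be self-service and escalations.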

Niraj: Right, where people might have a slightly different expectation from customer support.

Marti: Yeah, I think it depends on the business, it depends on the person. I taught a lot of customer service trainings in my role, and one of the things I always say is, I like Zappos. They've really branded themselves as kind and fun, and they'll ask you how your day is and such. And I tell them, I just want my shoes. I've got friends, I don't want to have a conversation. But that's my opinion, right? Someone else might really enjoy that interaction; they're having a crappy day, and having someone be nice and cheerful can really help them. So I think you really have to look at your user base and look at what the data is telling you, because it's not just about user behavior data like customer effort, it's also about the operational data. Are you getting repeat customers? You might have the fastest resolution time on the books, but if 50 or 60% of people are coming back and asking the same question, you're really not doing a lot, and that'll show up in your effort score as well.

Niraj: I think one takeaway here for me is the importance of measuring effort score. A lot of companies still don't measure it; even a lot of evolved customer support teams don't, even now. Great.

Niraj: So going further, something you just briefly touched upon: I'd love to talk about how your team scaled over your decade in IT support and operations. How did you bring new people on board? How do you train them consistently and upskill them? I'd love to know more about that.

Marti: Yeah, it really starts with the hiring and making sure that the people coming into the role know what to expect. I think that was a big problem in the beginning: folks thought, oh, I'm coming into Google, I'm going to become an engineer right away, and this is going to be great. And in reality, there's a lot of hard work you have to do to get there. So it really started with hiring, and with making an effort to go places where people don't normally look for talent, certain conferences or groups that might have folks who are undiscovered, as I like to say, and really making sure that in that hiring process we had a consistent way of hiring. Unconscious bias can creep in in so many different ways, whether it's liking someone or not liking someone, if you don't have a standard. And so there was a really strong standard in terms of what we were looking for, specifically customer service and technology experience, as well as the desire or willingness to learn new things once they come on board. The techs got consistent training. Prior to the pandemic, everyone came to Mountain View and got trained on the same material. And even after the pandemic, as people start in their own offices, the content is similar, the idea being to give everyone the same kind of introduction to the company and what we value. And then it really comes down to ensuring that managers are consistent in the way they manage and reward their team. I think that if you have managers who are really forcing fast resolution times or making sure the wait time is really low, but not really caring about quality or the way they're working with users, then that starts to erode the culture.

Marti: And so there were a lot of discussions with managers in terms of what was expected when performance reviews came up. They did trainings, they did all kinds of things to make sure we were consistent and all talking the same language. I also ran a lot of trainings for the managers. We did a training at Disney, we did some at offsites, and we had managers get involved in the escalations so that they could understand, when things went wrong, what was happening there, as well as encouraging them to work with their techs on specific tickets when they start, so they can see how that tech is working. And again, it just comes back to consistency and having a standard of what's expected.

Niraj: Makes sense. Which brings me to the last question, which is very connected to what you just said. What were you measuring as primary KPIs while you were going through all of this? And in this journey of scaling, how did the KPIs change? Because a lot of these KPIs would connect to a lot of the things you just mentioned.

Marti: Yeah, I was thinking a lot about that question, thinking back from when I first started to now and what has changed. And really a lot of the KPIs are similar in the sense of time to resolution, first-time resolution, time on a chat, things like that. But what I used to always say is, give me a metric and I'll show you how to game it. Any one metric is not going to tell you the full story. So I think what evolved wasn't necessarily introducing new KPIs, but rather looking at these KPIs in conjunction with each other. So while we might see that response times are really great in a particular region, we might also see that its user effort scores are not good and its quality scores are missing a lot, and use those together to have a better sense of how the business is doing. So you can have a full-range view of the user experience, as well as bringing in metrics and data from the actual tools. I think that was another change as well.
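As a small illustration of looking at KPIs in conjunction rather than in isolation, here is a hypothetical per-region scorecard in Python that joins operational metrics with experience metrics so a region that looks fast but causes high effort stands out. The regions, numbers, and column names are invented for the example.

```python
import pandas as pd

# Hypothetical per-region rollups pulled from two separate systems.
ops = pd.DataFrame({
    "region": ["AMER", "EMEA", "APAC"],
    "median_resolution_hrs": [4.1, 3.2, 6.8],
    "first_contact_resolution": [0.78, 0.81, 0.69],
})
experience = pd.DataFrame({
    "region": ["AMER", "EMEA", "APAC"],
    "effort_score": [2.3, 3.9, 2.6],   # lower = less effort for the user
    "quality_score": [0.92, 0.74, 0.88],
})

# Join the two views so a region that is fast but high-effort gets flagged.
scorecard = ops.merge(experience, on="region")
scorecard["fast_but_high_effort"] = (
    (scorecard["median_resolution_hrs"] <= scorecard["median_resolution_hrs"].median())
    & (scorecard["effort_score"] > scorecard["effort_score"].median())
)
print(scorecard.sort_values("effort_score", ascending=False))
```

Flagging on relative medians rather than absolute targets keeps the comparison about the balance between speed and effort instead of hitting any single number.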

Niraj: So that makes a lot of sense. So probably the learnings are not in working with new KPIs, but in discovering the interconnectedness of the KPIs and their interdependencies. That is probably where a lot of these learnings lie.

Marti: Yeah. And that's where a lot of technology tends to fail: trying to connect these data points together. You might know really well what your user effort score is, but if that's going into a different database than the rest of your operational metrics, how do you tie those together to get the complete picture?

Niraj: Makes sense. That's all I had for today, Marti. Great talking to you. I think there were lots of learnings for me personally, and for the audience too.

Marti: Well, thank you so much for having me. It’s been a pleasure.

Niraj: Thank you, Marti.