
Mastering B2B Customer Experience: A Candid Talk with Jeff Lee, CX Manager

Updated: Jun 7, 2023

Uncover The Meaning Of B2B CX. Address Blind Spots In Stakeholder Management.

What does Customer Experience mean for B2B software products? How do leading companies measure and improve customer experience?

I heard these questions while working with businesses, and I’ve seen them come up repeatedly in product and startup Slack communities. So, I spoke with Jeff Lee, a Customer Experience Manager, about his lessons from managing CX in B2B SaaS. Here is what I learned.

You can watch it here.

Who is Jeff?

Jeff is a Customer Experience Manager at Twilio, a B2B SaaS company. I emphasize the insights gleaned from Jeff's professional experiences rather than the specific practices at Twilio. This is because Jeff shared his career lessons and was not officially representing Twilio's stance or procedures.

Jeff worked in Customer Success for 5 years before joining the Customer Experience team and growing to lead parts of it.

What Did I Want To Learn?

I wanted to hear about Jeff’s work in customer experience, including how he measures it, his processes, and his stakeholders. I started with the big picture and then got into the specifics. The broad themes I covered were:

  1. Career experience

  2. Working with stakeholders

  3. Processes

  4. Tools

Q: How is your CX team organized? What is your role in it?

Jeff: The CX team has changed a lot over time.

Earlier, it was split in half: one half worked on infrastructure, surveys, and data flow, while the other derived insights from NPS, touchpoint, and CSAT scores and from reading raw comments.

The team’s function has changed as the program has matured. We've onboarded two data scientists in response to stakeholder feedback: stakeholders said they needed something they could believe in before acting on CX feedback from our team, so the data scientists help prove its importance. Hiring them was one of the steps toward persuading different teams to act.

We have one researcher who also collaborates with stakeholders, like user experience. There is one engineer who focuses on keeping the drip surveys running: making sure we continue to hear from customers, making the changes we need, keeping channels open for listening to our customers, and keeping up with business changes.

Customer Experience Scores

I’ll explain more about some of the scores mentioned above.

NPS: Net Promoter Score is the most well-known measure of customer experience, although it has a number of flaws. Companies use NPS to understand customers’ satisfaction with a product or service. The score is derived by asking customers how likely they are to recommend the company to others, rated on a scale of 0 to 10. The percentage of detractors (ratings of 0 to 6) is subtracted from the percentage of promoters (ratings of 9 or 10). This difference gives the NPS.
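The arithmetic above can be sketched in a few lines (a minimal illustration, not any particular vendor's implementation):

```python
# Net Promoter Score from 0-10 ratings: % promoters minus % detractors.
def nps(ratings):
    promoters = sum(1 for r in ratings if r >= 9)   # ratings of 9 or 10
    detractors = sum(1 for r in ratings if r <= 6)  # ratings of 0 to 6
    return 100 * (promoters - detractors) / len(ratings)

# 5 promoters, 3 passives (7-8), 2 detractors out of 10 responses
print(nps([10, 9, 9, 10, 9, 7, 8, 7, 3, 6]))  # -> 30.0
```

Note that passives (ratings of 7 or 8) count in the denominator but in neither bucket, which is one source of the flaws mentioned above.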

C-SAT: Customer Satisfaction Score (C-SAT) is used to measure customer satisfaction with a product or service. The score is derived by asking customers to rate their satisfaction with an interaction on a scale from 1 to 5 (or 10). A higher score indicates a higher level of satisfaction. The average of these scores gives the overall C-SAT score, offering a snapshot of customer sentiment at a given point in time.
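As described, C-SAT is simply the mean of the ratings. A sketch (some teams instead report the percentage of 4-5 ratings, so treat this as one common variant):

```python
# Average satisfaction on a 1-5 scale.
def csat(scores):
    return sum(scores) / len(scores)

print(csat([5, 4, 4, 3, 5]))  # -> 4.2
```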

CSAT measures satisfaction, whereas NPS measures likelihood to recommend and adds a calculation on top of the measurement. NPS is about the overall experience, whereas CSAT is about a specific interaction.

Touchpoint Experience Score (TXS): Although CSAT might be used for any part of a customer journey, TXS is more likely to be used when a customer interacts with your company’s employees (or chatbots). This includes customer support and sales.

Drip surveys as a way to hear from customers

One way of measuring NPS is to email 100,000s of your customers once a quarter to fill out the NPS questionnaire.

But that is a high outbound volume. You get all the feedback at once and none for the next few months. The results may be inaccurate due to recency bias and skewed in one direction. Your customers may not want to be contacted every quarter. You might get more responses than the minimum you need this quarter, but fewer than you want next quarter because customers don’t want to answer every time.

Another way is to implement it like a drip email campaign. You email a subset of customers each quarter, spreading the emails evenly across the 90 days of the quarter instead of sending them all on one day. You aim for a random, representative sample each quarter, and you constrain yourself to not email the same customer every quarter. I call this a drip survey. It can be used to measure NPS and track its change over time.
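A sketch of how such a schedule might be produced (the function and names here are hypothetical; real programs layer on eligibility rules and quota logic):

```python
import random

def drip_schedule(customers, recently_surveyed, sample_size, days=90):
    """Pick a random sample of customers not surveyed last quarter and
    spread their survey emails evenly across the days of the quarter."""
    eligible = [c for c in customers if c not in recently_surveyed]
    sample = random.sample(eligible, min(sample_size, len(eligible)))
    # Assign each sampled customer a send day in 1..days.
    return {c: (i * days) // len(sample) + 1 for i, c in enumerate(sample)}

schedule = drip_schedule([f"cust{i}" for i in range(1000)],
                         recently_surveyed={"cust1", "cust2"},
                         sample_size=90)
# 90 customers, roughly one send per day across a 90-day quarter
```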

Q: How did your experiences lead you to a Customer Experience role?

Jeff: In any CX role, it is imperative to understand the customer and their journey(s). I began my Twilio career by picking up the company phone to answer questions from prospects and new customers. Then, I helped SMB customers to onboard onto Twilio and deploy it for their live production needs. Later, as a Customer Success Manager, I supported large enterprises to use Twilio.

So my path at Twilio mirrored the customer journey of going from prospects to new customers to small businesses to large enterprises. I'd like to think that I've grown along with them!

By helping customers with their needs at every phase of the journey, I experienced most of the customer-facing processes firsthand. I saw where we served our customers well and where we fell short. Over time, you recognize patterns, and I wanted to make things better not just for individual customers but for groups. That's what led me to a role in CX!

Q: What was a recent challenge you faced in your role?

Jeff: One of the recent challenges I've faced is navigating the evolving expectations of the business for our team. Initially, our focus was more on helping product teams. When we moved under customer success, our attention moved toward the post-sales journey. Now, we're under systems. Our scope has broadened to the whole journey and now also includes a charter to look at internal systems.

We're looking at systems like Salesforce, Zendesk, and even systems that have not yet been created. These systems are used by various teams like sales, customer success, product teams, and support teams. The challenge here is to improve the experience of different colleagues using these systems, all while keeping in mind the customer implications.

While the business's expectations have shifted, we want to maintain our identity by first defining the customer and understanding their needs. Even though our charter and titles have changed, our mission hasn't really shifted from the customer focus.

Internal Systems and their connection with customers

Systems like Salesforce and Zendesk are used by companies to track sales and support, respectively. Customers interact with one side of these systems, such as the Zendesk web portal to submit support tickets. Customer support and sales teams use these systems, but so do other employees. Product teams use Zendesk to see trends in bugs or problems faced by customers. Accounting teams use Salesforce to review the pipeline of deals.

The information in Salesforce is used to decide which stakeholder to contact at a (customer) business. It might be used to allow customers to make feature requests. So, systems often straddle both sides of the divide: customer-facing and internal-oriented.

This emphasizes that you can solve customer experience problems, not just by changing the product or improving the product, but also by changing processes. I’ve written more about this here.

Q: You work with several stakeholders, from support, sales, and CSM to individual product teams. How much of your work is driven by the persuasion of teams, and how much by execution internally?

Jeff: My work is a balance of persuasion of other teams and internal execution, split about 50/50. Setting direction internally within the team and building consensus with external stakeholders are equally important.

For instance, internally, we're working to determine customers for our surveys and ensure this happens as intended. We decide our goals first, then collaborate with other teams. Another internal project is insourcing. This involves moving the responsibility of running queries in-house to gain more control over our operations.

When interacting with other teams, a big part of our work is defining the customer. This definition varies across the organization – marketing might consider a prospect a customer, while accounting might only consider a user as a customer above a monthly spend. Support would consider every free and paid user as a customer as any of those can create support tickets. Aligning these definitions is crucial in understanding the customer and discussing their experiences.

See-saw representing a balance between internal execution and persuading other teams.

Q: What have you seen work well in persuading stakeholders?


Jeff: Our approach to persuading stakeholders has evolved. Initially, it was more about persuasion - convincing stakeholders of our perspective and the need to act on our perceived reality. However, the approach has shifted toward collaboration: having conversations to distill reality together and getting to recommendations collectively.

How you approach people is crucial, especially in today’s macro environment in tech companies. It is about approaching people with curiosity and humility. This also provides a sense of safety. Customer Experience can often be viewed as meddling or nosy, so we aim to differentiate ourselves by being helpful, asking non-charged (not leading) questions, and seeking data rather than opinions.

It's also important to share hypotheses and work together to find the truth instead of making assumptions. This approach applies to stakeholders in every function and role, and it sets the foundation for success rather than creating difficulties.

Another key to partnering with stakeholders is understanding their business. I often think CX has the luxury of viewing things end-to-end. But, the organizational structure holds many stakeholders to a narrow purview. I realized I needed to earn the right to persuade someone. I must understand my colleague's personalities, goals, metrics, and incentives for this. I need to understand the constraints within which they must operate. I must understand where their role begins and where it ends. What is their role? What is the other team's role? I need to understand these, so my team doesn't offer irrelevant suggestions. A conversation is not possible without understanding their world and building a relationship.

Need to understand 5 things about each stakeholder to build a relationship and have the right to persuade them.

Driving Change In An Organization

Chip Heath and Dan Heath underscore the significance of including stakeholders in the decision-making process. This is not merely a token gesture, but rather a mindful move to create a shared sense of ownership and commitment among all teams involved. They explain that stakeholders actively involved in decision-making are more likely to support the initiatives because they feel their perspectives and contributions are valued.

Another benefit is that stakeholder inclusion leads to more diverse input, offering a richer array of insights and ideas that can lead to more innovative and effective solutions.

This approach of collaborative problem-solving is in contrast with the expert-mindset where one team reviews all information, comes to a recommendation and persuades another team to implement their recommendation.

Q: What are the different ways you measure customer experience?

Jeff: We publish monthly satisfaction (C-SAT, TXS) and quarterly promotion reports (NPS). But, we need to provide the big-picture as well as recent anecdotes. Leaders want to track changes in scores over time. At the same time, frontline employees want the most recent insights that connect to their work.

Beyond the scores, our team works to tell stories. We amplify other teams' Business & Product Requirement Documents (PRDs). We do that by adding customer quotes, metrics, and impact statements.

More recently, we experimented with long-form 'Customer Spotlight' writeups to illustrate vivid journeys. This enabled our colleagues to see the customer experience before and after their contribution to changing it. Although everyone focuses on their roles, I want to bring awareness of their work's upstream/downstream dependencies and effects.

One rich source we use is customer feedback from survey invitations. We ask our customers open-ended questions. This text-rich data provides us with a wealth of insights directly from the customer's perspective. Our open-ended questions are:

  • “Please help us do better! Can you tell us more about the ratings you gave us?”

  • “What is the primary reason for this score?”

But it's challenging to synthesize these responses. In the past, we cherry-picked responses that seemed most representative of the feedback, but that biased it with our interpretation. As the business and the volume of data have grown, it has become increasingly difficult to manually read all the responses and determine their significance. To help with this, we've been using tools like MonkeyLearn and AWS Comprehend to implement approaches like topic classification and topic modeling.

We're continuously experimenting with ways to turn this vast amount of text data into actionable insights in a scalable and unbiased manner. We're aiming to create a system that can repeatedly work and tag responses.

One of our takeaways that might be a gotcha for other businesses is the importance of aligning customer expressions with internal understanding. We are now mapping two dimensions together - how customers express their experiences and how we internally comprehend those expressions. This approach enables us to train our classification models better and present actionable recommendations to our internal teams.

Takeaway of automated classification was to put in the upfront effort of mapping customer speak to organization speak.
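A toy sketch of that mapping (the phrases and topic labels are invented for illustration): customer phrasing is translated into internal topic labels before counting or training a classifier.

```python
# Hypothetical "customer speak" -> "organization speak" mapping.
CUSTOMER_TO_ORG = {
    "can't log in": "authentication",
    "invoice": "billing",
    "charged twice": "billing",
    "slow dashboard": "console_performance",
}

def tag_response(text):
    """Tag a free-text survey response with internal topic labels."""
    text = text.lower()
    return sorted({topic for phrase, topic in CUSTOMER_TO_ORG.items()
                   if phrase in text})

print(tag_response("I was charged twice and the invoice was wrong"))
# -> ['billing']
```

In practice this seed mapping would bootstrap a trained classification model rather than serve as the classifier itself.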

Measuring Customer Experience

The ways to measure customer experience include:

  1. Customer experience scores, as described in a section above

  2. Customers’ text responses to open-ended questions

  3. Quantification of customer support requests, such as the number of tickets per thousand customers or the time to resolution of tickets.

  4. Quantification of escalations from customer support, customer success, or sales teams to solve problems they face or problems their customers face
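The metrics in point 3 reduce to simple ratios; a sketch with illustrative numbers:

```python
# Support-volume metrics: ticket rate and average resolution time.
def tickets_per_thousand(tickets, customers):
    return 1000 * tickets / customers

def mean_resolution_hours(resolution_hours):
    return sum(resolution_hours) / len(resolution_hours)

print(tickets_per_thousand(tickets=450, customers=120_000))  # -> 3.75
print(mean_resolution_hours([2.0, 5.5, 24.0, 1.5]))          # -> 8.25
```

Tracking these per thousand customers, rather than in absolute counts, keeps the metric comparable as the customer base grows.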

Q: What processes or tools do you use to increase the coverage of CX measurements? To improve your team’s understanding of customer experience?

Jeff: The foundational infrastructure to know about customer experience is the same as mentioned in the previous section. Processes include NPS, C-SAT, TXS surveys. Tools include Qualaroo, Qualtrics, Looker, Tableau, Sendgrid, and more. However, we've added some new surveys to our toolkit. These surveys are based more on the customer's lifecycle rather than specific interactions.

For example, we now send out a signup survey about 28 days after a new user signs up. As the signup process has lengthened over time, we've adjusted the timing of this survey to ensure we're getting feedback at the most appropriate stage.

Our understanding and approach towards customer experience have evolved. We're beginning to shift from focusing on specific teams and interactions to concentrating more on the overall customer journey. After determining our customer and outlining their journey, we now aim to measure various touchpoints throughout that journey rather than focusing solely on separate groups. This holistic approach allows us to better understand and improve the entire customer experience.

Q: On the other hand, what processes or tools do you use to reduce the effort and time by you and other teams in measuring and improving CX?

Jeff: First, we've built a CX data dictionary. While Looker is a powerful tool for those comfortable with how the company organizes its data, it can be challenging for new employees or executives who may not be as familiar. The CX data dictionary helps to clear that initial hurdle by providing clarity on what we're looking at and what's available.

We've started using Tableau more frequently. Tableau may be less flexible than Looker, requiring some upfront organization. But its lack of flexibility makes it more user-friendly, as it is harder for a user to make mistakes. In Looker, changing one variable can alter your entire perspective, whereas Tableau restricts what you can modify to only what's been pre-programmed.

This dual approach of offering greater specificity, visibility, and availability through the dictionary, and more defined context through Tableau, allows anyone in the company to easily access and understand the data.

Another central tenet of our CX program is Closing the Loop. When a customer leaves a dissatisfied rating after an interaction, or tells us they would not promote our products, we know they are dissatisfied. The Closing the Loop program asks colleagues to chat with dissatisfied customers to learn where we fell short. We reaffirm to the customer that we care, which makes this an incredibly valuable motion. We also gain golden feedback to improve.

But, the company (and industry) is trending toward leaner operations. So, teams are working hard to answer initial requests and don't have time to contact every dissatisfied customer. We want to adjust to this constraint. Our CX team is experimenting with using LLMs to replace this human outreach with an automated yet personalized response. If launched, customers will feel acknowledged, and Twilio will gain insights with less manual effort.

Balance Between Increasing Coverage And Reducing Time

When looking to measure something, you have a trade-off. You can get more accurate measurement by spending more on the measuring equipment or the time to measure it, for example a laser-guided physical measurement or averaging over more time or instances. But how much accuracy is enough to make a good-enough decision? At what point are you spending more money and time on measurement that will not help you?

The same applies to measuring customer experience. You can gather hundreds of data points but run out of VC runway, miss quarterly planning cycles, or miss sprint planning. You could build a plan to gather hundreds of data points, but the frontline teams who need to gather them don’t have the time budget for it.

I created a variant here of the content quality stick visual from Coping with Content Creation Challenges.

Strategy value stick vs CX measurement value stick. Increasing coverage gives more value but costs more. Reducing time reduces value and saves expenses.

Q: What was the most recent software or tool that you or your team signed up for? What tools do you use in your work?

Jeff: The most recent tool that my team and I experimented with is Hugging Face, an open-source platform for large language models. It was quite effective in initial testing, as we sought to enhance our understanding of AI and leverage our data more effectively. We also use a variety of other tools on a day-to-day basis. For instance, Salesforce reporting is useful for conversations with our finance or sales colleagues. Qualtrics is our go-to for surveys and dashboarding. Additionally, we utilize visual presentation tools like Lucidchart and Figma to map out customer journeys. These tools vary depending on the needs and resources of the company.

Software and Tools for Customer Experience Managers

The tools mentioned in this conversation fall into these categories:

  • Gathering data

  • Existing data sources

  • Reviewing data

  • Reaching out to customers

  • Synthesizing responses

  • Mapping the customer journey

Q: Is there anything else you would like to share about leading CX?

Jeff: An essential perspective I've found in leading CX is acknowledging the tension between scaling processes and personally engaging with individual cases. Despite the emphasis on automation and efficiency, much of our impactful insights come from reading individual cases, which may not be scalable but offers a deep understanding of each customer's journey.

Choosing when to deep-dive into cases versus when to consider the cohort as a whole can be complex. You might lean towards individual cases if high-level diagnostics indicate a concern, or if customers escalate to us directly. We also regularly ask our managers about challenging cases they've encountered recently. These approaches need a culture of trust, ensuring that these inquiries are for learning, not blaming.

A longstanding observation of mine is that the CX team is rarely credited when we find out about a customer challenge and get something right. Our comments to stakeholder teams can seem like criticism. Teams may be reluctant to immediately acknowledge the insight. The CX team may not learn that we played a role in an improvement until months/years later. This is problematic in today's macro economy when each team needs to quantify their return on investment. I believe a CX team is valuable when tasked to play the long game. The CX team's value comes in viewing issues across a longer horizon than other teams. Yet, the CX team also has the capability of zooming in and providing real-time visibility to guide the business in the here and now. I credit the team for being adaptable toward making a near-term impact so that the business recognizes our value. The effort to improve customer satisfaction is not done for pride or perception. Rather, it is a bellwether for customer retention. It has revenue implications.

Inverted Pyramid of Customer Research

Similar to Jeff, I suggest finding a balance of depth and breadth when using multiple data sources to understand the customer experience. I write more about this here.

Understand customer experience by combining a lot of data, some support tickets or verbatims, and a few conversations.

I understand and measure customer experience in these three ways. This has helped me improve customer experience and business outcomes.

In the pictorial chart above, quantitative data sits at the wide top of the inverted pyramid: you get a lot of data points but very little “why”. That data helps with prioritization, finding patterns, and building hypotheses. As you move down the pyramid, the number of data points shrinks, but the amount of “why”, or empathy, increases. Collecting all of this together helps us know the customer experience.

Anything Else?

Are there more questions you would like to ask Jeff? You can reach him on LinkedIn.


