Empathy and Data Science: A Fable of Near-Success

Author:

William Proffitt

Date Published:
December 28, 2016

Michael is an analytics director. This evening we find him frazzled. As he pulls out of the TalkThree parking lot, contemplating his next move, his radio whimpers a slow rock ballad. Michael let down the executives of his company today. The recipients of his first project aren’t experiencing the success he loudly promised, and his team has ended up back at the drawing board. How did he get here? Michael failed to consider his audience. Let’s rewind the story.

The Tale of TalkThree

TalkThree is a regional cellular carrier whose customers are canceling, or “churning,” for seemingly unpredictable reasons. TalkThree’s customer retention specialists work hard to keep customers, but the best they know to do is call customers at intervals to inquire about their satisfaction and perhaps highlight a new deal, while hoping they aren’t too annoying.

Meanwhile, TalkThree’s executives hear a compelling talk about the power of data science at a conference. They hit it off with Michael, a man with five years of data science experience who happens to share their love of beer brewing; after the conference, they hire Michael to build TalkThree’s first analytics team.

The first challenge for Michael’s team is to improve TalkThree’s customer retention program. Michael is eager to prove his team’s value. He works tirelessly to find the best possible internal data for his team, even bribing his colleagues with cake to get it. Once the data is gathered, his team takes a couple of months to understand the data and create an analytic base table. They choose a holdout data set based on their main time variable, select a great set of likely predictor variables, and begin training models. Their models soon perform beautifully, even on holdout data.
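To make the modeling step concrete, here is a minimal sketch of a time-based holdout and a simple churn classifier. It is purely illustrative: the synthetic data, the column names (account_id, snapshot_month, churned, and so on), and the choice of logistic regression are assumptions for this example, not a description of how Michael's fictional team, or any real team, would necessarily build their analytic base table.

```python
# Purely illustrative sketch: a time-based holdout split and a simple churn model.
# All column names and the synthetic data below are made up for this example.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# A toy "analytic base table": one row per account per monthly snapshot.
abt = pd.DataFrame({
    "account_id": np.arange(n),
    "snapshot_month": rng.integers(1, 13, n),   # months 1-12 of a single year
    "tenure_months": rng.integers(1, 61, n),
    "support_calls": rng.poisson(1.5, n),
    "monthly_spend": np.round(rng.normal(55, 15, n), 2),
})

# Synthetic churn label, loosely tied to the predictors.
logit = -3 + 0.4 * abt["support_calls"] - 0.02 * abt["tenure_months"]
abt["churned"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Holdout based on the main time variable: train on earlier months, test on later
# ones, so evaluation mimics predicting future churn rather than reshuffled history.
train = abt[abt["snapshot_month"] <= 9]
test = abt[abt["snapshot_month"] > 9]

features = ["tenure_months", "support_calls", "monthly_spend"]
model = LogisticRegression().fit(train[features], train["churned"])

# Score the holdout months and check how well the model separates churners.
holdout_scores = model.predict_proba(test[features])[:, 1]
print("Holdout AUC:", round(roc_auc_score(test["churned"], holdout_scores), 3))
```

Note that even a well-validated model like this one ends, by itself, as a column of scores per account; turning those scores into something a retention specialist can act on is exactly the step the rest of the story turns on.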

TalkThree’s best customer retention specialist, Meg, has a legendary reputation for smelling customer churn before it even happens. Michael’s colleagues jokingly suggest testing the performance of the model against a hypothetical customer retention team made up of Meg’s clones. Three months into their effort, the analytics team’s models are outperforming the Meg swarm five-to-one. The team schedules a happy hour.

Michael is thrilled. He had been touting his team to the executives, and now he can go to them with proof. They are amazed by the model’s power and ask him to deliver it to the customer retention team so that they can start using it.

The next day, Michael joins the all-hands meeting for the customer retention team and proudly shares his new model with the group. He quotes figures about how much more powerful it is than their interval method when selecting whom to call, and he energetically recounts his team’s development process.

Once Michael finishes, the customer retention team mumbles amongst themselves. “How are we supposed to use this model?” “What is logistic repression? That doesn’t sound very kind.” “Will this help me hit my targets earlier?” “Personally, I think I prefer supermodels.” And perhaps most troubling, “Is this model going to replace some of us?”

Later that afternoon, the customer retention team gets an enthusiastic email from Michael thanking them for their time and providing the model output as a spreadsheet. The spreadsheet contains a list of customer account IDs, each paired with a score between zero and one to indicate how likely the customer is to churn in the next month. Some retention specialists ignore the spreadsheet and others aren’t sure how to use it, but a few get excited and race to call the highest-ranked customers in the sheet. These customers seem to be better leads than the usual interval-generated calls, so a hurricane of calls begins.

The next morning, specialists again look up the highest-ranked account IDs from the spreadsheet, only to discover that all of these customers have already been called. Their manager notices this problem and informs the executive team.

The executive team approaches Michael for a solution that would make it easier for the specialists to use the scores. Michael realizes that the best option is a custom tool that serves model-generated leads to the specialists, backed by a data pipeline that keeps those leads fresh. Building it would require a software engineer and three additional months of development time.

Everyone is dismayed. The customer retention manager wanted his team to take full advantage of the model scores immediately. The executives expected Michael to deliver actionable results, and now they face additional costs. Finally, Michael had hoped to devote team members to their exciting next project of optimizing where to locate new retail shops. Nobody left TalkThree happy that day.

Takeaways

Though TalkThree isn’t a real company, this project trajectory is based on the real challenges of reaching client audiences with relevant analytics.

DJ Patil, the Chief Data Scientist of the United States, defines a data product as “a product that facilitates an end goal through the use of data.” For any analytics solution to succeed in an organization, it must reach its audience in a way that works for them. Unfortunately, the spreadsheet Michael’s audience received was a poorly conceived data product.

There’s plenty to learn from Michael’s story:

  • Michael and his team achieved technical success. Their model offered great predictions, but they failed to turn technical success into a business success for their audience.
  • The first time the analytics team met with their audience (their internal client) was when they already had output to hand over.
  • Michael’s team didn’t communicate the value of their product in words or images the retention team could understand.
  • Michael’s audience was an operations team, not a room of analysts; they needed something that fit their daily workflow, not raw model output.
  • Delivery was an afterthought for Michael’s team. They should have considered their end product from the start.
  • For the retention team to adopt the new model, they would need to alter their process substantially. This is a change management problem for TalkThree as a whole, and it doesn’t solve itself.

If you want to deliver a product that helps your audience meet their goals, you must consider what your audience is like.

Understand Your Audience

Here are some questions to help you or your team understand your audience.

  • What is their role? Are they marketers, investigators, police officers, case examiners, researchers? What matters most to them? What are the idioms of their world? Are you part of your audience? Is your audience even made up of people?
  • What kinds of decisions does your audience make? Is your audience deciding which customers to call? Which cases to investigate? Which neighborhoods to patrol? Whom to hire? Which stocks to sell? Which land to donate?
  • How big is your audience? Are there five of them? Or closer to five thousand?
  • How technical is your audience? Would they like to be able to do their own data analysis or exploration?
  • How often would your audience benefit from updated data? Yearly? Monthly? Continuously? Never? Data usually gets stale. How much does historical data matter to them?
  • What else is in your audience’s world? Imagine the moment when your audience makes contact with your data product. Where are they? What else are they doing? What else is in their environment?

This mindset can be summarized as empathy: taking the time to imagine what life is like for someone else. Talking with someone from your audience goes a long way, especially if you involve them early on. If Michael had gotten Meg on his side right away, TalkThree’s retention project would have worked out better for everyone.

Considering your audience is hard work, but it’s worth it. We succeed most when our work helps others succeed too.


Editor’s Note: While the story is fiction, the events are drawn from the experiences of the author and his colleagues.

Previously published on Predictive Analytics Times.