
ConnectionPoint: Predictive Campaign Success Indicators

Summary/Intro:

This is a growth initiative I ran as Director of Growth at ConnectionPoint Systems. It was one of several big projects I delivered that touched all aspects of the business.

 

I identified that one of our only sustainable differentiators was first-party data (I'd argue that's one of the only sustainable differentiators for any business post digital dawn). One of the problems I was chewing on was how to make customer campaigns more successful. The main revenue model was a per-transaction tip ask, so more transactions lead to more asks, which lead to more revenue. 

We provided lots of coaching for customers - onsite, in app, webinars, success team engagements, etc. But I found that customer campaign success rates were stagnant once we accounted for customer tenure. So I set about figuring out why. 

The Deep Dive

I distilled our qualitative success coaching tips and best practices into a list of Yes/No flags. Then I ran those flags against our historical campaigns and found two principal insights: 

  • The rate of campaigns adopting our best practices loosely correlated with the rate of successful campaigns (loose, but strong enough to act on, and it passed the sense check). 

  • There were several other key drivers we were not accounting for correctly. 
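For the curious, here's a rough sketch of what that flag-and-correlate pass looks like in spirit - assuming a pandas DataFrame with one Yes/No column per best practice and a met-goal flag (column and flag names are illustrative, not our actual schema):

```python
import pandas as pd

# Illustrative best-practice flags, not our actual checklist.
PRACTICE_FLAGS = ["has_video", "shared_on_social", "posted_update_week_one", "set_donation_defaults"]

def best_practice_readout(campaigns: pd.DataFrame) -> None:
    """campaigns: one row per historical campaign, boolean practice flags plus a met_goal flag."""
    # Share of best practices each campaign adopted (0.0 to 1.0).
    campaigns = campaigns.assign(adoption_rate=campaigns[PRACTICE_FLAGS].mean(axis=1))

    # Loose correlation between adopting best practices and hitting the goal.
    print("adoption vs. success corr:",
          campaigns["adoption_rate"].corr(campaigns["met_goal"].astype(float)))

    # Success rate split by each individual practice (the Y/N list run against history).
    for flag in PRACTICE_FLAGS:
        print(flag, campaigns.groupby(flag)["met_goal"].mean().round(3).to_dict())
```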

The first output was of the quick-fix variety. I summarized the analysis and presented it to my Marketing, CS, and Product counterparts. We folded the additional drivers into our existing content and the future product coaching roadmap. 

Models and Bottles

Actually, there were no bottles. Just data models... couldn't help myself :)

Next, I went back to the lab to create a predictive model for campaign success. I used an auto ML engine (Alpine Meadow), deploying classification and regression models to account for a wide set of variables such as: 

  • Time of year, day of the week, customer tenure, campaign type, campaign category, donation frequency and size, social sharing frequency and effectiveness, campaign duration, campaign text and labels, campaign images and videos, donation amount defaults, updates, supporter comments, etc. 

The two 'final' models worked in concert, processing text, numeric, and datetime variables to predict campaign outcomes based on the choices customers made while creating a campaign. The purpose was twofold - internal knowledge and external coaching. Externally, our ability to guide and coach customers through their fundraising endeavors was already a chief differentiator; productizing this was the real prize.
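I can't reproduce Alpine Meadow's internals here, but a minimal scikit-learn stand-in for the classification side - handling numeric, categorical/datetime-derived, and text features together - looks roughly like this (feature names are illustrative):

```python
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Illustrative feature lists, not the production schema.
NUMERIC = ["goal_amount", "duration_days", "customer_tenure_days", "update_count"]
CATEGORICAL = ["campaign_type", "campaign_category", "launch_day_of_week", "launch_month"]

def build_goal_completion_model() -> Pipeline:
    """Classifier over numeric, categorical/datetime-derived, and free-text campaign features."""
    preprocess = ColumnTransformer([
        ("num", StandardScaler(), NUMERIC),
        ("cat", OneHotEncoder(handle_unknown="ignore"), CATEGORICAL),
        ("text", TfidfVectorizer(max_features=2000), "campaign_text"),
    ])
    return Pipeline([("features", preprocess), ("clf", GradientBoostingClassifier())])

# Usage (assumes a DataFrame with the columns above and a Y/N goal-completion target):
# model = build_goal_completion_model()
# model.fit(campaigns_df, campaigns_df["met_goal"])
# proba = model.predict_proba(campaigns_df)[:, 1]  # probability of goal completion per campaign
```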

Here is a good operational explanation of the ML engine. The regression model ended up in a supporting role; the clean-up batter was the classification model, which grouped campaigns based on a goal completion flag (Y/N). I used the F1 metric (read more here) - F1 is the harmonic mean of precision and recall. In a nutshell, it measures both the purity and the completeness of predictions, which makes it well suited to training on imbalanced data sets. 
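For reference, the F1 calculation itself is simple; a toy example with scikit-learn (made-up labels, not our data):

```python
from sklearn.metrics import f1_score, precision_score, recall_score

# Toy labels: 1 = campaign met its goal, 0 = it did not (imbalanced on purpose).
y_true = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0, 0, 0, 0, 0]

precision = precision_score(y_true, y_pred)  # purity: of predicted "met goal", how many really did
recall = recall_score(y_true, y_pred)        # completeness: of actual "met goal", how many we caught
f1 = 2 * precision * recall / (precision + recall)

assert abs(f1 - f1_score(y_true, y_pred)) < 1e-9
print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```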

The model target is the goal completion flag (Y/N). The pipeline computes the relative importance of each variable and a predicted probability for every row (plus other outputs). I used the per-row probabilities to isolate each variable and show the difference between goal completion Y and N. 
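A rough sketch of that per-variable read-out, assuming the model's predicted probability has already been written back to each campaign row (column names are illustrative):

```python
import pandas as pd

def variable_lift(campaigns: pd.DataFrame, variable: str,
                  proba_col: str = "p_goal_completion") -> pd.Series:
    """Mean predicted probability of goal completion for each value of one variable."""
    return campaigns.groupby(variable)[proba_col].mean().sort_values(ascending=False)

# Example reads (assumes a campaigns_df with predicted probabilities and a met_goal flag):
# print(variable_lift(campaigns_df, "has_video"))            # lift with vs. without a video
# print(campaigns_df.groupby("met_goal")["update_count"].mean())  # variable spread, goal Y vs. N
```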
 

It's difficult to mix all of those variables together in a legible way, for two reasons:

 

1. The self-fulfilling nature of some important variables (such as time to activation or update count). 

2. The model isn't THAT good. This model is more of a chicken breast. It's good enough for dinner, but not special.

But as a fastidious startup growth operator, I did anyway. 

 

 

 

 

I had working models and a solid proof of concept. The key question at this point was: will customers be more successful using our product if we provide quantitative evidence for our recommendations?

 

So I researched and put together two documents to present the findings to the rest of the company; the generalized one is shown below: 

If you're like most folks, you might not read that whole thing. So I'll pull out a few of the gold nuggets on your behalf. 

 

Overview:

  • Out of campaigns that launch and receive at least 1 donation, only 14% meet or exceed their goal.

  • The choice that takes more effort is the one that predicts a higher rate of success. 

  • There are two ways to get more of an action: reduce friction or increase motivation.

    • This document focuses on increasing motivation for our customers to complete the action(s), which is needed because the optimal course is the higher-effort one.

 

Size of Prize: 

  • The aggregate size of prize across the indicators identified is $874,589 in lost revenue. 

    • This does not take into account the impact of more success on retention.

Common Themes:

  • Mitigate against hyperbolic discounting.

    • By showing the fundraiser the impact of their choices while designing a campaign, we bring a glimmer of the later reward into the present. 

  • Provide social proof.

    • Implied in the empirical evidence is a breadth of knowledge and expertise gathered by us from many other fundraisers. 

  • Activate loss aversion.

    • By showing the upside/downside in terms of results they can easily process, we make fundraisers confront the later consequences of their present choices. 

  • Activate the sunk cost fallacy.

    • The more time and effort that goes into a campaign, the higher chance the fundraiser will follow through and take the subsequent steps required to make it successful. 

  • Increase product's perceived value.

    • The more effort that goes into the campaign, the more valuable the fundraiser will perceive the campaign, and therefore our product, to be (effort justification). 

  • Customization leads to ownership.

  • Precision increases trust.

    • The precision bias predicts the more precise we are in our recommendations, the more likely customers are to believe them. 

The Action

Following the presentation of the documents to the broader company and my product team peers, it was time to transform the work into customer value. I worked with Product, CS, Engineering, and Marketing to embed the model results into our PLG motion (pre-sign up, sign-up flow, onboarding, campaign creation, campaign reporting). 

  • The criterion for success is simple - do customers who see the quantitative evidence achieve their campaign goals at a higher rate? 
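Once enough campaigns complete, that question reads out as a simple two-proportion comparison between exposed and control cohorts; a sketch with placeholder counts (not real results):

```python
from statsmodels.stats.proportion import proportions_ztest

# Placeholder counts, not real results: goal completions and total campaigns per cohort.
completions = [180, 140]   # [saw the quantitative recommendations, control]
campaigns = [1000, 1000]

z_stat, p_value = proportions_ztest(count=completions, nobs=campaigns)
lift = completions[0] / campaigns[0] - completions[1] / campaigns[1]
print(f"completion-rate lift: {lift:.1%}, p-value: {p_value:.3f}")
```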

This article does a nice job of breaking down the recommendations for customers: 6 Campaign Success Indicators. Visuals of the model's predictive results are below. 

The Outcome

After a few stress tests, we melded the model into all customer-facing materials and our product experience. I crafted the deployment plan: start with returning customers in our top 5 campaign categories, using only our top 3 success-influencing variables. 
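The first-wave gating itself is just a filter; a sketch with illustrative category lists and column names (not our actual top 5 or top 3):

```python
import pandas as pd

TOP_CATEGORIES = ["medical", "education", "memorial", "community", "animals"]  # illustrative top 5
TOP_VARIABLES = ["has_video", "update_count", "social_shares"]                 # illustrative top 3

def first_wave(customers: pd.DataFrame) -> pd.DataFrame:
    """Returning customers in the top campaign categories, carrying only top-variable guidance."""
    mask = (customers["prior_campaigns"] > 0) & customers["campaign_category"].isin(TOP_CATEGORIES)
    return customers.loc[mask, ["customer_id", "campaign_category", *TOP_VARIABLES]]
```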

 

Regrettably, I'm light on results here. A few weeks after the work you just read about, I was part of a significant reduction in force at ConnectionPoint. Early reads and customer feedback were positive, but there wasn't nearly enough time to assess the impact on campaign success. 
