
Growth Marketing Project
Experimentation & Email Template

This is a text-based summary of a growth marketing project I completed in 2023. The company is an educational coaching and tutoring marketplace. I completed a one-and-done project with two facets.

 

#1 Designing and running two growth experiments

 

The growth experiments had the following constraints: 

  • Objective -> learn as much as possible about acquiring new learners for the marketplace. 

  • Each experiment takes less than 1 week to execute and get results. 

  • $1,000 max for each. 

I've summarized the experiments below in text form, roughly the same as my proposed plan of attack.

#2 Creating a cold email template

I took a screenshot of this with specifics grayed out. This was a fun add-on that I offered as a cherry on top, leveraging my past experience in the space.

Growth Experiment #1

Messaging - direct response

Strategy overview

The two most important things I see for illuminating how to acquire new users are channels and messaging.

 

This experiment focuses on messaging. Figuring out which messages work/don't work for which segments is going to take more than a week. It's more likely to be a continuous journey. Acknowledging that, it's better to start the journey immediately.

 

This assumes we don't already know what messaging is and isn't working. A growth marketer who doesn't know what messaging will work for a segment is like a blind squirrel looking for an acorn. Finding one (or more) messages that work is a foundation to build on in the short term - and it will continue to pay dividends over the long term.

Experiment overview

 

To get an experiment running and producing reliable results in a week, I'm inclined to forgo most non-paid activities and any demand-creation tactics. That leaves us with demand capture and mostly paid levers at our disposal.

 

The first order of business is the targeting, roughly as follows:

  • exclude all current contacts

  • exclude retargeting parameters (site, socials, etc.)

  • include all low-income zip codes

  • include relevant student ages

  • include intent signals

    • These can and should be multiple 'or' logic statements (see the sketch after this list). For example: searching for LLMs, following alternative solutions on social, watching education videos, visiting SAT prep websites, etc.

    • What we want is some indication of existing demand.
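
To make the 'or' logic concrete, here's a minimal sketch of how I'd encode this targeting in a platform-agnostic way (Python, since that's what I'd reach for). The field names and signal values are illustrative, not any specific ad platform's API.

```python
# Platform-agnostic audience definition. Field names and values are
# illustrative - each ad platform expresses these differently.
audience = {
    "exclude": {
        "current_contacts": True,
        "retargeting_pools": ["site_visitors", "social_engagers"],
    },
    "include": {
        "zip_codes": "low_income_zip_list",  # sourced from public income data
        "student_ages": list(range(8, 19)),  # assumed relevant K-12 range
        # OR logic: any one of these signals indicates existing demand.
        "intent_signals_any_of": [
            "searched_llms",
            "follows_alternative_solutions",
            "watches_education_videos",
            "visited_sat_prep_sites",
        ],
    },
}

def shows_intent(user_signals: set) -> bool:
    """True if a user matches at least one intent signal ('or' logic)."""
    return bool(user_signals & set(audience["include"]["intent_signals_any_of"]))
```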

 

Second, the levers to pull:

For this test, I want to pull two levers - because I think they're the most likely to be performant channels and we don't already know much about them (which rules out search, for example).

  • Social media platforms

    • TikTok, Instagram, Reddit (plus any others I'm not thinking of that fit tightly with the target audience)

  • Audio advertising

    • Spotify, Amazon (plus any others I'm not thinking of that fit tightly with the target audience)

 

Third, how to pull them:

  • For social media platforms - a short visual loop (3-4 assets) in a series.

    • Short overview video (10-15 seconds)

    • Retarget viewers who watch 75%+ of the video with an image hitting the messaging

    • Retarget asset #2 viewers with a social proof image/video (whatever we have that's best)

    • Retarget asset #3 viewers with the messaging again and a call to action

  • For audio platforms - a two-part sequence.

    • Overview and social proof.

    • Hit the messaging and call to action.

 

These audiences are not to overlap. I'm thinking a 50/50 split, randomized at the smallest common location denominator (e.g. zip code); see the sketch below.
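
As a sketch of that split, assuming zip code ends up as the assignment unit: hash each zip code into one of the two arms so they can never overlap. Deterministic hashing keeps assignment stable if the audience list gets re-pulled.

```python
import hashlib

def assign_arm(zip_code: str) -> str:
    """Deterministically assign a zip code to one arm of a 50/50 split."""
    digest = hashlib.sha256(zip_code.encode()).hexdigest()
    return "social" if int(digest, 16) % 2 == 0 else "audio"

# Every zip code lands in exactly one arm, so the audiences never overlap.
assignments = {z: assign_arm(z) for z in ["10001", "60601", "73301", "94103"]}
```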

 

Fourth, what to pull them with:

  • I'd test 3 variations of messaging:

    • Free, easy, accessible

    • Achievement and motivation

    • Disease and cure

 

Regarding the budget: unless CPC/CPM has skyrocketed since I last saw it, we should have no problem getting enough distribution and statistical power for less than $1,000.
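
A back-of-envelope version of that claim, using the standard two-proportion sample-size formula. The baseline CTR, lift, and CPM below are assumptions for illustration, not real figures from this project.

```python
import math

def n_per_arm(p1: float, p2: float) -> int:
    """Sample size per arm to detect p1 vs. p2 (two-sided z-test on proportions)."""
    z_alpha, z_beta = 1.96, 0.84  # alpha = 0.05 (two-sided), power = 0.8
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * var / (p1 - p2) ** 2)

# Assumed: 1.0% baseline CTR vs. a 1.5% variant, at a $10 CPM.
impressions = n_per_arm(0.010, 0.015)   # ~7,740 impressions per arm
cost_per_arm = impressions / 1000 * 10  # ~$78 per arm at $10 CPM
# 3 messages x 2 channel sets = 6 arms -> roughly $470, comfortably under $1,000.
```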

 

Finally - guardrails. I don't know specifically what these would be. So speaking generally, when I experiment with customer-facing things, I always use failure guardrails to protect the customer experience. These are metrics that, if they fall below a certain level, cause the test to be paused or killed (depending on the diagnosis).
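
Speaking just as generally, the check I'd automate has roughly this shape; the metric names and thresholds are placeholders to be replaced with real business values.

```python
# Placeholder guardrail thresholds - real values depend on the business.
GUARDRAILS = {
    "landing_page_conversion_min": 0.005,  # pause if conversion falls below 0.5%
    "unsubscribe_rate_max": 0.02,          # pause if unsubscribes exceed 2%
}

def check_guardrails(metrics: dict) -> str:
    """Return 'continue' or 'pause'; a human diagnoses before any kill decision."""
    if metrics["landing_page_conversion"] < GUARDRAILS["landing_page_conversion_min"]:
        return "pause"
    if metrics["unsubscribe_rate"] > GUARDRAILS["unsubscribe_rate_max"]:
        return "pause"
    return "continue"
```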

Analysis

 

What I want to learn from this experiment is how different segments respond to the messaging AND how the different messages perform agnostic of segment.

 

The key measures of success to tether to are:

  • Site traffic

  • Sign ups

  • Sessions booked (or other activation metric)

 

These primary success measures should be, and hopefully are, the most direct and clear output metrics. There should be very little to no ambiguity here. 

 

In the K-12 education world, there is so much publicly accessible data that I'd consider it a marketing felony not to use at least some. Consequently, aside from the messaging, the variables I'm most curious to slice by are (with a rough slicing sketch below):

  • Location

    • Urbanicity

    • State

    • Metro Area/County/zip code

    • Socioeconomic community composition

    • Community professions & industries

  • School district profile

    • Teacher tenure

    • Instructional spend as a share of total spend

    • Test scores

    • Student teacher ratio

 

These additional variables increase the area and depth of the test, opening up more optionality for success to emerge. They will help construct a quantitative market map, showing if and where fit occurs. Additionally, these data can inform positioning, product development, donor development, market analysis, and customer segmentation.
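
To make the slicing concrete: a minimal pandas sketch, assuming per-zip-code experiment results can be joined to public district data. File and column names are hypothetical.

```python
import pandas as pd

# Hypothetical inputs: per-zip experiment results and public district data.
results = pd.read_csv("experiment_results.csv")   # zip_code, message, signups, spend
districts = pd.read_csv("district_profiles.csv")  # zip_code, urbanicity, test_scores, ...

df = results.merge(districts, on="zip_code", how="left")

# Slice performance by message and by a location/district variable.
summary = (
    df.groupby(["message", "urbanicity"])
      .agg(signups=("signups", "sum"), spend=("spend", "sum"))
      .assign(cost_per_signup=lambda d: d["spend"] / d["signups"])
      .sort_values("cost_per_signup")
)
```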

Next steps

 

Next steps depend greatly on the yield. I shy away from prescribing before the diagnosis is in. Let's walk through the objective of this experiment as an illustrative example.

 

  • What I want as an outcome is at least one fit between audience segment and message. If there are more, fantastic.

  • Then take the fit(s) and start running with them (organically is my preference, but if budget and ROI are there, then paid too).

 

  • In a secondary process, I'd take the messaging that didn't work or didn't work as well back to the lab.

    • These misfits are either to be refurbished and rolled back out or shelved - depending on the findings.

 

In past lives, I've had a database of messaging frameworks, segments, and rankings/metrics. I'd probably do something similar here.
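
As a sketch of what "something similar" might look like here, assuming something as lightweight as SQLite; the columns are illustrative.

```python
import sqlite3

# Illustrative schema for tracking message/segment fits over time.
conn = sqlite3.connect("messaging.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS message_results (
        message_frame TEXT,     -- e.g. 'free/easy/accessible'
        segment TEXT,           -- audience segment tested
        cost_per_signup REAL,   -- primary efficiency metric
        status TEXT             -- 'scaling', 'refurbish', or 'shelved'
    )
""")
conn.commit()
```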

 

Regardless of the outcome and direct next steps, experiment documentation is crucial. I’ve been involved with both good and bad documentation practices. Bad documentation practices have always negatively impacted future work, and more importantly, customers. So I’d document as much as possible - in a shared location and accessible format.

Dependencies & Assumptions

I believe most of these are called out inline above. Adding a couple of key things here:

  • Strong content assets are needed for the winning approach to succeed in production; they are not needed for the experiment itself to succeed (i.e., to produce learnings).

  • The methodology requires a base level of digital analytics for owned assets (website, product). If those don’t exist, I’d advocate for setting enough up to measure changes. 

  • I’m assuming: internally, there is an appetite and ability to advertise to the target users.

Growth Experiment #2

Distribution channels - direct response

Strategy overview


The two most important things I see for illuminating how to acquire new students are channels and messaging.

We know a bit about channels (hence channels are #2, behind messaging), but not enough to be dangerous. After the messaging experiment, we know a bit about who to reach and a bit about what to say to them; now I want to know where to say it.

 

If the messaging experiment successfully unearthed at least one match between the audience and message - and we are successful in finding at least one distribution channel through this experiment - we have a solid foundation to build from.
 

Experiment overview

To get an experiment running and producing reliable results in a week, I'm inclined to forgo most non-paid activities and any demand-creation tactics. That leaves us with demand capture and mostly paid levers at our disposal.

 

The first order of business is the targeting:

  • exclude all current contacts

  • exclude retargeting parameters (site, socials, etc.)

  • where possible, exclude experiment #1 audiences

  • include all low-income zip codes

  • include relevant student ages

  • include intent signals

    • These can and should be multiple 'or' logic statements, as in the targeting sketch for experiment #1. For example: searching for LLMs, following alternative solutions on social, watching education videos, visiting SAT prep websites, etc.

    • What we want is some indication of existing demand. 

 

Second, the channels to include:

The objective here is to delve into areas we don't already know will work and that have asymmetric upside - which rules out search.

  • Social media

    • TikTok, Instagram, Reddit (plus any others that fit tightly with the target audience; not including YouTube because it's been expensive in my past lives).

    • The goal for social is to stack-rank the channels against one another. Very likely there is a fit (or multiple) in here.

  • Audio platforms

    • Spotify, Amazon (plus any others that fit tightly with the target audience, such as Apple/SoundCloud).

  • Referrals

    • Current adult contacts (whom it's appropriate to email)

      • Multiple segments - donors, employees, volunteers, school employees, parents

    • Influencer types

      • I don't know enough about the space to be specific here.

      • Ideally, we could get this pro bono or at a greatly reduced cost. If not, then this is probably a no-go for the budget.

 

Third, how to pull the levers for each channel set:

  • Social platforms - same playbook as experiment #1, assuming it didn't bomb.

    • A short visual loop (3-4 assets) in a series.

      • Short overview video (10-15 seconds)

      • Retarget viewers who watch 75%+ of the video with an image hitting the messaging

      • Retarget asset #2 viewers with a social proof image/video (whatever we have that's best)

      • Retarget asset #3 viewers with the messaging again and a call to action

  • Audio platforms - a two-part sequence.

    • Overview and social proof.

    • Hit the messaging and call to action.

  • Referrals

    • Current adult contacts (whom it's appropriate to email)

      • Share unique landing pages/ref links (whichever is easier in the website builder; see the link sketch after this list)

      • Communicate all the great stuff we've been up to.

      • Call to action is to share the good word

    • Influencer types

      • Share unique landing pages/ref links (whichever is easier in the website builder)

      • Call to action is to share the good word.

        • Hard to get into specifics without knowing what the partnerships/inventory would be.
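
For the unique links themselves, a minimal sketch of generating trackable referral URLs with UTM parameters; the base URL and segment names are made up.

```python
from urllib.parse import urlencode

BASE_URL = "https://example.org/refer"  # hypothetical landing page

def referral_link(segment: str, referrer_id: str) -> str:
    """Build a trackable referral link for a given contact segment."""
    params = {
        "utm_source": "referral",
        "utm_medium": segment,  # e.g. 'donors', 'volunteers', 'parents'
        "utm_campaign": "channel_experiment_2",
        "ref": referrer_id,
    }
    return f"{BASE_URL}?{urlencode(params)}"

# Example: referral_link("donors", "d123")
```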

 

Fourth, what to distribute:

The winning message from experiment #1 (aka most performant). 

  • Tailored to each channel - e.g. audio for audio, a visual for Instagram, a video clip for TikTok, text for Reddit, text/visuals for referrals.

 

Regarding the budget: unless CPC/CPM has skyrocketed since I last saw it, we should have no problem getting enough distribution and statistical power for less than $1,000 (see the back-of-envelope check in experiment #1).

 

Finally - guardrails, as in experiment #1. I don't know specifically what these would be. So speaking generally, when I experiment with customer-facing things, I always use failure guardrails to protect the customer experience. These are metrics that, if they fall below a certain level, cause the test to be paused or killed (depending on the diagnosis).

Analysis

 

What I want to learn from this experiment is how different channels perform (cost & level of effort vs. gain), individually and comparatively, and to get insight into what audience behavior looks like in different channels (a ranking sketch follows the success measures below).

 

The key measures of success to tether to are:

  • Site traffic

  • Sign ups

  • Sessions booked (or other activation metric)

 

These primary success measures should be, and hopefully are, the most direct and clear output metrics. There should be very little to no ambiguity here. 
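
For the comparative piece called out above, a minimal sketch of stack-ranking channels on cost per signup; the channel names and numbers are placeholders.

```python
# Placeholder per-channel results: (spend in $, signups).
channels = {
    "tiktok":    (250, 31),
    "instagram": (250, 22),
    "spotify":   (200, 12),
    "referrals": (50, 9),
}

# Rank by cost per signup, ascending (cheapest acquisition first).
ranked = sorted(channels.items(), key=lambda kv: kv[1][0] / kv[1][1])
for name, (spend, signups) in ranked:
    print(f"{name}: ${spend / signups:.2f} per signup")
```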

 

Aside from the channels, the variables I'm most curious to slice by are:

  • Location

    • Urbanicity

    • State

    • Metro Area/County/zip code

    • Socioeconomic community composition

    • Community professions & industries

  • School district profile

    • Teacher tenure

    • Instructional spend as a share of total spend

    • Test scores

    • Student teacher ratio

 

These additional variables increase the area and depth of the test, opening up more optionality for success to emerge. They will help construct a quantitative market map, showing if and where fit occurs. Additionally, these data can inform positioning, product development, donor development, market analysis, and customer segmentation.

Next steps

 

Next steps depend greatly on the yield. I shy away from prescribing before the diagnosis is in. Let's walk through the objective of this experiment as an illustrative example.

 

  • What I want as an outcome is at least one fit between audience segment and channel. If there are more, fantastic.

  • Then take the fit(s) and start running with them (organically is my preference, but if budget and ROI are there, then paid too).

 

  • In a secondary process, I'd take the channels that didn't work or didn't work as well back to the lab. 

    • These misfits are either to be reengineered and rolled back out or shelved - depending on the findings. 

 

This is a first step in establishing and operationalizing a marketing mix. The learnings ladder into that down the road. In the short term, the ideal next step is to have a promising motion or multiple that we can lean on to create value while the bigger picture comes together.

Dependencies & Assumptions

I believe most of these are called out inline above. Adding a couple of key things here:

  • Strong content assets are needed for the winning approach to succeed in production; they are not needed for the experiment itself to succeed (i.e., to produce learnings).

  • The methodology requires a base level of digital analytics for owned assets (website, product). If those don’t exist, I’d advocate for setting enough up to measure changes. 

  • I’m assuming: internally, there is an appetite and ability to advertise to the target users. 

  • I’m assuming: a level of comfort with tapping the referral segments.

Cold Email Template

[Screenshot of the cold email template, with specifics grayed out.]
