Experimentation and trying new things is the key to success in most aspects of life, from science to art to marketing. Marketers have strong intuition for coming up with hypotheses. But how do we find out which hypotheses are true and which are false?
Our intuition may not always be right, and a campaign's effectiveness can change over time. We need to run experiments and have the data to back up our decisions; when we don't experiment, we take on risk. We can try something different every time: offer a different incentive, enable Send Time Optimization for better engagement, set a different cadence, and so on.
A study by a major travel site showed that only 10% of experiments generated positive results; modifications intended to improve something failed 90% of the time. Experimentation goes beyond A/B and multivariate testing and leads into a culture of curiosity and continuous testing.
How to Build & Support a Culture of Curiosity:
- Ask how you can help customers have a better experience with your brand
- Test It and Trust It
- Not all experiments yield expected results
- The more experiments you run, the lower the risk
Path Optimizer, recently introduced in Marketing Cloud's Journey Builder, provides the ability to test multiple journey paths against a campaign goal and gives analytical insight into the data. Beyond real-time stats while the test is running, once a winner is chosen we can reference snapshots of the test data as well as ongoing data after the test: the number of contacts that have flowed through, plus their engagement metrics both during and after the test. We can review the post-test data to confirm performance matches expectations and plan our next test accordingly.
Anatomy of a Test:
Path Optimizer configuration includes 4 options:
- Winner Type: Email Engagement vs Manual
- Engagement Period: Wait before calculating winner
- Split: 2 to 10 paths with % random distribution
- Holdback: Save x% of contacts to Wait for Winner
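The four settings above can be modeled as a simple data structure. This is purely an illustrative sketch; the class and field names are hypothetical and do not reflect any actual Marketing Cloud API, but it makes the constraints from the list concrete (2 to 10 paths, and the split percentages plus the holdback accounting for all contacts):

```python
from dataclasses import dataclass

@dataclass
class PathOptimizerTest:
    """Hypothetical model of a Path Optimizer configuration."""
    winner_type: str              # "email_engagement" or "manual"
    engagement_period_days: int   # wait this long before calculating the winner
    split: list                   # random-distribution percentage per test path
    holdback_percent: float = 0.0 # contacts saved in Wait for Winner

    def __post_init__(self):
        # Path Optimizer supports between 2 and 10 test paths.
        if not 2 <= len(self.split) <= 10:
            raise ValueError("a test needs 2 to 10 paths")
        # Split percentages and the holdback must account for every contact.
        if round(sum(self.split) + self.holdback_percent, 6) != 100:
            raise ValueError("split percentages plus holdback must total 100%")

# Example: two 45% paths with a 10% holdback waiting for the winner.
test = PathOptimizerTest("email_engagement", 7, [45.0, 45.0], holdback_percent=10.0)
```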
Email Engagement is based on the same logs as the Engagement Split activity and includes all emails across a path. A single path can contain multiple email activities; Path Optimizer combines the engagement rates of all emails on a path and selects the winner by the highest percentage of clicks or opens, or the lowest percentage of unsubscribes. This is also why we can't evaluate engagement against an individual email's click metrics.
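The pooling described above can be sketched in a few lines. The dict shapes here are a hypothetical stand-in for the underlying send logs; the point is that every email on a path contributes to one combined rate, so individual emails are never compared:

```python
def path_engagement_rate(emails, metric="clicks"):
    """Pool the logs of every email on one path into a single rate.

    `emails` is a list of per-email tallies, e.g. {"sends": 500, "clicks": 40}
    (an illustrative shape, not the actual Marketing Cloud log schema).
    """
    total_sends = sum(e["sends"] for e in emails)
    total_events = sum(e[metric] for e in emails)
    return total_events / total_sends if total_sends else 0.0

# Path A has two emails; Path B has one. Their logs are pooled per path.
path_a = [{"sends": 500, "clicks": 40}, {"sends": 500, "clicks": 60}]
path_b = [{"sends": 1000, "clicks": 80}]

# Highest combined click rate wins (for unsubscribes you would take the lowest).
winner = max([("A", path_a), ("B", path_b)],
             key=lambda p: path_engagement_rate(p[1]))[0]
# Path A wins: 100/1000 = 10% beats 80/1000 = 8%.
```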
The real power of Path Optimizer is unlocked when we use Manual Selection. It allows you to monitor any data inside or outside of Marketing Cloud and pick the winner based on external analysis of that data. Example use cases for manual selection: pull promo codes in from a point-of-sale system and pick the code customers redeem most often, or push data back to Sales Cloud so you can run reports and determine the winner based on milestones such as lead conversion, and many more.
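The promo-code use case could look like the following. This is a minimal sketch, assuming each test path sends a distinct promo code and that redemptions can be exported from the point-of-sale system; the function and data shapes are invented for illustration, since the source only says the analysis happens outside Marketing Cloud:

```python
from collections import Counter

def winner_by_promo_code(redemptions, code_to_path):
    """Pick the winning journey path from external point-of-sale data.

    redemptions  -- promo codes redeemed at the POS (hypothetical export)
    code_to_path -- which journey path sent each code
    """
    counts = Counter(code_to_path[code]
                     for code in redemptions if code in code_to_path)
    return counts.most_common(1)[0][0] if counts else None

# "SPRING10" was redeemed twice, "SAVE20" once, so Path A is chosen manually.
winner = winner_by_promo_code(
    ["SPRING10", "SPRING10", "SAVE20"],
    {"SPRING10": "Path A", "SAVE20": "Path B"},
)
```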
Complex Use Cases:
Calculate Winner in Tableau: You have a complex algorithm for selecting your winner that involves conversions over a certain dollar amount, all of which is calculated and stored outside of Marketing Cloud. You can use a Custom Activity to send relevant data about the contact on each path in your test to Tableau. From there you track conversion and order totals from your e-commerce website, which is also synced to Tableau, to calculate the winner once your data set reaches statistical significance.
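The post mentions waiting until the data set reaches statistical significance but doesn't specify the calculation, so as one plausible sketch, a standard two-proportion z-test on conversion counts could serve as that check (whether it runs in Tableau or elsewhere):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test on conversion rates.

    Returns the z statistic; |z| > 1.96 is significant at roughly the
    95% confidence level. One common significance check, not necessarily
    the calculation the original team used.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# 120/1000 conversions on path A vs 90/1000 on path B.
z = two_proportion_z(120, 1000, 90, 1000)
significant = abs(z) > 1.96  # True here: the 12% vs 9% gap is significant
```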
Calculate Winner in Sales Cloud:
Here’s a recording of the Technical Marketers meeting by Rob Everetts, Bill Jennings and Matthew Hager, where you can find more details, use cases, a demo, and much more. Still hungry to learn more? Access this Trailhead module to gain in-depth insights into Path Optimizer. Be sure to follow Guilda Hilaire for the latest updates on the schedule and topics for upcoming meetings.
Click here to find the curated list of resources for all the previous Technical Marketers Meetings.