PXL Prioritization Framework
This is blog post #9 in the series in which I sum up my learning experience from attending CXL Institute's Growth Marketing Minidegree.
Last week I started talking about Research and Testing. Today I will continue with the Prioritization Framework and review the following key takeaways:
> Now, You Have Your Own Problems -- What to Do Next?
After conducting the research process (heuristic analysis, technical analysis, analytics, heat maps, and user testing), you now have your very own list of problems. It's not a list from a random blog post of 55 things you should test.
So, research is there to identify what the problems are and why they are problems.
What to do next? Well, it's time to build hypotheses about changes/solutions to the problems you have discovered. After that comes creating a treatment: restructuring landing pages and coding them up in the testing tool. We then test the treatment, analyze the results in a post-test analysis, and run follow-up experiments.
> Categorize The Discovered Problems Into 4 Types of Issues
After gathering problems/issues, we categorize them into 4 types so that it becomes easier to prioritize tests later on. The four types of issues (Peep's way) are as follows:
- Instrument
Issues where something is measured wrong or is broken; most measurement-related issues fall here.
- Just Do It
Issues that are easy to fix and need no test, things that can be done once and for all, like making the font a bit bigger.
- Test / Hypothesis
Most issues/problems fall here, because even though we may know a solution to a problem, we don't know the optimal solution. That's why testing is often the key.
- Investigate
Sometimes you're not sure whether the problem you uncovered is really a problem; maybe it doesn't exist, maybe it's not a problem. That's why you need to investigate further.
> Test Prioritization Framework
You usually end up with a list of problems that captures, for each issue: which category it falls into; the background (why is this issue an issue?); what we should do about it; and a rating of how terrible the problem is. Is it a five-star problem, extremely bad, or a one-star problem, a minor usability issue we should fix eventually? Out of the problems we categorized, we need to test solutions. But what should we test first? We need a test prioritization framework for that.
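To make this concrete, here's how I'd picture such an issue log in code. This is just my own minimal sketch in Python (the field names and the example issue are mine, not from the course):

```python
from dataclasses import dataclass
from enum import Enum

# The four issue types from Peep's categorization above.
class IssueType(Enum):
    INSTRUMENT = "instrument"      # something is measured wrong or broken
    JUST_DO_IT = "just do it"      # obvious fix, no test needed
    TEST = "test / hypothesis"     # solution unknown, needs an A/B test
    INVESTIGATE = "investigate"    # not yet sure it is a real problem

@dataclass
class Issue:
    title: str
    category: IssueType
    background: str   # why is this issue an issue?
    action: str       # what should we do about it?
    severity: int     # 1 star (minor) to 5 stars (extremely bad)

issues = [
    Issue(
        title="Checkout form breaks on mobile",
        category=IssueType.TEST,
        background="Analytics show heavy mobile drop-off on step 2",
        action="Test a shorter, single-page checkout form",
        severity=5,
    ),
]

issues.sort(key=lambda i: i.severity, reverse=True)  # worst problems first
```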
We could use popular frameworks like the ICE framework or the PIE framework, but they can be challenging because you have to decide upfront how well the test is going to do. It becomes "I have a good feeling about this one," which usually means you just like your own idea too much. That's why Peep developed the PXL Prioritization Framework.
> PXL Prioritization Framework
PXL assesses ideas/assumptions in a binary fashion: each criterion is a yes-or-no question. One criterion is whether the change is above the fold. If it's above the fold, the idea gets a point; if it's below the fold, it doesn't. This is about impact, because the faster people can see what's changed, the higher the likelihood of a positive impact. Is the change noticeable within five seconds? If it's noticeable, the idea gets two points; otherwise zero. Is the treatment adding or removing something? This, too, is about the potential impact of the test. Is it designed to increase user motivation? On average, treatments that work on the motivation side are 5x to 10x more successful than tests that work on the friction side. If people go into checkout with an "I want to sign up for something" attitude, they're ready to put in their info even if there are some friction points along the way.
Then, looking at the data: is the idea addressing an issue we discovered through user testing? Was it discovered through qualitative research, analytics, or heat maps? What this means is that if somebody comes in with a test idea and has no data behind it, it scores zeros on these criteria, and that idea goes to the back of the line. As a result, the percentage of winning tests goes up dramatically. Ease of implementation is scored as well, so we get an actual quote from the development team of how many hours the change will take.
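Putting the criteria together, here's a rough sketch of how PXL-style scoring might look in code. The point values follow what I described above (1 for above the fold, 2 for noticeable within five seconds, and so on); the weights and the helper below are my own illustration and may differ from Peep's original spreadsheet:

```python
from dataclasses import dataclass

@dataclass
class TestIdea:
    name: str
    above_the_fold: bool            # visible without scrolling?
    noticeable_in_5s: bool          # is the change obvious at a glance?
    adds_or_removes_element: bool   # bigger potential impact than a tweak
    increases_motivation: bool      # motivation beats friction fixes
    backed_by_user_testing: bool    # evidence, not opinion
    backed_by_qual_or_analytics: bool
    dev_hours: int                  # actual quote from the development team

def pxl_score(idea: TestIdea) -> int:
    """Binary PXL-style scoring: every criterion is a yes/no question."""
    score = 0
    score += 1 if idea.above_the_fold else 0
    score += 2 if idea.noticeable_in_5s else 0
    score += 1 if idea.adds_or_removes_element else 0
    score += 1 if idea.increases_motivation else 0
    # No data behind the idea -> zeros here, and it goes to the back of the line.
    score += 1 if idea.backed_by_user_testing else 0
    score += 1 if idea.backed_by_qual_or_analytics else 0
    # Ease of implementation: cheaper changes score higher.
    score += 1 if idea.dev_hours <= 8 else 0
    return score

idea = TestIdea(
    name="Make the headline benefit-driven",
    above_the_fold=True, noticeable_in_5s=True,
    adds_or_removes_element=False, increases_motivation=True,
    backed_by_user_testing=True, backed_by_qual_or_analytics=True,
    dev_hours=4,
)
print(idea.name, pxl_score(idea))  # higher-scoring ideas get tested first
```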
> A/B Testing
How long should the A/B test run? When are we done with the test? Statistical significance might be the answer: it's an exercise in algebra that helps you know when to end the experiment.
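For illustration, here's a minimal sketch of such a significance calculation, assuming a standard two-proportion z-test (my own example, not from the course material):

```python
from math import sqrt, erf

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-tailed z-test comparing two conversion rates.

    conv_*: number of conversions; n_*: number of visitors.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-tailed p-value

# Control: 200 conversions out of 4,000; variation: 250 out of 4,000.
print(f"p-value: {p_value(200, 4000, 250, 4000):.4f}")  # ~0.015, under the usual 0.05 bar
```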
However, statistical significance tells you nothing until the following two main criteria are met: the sample size is reached, and full business cycles are covered.
- Sample Size
Do we have enough people going through the experiment? Calculating the sample size is important to determine how many is enough, and there are many tools just for that. In the end, you need to put in two main variables: the current conversion rate of the page you're testing (not the site-wide average), and, based on the conversion rate of the page or group of pages you're running the test on, the minimum uplift you want to be able to detect. The tool then tells you how many people you need per variation (see the sketch after this list).
If you can't run a test within four weeks, you can't run tests at all. But what do you do when you have a low-traffic website without enough traffic to run tests? Implement your best possible idea for those problems and just set it live.
- Business Cycles
Conversion rates fluctuate day by day: Monday is different from Tuesday, and people behave differently on Mondays and Fridays. So always test one week at a time. You test for a week; do we have enough sample size? No? Add another week. Enough after two weeks? No? Add another week. You don't end the test in the middle of a week: if it starts on a Tuesday, it has to end on a Monday, in seven-day cycles.
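Here's a back-of-the-envelope sketch of both criteria, using the common rule-of-thumb sample size formula for roughly 80% power at 95% confidence (the helper functions and numbers are my own illustration; dedicated calculators are more precise):

```python
from math import ceil

def sample_size_per_variation(baseline_cr, min_detectable_uplift):
    """Rule-of-thumb sample size: n = 16 * p(1-p) / delta^2.

    baseline_cr: conversion rate of the page(s) under test, not site-wide.
    min_detectable_uplift: relative lift to detect, e.g. 0.10 for +10%.
    """
    delta = baseline_cr * min_detectable_uplift  # absolute difference
    return ceil(16 * baseline_cr * (1 - baseline_cr) / delta ** 2)

def test_duration_weeks(needed_per_variation, variations, weekly_visitors):
    """Round the runtime up to full weeks so whole business cycles are covered."""
    total_needed = needed_per_variation * variations
    return ceil(total_needed / weekly_visitors)

n = sample_size_per_variation(baseline_cr=0.05, min_detectable_uplift=0.10)
weeks = test_duration_weeks(n, variations=2, weekly_visitors=20_000)
print(f"{n} visitors per variation, run for {weeks} full week(s)")
# If this comes out well above 4 weeks, the page may not have enough traffic to test.
```

Rounding the duration up to whole weeks mirrors the advice above: even if the sample size is reached mid-week, the test keeps running until the seven-day cycle completes.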
I will continue with Conversion Research in next week's blog post.