Conversion rate optimization (CRO) is one of the most impactful things you can do as a marketer.
I mean, bringing traffic to a website is important (because without traffic you’re designing for an audience of crickets). But without a cursory understanding of conversion optimization—including research, data-driven hypotheses, A/B tests, and analytical capabilities—you risk making decisions for your website traffic using only gut feel.
CRO can give your marketing team ideas for what you can be doing better to convert visitors into leads or customers, and it can help you discover which experiences are truly optimal, using A/B tests.
However, as with many marketing disciplines, conversion optimization is constantly misunderstood. It’s definitely not about testing button colors, and it’s not about proving to your colleagues that you’re right.
I’ve learned a lot about how to do CRO properly over the years, and below I’ve compiled 20 conversion optimization tips to help you do it well, too.
Conversion Optimization Tip 1:
Learn how to run an A/B test properly
Running an A/B test (an online controlled experiment) is one of the core practices of conversion optimization.
Testing two or more variations of a given page to see which performs best can seem easy, thanks to ever-simpler testing software. But it’s still a methodology that uses statistical inference to decide which variant should be delivered to your audience, and there are a lot of fine distinctions that can throw things off.
There are many nuances we could get into here—Bayesian vs. frequentist statistics, one-tailed vs. two-tailed tests, etc.—but to make things simple, here are a few testing rules that should help you breeze past most common testing mistakes:
- Always determine a sample size in advance and wait until your experiment is over before looking at “statistical significance.” You can use one of several online sample size calculators to get yours figured out (or script it yourself; see the sketch after this list).
- Run your experiment for a few full business cycles (usually weekly cycles). A normal experiment may run for three or four weeks before you call your result.
- Choose an overall evaluation criterion (or north star metric) that you’ll use to determine the success of an experiment. We’ll get into this more in Tip 4.
- Before running the experiment, clearly write your hypothesis (here’s a good article on writing a true hypothesis) and how you plan to follow up on the experiment, whether it wins or loses.
- Make sure your data tracking is implemented correctly so you’ll be able to pull the right numbers after the experiment ends.
- Avoid interaction effects if you’re running multiple concurrent experiments.
- QA your test setup and watch the early numbers for any wonky technical mistakes.
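If you’d rather script the sample size math than trust a black-box calculator, here’s a minimal sketch in Python using the standard two-proportion, normal-approximation formula that most online calculators implement. The baseline rate and minimum detectable effect below are hypothetical inputs you’d swap for your own:

```python
# A minimal sketch: sample size per variant for a two-proportion test,
# using the standard normal-approximation formula. All inputs are
# assumptions to replace with your own numbers.
from math import sqrt, ceil
from scipy.stats import norm

def sample_size_per_variant(baseline, rel_mde, alpha=0.05, power=0.80):
    p1 = baseline
    p2 = baseline * (1 + rel_mde)       # smallest relative lift worth detecting
    p_bar = (p1 + p2) / 2
    z_alpha = norm.ppf(1 - alpha / 2)   # two-tailed test
    z_beta = norm.ppf(power)
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# A 3% baseline conversion rate and a 10% relative MDE:
print(sample_size_per_variant(0.03, 0.10))  # roughly 53,000 visitors per variant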
I like to put all of the above fine details in an experiment document with a unique ID so that it can be reviewed later—and so the process can be improved upon with time.
An example of experiment documentation using a unique ID.
Tip 1: Ensure you take the time to set up the parameters of your A/B test properly before you begin. Early mistakes and careless testing can compromise the results.
Conversion Optimization Tip 2:
Learn how to analyze an A/B test
The ability to analyze your test after it has run is obviously important as well (and can be pretty nuanced depending on how detailed you want to get).
For instance, do you call a test a winner if it’s above 95% statistical significance? Well, that’s a good place to begin, but there are a few other considerations as you develop your conversion optimization chops:
- Does your experiment have a sample ratio mismatch?
Basically, if your test was set up so that 50% of traffic goes to the control and 50% goes to the variant, your end results should reflect this ratio. If the ratio is far off, you may have had a buggy experiment. (Here’s a good calculator to help you determine this, and there’s a quick scripted check sketched a bit further down.)
- Bring your data outside of your testing tool.
It’s nice to see your aggregate data trends in your tool’s dashboard, and their math is a good first look, but I personally like to have access to the raw data. This way you can analyze it in Excel and really trust it. You can also import your data to Google Analytics to view the effects on key segments.
This can also open up the opportunity for further insight-driven experiments and personalization. Does one segment react overwhelmingly positively to a test you’ve run? That might be a good opportunity to implement personalization.
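To make the sample ratio mismatch check concrete, here’s a minimal sketch using a chi-square goodness-of-fit test; the visitor counts are made up for illustration:

```python
# A minimal sketch: sample ratio mismatch (SRM) check via a chi-square
# goodness-of-fit test. The visitor counts are hypothetical.
from scipy.stats import chisquare

observed = [50_123, 48_411]         # control and variant visitor counts
expected = [sum(observed) / 2] * 2  # the 50/50 split you configured

stat, p_value = chisquare(observed, f_exp=expected)
if p_value < 0.01:
    print(f"Possible SRM (p = {p_value:.2g}); debug before trusting results")
else:
    print(f"No SRM detected (p = {p_value:.2g})")
```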
Checking your overall success metric first (winner, loser, inconclusive) and then moving to a more granular analysis of segments and secondary effects is common practice among CRO practitioners.
Here’s how Chris McCormick from PRWD explains the process:
Once we have a high level understanding of how the test has performed, we start to dig below the surface to understand if there are any patterns or trends occurring. Examples of this would be: the day of the week, different product sets, new vs returning users, desktop vs mobile etc.
Also, there are tons of great A/B test analysis tools out there, like this one from CXL.
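And if you’d rather script the significance check than spreadsheet it, here’s a minimal sketch of the pooled two-proportion z-test that underlies most testing dashboards (the counts below are hypothetical):

```python
# A minimal sketch: a pooled two-proportion z-test on raw counts pulled
# out of your testing tool. All counts below are hypothetical.
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(conversions_a, visitors_a, conversions_b, visitors_b):
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))   # two-tailed
    return p_b / p_a - 1, p_value          # relative lift and p-value

lift, p = two_proportion_ztest(1_205, 49_000, 1_330, 49_100)
print(f"lift = {lift:.1%}, p = {p:.3f}")   # ~10% lift, p ≈ 0.014 here
```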
Tip 2: Analyze your data carefully by ensuring that your sample ratio is correct. Then export it to a spreadsheet where you can check your overall success metric before moving on to more granular indicators.
Conversion Optimization Tip 3:
Learn how to design your experiments
At the beginning, it’s important to consider the kind of experiment you want to run. There are a few options in terms of experimental design (at least, these are the most common ones online):
- A/B/n test
- Multivariate test
- Bandit test
A/B/n test
An A/B/n test is what you’re probably most used to.
It splits traffic equally among two or more variants, and you determine which variant won based on its effect size (assuming that other factors like sample size and test duration were sufficient).
An A/B/n test with four variants. (Image source)
Multivariate test
In a multivariate test, on the other hand, you test several variables on a page at once, with the goal of learning the interaction effects among elements.
In other words, if you were changing a headline, a feature image, and a CTA button, a multivariate test would tell you the optimal combination of those elements and how they affect each other when grouped together. (With two versions of each, that’s already 2 × 2 × 2 = 8 variants to split traffic across.)
Generally speaking, experts seem to run about ten A/B tests for every multivariate test. The strategy I go by is:
- Use A/B testing to determine the best layouts at a more macro level.
- Use MVT to polish the layouts to make sure all the elements interact with each other in the best possible way.
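To see why MVT demands so much more traffic, here’s a quick sketch that enumerates the full combination space for three elements with two versions each (the element names are placeholders):

```python
# A minimal sketch: enumerating a multivariate test's combination space.
# Three elements with two versions each already yield eight variants
# competing for your traffic.
from itertools import product

headlines = ["Headline A", "Headline B"]
hero_images = ["Image A", "Image B"]
cta_buttons = ["CTA A", "CTA B"]

variants = list(product(headlines, hero_images, cta_buttons))
print(f"{len(variants)} combinations to split traffic across")  # 8
for variant in variants:
    print(variant)
```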
Bandit test
Bandits are a bit different. They’re algorithms that automatically update their traffic distribution based on indications of which variant is best. Instead of waiting four weeks for a test to finish and then exposing the winner to 100% of traffic, a bandit shifts its distribution in real time.
Bandits are great for campaigns where you’re looking to minimize regret, such as short-term holiday campaigns and headline tests. They’re also good for automation at scale and targeting, specifically when you have lots of traffic and targeting rules and it’s tough to manage them all manually.
Unfortunately, while they are simpler from an experimental design perspective, they are much harder for engineers to implement technically. This is probably why they’re less common in the general marketing space, but an interesting topic nonetheless. If you want to learn more about bandits, read this article I wrote on the topic a few years ago.
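To make the idea concrete, here’s a minimal sketch of an epsilon-greedy bandit, one of the simplest algorithms in this family (production tools often use more sophisticated approaches like Thompson sampling). The conversion rates are simulated:

```python
# A minimal sketch: an epsilon-greedy bandit. It mostly serves the
# best-performing variant but keeps exploring a fraction of the time,
# shifting traffic in real time as conversions accumulate.
import random

class EpsilonGreedyBandit:
    def __init__(self, n_variants, epsilon=0.1):
        self.epsilon = epsilon
        self.shows = [0] * n_variants
        self.conversions = [0] * n_variants

    def choose(self):
        if random.random() < self.epsilon:           # explore occasionally
            return random.randrange(len(self.shows))
        rates = [c / s if s else 0.0                 # exploit the leader
                 for c, s in zip(self.conversions, self.shows)]
        return rates.index(max(rates))

    def record(self, variant, converted):
        self.shows[variant] += 1
        self.conversions[variant] += int(converted)

# Simulated traffic where variant 1 truly converts better (3% vs. 2%):
bandit = EpsilonGreedyBandit(n_variants=2)
true_rates = [0.02, 0.03]
for _ in range(100_000):
    v = bandit.choose()
    bandit.record(v, random.random() < true_rates[v])
print(bandit.shows)  # the bulk of traffic ends up on variant 1
```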
Tip 3: Consider the kind of experiment you want to run. Depending on your needs, you might run an A/B/n test, a multivariate test, a bandit test, or some other form of experimental design.
Conversion Optimization Tip 4:
Choose your OEC
Returning to a point made earlier, it’s important to choose the north star metric you care about: this is your OEC (Overall Evaluation Criterion). If you don’t state it and agree upon it up front as stakeholders in an experiment, you’re inviting ambiguous results and cherry-picked data.
Basically, we want to avoid the problem of HARKing: hypothesizing after results are known.
Twitter, for example, wrote on their engineering blog that they solve this by stating their overall evaluation criterion up front:
One way we guide experimenters away from cherry-picking is by requiring them to explicitly specify the metrics they expect to move during the set-up phase….An experimenter is free to explore all the other collected data and make new hypotheses, but the initial claim is set and can be easily examined.
The term OEC was popularized by Ronny Kohavi at Microsoft, who has written many papers on the topic, but the sentiment is widely shared by people who run lots of experiments. You need to choose which metric really matters, and which metric you’ll make decisions with.
Tip 4: In order to avoid ambiguous or compromised data, state your OEC (Overall Evaluation Criterion) before you begin and hold yourself to it. And never hypothesize after results are known.
Conversion Optimization Tip 5:
Some companies shouldn’t A/B test
You can still do optimization without A/B testing, but not every company can or should run A/B tests.
It’s a simple mathematical limitation:
Some businesses just don’t have the volume of traffic or discrete conversion events to make it worth running experiments.
Getting an adequate amount of traffic to a test helps ensure its validity; without enough visitors, you’ll never reach the sample size a fully baked test requires.
In addition, even if you could possibly squeeze out a valid test here and there, the marginal gains may not justify the costs when you compare it to other marketing activities in which you could engage.
That said, if you’re in this boat, you can still optimize. You can still set up adequate analytics, run user tests on prototypes and new designs, watch session replays, and fix bugs.
Running experiments is a ton of fun, but not every business can or should run them (at least not until they bring some traffic and demand through the door first).
Tip 5: Determine whether your company can or even should run A/B tests. Consider both your volume of traffic and the resources you’ll need to allocate before investing the time.
Conversion Optimization Tip 6:
Landing pages help you accelerate and simplify testing
Using landing pages is correlated with higher conversion rates, largely because they make it easier to do a few things:
- Measure discrete transitions through your funnel/customer journey.
- Run controlled experiments (reducing confounding variables and wonky traffic mixes).
- Test changes across templates to more easily reach a large enough sample size to get valid results.
To the first point, having a distinct landing page (i.e. something separate and easier to update than your website) gives you an easy tracking implementation, no matter what your user journey is.
For example, if you have a sidebar call to action that brings someone to a landing page, and then when they convert, they are brought to a “Thank You” page, it’s very easy to track each step of this and set up a funnel in Google Analytics to visualize the journey.
Landing pages also help you scale your testing results while minimizing the resource cost of running the experiment. Ryan Farley, co-founder and head of growth at LawnStarter, puts it this way:
At LawnStarter, we have a variety of landing pages….SEO pages, Facebook landing pages, etc. We try to keep as many of the design elements such as the hero and explainer as similar as possible, so that way when we run a test, we can run it sitewide.
That is, if you find something that works on one landing page, you can apply it to several you have up and running.
Tip 6: Use landing pages to make it easier to test. Unbounce lets you build landing pages in hours—no coding required—and conduct unlimited A/B tests to maximize conversions.
Conversion Optimization Tip 7:
Build a growth model for your conversion funnel
Creating a model like this requires stepping back and asking, “How do we get customers?” From there, you can model out a funnel that best represents that journey.
Most of the time, marketers set up simple goal funnel visualization in Google Analytics to see this:
This gives you a lot of leverage for future analysis and optimization.
For example, if one of the steps in your funnel is to land on a landing page, and your landing pages all have a similar format (e.g. offers.site.com), then you can see the aggregate conversion rate of that step in the funnel.
More importantly, you can run interesting analyses, such as path analysis and landing page comparison. Doing so, you can compare apples to apples with your landing pages and see which ones are underperforming:
The bar graph on the right allows you to quickly see how landing pages are performing compared to the site average.
I talk more about the process of finding underperforming landing pages in my piece on content optimization if you want to learn step-by-step how to do that.
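If you want to sanity-check your funnel math outside your analytics tool, here’s a minimal sketch that computes step-by-step conversion rates from stage counts (the stage names and numbers are hypothetical):

```python
# A minimal sketch: step-by-step conversion rates through a modeled
# funnel. Stage names and counts are hypothetical; in practice you'd
# pull these from your analytics tool.
funnel = [
    ("Landing page view", 50_000),
    ("Signup started", 9_500),
    ("Form submitted", 3_100),
    ("Thank-you page", 2_950),
]

for (stage, count), (next_stage, next_count) in zip(funnel, funnel[1:]):
    print(f"{stage} -> {next_stage}: {next_count / count:.1%}")
print(f"Overall funnel conversion: {funnel[-1][1] / funnel[0][1]:.2%}")
```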
Tip 7: Model out a funnel that represents the customer journey so that you can more easily target underperforming landing pages and run instructive analyses focused on growth.
Conversion Optimization Tip 8:
Pick low hanging fruit in the beginning
This is mostly advice from personal experience, so it’s anecdotal: when you first start working on a project or in an optimization role, pick off the low hanging fruit. By that, I mean over-index on the “ease” side of things and get some points on the board.
It may be more impactful to set up and run complex experiments that require many resources, but you’ll never muster the political influence necessary to set them up without first earning some confidence in your ability to get results, and in the CRO process in general.
To inspire trust and to be able to command more resources and confidence, look for the easiest possible implementations and fixes before moving onto the complicated or risky stuff.
And fix bugs and clearly broken things first! Persuasive copywriting is pretty useless if your site takes days to load or pages are broken on certain browsers.
Tip 8: Score some easy wins by targeting low hanging fruit before you move on to more complex optimization tasks. Early wins give you the clout to drive bigger experiments later on.
Conversion Optimization Tip 9:
Where possible, reduce friction
Most conversion optimization falls under two categories (this is simplified, but mostly true):
- Increasing motivation
- Decreasing friction
Friction occurs when visitors become distracted, when they can’t accomplish a task, or simply when a task is arduous to accomplish. Generally speaking, the more “nice to have” your product is, the more friction matters to the conversion. This is reflected in BJ Fogg’s behavior model, which holds that a behavior happens only when motivation, ability, and a prompt converge:
In other words, if you need to get a driver’s license, you’ll put up with pure hell at the DMV to get it, but you’ll drop out of the funnel at the most innocent error message if you’re only trying to buy something silly on drunkmall.com.
A few things that cut down on friction:
- Make your site faster.
- Trim needless form fields.
- Cut down the number of steps in your checkout or signup flow.
As an example of the last one, I like how Wordable designed their signup flow. You start out on the homepage:
Click “Try It Free” and get a Google OAuth screen:
Give permissions:
And voila! You’re in:
You can decrease friction by reducing feelings of uncertainty as well. Most of the time, this is done with copywriting or reassuring design elements.
An example is HubSpot’s form builder. We emphasize that it’s “effortless” and that there’s “no technical expertise required” to set it up:
(And here’s a little reminder that HubSpot integrates beautifully with Unbounce, so you’ll be able to automatically populate your account with lead info collected on your Unbounce landing pages.)
Tip 9: Cut down on anything that makes it harder for users to convert. This includes making sure your site is fast and trimming any forms or steps that aren’t necessary for checkout or signup.
Conversion Optimization Tip 10:
Help increase motivation
The second side of the conversion equation, as I mentioned, is motivation.
An excellent way to increase the motivation of a visitor is simply to make the process of conversion…fun. Most tasks online don’t need to be arduous or frustrating; we’ve just made them that way through apathy and error.
Take, for example, your standard form or survey. Pretty boring, right?
Well, today, enough technological solutions exist to implement interactive or conversational forms and surveys.
One such solution is Survey Anyplace. I asked their founder and CEO, Stefan Debois, about how their product helps motivate people to convert, and here’s what he said:
An effective and original way to increase conversion is to use an interactive quiz on your website. Compared to a static form, people are more likely to engage in a quiz, because they get back something useful. An example is Eneco, a Dutch Utility company: in just 6 weeks, they converted more than 1000 website visitors with a single quiz.
Entire companies have been built on the premise that the typical form is boring and could be made more fun and pleasant to complete (e.g. Typeform). Just think, “How can I compel more people to move through this process?”
Other commonplace ways to do this involve invoking psychological triggers that compel forward momentum:
- Implement social proof on your landing pages.
- Use urgency to compel users to act more quickly.
- Build out testimonials with well-known users to showcase authority.
There are many more ways to use psychological triggers to motivate conversions. Check out Robert Cialdini’s classic book, Influence, to learn more. Also, check out The Wheel of Persuasion for inspiration on persuasive triggers.
Tip 10: Make your conversion process fun in order to compel your visitors to keep moving forward. Increased interactivity, social proof, urgency, and testimonials that showcase authority can all help you here too.
Conversion Optimization Tip 11:
Clarity > Persuasion
While persuasion and motivation are really important, often the best way to convert visitors is to ensure they understand what you’re selling.
Stated differently, clarity trumps persuasion.
Use a five-second test to find out how clear your messaging is: show someone your page for five seconds, take it away, and ask them what it was about. If they can’t tell you what you offer, neither can your visitors.
Conversion Optimization Tip 12:
Consider the “Pre-Click” Experience
People forget the pre-click experience. What does a user do before they hit your landing pages? What ad did they click? What did they search in Google to get to your blog post?
Knowing this stuff can help you create strong message match between your pre-click experience and your landing page.
Sergiu Iacob, SEO Manager at Bannersnack, explains their process for factoring in keywords:
When it comes to organic traffic, we establish the user intent by analyzing all the keywords a specific landing page ranks for. After we determine what the end result should look like, we adjust both our landing page and our in-app user journey. The same process is used in the optimization of landing pages for search campaigns.
I’ve recommended the same thing before when it comes to capturing email leads. If you can’t figure out why people aren’t converting, figure out what keywords are bringing them to your site.
Usually, this results in a sort of passive “voice of customer” mining, where you can message match the keywords you’re ranking for with the offer on that page.
It makes it much easier to predict what messages your visitors will respond to. And it is, in fact, one of the cheapest forms of user research you can conduct.
Using Ahrefs to determine what keywords brought traffic to a page.
Tip 12: Don’t forget the pre-click experience. What do your users do before they hit your landing page? Make sure you have a strong message match between your ads (or emails) and the pages they link to.
Conversion Optimization Tip 13:
Build a repeatable CRO process
Despite some popular blog posts, conversion optimization isn’t about a series of “conversion tactics” or “growth hacks.” It’s about a process and a mindset.
Here’s how Peep Laja, founder of CXL, put it:
The quickest way to figure out whether someone is an amateur or a pro is this: amateurs focus on tactics (make the button bigger, write a better headline, give out coupons etc) while pros have a process they follow.
And, ideally, the CRO process is a never-ending one:
Conversion Optimization Tip 14:
Invest in education for your team
CRO people have to know a lot about a lot:
- Statistics
- UX design
- User research
- Front end technology
- Copywriting
No one comes out of the gate as a 10 out of 10 in all of those areas (and most never end up there, either). You, as an optimizer, need to be continuously learning and growing. If you’re a manager, you need to make sure your team is continuously learning and growing, too.
Conversion Optimization Tip 15:
Share insights
The fastest way to scale and leverage experimentation is to share your insights and learnings across the organization.
This becomes more and more valuable the larger your company grows. It also becomes harder and harder the more you grow.
Essentially, by sharing you can avoid reinventing the wheel, bring new teammates up to speed faster, and spread winning insights to other teams, who can then shorten their own time to testing. Invest in some sort of insights management system, no matter how basic.
Entire products have been built around this, such as GrowthHackers’ North Star and Effective Experiments.
Tip 15: Share what you learn within your organization. The bigger your company grows, the more important information sharing becomes—but the more difficult it will become as well.
Conversion Optimization Tip 16:
Keep your cognitive biases in check
As the great Richard Feynman once said, “The first principle is that you must not fool yourself, and you are the easiest person to fool.”
We’re all afflicted by cognitive biases, ranging from confirmation bias to the availability heuristic. Some of these can really impact our testing programs, specifically confirmation bias (and its close cousin, the Texas Sharpshooter Fallacy) where you only seek out pieces of data that confirm your previous beliefs and throw out those that go against them.
It may be worthwhile (and entertaining) simply to run down Wikipedia’s giant list of cognitive biases and gauge where you may currently be running blind or biased.
Tip 16: Be cognizant of your own cognitive biases. If you’re not careful, they can influence the outcome of your experiments and cause you to miss (or misinterpret) key insights in your data.
Conversion Optimization Tip 17:
Evangelize CRO to your greater org
Having a dedicated CRO team is great. Evangelizing the work you’re doing to the rest of the organization? Even better.
Spread the word about the importance of CRO within your org.
When an entire organization buys into the value of data-informed decision making and experimentation, magical things can happen. Ideas burst forth, and innovation becomes easy. Annoying roadblocks get dismantled. HiPPO-driven decision making (deferring to the highest-paid person’s opinion) takes a back seat to proper experiments.
Things you can do to evangelize CRO and experimentation:
- Write down your learnings each week on a company wiki.
- Send out a newsletter with live experiments and experiment results each week to interested parties.
- Recruit an executive sponsor with lots of internal influence.
- Sing your own praises when you get big wins. Sing them loud.
- Make testing fun, and make it easier for others to join in and pitch ideas.
- Make it easier for people outside of the CRO team to sponsor tests.
- Say the word “hypothesis” a lot (who knows, it might work).
This is all a kind of art; there are no universal methods for spreading the good gospel of CRO. But it’s important that you know it’s probably going to be something of an uphill battle, depending on how big your company is and what the culture has traditionally been like.
Tip 17: Spread the gospel of CRO across your organization in order to ensure others buy into the value of data-driven decision making and experimentation.
Conversion Optimization Tip 18:
Be skeptical with CRO case studies
This isn’t so much a conversion optimization tip as it is life advice: be skeptical, especially when marketing is involved.
I say this as a marketer. Marketers exaggerate stuff. Some marketers omit important details that would derail the narrative. Sometimes they don’t understand p-values, or how to set up a proper test (maybe they haven’t read Tip 1 in this article).
In short, especially in content marketing, marketers are incentivized to publish sensational case studies regardless of their statistical merit.
All of that results in a pretty grim standard for the current CRO case study.
Don’t get me wrong, some case studies are excellent, and you can learn a lot from them. Digital Marketer lays out a few rules for detecting quality case studies:
- Did they publish total visitors?
- Did they share the lift percentage correctly?
- Did they share the raw conversions?
- Did they identify the primary conversion metric?
- Did they publish the confidence level? Is it above 90%?
- Did they share the test procedure?
- Did they only use data to justify the conclusion?
- Did they share the test timeline and date?
Without context or knowledge of the underlying data, a case study might be a whole lot of nonsense. And if you want a good cathartic rant on bad case studies, then Andrew Anderson’s essay is a must-read.
Tip 18: Approach existing material on CRO with a skeptical mindset. Marketers are often incentivized to publish case studies with sensational results, regardless of the quality of the data that supports them.
Conversion Optimization Tip 19:
Calculate the cost of additional research vs. just running it
Matt Gershoff, CEO of Conductrics, is one of the smartest people I know regarding statistics, experimentation, machine learning, and general decision theory. He has stated some version of the following on a few occasions:
- Marketing is about decision-making under uncertainty.
- It’s about assessing how much uncertainty is reduced with additional data.
- It must consider, “What is the value in that reduction of uncertainty?”
- And it must consider, “Is that value greater than the cost of the data/time/opportunity costs?”
Yes, conversion research is good. No, you shouldn’t run blind and just test random things.
But at the end of the day, we need to calculate how much additional value a reduction in uncertainty via additional research gives us.
If you can run a cheap A/B test that takes almost no time to set up? And it doesn’t interfere with any other tests or present an opportunity cost? Ship it. Because why not?
But if you’re changing an element of your checkout funnel that could prove to be disastrous to your bottom line, well, you probably want to mitigate any possible downside. Bring out the heavy guns—user testing, prototyping, focus groups, whatever—because this is a case where you want to reduce as much uncertainty as possible.
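To make Matt’s point concrete, here’s a back-of-the-envelope sketch of that calculation. Every number in it is invented for illustration, and real value-of-information math gets more involved:

```python
# A minimal sketch: back-of-the-envelope value-of-information math.
# Every number is invented, and "research removes the downside" is a
# deliberate simplification to keep the arithmetic readable.
monthly_revenue = 200_000     # revenue flowing through the checkout
expected_lift = 0.03          # best-guess lift if the change works
p_backfire = 0.30             # chance the change actually hurts
downside = -0.10              # impact if it does

# Expected monthly impact of shipping the change blind:
ev_blind = monthly_revenue * ((1 - p_backfire) * expected_lift
                              + p_backfire * downside)

research_cost = 4_000         # user testing, prototyping, an extra cycle
# If research catches the backfire scenario before it ships:
ev_research = monthly_revenue * (1 - p_backfire) * expected_lift - research_cost

print(f"Ship blind: {ev_blind:+,.0f} | Research first: {ev_research:+,.0f}")
# Here research wins; with a smaller downside, shipping blind might.
```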
Tip 19: Balance the value of doing more research with the costs (including opportunity costs) associated with it. Sometimes running a quick and dirty A/B test will be sufficient for your needs.
Conversion Optimization Tip 20:
CRO never ends
You can’t just run a few tests and call it quits.
The big wins from the early days of working on a relatively unoptimized site may taper off, but CRO never ends. Times change. Competitors and technologies come and go. Your traffic mix changes. Hopefully, your business changes as well.
As such, even the best test results are perishable, given enough time. So plan to stick it out for the long run and keep experimenting and growing.
Think kaizen: continuous, incremental improvement.
Conclusion
There you go, 20 conversion optimization tips. That’s not all there is to know; this is a never-ending journey, just like the process of growth and optimization itself. But these tips should get you started and moving in the right direction.