Your Website Needs Conversion Research for Optimization
Week 6 of the CXL Growth Marketing course focused on web conversion research. The idea is, after getting to know the customer or user, to focus on the next most important thing: the purchase journey, or sales funnel.
Walk a mile in the Customer’s Shoes
Site walkthroughs are needed to determine gaps in the cross-platform and cross-browser user experience. The only way you will identify these gaps is by performing walkthroughs yourself. Perform the walkthrough in every scenario you can think of, either by yourself or by hiring a tester if you are under time constraints. Experience for yourself what users experience when they enter, use and navigate your website.
Peep Laja outlines three key questions to ask when performing conversion research:
1. Does the site work with every major browser?
2. Does the site work across all devices?
3. What is the user experience like with every device?
Tablet conversions should be similar to desktop conversions.
· Create a custom report to see conversions per browser.
· Look at conversion rates by browser version.
· Check revenue.
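As a rough illustration of the report described above, the same per-browser numbers can be pulled out of a raw analytics export with a short script. This is a minimal sketch under assumptions: a hypothetical per-session dataset with `browser`, `browser_version`, `converted` and `revenue` fields (the names are mine, not from any particular analytics tool):

```python
# Sketch: conversion rate and revenue by browser from a per-session export.
# The field names and sample rows are illustrative, not a real GA schema.
from collections import defaultdict

def report_by(rows, key):
    """Aggregate sessions, conversions and revenue under the given column."""
    stats = defaultdict(lambda: {"sessions": 0, "conversions": 0, "revenue": 0.0})
    for row in rows:
        bucket = stats[row[key]]
        bucket["sessions"] += 1
        bucket["conversions"] += int(row["converted"])
        bucket["revenue"] += float(row["revenue"])
    # Attach a conversion rate to each segment.
    return {
        k: {**v, "conv_rate": v["conversions"] / v["sessions"]}
        for k, v in stats.items()
    }

rows = [
    {"browser": "Chrome", "browser_version": "120", "converted": "1", "revenue": "40.0"},
    {"browser": "Chrome", "browser_version": "119", "converted": "0", "revenue": "0.0"},
    {"browser": "Safari", "browser_version": "17", "converted": "0", "revenue": "0.0"},
    {"browser": "Safari", "browser_version": "17", "converted": "1", "revenue": "25.0"},
]
print(report_by(rows, "browser"))
```

Running the same function with `key="browser_version"` gives the by-version breakdown; a segment whose conversion rate lags far behind the others is a candidate for a cross-browser bug.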
Create an “Areas of Interest” document as you perform the walkthrough.
The CXL Institute uses the model below to conduct such research, with six key dimensions as guiding pillars under which to collect data:
The six guiding points are:
1. Heuristic Analysis
2. Technical Analysis
3. Digital Analytics
4. Qualitative Research
5. User Testing
6. Mouse Tracking Analysis
Of course, it is also important to establish a feedback framework for monitoring and evaluation of the research project.
We can discuss the ResearchXL model in further detail:
Heuristic Analysis
Adobe XD.com describes Heuristic Analysis as “an analysis, done by experts, that determines the susceptibility of a system toward a particular risk. A heuristic analysis in UX design is a procedure used to identify a product's common usability issues”.
We therefore perform this analysis to establish a solid sense of familiarity with the site and to identify problem areas for improvement. It is called a heuristic analysis because we focus on the “human” experience of engaging with a website. We can (and must) use data to back up decisions, but ultimately we need to understand the human experience first – and then establish whether our quantitative data backs up those findings. The heuristic analysis informs our hypothesis; we then search for data that confirms or denies it.
Often, we are at risk of two types of biases when performing this type of analysis:
1. Bias Blind Spot – this refers to our ability to recognise the cognitive biases in others, while failing to see the same biases that may occur within ourselves.
2. Confirmation Bias – often referred to as “myside bias”, this is the tendency to favor and recall information that supports our already-established thoughts and beliefs.
Evaluating Your Site’s Usability
Usability is about the ease with which a user can perform a desired task on your website. Usability issues are often the core components to work on when performing conversion optimisation. Generally, once a usability issue is identified, it can be fixed right there and then – for example, replacing “cheesy” stock images with original high-resolution pictures that clearly communicate your value proposition.
The course makes reference to Jakob Nielsen’s 5 quality components of usability:
1. Learnability – Upon initially visiting the website, how easy is it for users to complete their basic initial desired tasks?
2. Efficiency – How quickly can users perform actions once they have learned the design and structure of the site?
3. Memorability – after not having visited the site for a while, will users still remember how to use it?
4. Errors – how many mistakes do users make, how often and how severe are they? Can users recover easily from these errors?
5. Satisfaction – how pleasant is the design to use?
Usability Testing is NOT the same as User Testing
In user testing, the findings you uncover are based on the tasks that you give the testers. That context can skew the accuracy of the data: testers know they are testing and do not have to part with their own money to complete the tasks. They also may not comment on everything, as something could be bothering them subconsciously. Hence, you want to conduct a usability audit on your site on top of user testing.
Survey Design Theory
The collection, analysis and use of qualitative data fascinates me: it is more complex than quantitative data and takes human nature into account, while still attempting to draw conclusions with an accuracy similar to that of quantitative data.
In this case, the collection of qualitative data is important to better understand:
· Who your customers are.
· What users think of your products/services.
· Who you’re competing with.
· Attitudes and beliefs towards brands in the market.
Data can be collected in the form of an open-ended survey (discussed in previous weeks’ articles). Open-ended questions are effective for identifying points of friction on a website. In the analysis phase, we conduct a zero-sum analysis, in which a grid is used to cluster responses together. In this way, recurring themes, responses and keywords are identified and assigned a code, which can then be analysed much like quantitative data.
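The coding step described above can be sketched in a few lines. This is only an illustration of the idea, with a made-up keyword grid and made-up responses – the codes and keywords would come from your own clustering of real survey answers:

```python
# Sketch: assigning codes to open-ended survey responses via a keyword grid,
# then counting code frequencies like quantitative data.
# The grid and responses below are illustrative.
from collections import Counter

CODE_GRID = {
    "shipping_cost": ["shipping", "delivery fee", "postage"],
    "trust": ["scam", "reviews", "secure", "trust"],
    "navigation": ["find", "menu", "search", "confusing"],
}

def code_response(text):
    """Return every code whose keywords appear in the response."""
    text = text.lower()
    return [code for code, words in CODE_GRID.items()
            if any(w in text for w in words)]

responses = [
    "Shipping was too expensive for me.",
    "I couldn't find the size chart, the menu is confusing.",
    "Not sure the checkout is secure.",
]
counts = Counter(code for r in responses for code in code_response(r))
print(counts.most_common())
```

Once responses carry codes, frequency counts surface the recurring friction points, which you can then check against your analytics data.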
In order to get the best quality data possible out of the surveys, it is important that the questions are focused on exactly what we want to know. It is also important to remember: the bigger the sample size, the lower the risk of error in the results.
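The sample-size point can be made concrete with the standard margin-of-error formula for a survey proportion. A minimal sketch, assuming a 95% confidence level (z ≈ 1.96) and the worst-case proportion p = 0.5:

```python
# Sketch: margin of error for a survey proportion at ~95% confidence.
# Illustrates that a bigger sample size shrinks the error band.
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of the confidence interval for proportion p with n responses."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (50, 200, 1000):
    print(n, round(margin_of_error(0.5, n), 3))
```

Going from 50 to 1,000 respondents cuts the margin of error from roughly ±14% to roughly ±3%, which is why small-sample survey findings should be treated as directional rather than conclusive.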
Common Mistakes in Survey Design
This course outlined the most common mistakes that we as researchers make during the design phase of the qualitative survey:
· Non-intuitive scales – we automatically associate the number 5 with being highly positive and 1 with being highly negative, so it would be confusing to switch this around.
· Mixing questions of behaviour with questions of attitude – this can be very jarring to the respondent. Best practice is to segment the questions depending on their focus – on attitude or behaviour – and not to mix them amongst one another.
· Questions that aren’t relevant to the audience (unknown jargon, language, etc.) – this type of language is not in line with the readers’ mindset, which leads to a failure to create a connection.
· The survey is too long – this can lead to what is known as the Error of Central Tendency (survey fatigue), whereby respondents become bored or tired of answering the questions and pick the easiest responses just to be able to submit and move on. We can track this occurrence when we see all responses from a certain point onward converging to the mean value.
· People learn from surveys – a type of bias is formed, as respondents generally want to help and give you what you want. As they progress through the survey, they may start getting an idea of what you want to know, thereby skewing their answers. Therefore, we need to structure the survey such that the learning curve of the respondent stays neutral.
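The central-tendency symptom in the list above – answers collapsing to the middle of the scale partway through – can even be screened for automatically. A minimal sketch, with an illustrative heuristic and threshold of my own (compare answer variance in the first half of a long survey with the second half):

```python
# Sketch: flagging possible survey fatigue (Error of Central Tendency) when a
# respondent's later answers are far less varied than their earlier ones.
# The 0.25 ratio and the sample answer vectors are illustrative choices.
from statistics import pvariance

def fatigue_flag(answers, ratio=0.25):
    """True if variance in the second half drops below `ratio` of the first half."""
    half = len(answers) // 2
    early, late = answers[:half], answers[half:]
    early_var = pvariance(early)
    if early_var == 0:
        return False  # flat throughout; nothing to compare against
    return pvariance(late) / early_var < ratio

engaged = [1, 5, 2, 4, 1, 5, 2, 4]   # varied answers throughout
fatigued = [1, 5, 2, 4, 3, 3, 3, 3]  # collapses to the midpoint of the scale
print(fatigue_flag(engaged), fatigue_flag(fatigued))
```

Flagged respondents aren’t necessarily invalid, but their tail-end answers deserve a skeptical second look before they feed into your conclusions.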
This is only the tip of the iceberg when it comes to conversion research. I look forward to discussing its other components, including talking to customer support, live chat, mouse tracking and Google Analytics, in the days to come.