What's Copytesting for?
It's an audience research tool focused on copy. To improve your sales copy in a data-informed way, you first need to understand what its problems are.
Copytesting gives you quantitative and qualitative data on where the problems in your website copy are and what those problems are.
Copytesting helps you dramatically speed up your research efforts (results in hours!), settle debates, and make your copy convert better.
Where are the testers from?
All feedback comes from real people—folks who represent your target audience.
We tap into various panel services via an API, pre-vet each participant, and conduct in-house quality control after each test to ensure you can trust the results.
How are you calculating the statistical probability of the results?
The test results represent a binomial distribution of tester feedback (given on a Likert scale) for each content block. According to our tests, the threshold for dependable results is a panel of 20 testers.
The 20 testers assess the clarity and relevance of each content block on a 1-5 scale. Their votes form a virtual voting histogram with a mean value.
This mean is expressed as a percentage between 0 and 100% (0% meaning all 20 testers gave the lowest possible rating, and 100% meaning all votes were the maximum). In practice neither extreme occurs, and the mean falls somewhere in between.
Depending on whether the mean is above or below the 65% mark, we classify the result as leaning "good" or "bad".
On top of that, we assess whether the spread of the histogram is narrow enough. If we know with at least 95% confidence that a content block scores either high or low, we'll display the score as such. If the result falls in the "mediocre" range (i.e. is not statistically significant), we'll display it as such.
If a text block returns an average score, you, as a marketer, don't need to know exactly how average it is. It's enough to know that the block wasn't good enough and, hence, needs improvement.
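The classification above can be sketched in code. This is a minimal illustration, not Copytesting's actual implementation: it assumes each 1-5 Likert rating is mapped linearly onto 0-100%, and it uses a normal approximation of the mean's 95% confidence interval to decide whether a block clearly sits above or below the 65% threshold.

```python
import statistics

def classify_block(ratings, threshold=0.65, z=1.96):
    """Classify a content block from a panel's 1-5 Likert ratings.

    Hypothetical sketch: each rating is mapped to 0-1 (1 -> 0.0,
    5 -> 1.0), the scores are averaged, and a 95% confidence
    interval (normal approximation, z = 1.96) decides whether the
    block is clearly "good", clearly "bad", or "mediocre".
    """
    scores = [(r - 1) / 4 for r in ratings]
    mean = statistics.mean(scores)
    se = statistics.stdev(scores) / len(scores) ** 0.5
    low, high = mean - z * se, mean + z * se
    if low > threshold:
        return mean, "good"      # confidently above 65%
    if high < threshold:
        return mean, "bad"       # confidently below 65%
    return mean, "mediocre"      # interval straddles the threshold

# Example: 20 ratings skewed toward the top of the scale
ratings = [5, 5, 4, 5, 4, 4, 5, 5, 4, 3, 5, 4, 5, 4, 5, 5, 4, 4, 5, 5]
mean, verdict = classify_block(ratings)
print(f"{mean:.0%} -> {verdict}")
```

A strongly divided panel (half lowest ratings, half highest) would average 50%, but the wide spread keeps the confidence interval straddling 65%, so the block lands in "mediocre" rather than "bad".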
Is this for sales pages or blog posts?
Copytesting is designed to be used on web pages with direct response copy.
These are pages that try to get people to take an action—sign up, purchase, click a button, and so on. The goal is to increase the conversion rate of those pages.
It's not designed for informational content like blog posts.
Does one test include a single web page or full website?
Each test is for a single URL, so that means a single web page.
Each additional page would be a separate test.
Can it be used on websites in languages other than English?
Currently, 100% of our audience is based in the United States, so it's English only.
Adding more languages is on our future roadmap.
How long does it take to get results?
During business hours in the United States, it typically takes 6-8 hours.
Since the feedback comes from real people, it'll take more time during nights and weekends.
How is it different from user testing?
User testing is different: its primary function is usability testing, i.e. seeing how easy it is to perform certain tasks on a website. It typically offers little insight into copy, though there is some overlap.
Copytesting is 100% about the copy (and the surrounding design elements).
We ask our panelists research questions about each specific content block, so the inputs are very granular.
The resulting data is a mix of quantitative (clarity and relevance scores for each copy block) and qualitative (answers to open-ended questions) feedback. The average panelist spends ~35 minutes answering questions.