
A/B Testing: How Data Can Drive Your Recommendations and How to Do it Right
(Taken from Dan Stoker's talk)
The idea is that you take data and use data-driven decisions to shape your
recommendations
This workshop is structured as lessons
Summary
A/B tests are statistical experiments that help you decide whether a change is
actually making a significant impact on your product. Data-driven decisions can be
used to shape recommendations, as well as to hone products and marketing efforts. On
top of that, it's easy, (mostly) free and very effective.
This approach has two big advantages over a traditional A/B test that
distributes traffic evenly until completion. Using Google Analytics, you set up two
pages with the same code, and then have Google run the experiment, where it
randomly picks either page for each visitor.
Objectives (do not modify)
Content Outline (can but should not modify; keep in mind objectives)
1) What is A/B Testing?
a. A/B tests are statistical experiments that help you decide whether
a change is actually making a significant impact on your product
b. You have two versions of an event and a metric that defines success
c. For your projects, you might have a webpage with a design
element or wording that you'd like to test
d. In the end, you measure which version was more successful and
select that version for real-world use

Workshop Outline (can modify to suit preferences; keep in mind objectives and content)
Set-up (using Google Analytics)
1) The practical application is left to us, or to whom?
Lesson #1: Define quantifiable success metrics
- Use an analytics tool:
o Google Analytics (free)
o Mixpanel
o Omniture SiteCatalyst
o Kissmetrics
- What you choose depends on your goals
- Eg: If your goal is to increase the number of sign-ups, then you might test
the following: length of the sign-up form, type of fields in the form, display of
the privacy policy, social proof
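
Illustrative sketch (my addition, not from the talk): "quantifiable" here means each
variant is judged on a single number, e.g. the sign-up conversion rate. The function
name and counts below are hypothetical.

Code:
// Hypothetical sketch of a quantifiable success metric for the sign-up example.
// Each variant is judged on one number: sign-ups divided by visits.
function conversionRate(signups, visits) {
  return signups / visits;
}
console.log(conversionRate(120, 1000)); // e.g. variant A: 120 sign-ups out of 1000 visits -> 0.12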

Lesson #2: Explore before you refine
- If you only refine and refine, you can miss the best solution; the best way to
avoid this is to explore
- You shouldn't conclude too early, and this is where statistical confidence
comes in: you check whether your results are significant (a small sketch of
such a check follows after the example below)
- Also, don't let your gut feeling override the test results. Even if, say, a red
button looks unappealing, what you are after (the metric you are testing) is
visits or conversions, so don't reject results because of arbitrary judgement
Example:
- The ABC Family home page: what people initially thought was that visitors
wanted to see an ad for a new show, as you would on TV
- What they actually found was that visitors wanted to find an episode of a
show that they had missed
o So they changed the site to be hierarchical, to increase reader
engagement with text-heavy content
o They found that views increased by 600%
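
Illustrative sketch (my addition, not from the talk) of the statistical-confidence
check mentioned in this lesson: a two-proportion z-test comparing the conversion
rates of versions A and B. All counts are made-up placeholders.

Code:
// Two-proportion z-test: is version B's conversion rate significantly
// different from version A's?
function zScore(convA, visitsA, convB, visitsB) {
  var pA = convA / visitsA;
  var pB = convB / visitsB;
  var pooled = (convA + convB) / (visitsA + visitsB);
  var se = Math.sqrt(pooled * (1 - pooled) * (1 / visitsA + 1 / visitsB));
  return (pB - pA) / se;
}
// |z| > 1.96 roughly means significant at the 95% confidence level
console.log(zScore(120, 1000, 160, 1000)); // ~2.58 -> B's lift looks significant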

Lesson #3: Less is more, reduce your choices
- SeeClickFix
o Used to have a big map where you could see what neighbours had reported
o Huge engineering endeavour, lots of stuff going on
o They removed two fields and ended up with 8.4% more views

Other cool tricks (that we can't really test)
- Add the following bit of JavaScript to a button's code (category, action and
opt_label are placeholders you fill in):
onclick="_gaq.push(['_trackEvent', 'category', 'action', 'opt_label']);"
- Then create two versions of your page
- Then set up the experiment (which is the A/B test)
- Then click "create new goal", then "+ Goal". Select "Event" and, under goal
details, use the dropdown to select "that is equal to" (a fuller sketch of the
wired-up button follows below)
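
Illustrative sketch (my addition, not from the talk) of how the button from the
steps above might be wired up on one of the two pages, using the old ga.js
asynchronous queue. The property ID and the category/action/label values are
placeholders you would match in the goal set-up.

Code:
<script type="text/javascript">
  // standard ga.js async queue set-up; 'UA-XXXXX-Y' is a placeholder property ID
  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-XXXXX-Y']);
  _gaq.push(['_trackPageview']);
</script>
<!-- the event fired here is what the Event goal is matched against -->
<button onclick="_gaq.push(['_trackEvent', 'category', 'action', 'opt_label']);">
  Sign up
</button>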

Junk
- Can code it yourself (e.g. via a feature-flag configuration):

Code:
Configuration
$server_config['listing_buy_button_up_top'] = array('enabled' => 25); // presumably ramps the experiment to ~25% of traffic

if (Feature::isEnabled('listing_buy_button_up_top')) {
    // do experiment
} else {
    // do control
}
