The Experiments feature in Microsoft Advertising allows you to set up a duplicate of your campaign and run a test on a segment of its traffic, similar to drafts and experiments in Google Ads. “This way, you can run a true A/B test within a campaign to determine whether a particular update will work well for you and your business,” wrote Subha Hari, senior program manager, and Piyush Naik, principal program manager, for Microsoft Advertising in the announcement.
One of the agencies that participated in the experiments was Performics, which used the feature to test the maximum clicks bidding strategy. Performics media director Brian Hogue told Microsoft Advertising that the feature was easy to set up and execute, and that results were easy to implement.
How do you use this feature? From the Experiments tab, name your test, set a start and end date, and enter the percentage of ad traffic you want to include in the test in the experiment split field.
To evaluate performance, make sure you’ve selected the right metrics in the table on the experiment’s page. The metric values will be green, red or gray. Here is what each color means:
- Green – indicates that the experiment is performing better than the original for that metric
- Red – indicates that the experiment is performing worse than the original for that metric
- Gray – there is no statistically significant difference
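The announcement doesn’t spell out how Microsoft computes statistical significance behind these colors, but for a rate metric like conversion rate the standard approach is a two-proportion z-test. Here’s a minimal sketch with hypothetical numbers (not Microsoft’s actual method or data):

```python
from math import sqrt, erf

def two_proportion_z(conv_a, clicks_a, conv_b, clicks_b):
    """Two-proportion z-test: compare conversion rates of two campaigns.

    Returns (z, two_sided_p_value)."""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    # Pooled rate under the null hypothesis that both campaigns convert equally
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: original gets 200 conversions on 10,000 clicks,
# experiment gets 250 conversions on 10,000 clicks.
z, p = two_proportion_z(200, 10_000, 250, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Under this kind of test, a p-value below a chosen threshold (commonly 0.05) would correspond to a green or red metric, and anything above it to gray.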
You can then opt to apply an experiment to the original or to a new campaign. If the experiment is applied to a new campaign, the original will be paused automatically.
You’ll want to build in at least four weeks for testing, per Microsoft’s recommendations.
Microsoft suggests first running in A/A mode, in which your control and experiment are identical, for two weeks. “This will allow time for the experiment campaign to ramp up and help validate that it’s running the same as the original, so that you can run a true A/B test,” said Hari and Naik.
After that, you can make the change to your duplicate campaign to run the A/B test. As mentioned above, it’s a good idea to run the test for a minimum of two weeks, and four or more weeks for bidding strategies such as target CPA and maximize conversions.
When determining the experiment split, make sure your ads will get enough traffic to run an effective test that doesn’t take forever to reach statistical significance. The company recommends setting the split at 50%, but that will vary depending on your volume. Lower-volume campaigns may need to increase that, while higher-volume campaigns may be able to test on a smaller segment.
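To get a feel for how volume drives that decision, a standard sample-size approximation for comparing two proportions shows how many clicks each arm needs before a given lift becomes detectable. This is generic statistics, not anything Microsoft publishes, and the baseline rate and lift below are hypothetical:

```python
from math import ceil, sqrt

def clicks_per_arm(p_base, relative_lift, z_alpha=1.96, z_power=0.84):
    """Approximate clicks needed per arm to detect a relative lift over a
    baseline conversion rate, at 95% confidence and 80% power
    (standard two-proportion sample-size formula)."""
    p1 = p_base
    p2 = p_base * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return ceil(n)

# Hypothetical: 2% baseline conversion rate, hoping to detect a 20% relative lift
n = clicks_per_arm(0.02, 0.20)
print(f"~{n:,} clicks needed per arm")
```

At these assumptions each arm needs on the order of 20,000 clicks, so with a 50% split the campaign would need roughly double that in total click volume over the test window; a campaign that can’t get there would need a longer test or a larger split.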
Keep in mind that you can’t change the experiment’s budget without changing the original campaign’s budget; any budget change is applied according to your experiment split, so a $100-per-day budget with a 50% split gives the experiment $50 per day. Other changes made to the original campaign while an experiment is running won’t be applied to the test, which means you’d no longer be running a true A/B test. This is why it’s recommended to leave everything alone while the experiment is running.