Running A/B Tests with Google Play Store Listing Experiments

Learn how to set up and run A/B tests on your Google Play Store listing. Test icons, screenshots, descriptions, and feature graphics to optimize conversion rates.

Navigate to Store Listing Experiments

In Google Play Console, select your app > Grow > Store listing experiments. This section lets you create experiments that test different versions of your store listing elements against each other.
Choose what to test
You can test the following elements:
- App icon — Test up to 3 variant icons
- Feature graphic — The 1024×500 banner at the top of your listing
- Screenshots — Test different screenshot sets
- Short description — The 80-character preview text
- Full description — The 4,000-character listing text
Test one element at a time for clean, actionable results.
Tip: Start with screenshots or your app icon—these are the highest-impact visual elements and typically show the fastest, clearest results.
Create your experiment
Click Create experiment. Give it a name, select the element to test, and upload your variant assets. Google will show the original (control) to 50% of visitors and the variant to the other 50% by default. You can also set a custom traffic split.
Configure the experiment settings
Set the target audience (all users or specific locales). Choose whether to test on first-time visitors only or all visitors. Configure the experiment to end automatically when statistical significance is reached, or set a manual end date. Google requires a minimum of 7 days of data.
Launch and monitor
Click Start experiment. Google will begin splitting traffic between control and variant. Monitor results in the experiment dashboard, which shows install rate, retained installers, and statistical confidence. Wait for at least 90% confidence before drawing conclusions.
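Play Console computes statistical confidence for you, but if you export the raw visitor and install counts, you can sanity-check the result offline with a standard two-proportion z-test. The sketch below uses made-up numbers for illustration; at 90% two-sided confidence, the threshold is |z| > 1.645.

```python
import math

def z_test_two_proportions(control_installs, control_visitors,
                           variant_installs, variant_visitors):
    """Two-proportion z-test: is the variant's install rate
    significantly different from the control's?"""
    p1 = control_installs / control_visitors
    p2 = variant_installs / variant_visitors
    # Pooled install rate under the null hypothesis (no difference)
    p = (control_installs + variant_installs) / (control_visitors + variant_visitors)
    se = math.sqrt(p * (1 - p) * (1 / control_visitors + 1 / variant_visitors))
    return (p2 - p1) / se

# Hypothetical counts: 4.8% vs 5.4% install rate on 10k visitors each
z = z_test_two_proportions(480, 10_000, 540, 10_000)
print(round(z, 2), abs(z) > 1.645)  # → 1.93 True
```

A z-score just past 1.645, as here, is exactly the borderline case where Google's own "wait for at least 90% confidence" guidance matters: let the experiment keep running rather than calling it early.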
Apply the winner
Once the experiment shows a statistically significant winner, you can Apply variant to make it your new default listing. If the control wins, keep your original. Document your findings—these insights compound over time and inform future experiments. AppDrift's platform helps you manage the resulting asset updates across your listings.
Common Errors & Solutions
Experiment not reaching statistical significance
Solution: Your app may not have enough traffic. Experiments need thousands of store visitors to reach significance. For low-traffic apps, run the experiment longer or test higher-impact elements (your icon rather than the full description, for example).
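To get a feel for how much traffic "thousands of visitors" really means, you can estimate the required sample size with the standard two-proportion power formula. The baseline rate, lift, confidence, and power below are illustrative assumptions, not Google's internal parameters:

```python
import math

def min_visitors_per_arm(baseline_rate, relative_lift):
    """Approximate visitors needed in each arm (control and variant)
    to detect a relative lift in install rate, using the normal
    approximation at ~90% two-sided confidence and 80% power."""
    z_alpha = 1.645   # two-sided 90% confidence
    z_beta = 0.84     # 80% power
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return math.ceil(n)

# Detecting a 10% relative lift on a 5% install rate takes
# roughly 25,000 visitors per arm:
print(min_visitors_per_arm(0.05, 0.10))
```

Note how quickly the requirement grows for subtle changes: halving the detectable lift roughly quadruples the visitors needed, which is why high-impact elements like the icon reach significance faster than wording tweaks.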
Cannot start experiment while another is running
Solution: Google Play allows only one active experiment per listing element at a time. End or apply the current experiment before starting a new one for the same element.
Related Guides
Using Managed Publishing for Coordinated Google Play Releases
Learn how to use Google Play Console's managed publishing feature. Control exactly when updates go live, coordinate multi-market launches, and avoid accidental releases.
Setting Up Google Play Developer API Access
Step-by-step guide to enabling the Google Play Developer API. Learn how to enable the API in Google Cloud Console, create credentials, and connect them to your Play Console account.
Creating and Configuring a Service Account for Google Play Automation
Detailed guide to creating a Google Cloud service account for Google Play Console automation. Configure permissions, manage keys, and set up secure access for third-party tools.
Automate this with AppDrift
Skip the manual work
AppDrift connects to your Google Play Console to automate metadata publishing, screenshot management, and localization across 40+ languages. Set up once, publish everywhere.