A/B Test Significance Calculator
Did your A/B test actually win? Returns p-value, statistical significance, and minimum sample size for 95% confidence.
The A/B Test Significance Calculator tells you whether the apparent winner in your conversion test is a real result or random noise. Enter the visitors and conversions for each variant; the tool returns the p-value, confidence level, the absolute and relative lift, and whether you have reached the conventional 95% confidence threshold required to declare a winner.
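Under the hood, this kind of check is typically a pooled two-proportion z-test. The sketch below is an illustrative stand-in, not the calculator's actual source: the function name `ab_test` and the return fields are invented for the example, and it assumes a two-sided test against the conventional 95% threshold.

```python
from math import sqrt, erf

def ab_test(visitors_a, conv_a, visitors_b, conv_b):
    """Pooled two-proportion z-test; returns p-value, significance and lift."""
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return {
        "p_value": p_value,
        "significant_95": p_value < 0.05,
        "absolute_lift": p_b - p_a,
        "relative_lift": (p_b - p_a) / p_a,
    }

# Example: 100/1000 conversions on A vs 130/1000 on B
result = ab_test(1000, 100, 1000, 130)
```

For these example numbers the relative lift is 30% and the p-value comes out just under 0.05, so the test clears the 95% bar.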
The calculator also shows the minimum sample size required to detect your observed effect with adequate statistical power. Most A/B tests are called too early - the team sees variant B winning by 10% at 500 visitors and ships, only to find the lift evaporates at 5,000. The minimum sample size view prevents this trap by setting a clear stop-point before you start the test.
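A common way to derive that minimum is the standard two-proportion sample-size approximation. This is a sketch of that textbook formula, not necessarily the exact computation the calculator performs; it assumes 95% confidence (two-sided) and 80% power, both conventional defaults.

```python
from math import ceil

Z_ALPHA = 1.959964  # z for a two-sided test at 95% confidence
Z_BETA = 0.841621   # z for 80% power

def min_sample_size(baseline_rate, expected_rate):
    """Approximate visitors needed per variant to detect the difference
    between two conversion rates at 95% confidence with 80% power."""
    variance = (baseline_rate * (1 - baseline_rate)
                + expected_rate * (1 - expected_rate))
    effect = expected_rate - baseline_rate
    return ceil((Z_ALPHA + Z_BETA) ** 2 * variance / effect ** 2)

# Detecting a lift from 10% to 13% conversion
n_per_variant = min_sample_size(0.10, 0.13)
```

For a 10% to 13% lift this works out to roughly 1,800 visitors per variant - far more than the 500 visitors at which many teams are tempted to call the test.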
Set your sample size before starting the test, then evaluate exactly once at the end. Peeking at the results daily and stopping when something looks significant inflates false-positive rates dramatically - what looks like a 95% confident win can have an actual 60% chance of being noise. The minimum sample size in the calculator is your stopping rule; respect it.
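The inflation from peeking is easy to demonstrate with a quick Monte Carlo experiment. The simulation below is purely illustrative (the helper names and parameters are invented for this sketch): it runs A/A tests where both variants are identical, so every "significant" result is by definition a false positive, and compares checking the p-value daily against evaluating once at the end.

```python
import random
from math import sqrt, erf

random.seed(42)  # fixed seed so the experiment is reproducible

def p_value(n_a, c_a, n_b, c_b):
    """Pooled two-proportion z-test, two-sided p-value."""
    p = (c_a + c_b) / (n_a + n_b)
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (c_b / n_b - c_a / n_a) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def run_aa_test(rate=0.10, daily=100, days=20, peek=True):
    """Simulate an A/A test (no real difference between variants).
    Returns True if the test is falsely declared significant."""
    n_a = n_b = c_a = c_b = 0
    for _ in range(days):
        c_a += sum(random.random() < rate for _ in range(daily))
        c_b += sum(random.random() < rate for _ in range(daily))
        n_a += daily
        n_b += daily
        # Peeking: stop the moment any daily check looks significant
        if peek and p_value(n_a, c_a, n_b, c_b) < 0.05:
            return True
    # Disciplined version: evaluate exactly once, at the end
    return p_value(n_a, c_a, n_b, c_b) < 0.05

trials = 500
peeking_fp = sum(run_aa_test(peek=True) for _ in range(trials)) / trials
single_fp = sum(run_aa_test(peek=False) for _ in range(trials)) / trials
```

With daily peeking over 20 days, the false-positive rate lands several times higher than the nominal 5%, while the single end-of-test evaluation stays near 5% - which is exactly why the minimum sample size should be treated as a fixed stopping rule.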
The A/B Test Significance Calculator runs entirely in your browser. Nothing you enter is uploaded, logged, or shared with third parties - the math happens locally and your inputs disappear when you close the tab. There is no signup, no email collection, and no daily-use limit.