How important is it to avoid A/B test collisions?
Let's say you have 3 A/B tests running concurrently:
- Control vs A
- Control vs B
- Control vs C
Unless you specifically work to ensure that tests do not collide (run on top of each other), you will end up with populations of users who see the combinations A+B, A+C, B+C, and A+B+C.
Technically, though, if you're looking at the results of experiment 1, both Control and A should receive an equal amount of pollution from experiments 2 and 3, so the only difference in performance between experiment 1's Control and A should be just the impact of A.
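To make that concrete, here's a quick simulation sketch of the "pollution cancels out" argument. It assumes hypothetical 50/50 splits and independent per-user randomization in each experiment (real systems typically hash user IDs with a per-experiment salt, but the effect is the same):

```python
import random

random.seed(0)
n_users = 100_000

# Independently randomize every user into each of the three experiments.
# Hypothetical 50/50 splits; independence between experiments is the key assumption.
users = [
    {
        "exp1": random.choice(["control", "A"]),
        "exp2": random.choice(["control", "B"]),
        "exp3": random.choice(["control", "C"]),
    }
    for _ in range(n_users)
]

def exposure_rate(arm, other_exp, other_variant):
    """Share of users in a given exp1 arm who also see a variant of another experiment."""
    in_arm = [u for u in users if u["exp1"] == arm]
    return sum(u[other_exp] == other_variant for u in in_arm) / len(in_arm)

# Both exp1 arms see roughly 50% exposure to B and to C, so the pollution
# is balanced and differences between Control and A isolate A's effect.
for arm in ("control", "A"):
    print(arm,
          round(exposure_rate(arm, "exp2", "B"), 3),
          round(exposure_rate(arm, "exp3", "C"), 3))
```

The caveat this sketch doesn't capture: if A and B *interact* (e.g. both change the same page), balanced exposure still means experiment 1 measures the effect of A *in a world where half of everyone sees B*, not A in isolation.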
Thoughts? Is it always best practice to avoid collisions, or can we assume that, so long as pollution is evenly distributed, we're good?