A/B testing usability with static user count
I'm new to A/B testing and I have a few questions.
The situation
I would be testing an information system with no new users, so the user count is more or less constant. The system contains a big form that users fill in. I won't be measuring conversion rates or anything like that. The aim is to measure completion times for this form, and the goal is to improve the form so that it takes users less time to fill in.
Some users might fill in this form once a month, while others might fill it in multiple times a day.
The questions
- Do I split users into two groups based on form count (so each group gets approximately the same number of filled forms) or based on user count (so each group contains approximately the same number of users)?
- Can I treat each form completion as one "instance" (instead of each user), despite the fact that one user can fill in multiple forms?
- How do I calculate how long I should run the test to get statistically significant results?
For example, I found a sample size calculator (https://www.surveysystem.com/sscalc.htm) and entered the following:
- Confidence Level: 95%
- Confidence Interval: 5
and as output I get 384. Is 384 the number of form completions needed for each variant?
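If it helps, here is how I think the calculator arrives at 384. I'm assuming it uses the standard sample size formula for estimating a proportion with a worst-case p = 0.5 (that's just my guess at what it does, not something stated on the site):

```python
# My assumption about what the calculator computes (standard proportion formula)
z = 1.96   # z-score for a 95% confidence level
e = 0.05   # confidence interval of 5, i.e. +/- 5 percentage points
p = 0.5    # worst-case proportion used when the true value is unknown

n = z**2 * p * (1 - p) / e**2
print(round(n, 2))  # 384.16, which the calculator shows as 384
```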
Let's say there are 70 form completions a day on average. Does that mean I have to run the test for 11 days? (The calculation is 384 / 70 * 2, multiplied by 2 because there are the A and B variants.) Or should I round it up to full weeks (so 14 days in this case)?
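This is the calculation I have in mind, written out (a rough sketch with my own numbers; 70 completions per day is just my average across both variants):

```python
from math import ceil

sample_per_variant = 384   # output of the sample size calculator above
variants = 2               # A and B
completions_per_day = 70   # my average, counting both variants together

days = ceil(sample_per_variant * variants / completions_per_day)
print(days)                # 11

# Rounded up to full weeks:
full_weeks_in_days = ceil(days / 7) * 7
print(full_weeks_in_days)  # 14
```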
I apologize if my questions are very simple. I have been reading quite a lot about A/B testing, but it usually focuses on conversion rates, and I can't seem to apply that to my situation.