Precision vs. usefulness
I presented a testing approach (specifically, symbolic execution) for a special-purpose language, and I now have a problem evaluating it. I plan to evaluate both the correctness and the usefulness of the approach. My own idea for correctness is to compute precision: I created 100 mutants of one case study by injecting a single error into each of them, and I calculate the ratio (number of real errors) / (number of detected errors), i.e., the fraction of the errors the tool reports that correspond to the injected ones.
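For concreteness, the sketch below shows how I compute this precision value; the data structure and the names (`mutants`, `reported`, `matches_injected`) are purely illustrative placeholders, not my actual tooling:

```python
# Illustrative sketch with hypothetical data: one record per mutant.
# 'reported'         -> the tool flagged an error in this mutant
# 'matches_injected' -> the flagged error is the injected (real) one

mutants = [
    {"reported": True,  "matches_injected": True},   # correctly detected
    {"reported": True,  "matches_injected": False},  # spurious report
    {"reported": False, "matches_injected": False},  # injected error missed
    # ... 100 mutants in total in my actual study
]

detected = sum(1 for m in mutants if m["reported"])
real = sum(1 for m in mutants if m["reported"] and m["matches_injected"])

# precision = real errors / detected errors (guard against zero reports)
precision = real / detected if detected else 0.0
print(f"precision = {precision:.2f}")
```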
However, I have no idea how to measure the usefulness of the approach, or how usefulness differs from correctness. Could you please help me in this regard?