How many test participants to ‘prove’ an issue?

It's often said that 5-10 usability testing participants are what you should aim for to identify the bulk of issues in a system, and there's plenty of research behind this. That's the right way to do usability testing: it's about uncovering issues rather than proving anything.
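
(As I understand it, that research is usually summarised as a problem-discovery curve, something like 1 - (1 - p)^n, with a per-participant detection rate often quoted as roughly 0.31. A rough sketch of how I read it, with that assumed value:)

```python
# Rough sketch of the problem-discovery model I understand the
# "5-10 participants" advice comes from: expected proportion of issues
# found with n participants, assuming each participant independently
# hits a given issue with probability p (often quoted as ~0.31).
def proportion_found(n: int, p: float = 0.31) -> float:
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 8, 10):
    print(f"{n} participants -> ~{proportion_found(n):.0%} of issues found")
```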

However... stakeholders often demand proof that something really is an issue: proof that the participants who had trouble with it are not just a handful of stupid people out of a user base of half a million.

I wonder: is there any way maths can be brought into usability testing analysis, on top of the useful work of identifying issues and their likely causes, to put numbers on the likelihood of a person encountering an issue?

Yes, if I have 10 participants and 8 have the issue I can say it's an 80% chance, but that's relying on the people I tested with. How can this be blown up to take into account the size of the overall user base? Would the direction to take be a separate calculation on my 80% to show the likelihood of my participants actually being representative users?
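
To make it concrete, is a confidence interval on that 80% the kind of thing I should be calculating? A quick sketch of what I have in mind (assuming a Wilson score interval is the right tool here, which I'm not sure about):

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for an observed proportion (z = 1.96 for ~95%)."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return centre - half_width, centre + half_width

# 8 of 10 participants hit the issue: the point estimate is 80%,
# but the interval is wide with such a small sample.
low, high = wilson_interval(8, 10)
print(f"95% CI: {low:.0%} to {high:.0%}")  # roughly 49% to 94%
```

If that's roughly right, the wide range would be what I report back, but I still don't see where the size of the overall user base comes into the calculation.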