What UX technique or techniques do you use to test users' understanding of specific domain knowledge?
There are cases where the domain knowledge is so specific that you want to "test the waters" and see whether the user understands what your team is after. Will they grasp the content you are showing them? Will they understand, even if only a little, the underlying complexities of what your team is trying to address?
- Contextual inquiries will not help you test that particular, advanced area of domain knowledge. They are too broad. They certainly help you understand the general context, but they give you no place to test specific information.
- Usability tests are not available at that point, as you are still at the ideation stage of your content and your focus is not yet on what needs to be fixed in terms of usability.
- Surveys may help, but you lose the context: the storyline behind the information, which in itself provides value to what you are after.
What do you use in those cases?
Let me give an example:
Let's say that your users' frustration/need has been identified as: "Not being able to prioritize non-functional tests related to performance (speed) metrics across the business." The reason is that the connection between performance metrics (timing points, distributions, predictive analysis) and business value is hard to understand. We could do many things to help them solve the problem we've identified but, beforehand, there is some domain knowledge that needs to be tested.
I’m wondering if anyone has ideas about a UX technique that could address this problem, as expressed in the example above.
We've developed one possible way to address this but, unfortunately, we lack the quantitative data that a good consultancy would have to test the validity of such a method thoroughly. We call it Content Inquiry, and it's a qualitative method.
1) You show (and record) some prototypes to the target users, with almost no visual design work on them, keeping them as neutral as possible.
For each prototype covering that specific domain knowledge, we ask some questions:
- How do you read this?
- How would you expect to receive this?
- What would be the most/least relevant information for you?
- What would you do next?
- Is any important information missing?
We are not after the specific answers to these questions, nor after outputs to improve the mocked-up interface we shared. Rather, the questions serve as a trigger to help users expand on their explanations, so we can see whether the information we are providing is understood to some extent.
2) With that, observing the users' interpretations, a collaborative team takes notes on their context. What do they find valuable? What did they pay the most attention to? If they didn't get the content details, that's fine; what were they after instead?
3) Finally, we run an affinity-mapping session to aggregate the outputs into actionable items to address afterwards.
How does this sound?