What are some arguments against user experience testing?
My argument is a little long. If you want to read it in a better format, go here: http://000fff.org/getting
If you had an idea for a new hammer and you wanted to test it, which of the following ways do you think would yield the most valuable feedback?

1. Showing customers a drawing of the hammer
2. Showing them a cutout of the hammer
3. Handing them a Styrofoam prototype of the hammer
4. Handing them the actual hammer
I am guessing most of you would choose #4. After all, getting a feel for the hammer requires the customer to actually try it out. Looking at a picture of a hammer, cutting it out, or even providing a Styrofoam prototype simply won’t provide you or the customer with sufficient foundation on which to evaluate it.
However, if I asked you to test a digital product, whether it be a website, an application or an e-commerce site, most of us would choose #1, 2 or 3.
Isn’t that odd?
A simple product like a hammer is best tested in its final form. But a digital product, which in some ways is much more complex, is tested by lesser means, as if you could extract meaningful feedback about a hammer from a drawing of it.
Nonetheless, this is the current reality of applied UCD, or User-Centered Design.
Usability tests, focus groups and personas, to name a few, are all intended to increase usability and create better products by having users test them.
There is a flaw in this approach: it doesn’t deliver on that promise. Valuable resources are being spent on what one could arguably call a placebo for uncertainty.
Clarification: Some people have commented on this. I am not saying that you shouldn’t do 1, 2 or 3. I am saying you shouldn’t test 1, 2 or 3 on users—you should, though, go as far as 4 and then test as much as possible.
Correct theory, wrong application
Now, there are many different interpretations of UCD. Is it a theory, or a process, or both?
Wikipedia states:
“In broad terms, user-centered design (UCD) is a design philosophy and a process in which the needs, wants, and limitations of end users of an interface or document are given extensive attention at each stage of the design process. User-centered design can be characterized as a multi-stage problem solving process that not only requires designers to analyze and foresee how users are likely to use an interface, but also to test the validity of their assumptions with regards to user behaviour in real world tests with actual users. Such testing is necessary as it is often very difficult for the designers of an interface to understand intuitively what a first-time user of their design experiences, and what each user’s learning curve may look like.
The chief difference from other interface design philosophies is that user-centered design tries to optimize the user interface around how people can, want, or need to work, rather than forcing the users to change how they work to accommodate the software developers approach.”
Doesn’t sound too bad, does it?
In theory, it makes a lot of sense. Getting feedback from users about your product will provide you with valuable insights as to what works and what doesn’t. The users are, to state the obvious, the ones who are going to be using it, so their input is going to be valuable.
It’s also a correct assumption that designers, in theory, cannot foresee how users are likely to use an interface, thus their assumptions must be tested.
A long learning curve is clearly an issue when it comes to getting users to adopt your product. It needs to be easy to sign up, log in, buy, navigate, understand, and utilize what your product offers. Too often the software determines the option space of designers, forcing them to design less than optimal solutions.
So, in many ways, there is no problem with the philosophical definition of UCD. Involving users is critical if you want your products to be used.
No, the devil is in the actual practical outcome of this philosophy—what I call applied UCD.
Here is a typical UCD process for web or application design (source: http://www.w3.org).
Have a look at the model again, at the last two parts of the process. Yes, you got that right: generally there isn’t much to say about implementation and deployment from a UCD point of view. They are normally not considered part of the process. On the contrary, it is most often assumed that, once you have gone through the process of choices 1, 2 and 3, the UCD process is complete, and you have established a solid solution.
So, as you can see, a typical UCD process, to define it in terms of our hammer test, is based on testing the drawing, the cutout and the Styrofoam hammer—not the actual hammer.
So why is that? How has something that is such an obviously problematic implementation of the goal of UCD become the norm?
I believe there is a series of reasons for this, which I will attempt to cover here.
Waterfall process
Historically, development of digital products has been based on the waterfall method. This made it very expensive for organizations to change things once the product was deployed. So, with UCD promising to test the usability and relevance of the product before it went into production, it’s no wonder it gained so much traction in the 1990s.
UCD has its origins in academia, not in design or engineering
With heavyweights such as Jakob Nielsen, Jared Spool and Don Norman’s extensive and fantastic research, it’s easy to see why UCD has gained traction. There is an answer to almost every question or problem you can devise. Facts provided about users are based on years of research.
Most UCD proponents and practitioners are academics
It’s easy to see why the usability community has picked up on this research and why they consider it more valuable than, say, the theory of typography, grid structures, color theory, engineering, animation, programming, development process, or even marketing. It’s a continuation of the academic process they know so well, no matter their original domain.
‘Get out of jail free’ card
As I mentioned in an earlier post, most large and management-driven organizations don’t allow for mistakes. They religiously demand certainty, and consequently punish uncertainty. If you don’t have your back covered, if your project tanks, you are headed for trouble. UCD is the perceived guarantee that your product has been tested thoroughly by way of user tests. Managers use this as a way to get some of the responsibility outsourced.
The user’s advocate
Humans are not just machines; we aim to serve higher goals. Many usability experts see themselves as advocates for the user. There is a sense of purpose greater than just “doing the work” within the UCD community, and it has created a strong, almost moral culture around users.
With the above in mind, it’s not difficult to understand why UCD has gained such success over the last decade.
With extensive research, great academic minds, economic incentives, get-out-of-jail-free cards and a sense of certainty, it’s a sure sell. Why would companies not want to utilize UCD to design better and more usable products?
The following observations may make this more evident.
Customers are not users
The first issue is with the term user. It might seem like a rhetorical nitpick, but it speaks volumes about one of the main issues and misconceptions of applied UCD.
Users are of statistical value; that is, what we know of users, we know through the research that has been done on them. Users are of relevance to the field of usability, but not to the field of product design. That knowledge has no actual bearing on whether a product will gain adoption or not.
Users are not customers
A customer is someone who, through all the noise of competition, has chosen your product over someone else’s.
What might be a no-go from a user’s point of view (statistical) might not be a no-go from a customer’s (utilitarian) point of view.
What matters is that you design for the customers of the organization instead of the users of your ideology. If you design for users, you end up spending a lot of time on pseudo problems that might have ideological value but no implication for the adoption of a product.
According to studies by Jakob Nielsen, users don’t scroll. Although this might be the case from a statistical point of view, it is certainly not of pragmatic value. Maybe what was tested was not relevant enough to make the user want to scroll. Maybe the design was executed in a way that obscured the fact that you could scroll.
In fact, users scroll more than ever, yet this myth is continuously being upheld.
If you design for customers, you are forced to think about how to benefit the relationship between the customer and the organization. If you design for customers, you design for the jobs they are trying to do. You are designing for an optimal solution, not necessarily a perfect solution.
Be the advocate for the customer relationship, not for the users.
The map is not the territory
We can draw up as many plans, diagrams and projections as we wish. A product before it’s launched is not a product yet. It’s a set of ideas, sketches, designs and assumptions, maybe even some patent-pending technology.
It’s a map of how the designers envision the product. But it’s not a product. It has no actual form; it doesn’t belong to an ecosystem; the user can only think about how it might work. In short, it doesn’t contain any of the qualifiers that make a product a product.
So the kind of feedback that UCD provides before and after deployment is not the same. They are two very different paradigms.
One is a paradigm of ideology and thus provides information of theoretical value. The other is a paradigm of application and utility, and provides real-world, real-time feedback.
Pseudo environment
What you are solving in the wireframe phase are problems inherent in the wireframe phase, not problems with the product. What you are solving when testing the prototype are problems inherent in the prototype, not in the final product. There is only one true test and that is the final product. Only then will you start to receive valuable feedback in combination with quantitative feedback. You will get it from where it matters most.
No transcendence
By this I mean there is no quality transfer from the insights you establish in testing your product into the design and development phase. One reason, as already stated, is that problems are addressed in a pseudo environment instead of an actual one.
Another reason is that what makes products successful and usable is only really obvious in the final form and environment. With UCD notorious for dealing with the product before implementation and deployment, it relies on the assumption that it can solve problems before they arise. It relies on the assumption that, by fixing the usability issues, your product will have a greater chance for success. This is, quite simply, wrong, and flies in the face of reality.
Intuition is neither objective nor always the goal
Intuitive behavior is not an objective standard. It’s a slow-moving target. If something becomes the default way of doing something, then it has become the intuitive way of doing it. You don’t need to test for that; you use design patterns. They are the product of the norm.
Some of the most successful products weren’t intuitive to begin with. They became intuitive after the product gained sufficient popularity. So, even if a product fails the intuitive test, it does not impact whether or not it becomes a success.
Usability studies and focus groups are for refinement, not for innovation
Again, the problem is in thinking in terms of users instead of customers. Something might fail to be intuitive, or fail to make sense from an ideological point of view, and yet still help customers do a job well enough that they are willing to endure the learning curve.
Therefore, forcing a user-centric model to evaluate your ideas can be completely damaging to your assessment of your product’s likelihood of success.
When you test your product, whether by focus groups or user tests, many other factors determine the outcome of a session: social dynamics, groupthink, the “dumb” expert, and the fact that participants are dealing with the product in the abstract.
Implementation is a black box
When most companies exercised waterfall methodology, UCD had some value. But, today, the landscape is different. Implementation is part of the design process. With agile, continuous integration and continuous deployment methodologies, and the speed at which products appear, there is no longer room or need for lengthy processes that only obfuscate the goal of making good solutions by separating the UCD phase from the implementation phase.
Good UCD processes mean good products
Well, no, unfortunately, this is not the case. Successful products have nothing to do with the process as such. A good process merely allows you to cover what needs to be done.
It allows you to collect the necessary data and create the structure of your product, to map out what needs to be done, who should do it, and in what order it needs to be done.
Putting users into the process after research and before the final product is finished provides primarily pseudo value. It confuses users with customers, and is responsible for valuable time being wasted in a pseudo environment tackling pseudo problems that have no bearing on those you may find yourself with after the launch.
However, at that point it’s too late, since applied UCD normally leaves the building right before implementation starts, which is where the product really starts to take shape.
This is where UCD really fails.
What is required to create good products is not the ability to test your idea or the usability with users. What is necessary is testing the finished product with customers and improving it from there. This requires a different approach to thinking, and a different set of skills.
Testing users is about testing the current state of usability and intuition. This belongs to the type of research that people like Jakob Nielsen, Jared Spool and others perform so well. It contains a lot of valuable knowledge and should be the foundation for anyone wanting to do UCD.
But testing customers means getting the product into their hands and learning how it behaves: pinpointing problem areas and then figuring out ways to improve it. It allows you to test actual problems instead of a number of pseudo problems that may never arise.
Summing up the problem with applied UCD
Applied UCD has wedged itself into a corner from which it will be difficult to escape. But escape it must, if it wants to remain relevant.
On one hand, UCD is as successful as ever; the area is very well covered. Lots of research and lots of value have historically come out of this area—understanding the users is important.
On the other hand, it has become successful on the wrong premise. Too often, it is used simply to push responsibility away from those who should have it. And when it’s not, it’s being misapplied for some of the reasons mentioned above. It’s perceived as providing certainty where none really exists.
The speed at which products, services and applications are launched today is increasing rapidly. There is more competition, but products are also built and iterated much more rapidly than they once were. With the move away from waterfall methodology, the consequence is that UCD proponents and practitioners need to rethink just where in the process they see themselves adding value. Nothing today hinders a process in which products are launched first, then tested, and then iterated as necessary. A revised UCD process would simply move evaluation to after implementation and deployment: launch, test with real customers, and iterate.
In this way, users become customers, giving you the opportunity to test where it matters with valuable feedback.
This will no doubt mean that many have to re-educate themselves and rethink how they approach design, whether it be UX, IA, UI or GUI. Nonetheless, as stated, it is necessary in order to stay relevant. A pivotal part of this will also be to re-educate clients and help them understand that they must look at product design a little differently.
Design is a decision, not a democracy. If you are serious about using design strategically, then courage is the strategic advantage you should aim for. With the ability to quickly adjust wrong assumptions, it’s not about risk, just common sense.
Let me know what you think.