Guest Post: What is usability expertise and what does it actually deliver?

While working with multiple clients, I noticed that the common perception of usability is something vague like "it improves convenience and therefore the conversion rate."

So it used to be enough to build a sales pitch around the following mantra:

"We will improve your usability, the form's conversion rate will go up, and money will rain down on you generously starting tomorrow. Here are the faces of our satisfied, dynamically developing clients. Anyway, let's start with a UX audit."

Conversion is much easier to sell: it is in the client's language, and the result is clear and immediately visible. By now, though, the community has learned to build good interfaces, helped by plenty of working examples, published case studies, and decent off-the-shelf solutions. Users, for their part, have become more experienced, and the ability to conjure up a refrigerator at home at a good price via the Internet has stopped being a kind of shamanism.

So interfaces have become better and users more experienced. Cases of spectacular conversion growth are becoming rarer, and raising conversion by hundreds of percent is now possible only on truly hopeless designs.

Now let's look at the usability services that are usually available on the market and the benefits they can bring. In my practice, when someone says "make me some usability," they may mean anything from this list.

1. Interface design
A well-known and popular service: the client wants the designer to propose a good interface hypothesis that satisfies all types of requirements from the very beginning. Sketches, wireframes, prototypes, and analysis are usually included in the price of the service. In the end, the customer receives an interface prototype, sometimes with documentation describing how the user interacts with the interface and with the system as a whole.

It is quite easy to start designing interfaces in well-studied e-commerce: there are plenty of off-the-shelf, tested solutions. To take on e-commerce design, it is enough to read a few foundational books, learn the successful patterns, and keep a set of references to case studies at hand to give your words some weight.

That is enough to design almost any commercial website. As a rule, at this level the UI designer and the UX designer are the same person.

The next level is noticeably more complex and more demanding in terms of competence. This is where CRM systems, professional interfaces, web services, complex desktop programs, and non-standard mobile applications live.

The main challenge is to gather all the requirements into a single interface without forgetting anything. Rich experience in requirements gathering is extremely useful here, which is why you often see interface designers who came from business analysis. At this level, by the way, you have to think less about making the interface distinctive and more about its economic efficiency and about satisfying every possible requirement and constraint. My experience suggests that for this level of design you need a specialist inside the company, or better yet a full-service agency.

2. Usability expertise or usability audit
This service implies that an expert sits down, looks at the interface, and tells you how to make it better. The output can be anything the customer wants; it depends entirely on the level and competence of the person doing the work. In my opinion, the market is full of "experts" who have read a few books, armed themselves with a usability checklist, and memorised all sorts of simplistic rules such as "7±2".

After that, these people start calling themselves "experts" and churn out batches of 3,000-ruble "audits," often bundled with an SEO audit.

In my opinion, acquiring expert competence requires several years of direct participation in creating interfaces, testing them, and measuring the results. People who only design and audit, and who have never interviewed users or dug through the analytics of the interfaces they created, are busy manufacturing spherical horses in a vacuum: recommendations detached from reality.

A competent expert's report will include at least:

• A list of the problems found, ranked by severity (for example, from "minor defect" to "critical error").
• A detailed explanation of each problem, describing its essence and a possible solution. Some problems will come with examples: references to case studies, research, or existing solutions.
• The problems will be presented coherently: not as an assessment of ten separate screens, but as an assessment of the sequential interaction scenarios typical of the target groups and personas.
• The expert will draw conclusions from site traffic statistics if such data is shared, and will base recommendations on real numbers. For example, a complex redesign of a page that receives 0.01% of the traffic may make no economic sense.
• Assumptions made by the expert that require separate verification will be listed separately.

What a competent expert's report will not contain:

• Nobody in their right mind will predict how conversion or any other metric will change. No one has a knowledge base that allows making numeric predictions with an acceptable margin of error.
• The expert cannot always offer a solution to a problem: there are difficult cases where the solution has to be designed separately, and such work goes well beyond the scope and cost of the audit.
• There will be no excursions into related topics such as branding, marketing, or visual design.

3. User testing and research
Very roughly, this is research in which people from the target audience (respondents) are recruited and given the interface to work with. What happens next depends on the design of the particular study.

The study can be qualitative or quantitative. In qualitative research, the researcher most often asks why the user did something, what reaction they expected from the interface, what they did not understand, and so on. Alternatively, the user is asked up front to voice all their thoughts and impressions while working with the interface. (By the way, there are quite a few done-for-you services for this.)

In quantitative studies, the user is not distracted from the task, so that, for example, task completion time can be measured. Almost always, after the session, respondents fill in questionnaires that help gauge their subjective satisfaction with the interface.
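
The article does not name a specific questionnaire; a common choice for measuring subjective satisfaction is the System Usability Scale (SUS), ten statements answered on a 1-5 scale. As an illustrative sketch only, here is the standard SUS scoring in Python:

```python
def sus_score(responses):
    """Compute a System Usability Scale (SUS) score from ten answers on a 1-5 scale.

    Odd-numbered (positively worded) items contribute (answer - 1),
    even-numbered (negatively worded) items contribute (5 - answer);
    the sum is multiplied by 2.5 to land on a 0-100 scale.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 answers")
    total = 0
    for i, answer in enumerate(responses, start=1):
        if not 1 <= answer <= 5:
            raise ValueError("answers must be on a 1-5 scale")
        total += (answer - 1) if i % 2 == 1 else (5 - answer)
    return total * 2.5


# One hypothetical respondent's answers to the ten SUS statements:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```

A single score like this says little on its own; it becomes useful when compared across versions of the interface or against published SUS benchmarks.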

In practice, qualitative studies are conducted much more often than quantitative ones, for two reasons: the high cost of quantitative studies and the widespread use of A/B testing systems, which provide a large amount of quantitative data at a fraction of the cost.

Sometimes, in pursuit of beauty and "seriousness" in the report, researchers put shocking claims on slides with especially pleasing graphs: "100% of Group A respondents completed the task in less than 5 minutes." And then, in small grey print, the shameful footnote: "Sample size: 4 respondents." In other words, they present the results of qualitative research as if they were quantitative.
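
To see why such a claim carries no quantitative weight, here is a small illustration of my own (not from any particular report): if all n recruited respondents complete the task, the exact Clopper-Pearson lower confidence bound on the true completion rate is (alpha/2)^(1/n), and with n = 4 that bound is only about 40% at 95% confidence.

```python
def lower_bound_all_success(n, confidence=0.95):
    """Exact (Clopper-Pearson) lower confidence bound on a success rate
    when all n observed respondents succeeded.

    Solves p**n = alpha/2 for p: the smallest true rate that would still
    produce n successes out of n with probability alpha/2.
    """
    alpha = 1.0 - confidence
    return (alpha / 2.0) ** (1.0 / n)


for n in (4, 20, 40):
    print(f"n = {n:2d}: the true completion rate could be as low as "
          f"{lower_bound_all_success(n):.0%} at 95% confidence")
# n =  4: ... as low as 40% ...
# n = 20: ... as low as 83% ...
# n = 40: ... as low as 91% ...
```

So "100% of 4 respondents" is perfectly compatible with a true completion rate well under half the audience; it is a qualitative observation, not a quantitative result.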

User testing has nuances that make a seemingly simple process quite sensitive in its results. For example, you need to screen out professional respondents who regularly attend interviews and focus groups. And the results of testing in a laboratory with a one-way mirror and a bank of cameras may differ greatly from those of a study conducted at home; this was especially true when we tested an interface for mothers of young children in their own homes.

Some respondents say things just to be helpful, inventing difficulties and describing them with enthusiasm; you need to be able to bring such a person back on track, or simply not take their answers at face value. These difficulties disappear with experience.

Testing uncovers interaction errors, comprehension difficulties, and comments about the interface: very nourishing food for the interface author's mind. I strongly recommend that the people responsible not only read the report but also watch the session recordings; this practice gives a good immersion into the user's world.

4. Usability testing using eye-tracking
As part of usability testing, researchers also offer to track users' gaze with special equipment. The result is a set of beautiful pictures.

Eye-tracking is better classified as quantitative research than qualitative, and to obtain valid quantitative data you need about forty respondents for each target group.

Few companies can afford that kind of spending. Beautiful attention maps are, of course, nice to sell and to present to senior management, but from a practical point of view it is hard to draw any conclusion from them that would actually change the design. Even with proof of where people are looking, you still do not know why they are looking there; the object of attention may be, for example, something that irritates them.

The gaze path, that is, the sequence of fixation points and how long the eye lingers on each, is much more informative in this respect. But again, it takes an experienced specialist to correlate the interviewer's questions, the eye movements, the respondent's actions in the interface, and the respondent's facial expressions, and to draw conclusions from all of that.

A better alternative might be just to use good old heatmaps.

Why not A/B testing
If your site has a lot of traffic and you have plenty of ready-made hypotheses to test, you can and should run A/B tests.

But few people can wait the two or three months it takes to collect enough data for statistically significant conclusions.
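
To make the waiting time concrete, here is a rough sketch using the classic sample-size formula for comparing two proportions; the traffic and conversion figures are hypothetical, not taken from the article:

```python
import math


def ab_test_days(baseline, relative_lift, daily_visitors,
                 alpha_z=1.96, power_z=0.84):
    """Rough number of days needed for a two-variant A/B test on conversion rate.

    baseline       - current conversion rate, e.g. 0.03 for 3%
    relative_lift  - smallest relative improvement worth detecting, e.g. 0.10 for +10%
    daily_visitors - visitors entering the test per day (split 50/50)
    alpha_z        - z-value for significance (1.96 ~ 95%, two-sided)
    power_z        - z-value for statistical power (0.84 ~ 80%)
    """
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    # Classic sample-size formula for comparing two proportions
    n_per_variant = (
        alpha_z * math.sqrt(2 * p_bar * (1 - p_bar))
        + power_z * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))
    ) ** 2 / (p2 - p1) ** 2
    return 2 * n_per_variant / daily_visitors


# Hypothetical shop: 3% baseline conversion, hoping to detect a +10% relative lift,
# with 1,000 visitors per day entering the test.
print(f"{ab_test_days(0.03, 0.10, 1000):.0f} days")  # roughly 106 days, i.e. over three months
```

Halve the traffic or the detectable lift and the required duration grows sharply, which is exactly why low-traffic sites rarely get statistically significant A/B results.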

And if you have little traffic, or you are going to buy traffic specifically for the test, you are better off spending that money on analysing and improving usability, or UX in general.

Effects
Beyond an increased conversion rate, I have seen the following pleasant effects from improving usability in my practice:

• Increased trust in the site, measured by the average order value: the higher the trust, the higher the average order. The effect appears with a lag equal to the average time people spend on the purchase decision; for a site selling entrance doors, for example, that lag was about a month and a half.
• Improved readability, especially if you or your subcontractor employed a UX writer.
• Increased brand loyalty thanks to the pleasant interaction, although this already overlaps with user experience in the broader sense and depends heavily on the company's business processes. A related effect is repeat orders and a growing number of direct visits: repeat orders come from loyalty, direct visits from word of mouth. I have only managed to measure the word-of-mouth effect a couple of times, on small projects that were not being advertised, so the measurements were not too noisy.

Author's bio: Dmitrii B. is the CEO and founder of GRIN tech (a full-service & white-label agency).


