Evaluate the Existing Website Using the Usability Evaluation Criteria
Usability evaluation refers to any of a set of methods that allow a user experience (UX) expert to assess the usability of a product or system at varying levels of detail. Usability evaluation is often confused with usability itself, the holistic concern of designing and evaluating systems for people. As described in earlier chapters, the terms usability, UX, and interaction are often conflated; given that the field is still evolving, this is not surprising.
Usability refers to the quality of a user's experience when interacting with products or systems, including websites, software, devices, or applications. Usability is about effectiveness, efficiency, and the overall satisfaction of the user. It is important to realize that usability is not a single, one-dimensional property of a product, system, or user interface. ‘Usability’ is a combination of factors, including:
• Intuitive design: a nearly effortless understanding of the architecture and navigation of the site.
• Ease of learning: how fast a user who has never seen the user interface before can accomplish basic tasks.
• Efficiency of use: how fast an experienced user can accomplish tasks.
• Memorability: whether a user who returns to the site can remember enough to use it effectively in future visits.
• Error frequency and severity: how often users make errors while using the system, how serious those errors are, and how users recover from them.
• Subjective satisfaction: whether the user likes using the system.
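Several of the factors above (efficiency, error frequency, satisfaction) are directly measurable from task logs. The following is a minimal sketch of how such per-task observations might be aggregated into factor-level scores; the record format, field names, and sample values are hypothetical, not part of the study described here.

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    # One participant's attempt at one task (hypothetical log format).
    completed: bool      # effectiveness: did the user finish the task?
    seconds: float       # efficiency of use: time on task
    errors: int          # error frequency
    satisfaction: int    # subjective satisfaction, e.g. a 1-7 rating

def summarize(records):
    """Aggregate per-task logs into simple factor-level scores."""
    n = len(records)
    return {
        "completion_rate": sum(r.completed for r in records) / n,
        "mean_time_s": sum(r.seconds for r in records) / n,
        "errors_per_task": sum(r.errors for r in records) / n,
        "mean_satisfaction": sum(r.satisfaction for r in records) / n,
    }

# Illustrative data only.
logs = [TaskRecord(True, 42.0, 0, 6), TaskRecord(True, 75.5, 2, 4),
        TaskRecord(False, 120.0, 3, 2)]
print(summarize(logs))
```

Note that a summary like this captures only the quantitative factors; intuitiveness and memorability require qualitative or repeated-visit measures.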
The usability test was constructed to address key features of the Evaluation Checklists Project website:
• The usability test instrument was pilot tested with one participant and revised prior to use with seven evaluators.
• The test method involved authentic users and authentic tasks (Dumas and Redish 1993).
• Sessions were conducted during a two-week period from September 25 to October 5, 2002.
• Sessions averaged approximately 45 minutes, ranging from 35 minutes to 1 hour.
• Participants performed a think-aloud protocol (Ericsson and Simon 1993), describing their thoughts as they completed each task.
• Empirical data were recorded by note-taking and, in some cases, audio-taping.
• Data collection was suspended when redundancy of data was reached.
• Data analysis involved identifying patterns of usage and common themes raised by test participants.
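The stopping rule above, suspending data collection once redundancy is reached, can be operationalized by coding each session's notes into themes and checking when a new session contributes nothing new. The sketch below assumes sessions have already been coded into theme sets; the themes and the simple "first session with no new themes" rule are illustrative, and real qualitative analysis is considerably richer.

```python
def saturation_point(sessions):
    """Return the 1-based index of the first session that adds no new
    themes, a simple stand-in for 'redundancy of data'."""
    seen = set()
    for i, themes in enumerate(sessions, start=1):
        new = set(themes) - seen
        if not new and i > 1:
            return i
        seen |= new
    return None  # redundancy never reached

# Themes coded from each think-aloud session (hypothetical data).
sessions = [
    {"navigation confusing", "labels unclear"},
    {"labels unclear", "search hard to find"},
    {"navigation confusing", "search hard to find"},  # nothing new
]
print(saturation_point(sessions))
```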
As has been noted, researchers should consider ways to reduce criterion deficiency and criterion contamination. We believe the easiest way to reduce criterion deficiency is to use several measures in the actual criterion, each focusing on a different characteristic of the UEM. In addition, it may be possible to combine multiple measures into a composite measure that has a stronger relationship to the ultimate criterion.
Dumas, J. S., & Redish, J. C. (1993). A practical guide to usability testing. Norwood, NJ: Ablex.
Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data. Cambridge, MA: MIT Press.
Virzi, R. A. (1992). Refining the test phase of usability evaluation: How many subjects is enough? Human Factors, 34(4), 457-468.
Wharton, C., Bradford, J., Jeffries, R., & Franzke, M. (1992). Applying cognitive walkthroughs to more complex user interfaces: Experiences, issues, and recommendations. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 381-388). New York: ACM.
Winer, B. J., Brown, D. R., & Michels, K. M. (1991). Statistical principles in experimental design. New York: McGraw-Hill.