Researchers can contribute their experience with experimentation to develop improved techniques for interface evaluation and user-experience assessment. Guidance in conducting pilot studies, acceptance tests, surveys, interviews, and discussions would benefit large-scale development groups, but additional attention needs to be given to smaller projects and incremental changes. Strategies are also needed to cope with evaluation for the many specific populations of users and the diverse forms of disability that users may have. In this Project 2, you are the expert in designing and constructing psychological tests, helping to prepare validated and reliable test instruments for the subjective evaluation of varying types of interfaces, from small mobile devices to very large displays, including specialized interfaces such as gaming. Such standardized tests would allow independent groups to compare the acceptability of interfaces.

SCOPE: You are working for an independent company and are tasked with designing an evaluation instrument used to profile users' skill levels with interfaces, which would be helpful in job-placement and training programs.

STEP ONE: Use PowerPoint (or another suitable tool from the MS Office suite) to design (draw) your HCI interface, and design an evaluation instrument to validate an interface for a small mobile device or a very large display, including specialized interfaces such as gaming. Show in your design how you would incorporate quality attributes, e.g., usability, universality, and usefulness, using an AI and/or machine-learning approach.

OBJECTIVE: This Project 2 should show how you best incorporate and evaluate qualitative data and dimensions such as fun, pleasure, joy, affect, challenge, or realism.
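One common way to make such qualitative dimensions measurable is a post-task Likert questionnaire whose ratings are aggregated per dimension. The sketch below is a minimal, hypothetical illustration (the dimension names and ratings are invented, not a validated instrument): it normalizes each dimension's mean rating on a 5-point scale to a 0-1 score so dimensions can be compared side by side.

```python
from statistics import mean

# Hypothetical 5-point Likert responses (1 = strongly disagree,
# 5 = strongly agree) from four participants, grouped by the
# hedonic dimension each questionnaire item targets.
responses = {
    "fun":       [4, 5, 3, 4],
    "pleasure":  [3, 4, 4, 5],
    "challenge": [2, 3, 3, 2],
    "realism":   [5, 4, 4, 4],
}

def dimension_scores(responses, scale_max=5):
    """Normalize each dimension's mean rating to a 0-1 score."""
    return {dim: round(mean(vals) / scale_max, 2)
            for dim, vals in responses.items()}

print(dimension_scores(responses))
```

A validated instrument would also require reliability checks (e.g., internal consistency across the items within each dimension) before the scores are compared across interfaces.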

STEP TWO: Answer each of the following questions, explaining why, as it relates to evaluation and the user experience.

  • Would benchmark datasets and task libraries help standardize evaluation?
  • How useful can researchers make automated testing against requirements documents?
  • How many users are needed to generate valid recommendations?
  • How can we better explain the differences between users’ perceptions of a task and the objective measures?
  • How do we select the best measure for a task?
  • How can life-critical applications for experienced professionals be tested reliably?
  • Is there a single usability metric that can be used and compared across types of interfaces?
  • Can we combine performance data and subjective data and create a single meaningful result?
  • Is there a scorecard that can be used to aid in the interpretation of usability results?
  • Is there a theory to explain and understand the relationship between measures?
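On the question of combining performance data and subjective data into a single meaningful result, one textbook approach is to standardize each measure (z-scores), flip the sign of measures where lower is better, and average the components. The sketch below uses invented per-participant data purely for illustration; the weighting (a plain average) is an assumption, and a real study would need to justify it.

```python
from statistics import mean, stdev

def zscores(xs):
    """Standardize a list of measurements to mean 0, stdev 1."""
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

# Hypothetical per-participant data: task completion time (seconds),
# error count, and a subjective satisfaction rating (0-100).
times  = [42.0, 55.0, 38.0, 61.0]
errors = [1, 3, 0, 2]
satisf = [78, 60, 85, 55]

# Negate time and error z-scores so that higher always means better,
# then average the standardized components into one composite score.
components = [
    [-z for z in zscores(times)],
    [-z for z in zscores(errors)],
    zscores(satisf),
]
composite = [mean(vals) for vals in zip(*components)]
for pid, score in enumerate(composite, start=1):
    print(f"participant {pid}: {score:+.2f}")
```

Because each component is standardized, the composite scores sum to zero across participants; the score ranks participants relative to the sample rather than against an absolute benchmark, which is one reason a single cross-interface usability metric remains an open question.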