Evaluation Strategies for HCI Toolkit Research

Ledo, D., Houben, S., Vermeulen, J., Marquardt, N., Oehlberg, L. and Greenberg, S. (In Press)
Evaluation Strategies for HCI Toolkit Research. In Proceedings of the ACM Conference on Human Factors in Computing Systems (ACM CHI'18). (Montreal, Quebec, Canada), April 21-26, 2018, 17 pages. Earlier version as Report2017-1096-03.

View Publication and Related Materials

PDF Paper (2018-EvaluationStrategies.CHI.pdf)

Abstract

Toolkit research plays an important role in the field of HCI, as it can heavily influence both the design and implementation of interactive systems. For publication, the HCI community typically expects such research to include an evaluation component. The problem is that toolkit evaluation is challenging: it is often unclear what ‘evaluating’ a toolkit means and which methods are appropriate. To address this problem, we analyzed 68 published toolkit papers. From that analysis, we provide an overview of, reflection on, and discussion of evaluation methods for toolkit contributions. We identify four toolkit evaluation strategies and the techniques each employs, and we offer a categorization of these strategies for toolkit researchers, along with a discussion of the value, potential biases, and trade-offs associated with each.

Keywords

Toolkits; user interfaces; prototyping; design; evaluation.

Bibtex entry

@INPROCEEDINGS { 2018-EvaluationStrategies.CHI,
CLASS = { CONFARTICLE },
AUTHOR = { Ledo, D. and Houben, S. and Vermeulen, J. and Marquardt, N. and Oehlberg, L. and Greenberg, S. },
TITLE = { Evaluation Strategies for HCI Toolkit Research },
YEAR = { 2018 },
MONTH = { April 21-26 },
BOOKTITLE = { Proceedings of the ACM Conference on Human Factors in Computing Systems (ACM CHI'18) },
ADDRESS = { Montreal, Quebec, Canada },
PAGES = { 17 pages },
NOTE = { Earlier version as Report2017-1096-03 },
KEYWORDS = { Toolkits; user interfaces; prototyping; design; evaluation },
}