Mini-crowdsourcing end-user assessment of intelligent assistants: A cost-benefit study

Title: Mini-crowdsourcing end-user assessment of intelligent assistants: A cost-benefit study
Publication Type: Conference Paper
Year of Publication: 2011
Authors: Shinsel, A., T. Kulesza, M. M. Burnett, W. Curran, A. Groce, S. Stumpf, and W-K. Wong
Conference Name: 2011 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)
Pagination: 47-54
Date Published: 09/2011
Conference Location: Pittsburgh, PA
ISBN Number: 978-1-4577-1244-9

Intelligent assistants sometimes handle tasks too important to be trusted implicitly. End users can establish trust via systematic assessment, but such assessment is costly. This paper investigates whether, when, and how bringing a small crowd of end users to bear on the assessment of an intelligent assistant is useful from a cost/benefit perspective. Our results show that a mini-crowd of testers supplied many more benefits than just the obvious decrease in each tester's workload, but these benefits did not scale linearly as mini-crowd size increased; there was a point of diminishing returns beyond which the cost-benefit ratio became less attractive.