Issue No. 02 - April-June (2011 vol. 10)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/MPRV.2011.25
<p>Pervasive computing sits at the interface of computer science, the social sciences, psychology, and engineering. As a consequence, consistent standards and guidelines for empirical evaluation have remained elusive. Indeed, in most key conferences and journals in the field (including <it>IEEE Pervasive Computing</it>), "lack of adequate evaluation" is the most common reason for rejecting a submission, and the quality of a submission's evaluation is often the subject of heated discussion among reviewers and program committee members. <it>IEEE Pervasive Computing</it>'s "Experimental Methodology" department will examine specific problems, practices, and recommendations related to empirical research in pervasive computing. The department is motivated by the growing awareness that the field needs to mature, moving from visions of what could be done toward real-world systems that demonstrate, quantitatively and reproducibly, what can be done.</p>
experimental design, benchmarking, methodology, standards
P. Lukowicz and S. Intille, "Experimental Methodology in Pervasive Computing," in IEEE Pervasive Computing, vol. 10, no. 2, pp. 94-96, April-June 2011.