3.1.1 Participants
Twenty-one students from the University of British Columbia participated in the study. Participants received financial compensation ($20.00) for their involvement. Two participants were removed because of technical difficulties.
3.1.2 Stimuli
Participants watched a 1-hour lecture on the stock market. The user experience survey consisted of a series of open questions (e.g., "Did you find your annotations useful during Part 2? If yes, what was useful about them? If no, why did you not find them useful?") and closed questions (e.g., "Did you use the plot of other peoples' annotations in Part 2?"). There were six sections, each assessing a different aspect of the user experience. A custom user experience survey, rather than a generic one (e.g., [38]), was employed to address specific issues related to the CLAS. That said, the custom questions addressed critical concepts in measuring system success [39], including system quality, use, user satisfaction, and net benefits. The focus of the present analysis was on perceived usefulness and perceived ease of use [38]. A more generic user experience survey could be adopted in future studies (see Section 4.2 for future directions and limitations). The specific sections are listed below.
1. Online courses: questions addressed past history/future plans regarding online courses or video-based lectures.
2. Plot of annotations: questions addressed use of both the individual and group plots.
3. Annotating the lectures: questions addressed the difficulty of the annotation function and potential improvements of this function.
4. Ease of learning: questions addressed the difficulty of learning to use the CLAS.
5. General information: questions addressed global aspects of the tool such as its overall utility and strengths/weaknesses.
6. Graphical user interface: questions addressed ease of use and potential improvements.
3.1.3 Procedure
The study consisted of two parts. In the first part (the study session), participants were introduced to the CLAS and watched a lecture using it (see Fig. 3). Participants were instructed to treat the video lecture as part of a class they were taking and were told that they would later be tested on their understanding of the presented material. In the study session, participants were able to use the individual annotation features in the CLAS to mark the points they found important, but the "group" data were unavailable. In the second session, participants were given 30 minutes to study for the test on the lecture watched in the first session. During this time participants were able to use all of the features of the CLAS, but did not have enough time to watch the whole lecture again. Although all participants were shown how the CLAS worked, it was up to each individual how he or she ultimately used the system (or indeed whether they used it at all), and which particular features to focus on. The current implementation of the CLAS recorded usage statistics by logging when participants selected a new time point on the individual or group graph. To create the group graph, the lecture timeline was divided into 10-second bins and each annotation was assigned to the bin containing its time point.
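The binning step described above can be sketched as follows. This is an illustrative reconstruction, not the actual CLAS implementation; the function name and data layout are assumptions.

```python
from collections import Counter

def group_graph(annotation_times, bin_width=10.0):
    """Aggregate annotation timestamps (in seconds) into fixed-width bins.

    Mirrors the grouping described in the text: each annotation is
    assigned to the 10-second bin containing its timestamp, and the
    group graph plots the per-bin counts. (Sketch only, not CLAS source.)
    """
    counts = Counter(int(t // bin_width) for t in annotation_times)
    # Return {bin start time in seconds: number of annotations}.
    return {b * bin_width: n for b, n in sorted(counts.items())}

# Annotations pooled across several viewers:
pooled = [3.2, 7.9, 12.4, 14.0, 15.5, 31.8]
print(group_graph(pooled))  # {0.0: 2, 10.0: 3, 30.0: 1}
```

Peaks in the resulting counts correspond to moments that many viewers marked as important, which is what the group graph visualizes.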
3.2.1 Online Courses
None of the participants reported previously having taken a class that used video lectures, but 82 percent ( ) said that if they were to take an online course that used video lectures they would like to use a tool like the CLAS.
3.2.2 Plot of Annotations
The majority of participants used both their own graph and the group graph. Specifically, 84 percent ( ) used their own graph and 79 percent ( ) used the group graph. In addition, participants reported splitting their time about equally between the two (54 percent; ). With respect to their own graph, most participants found it useful (63 percent; ). Participants' answers to the open-ended questions addressing why the tool was useful focused on the graphs allowing them to return to important points while studying. One participant remarked that the act of marking the important points increased their "focus" on the lecture. The few participants who did not find their graph useful drew attention to the lag between the start of an important point and its annotation (i.e., recognition of an important point, and thus its annotation, typically occurs after the point has begun).
The majority of participants also found the group graph useful (68 percent; ). It is interesting to note that participants actually rated the group graph as more useful than their individual graph. Participants reported finding the group graph useful because it allowed them to attend to points that they may have missed and to focus on consensus "important" points. Participants who reported not finding the group graph useful remarked that they preferred their own "more personalized" annotations.
A large majority of the participants (83 percent; ) also found it useful to be able to use the graphs to navigate to different points in the lecture (rather than using the video controls). The majority reported reviewing the lecture by using the graphs to click to the marked points in their own and the group's graphs.
3.2.3 Ease of Learning
On a Likert scale of 1 (Very Easy) to 4 (Very Difficult), participants rated how easy it was to use the CLAS as a whole at 1.7 ( ). Thus, learning the CLAS was rated somewhere between very easy and easy. When asked if they would be able to utilize the CLAS better with more practice, only 40 percent ( ) said yes, and when asked how they would better utilize it, they tended to focus on annotating more efficiently (e.g., not annotating as many points as they did). The participants who indicated that more practice would not help them noted that the tool was straightforward to use and that there was thus minimal room for improvement.
3.2.4 Annotating the Lectures
On a Likert scale of 1 (Very Easy) to 4 (Very Difficult), participants rated the act of annotating the lecture using the CLAS in Part 1 at 1.8 ( ). Thus, the perceived difficulty of creating an annotation lay between very easy and easy. While this provided evidence of effective usability design, the participants offered further suggestions for improvement. These included adding a rewind button, and adding start and stop buttons so that segments, rather than points, could be highlighted.
3.2.5 General Information
The large majority (78 percent; ) thought the CLAS tool was "useful"; reasons for this response focused on the utility of being able to mark and return to marked points, and to see and navigate to others' points. Those who did not find it useful cited the ambiguity of the point-based marking (i.e., they wanted to be able to write a note about why a given point was important) and a general lack of interest in others' notes. Suggested improvements to the tool focused on the graphical user interface, the difficulty of navigating the lecture using the video controls, and the need for a note-taking utility.
3.2.6 Graphical User Interface
The majority of participants noted that the graphical user interface, like the CLAS concept, was easy to use and understand. Suggested improvements included making the videos larger, making the annotation graphs less clunky, and making the general display more appealing.
3.2.7 Usage Statistics
Analyzing participants' clicking behavior on the individual and group graphs allowed for a preliminary survey of how participants used the CLAS during the second, revision session. This capability was only available for approximately the second half of the sample ( ). The usage logs confirmed that most participants used the graphs extensively during the revision period. In the 30-minute review period, participants clicked on one or the other of the annotation graphs an average of 72.5 times ( ). Students tended to select a time point by clicking one of the annotation graphs and then watch the lecture for an average of 30.41 seconds ( ) before pausing or moving on. Participants tended to move forward through the lecture during review (89 percent). Consistent with the feedback given in the post-study questionnaire, slightly more time points were selected by clicking on the group graph (54 percent; ) than on the individual graph (46 percent; ). The usage data also revealed that those who thought that being able to navigate with the graphs was useful (mean = 84 clicks) tended to click on the graphs more often than participants who did not find it useful ( clicks). Furthermore, of the participants who clicked on both graphs, only 5 percent clicked on the same time point in both graphs. Thus, participants appear to have been using the alternate graphs to select different information. Given the small N, interpretation of the statistical analyses should be treated with caution. Nonetheless, an advantage of the CLAS is that this type of data can be easily recorded, for the group and the individual, and linked to the precise lecture content viewed and to future academic performance.
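The kinds of summary statistics reported above (per-graph click shares and the proportion of forward moves) can be derived from a click log as in the following sketch. The log schema and function name here are hypothetical assumptions, not the actual CLAS logging format.

```python
def summarize_clicks(events):
    """Summarize a click log of (participant_id, graph, time_selected_sec)
    tuples, listed in chronological order for each participant.

    Returns the share of clicks landing on each graph and the fraction of
    selections that moved forward in the lecture relative to that
    participant's previous selection. (Hypothetical log schema; the
    real CLAS logs are not published with the paper.)
    """
    per_graph = {"individual": 0, "group": 0}
    forward = total_moves = 0
    last_time = {}  # participant_id -> last selected time point
    for pid, graph, t in events:
        per_graph[graph] += 1
        if pid in last_time:  # only a "move" if there was a prior click
            total_moves += 1
            if t > last_time[pid]:
                forward += 1
        last_time[pid] = t
    total = sum(per_graph.values())
    return {
        "group_share": per_graph["group"] / total,
        "individual_share": per_graph["individual"] / total,
        "forward_fraction": forward / total_moves if total_moves else None,
    }

log = [("p1", "group", 120), ("p1", "individual", 300),
       ("p1", "group", 90), ("p2", "group", 45)]
print(summarize_clicks(log))
# {'group_share': 0.75, 'individual_share': 0.25, 'forward_fraction': 0.5}
```

Because each log entry carries the selected time point, such summaries can also be joined to the annotated lecture content, which is the linkage to viewed material that the paragraph above highlights.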
E.F. Risko is with the Social and Behavioral Sciences Department, Arizona State University, Glendale, AZ 85306. E-mail: Evan.F.Risko@gmail.com.
T. Foulsham is with the Department of Psychology, University of Essex, Wivenhoe Park, Colchester CO4 3SQ, UK. E-mail: email@example.com.
S. Dawson is with Arts ISIT, University of British Columbia, Vancouver, BC V6T 1Z1, Canada. E-mail: firstname.lastname@example.org.
A. Kingstone is with the Department of Psychology, University of British Columbia, Vancouver, BC V6T 1Z4, Canada.
Manuscript received 24 Sept. 2011; revised 3 Apr. 2012; accepted 20 June 2012; published online 3 July 2012.
For information on obtaining reprints of this article, please send e-mail to: email@example.com, and reference IEEECS Log Number TLT-2011-09-0098.
Digital Object Identifier no. 10.1109/TLT.2012.15.
Evan F. Risko received the BA degree in psychology, and the MA and PhD degrees in cognitive psychology from the University of Waterloo. He was subsequently a Killam and NSERC Postdoctoral Fellow at the University of British Columbia. He is currently an assistant professor at Arizona State University. His research interests include embodied and embedded cognition and the contribution of attention to everyday activities (e.g., education).
Tom Foulsham received the BSc degree in psychology and cognitive neuroscience and the PhD degree from the University of Nottingham, United Kingdom. He was subsequently a Commonwealth Postdoctoral Fellow at the University of British Columbia, Canada. He is currently a lecturer at the University of Essex, United Kingdom. His research focuses on vision and visual attention in the natural environment.
Shane Dawson is currently the director of Arts Instructional Support and Information Technology (ISIT) at the University of British Columbia. His research focuses on the measurement, collection, analysis, and reporting of data about learners and their contexts, for the purposes of understanding and optimizing learning and teaching practices. His research has demonstrated the use of student online communication data to provide lead indicators related to learning support, academic performance, student sense of community, creative capacity, and course satisfaction. He is a cocreator of SNAPP—a social network visualization tool for teaching staff to better understand and evaluate the impact of their implemented learning activities. SNAPP is currently in use throughout 60 countries and in excess of 500 educational institutions.