# Browsing within Lecture Videos Based on the Chain Index of Speech Transcription

Stephan Repp
Andreas
Christoph Meinel, IEEE

pp. 145-156

Abstract—The number of digital lecture video recordings has increased dramatically since recording technology became easier to use. However, the accessibility and searchability of these large archives remain limited. Additionally, detailed browsing within videos is not supported due to the lack of explicit annotation. Manual annotation and segmentation are time-consuming and therefore impractical. A promising approach is based on using the audio layer of a lecture recording to obtain information about the lecture's contents. In this paper, we present an indexing method for computer science courses based on their existing recorded videos. The transcriptions from a speech-recognition engine (SRE) are sufficient to create a chain index for detailed browsing inside a lecture video. The index structure and the evaluation of the supplied keywords are presented. The user interface for dynamic browsing of the e-learning contents concludes this paper.

Index Terms—Multimedia retrieval, speech recognition, index, semantic, user interface, browsing, recorded lecture videos.

## Introduction

Audiovisual recordings delivered as streaming media are used more and more frequently by correspondence-course institutions, where learners can access libraries of recorded lectures independent of time and place. Fig. 1 illustrates an example of such a system 1 that delivers three main parts of the lecture: the desktop recording (the captured desktop of the presenter's laptop) on the right side, a manual annotation in the lower left corner, and a video of the speaker in the upper left corner. Such recorded lecture videos are ideal for correspondence courses and can be regarded as a complement to traditional classroom courses [ 1]. However, the accessibility and traceability of the content of large lecture archives for further use is rather limited.

Fig. 1. Information resources in recorded lecture videos.

Two major challenges arise while preparing recorded lectures for content-based retrieval: the automated indexing of multimedia videos and the retrieval of semantically appropriate information from a lecture knowledge base. It is evident that the rapid growth of multimedia data available in e-learning systems requires more efficient methods for content-based browsing and retrieval of video data. The requested information is often covered by only a few minutes of the lecture recording and is therefore hidden within a full 90-minute recording stored among thousands of others. It is often not the problem to find the proper lecture in the archive but rather to find the proper position inside the video stream. It is not practical for learners to watch the whole video to get the desired information inside the lecture video. The problem becomes how to retrieve the appropriate information in a large lecture-video database more efficiently. Segmentation of video lectures into smaller units, each segment related to a specific topic, is a highly necessary approach to finding the desired piece of information.

Traditional video retrieval based on visual feature extraction cannot be efficiently applied to lecture recordings. Lecture recordings are characterized by a homogeneous scene composition: most of the time, the lecturer is in focus, presenting a topic that is not itself visible in the picture. Thus, image analysis of lecture videos fails even if the producer tries to loosen the scene with creative camera trajectories.

A promising approach is based on using the audio layer of a lecture recording to gain information about the lecture's contents (e.g., [ 2], [ 3], [ 4], [ 5], [ 6]). Transcriptions of lectures recorded live consist of unscripted and spontaneous speech. Thus, lecture data has much in common with casual or natural speech data, including false starts, extraneous filler words, and nonlexically filled pauses [ 7]. Furthermore, a transcript is a stream of words without punctuation marks. Of course, there is also great variation between one tutor's speech and another's: one may speak accurately, while the next speaks completely differently, with many grammatical errors. One can also easily observe that the colloquial nature of the data is dramatically different in style from the presentation of the same material in a textbook; the textual format is typically more concise and better organized. Furthermore, speech-recognition software produces error-prone output: approximately 20 to 30 percent of the detected words are incorrect.

The out-of-vocabulary problem is a dilemma for the recognition software as well [ 8]. The software needs a database of all words used by the lecturer for the transcription process. If a word that occurs in the speech of the lecturer is not in the vocabulary, a wrong word is transcribed by the engine. Also, changing the language within a presentation may confront the software with unsolvable problems: a lecture presented in German may sometimes use English terminology, which causes false words to be included in the transcript. Another issue is that the analysis of the audio layer of a lecture video recording provides two types of information: the spoken words (transliteration) and prosody. Prosodic cues such as pitch, duration, pauses, and energy distribution mark topic changes and points of interest [ 9]. A state-of-the-art speech-recognition engine (SRE) does not exploit this prosodic information and, further, most lecture recordings do not provide optimal sound quality. It has been shown in [ 8], however, that a keyword-based search in an imperfect transcript yields reliable results. Some universities 2 offer a keyword-based search for their lecture-video archive. However, such solutions fail if word-sense disambiguation is required, i.e., in the case of words with multiple uses or meanings.

Our focus is on recorded lectures from computer science. These kinds of lectures have a typical internal structure. The tutor talks about section topics. The section topics are parts of the main topic the tutor is talking about. For example, the tutor talks successively about the section topics "star-topology," "bus-topology," and "tree-topology." The main topic is "topology" in this case. The lecture structure in other courses (e.g., in social science, history of art, etc.) and the lecture structure between lecturers could vary and is not analyzed in our research. Further, the spoken language is a monologue and not a dialogue as it is in a seminar or in classroom courses.

In this article, we present our efforts at combining results from different fields and projects in order to create a user interface for browsing educational lecture videos based on a chain index. Our proposed algorithm requires neither semantically annotated lecture videos nor an external knowledge source such as WordNet [ 10]. The automatically supplied keywords are advantageous for resolving word-sense ambiguity.

The structure of this article is as follows: Section 2 analyzes the current state of the indexing of lecture videos and we show that a new dynamic indexing technique is required. Section 3 qualifies our indexing and describes the advantage of such an index. To obtain an estimate of the accuracy of the retrieval, an exploratory study is presented in Section 4. A browsing system based on the index created, as well as the appropriate interface, is presented in Section 5.

## Status of Lecture Video Indexing

The process of establishing access points to facilitate the retrieval of information is called indexing [ 11]. Indexing of a recorded lecture video requires two main steps. First, the video is split into coherent segments (topic segmentation), i.e., each segment represents a topic and stands for a specific assertion. Second, the segments have to specify the topic with descriptors, keywords, a summary, or a semantic description. Indexing is a part of an Information Retrieval system. Information Retrieval (IR) is the representation, storage, organization of, and access to information items [ 12].

IR in lecture videos is a very active, multidisciplinary research area. Spoken language, any kind of action (e.g., demonstrations, experiments, writing on a blackboard, mouse clicks, etc.), the gestures of the speaker, social tagging, manual annotations, desktop recordings (e.g., the GUI of a demo program, slides, formulas, etc.), or the presented slides in document form can act as a source for better access to the contents of the lecture. Fig. 1 gives an overview of the resources in a recorded lecture video. Clearly, not all resources are available in every recorded lecture.

In this article, we do not consider systems that store the presentation in proprietary formats and have additional information about the activity in the lecture, such as [ 13] or [ 14], [ 15], [ 16]. For instance, Chu and Chen have already presented an approach for cross-media synchronization [ 13]. They match audio recordings with text transliterations of these recordings based on dynamic programming methods [ 17]. Chu and Chen make use of explicitly encoded events for synchronization instead of implicit automated annotation.

However, most lecture recordings that are accessible on the Web ordinarily consist of spoken speech and slides that are stored in a common video format (mpg, rm). The indexing of these lecture videos is highly important for further access.

There are four main indexing options for this kind of lecture recording. The first is to index the audiovisual recording manually after the lecture. A different approach is for the users of the videos to do the indexing (annotation) themselves with tags, which is called tagging. The slides can also be used as a source for automatic indexing; the assumption in this method is that a slide is a summary of the presenter's speech and thus a good source of data for the search. A fourth option is to use the speech itself as a resource for automatic indexing.

### 2.1 Manual

Here, both the segmentation and the annotation are done manually. Typically, a professional who is familiar with the subject performs the segmentation and marks each segment with descriptors or other descriptions. This is done after the lecture in a time-consuming and expensive process. This may be feasible for a small lecture archive, but it is not realistic for an existing large archive. It is clear that this solution obtains the best results, depending on the accuracy and the diligence of the professional.

As a complex example that is limited to manageable video segments, we present a system for a semantic search based on Description Logics (DL) [ 18]. It has been recognized that a digital library benefits from having its contents understandable and available in a machine-processable form, and it is widely agreed that ontologies will play a key role in providing much of the enabling infrastructure to achieve this goal. A fundamental part of the system is a common domain ontology. The ontology is basically composed of a hierarchy of concepts (taxonomy) and a language. In the former case, a list of semantically relevant words is created with regard to a domain, e.g., internetworking, and organized hierarchically. In the latter case, Description Logics are used to formalize the semantic annotations. Description Logics [ 19] are a family of knowledge representation formalisms that allow the knowledge of an application domain to be represented in a structured way and to be reasoned about. In DL, the conceptual knowledge of an application domain is represented in terms of concepts such as IPAddress and roles such as composedOf. Concepts denote sets of individuals and roles denote binary relations between individuals. Complex descriptions are built inductively using concept constructors which rely on basic concepts and role names. Concept descriptions are used to specify terminologies that define the intensional knowledge of an application domain. For example, the following imposes that a router is a network component that uses at least one IP address: Router $\sqsubseteq$ NetComp $\sqcap \exists$uses.IPAddress. Definitions allow meaningful names to be given to concept descriptions, such as ${\rm LO}_1 \equiv$ IPAddress $\sqcap \exists$composedOf.HostID.

Fig. 2. Example of terminology concerning learning objects.

To index a video, the video is split up into segments. One segment stands for one Learning Object (LO) where the lecturer speaks about one topic. The semantic annotation of the four LOs is shown in Fig. 2, describing the following contents:

• ${\rm LO}_1$ : general explanation of IP addresses,
• ${\rm LO}_2$ : explanation that IP addresses are used in the protocol TCP/IP,
• ${\rm LO}_3$ : explanation that an IP-address is composed of a host identifier, and
• ${\rm LO}_4$ : explanation that an IP-address is composed of a network identifier.

Some advantages of using DL are that DL terminologies can be serialized as OWL ( Semantic Web Ontology Language) [ 20], a machine-readable and standardized format for semantically annotating resources. Second, DL allows the definition of detailed semantic descriptions about resources and logical inference from these descriptions [ 19]. Finally, the link between DL and natural language (NL) has already been shown [ 21].

The query of the user is represented as an OWL description as well. The way the NL processing works is described in detail in [ 22], [ 18]. The OWL query and all LO OWL descriptions are the input of the semantic search engine, which checks the LOs and computes the suitable ones. In tests, an ${\rm MRR}_5$ -value 3 (Mean Reciprocal Rank of the answers [ 23]) of 75 percent is reached [ 24]. This system has been successfully used in a school in Luxembourg [ 25] for the domain of fraction arithmetic.

When using DL, four types of manual tasks have to be done for the manual indexing:

• Creating the taxonomy manually or using an existing one.
• Creating dictionaries which map NL words to the concepts and roles.
• Segmenting the videos into parts (LO).
• Annotating the video segments with concepts and roles.

In contrast, a purely manual indexing is based only on the segmentation of the videos into parts (LO) and the enrichment of the parts with their basic contents. Certainly, the manual process is very time-consuming and not really practical for the indexing of a large video library.

### 2.2 Tagging

The recent emergence and success of folksonomies and tagging have shown the great potential of this simple approach to collecting metadata about resources. Unlike traditional categorization systems, the process of tagging is nothing more than annotating documents with an unstructured list of keywords. Although the amount of research on tagging is still comparatively low, several studies have already analyzed the semantic aspects of this process and why it is so popular in practice [ 26], [ 27], [ 28]. Tagging strikes a balance between the individual and the community: The cost of participation is low for the individual and tagging a document benefits both the individual and the community.

An approach to video annotation is described in [ 29]. There the user is involved in the annotation process by deploying collaborative tagging for the generation and enrichment of video metadata annotation to support content-based video retrieval. When a user bookmarks a position in a lecture video, he/she stores it for later use. Users only bookmark documents that are valuable or relevant to them. When a document is of no interest to a specific user, it is unlikely that he/she would bookmark it, and when users actually do store bookmarks in order to find them again later, they have an incentive to add meaningful metadata to them.

Finding word associations for describing a document in the form of tags is a subjective user task. For the annotation of lecture videos, the timeline is a further dimension not present in text documents. The words in a text document are ordered too, of course, but the exact time position of each word is lost. In order for the annotation to be useful and exact, the user has to annotate all the segments in the lecture video, not just the one(s) relevant to him/her. According to Metcalfe's law, 4 the quality and quantity of tagging improve with the square of the number of users as the user base grows, but it is very doubtful that enough learners will tag one lesson for the quality of the annotation to be appropriate, considering that many universities' lecture videos cover similar topics.

To the best of our knowledge, no research exists on the quality of the annotation of lecture videos, nor do studies exist on the quality of the tagging information of lecture videos compared with a reference data set. The main disadvantage is that this annotation needs a lot of time and depends on many learners who have to watch the videos and actually make use of the ability to annotate.

### 2.3 Slides

Slides (e.g., PowerPoint slides) represent the main information in video segments, particularly in computer science [ 8], [ 30]. Hürst found that the slides carry the most information for a keyword search: he measured a retrieval performance based on slides of 40 percent precision and 63 percent recall, compared to a precision of 33 percent and a recall of 54 percent based on corrected transcriptions. Recall and precision are standard evaluation measures in Information Retrieval [ 12].
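Precision and recall, as used in these comparisons, can be computed as follows. This is a generic IR sketch with made-up segment IDs, not data or code from [ 8] or [ 30]:

```python
def precision_recall(retrieved, relevant):
    """Standard IR measures: precision is the fraction of retrieved
    items that are relevant; recall is the fraction of relevant
    items that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical example: a search returns 5 segments, 2 of which are
# among the 4 relevant segments overall.
p, r = precision_recall({1, 2, 3, 4, 5}, {2, 4, 6, 7})
# p = 2/5 = 0.4, r = 2/4 = 0.5
```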

One problem is the synchronization of the slides and the video stream. An approach described in [ 31] synchronizes presentation slides by maintaining a log file during the presentation that keeps track of slide changes. Sack and Waitelonis [ 31] mention optical character recognition (OCR) for the identification and synchronization of the presentation slide currently being shown within a desktop recording. If the log files and the slides are given, these annotations yield good retrieval results based on keywords. However, OCR is error-prone and only adapted to the specific video format and to the PowerPoint format. Another solution is to synchronize the existing PowerPoint slides with the speech of the lecture recordings [ 32]. Repp et al. synchronized the speech transcription with the slides to within plus or minus one slide.

If slides exist for the lectures and the time stamps of each slide are available, then this source is one of the best for access to the lecture contents [ 30]. However, most lecture recordings available neither support desktop recordings nor maintain a dedicated log file with the appropriate slides. So, the speech itself remains the only reliable source of information despite positive results from these methods.

### 2.4 Transcripts of Speech Recognition Engines

The spoken language is one of the main information resources in lectures. The lecturer speaks about a topic with a characteristic vocabulary. Each transition of the vocabulary (or another feature in the spoken language, e.g., a pause) could mark a segment boundary, and so the video could be partitioned into video sections. The task is to partition the text into a sequence of topically coherent segments and thereby induce a content structure (called topic segmentation). For each video segment, an index could be created with several standard indexing methods [ 11], [ 12]; e.g., the distribution of words in the video segments is used for the automatic indexing process.

Topic segmentation has been extensively used in text information retrieval and text summarization [ 33], [ 34], [ 9], [ 35], [ 36]. The user would prefer a document passage in which the occurrence of the word or topic is concentrated in one or two passages. The development of text segmentation algorithms is a central concern in natural language processing. This can be stated as the issue of detecting the boundaries with standard text segmentation algorithms.

For the first step in automatically indexing the lecture video based on the transcripts, a small test is arranged. Standard text segmentation algorithms are implemented to show whether it is possible to recognize the segments based on spoken language. A more detailed study about this research is presented in [ 37].

#### 2.4.1 Test Setup

After a preprocessing step which deletes all the stop words (i.e., words without semantic relevancy) and transforms all words into their canonical form (stemming) [ 38], the following algorithms are implemented:

• Linear. A linear distribution of the topics during the presentation time is implemented. The algorithm assumes the number of topics is given.
• Pause. The duration of silence (pause in speech) is used as a feature for a segment boundary. The time stamps of the longest silences are used as the boundaries. The algorithm assumes the number of topics is given.
• C99 and LCseg. The number of boundaries is given and a sentence length of 10 words is defined. A segment boundary is assumed to be a text boundary and can be detected by the C99 [ 39] and the LCseg [ 40] algorithms. The descriptions and implementations of the algorithms can be downloaded from the respective author pages [ 39], [ 40]. In this way, the number of segments is provided to the algorithms. A stop-word list and a stemmer, both for the German language, are adapted to C99 and LCseg.
• SlidingWindow (SW). The SlidingWindow is based on the TextTiling [ 41] algorithm and on the research of [ 42]. A sliding window system is implemented: a window (120 words) moves across the text stream of the transcript at a certain interval (20 words), and neighboring windows are compared with each other using the cosine measure. After postprocessing, the points with the lowest similarity become boundaries.
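The core of the SlidingWindow variant can be sketched as follows. This is a minimal illustration, not the implementation used in the tests; the function names and the bag-of-words cosine measure over the already stemmed, stop-word-filtered token stream are our assumptions:

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def sliding_window_scores(tokens, window=120, step=20):
    """Compare each pair of adjacent word windows along the token
    stream; low similarity suggests a topic boundary at that point."""
    scores = []
    for i in range(window, len(tokens) - window + 1, step):
        left = Counter(tokens[i - window:i])
        right = Counter(tokens[i:i + window])
        scores.append((i, cosine(left, right)))
    return scores
```

Boundaries are then placed at the local similarity minima after postprocessing; the window size (120 words) and step (20 words) match the values given above.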

#### 2.4.2 Metrics

The WindowDiff [ 43] measurement (WinDiff) is used as a standard evaluation measure for text segmentation; the implementation of WindowDiff from [ 44] is used. 5 The WindowDiff measurement does not take into account the time delay between the calculated boundary time and the real boundary time [ 32]. For this reason, the mean error rate and the standard deviation are used as well. The mean error rate ( $\bar{x}$ ) is calculated from the differences between the point in time of the reference boundary and the point in time achieved by the algorithm for each topic boundary in the course; the mean of these values gives the offset of the time shift. Additionally, we calculate the mean of all absolute differences ( $\bar{y}$ ). Further, the standard deviation (SD) points out how much the data are spread out from the mean: if the data are close to the mean, the standard deviation is small and the algorithm matches the boundaries very well.
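For paired boundary lists in seconds, the three timing statistics can be sketched as follows (a hypothetical helper, not the evaluation code used in the tests):

```python
from math import sqrt

def boundary_stats(reference, computed):
    """Timing statistics (in seconds) between reference topic
    boundaries and the boundaries found by an algorithm (paired)."""
    diffs = [c - r for r, c in zip(reference, computed)]
    x_bar = sum(diffs) / len(diffs)                    # signed mean: offset of the time shift
    y_bar = sum(abs(d) for d in diffs) / len(diffs)    # mean of absolute differences
    sd = sqrt(sum((d - x_bar) ** 2 for d in diffs) / len(diffs))  # spread around the mean
    return x_bar, y_bar, sd
```

A small SD means the algorithm's boundaries cluster tightly around a constant offset, i.e., it matches the boundary positions well even if shifted.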

#### 2.4.3 First Test, Segment Boundaries Are the Slide Transitions

The text boundaries are assumed as slide transitions. The question posed is how standard text segmentation algorithms can segment the erroneous transcript into coherent segments without any additional resources like the PowerPoint slides. The count of segment boundaries (slide numbers) is given for each lecture. Table 1 shows a summary of the data set's contents. The data set for the first test includes two different speakers, two different languages (German and English), and three different topics (WWW, Semantic, and Security). For the first two minutes of each lecture, the word accuracy is determined manually as depicted in [ 45].

Table 1. Summary of the Lecture Series Archive

Table 2 shows the results as the mean, the standard deviation, and the WindowDiff of the first test. "SlidingWindow" has the best mean value. Table 2 shows further that the "Linear" segmentation yields better time results ( $\bar{y}$ and the standard deviation) compared to the "C99," "LCseg," "SlidingWindow," and "Pause" algorithm for the data set. For the WindowDiff measurement, the simple "Pause" algorithm seems to be the best one.

Table 2. Results (in Seconds) of the First Test Based on the Data Set of Table 1, Segment Boundaries Are the Slide Transitions

#### 2.4.4 Second Test, on Real Segment Boundaries

The problem with relying primarily on the slides for creating segments is that these segments will always be partially wrong because speech is dynamic: the tutor uses his freedom to discuss topics not covered by the slides. Taking this into consideration, three people (a lecturer and two PhD students) were asked to agree on a "gold standard" of boundaries. One lecture is randomly selected from Mr. Meinel's WWW course and a perfect transcript (corrected by one person) is generated. This lecture, approximately 100 minutes in duration, consists of 12,608 words, and its Word Error Rate (WER) is nearly 0 percent. The first test scenario A is based on the 62 slide transitions as boundaries and the second scenario B is based on 42 "real" boundaries.

Table 3 shows the mean, the standard deviation, and the WindowDiff for this second test. It is surprising that the simple "Pause" algorithm performs best on the WindowDiff measure for the data set with boundaries generated by persons. Most algorithms achieve better results for the data set with the "real" boundaries than for the slide transitions. Hence, the results show that slide transitions are not a good segmentation into topic boundaries.

Table 3. Results (in Seconds) of the Second Test Based on an Improved Transcript with a WER $\approx$ 0 Percent

#### 2.4.5 Conclusion

The overall results are discouraging: detecting topic segments (whether slide transitions or real boundaries) is practically impossible. A time shift of +/- 2 to 3 minutes (SD of the C99 algorithm in the second test) occurs. If the segment count is not given to the algorithm, the results are much worse because the algorithms then additionally have to determine the boundary count. Further, the definition of a topic segment is not clear. Malioutov and Barzilay [ 44] show that the three lecturers chosen to segment the lecture did so differently: they defined between 8.9 and 13 topic segments per lecture. This discrepancy in the segmentation results is highly critical for information retrieval.

### 2.5 Current Approaches versus the Ideal Approach

All current approaches, such as manual indexing, using only the slide, tagging, or using the speech transcripts in an ordinary information search, have serious disadvantages.

It is clear that manual indexing achieves the best retrieval results, but it is too time consuming and too expensive for libraries. Further, although the slides are a good resource, the problem of segmentation within the lecture is not resolved (see the second test): the speaker usually stays on the same topic for one or more slides, so one slide is not a topic; rather, a topic often spans more than two slides [ 44]. Additionally, the question of what a topic consists of is highly debatable. Moreover, most available lecture recordings neither support desktop recordings nor maintain a dedicated log file, which also poses problems. Tagging, on the other hand, does not ensure a consistent annotation of the video streams if the videos are watched by only a few learners; they add some tags, but neither objectively nor exhaustively. Finally, a classical retrieval approach (segmenting, extracting keywords) on the transcription fails at the first step, segmentation: it is not possible to segment the lecture video or to find the topic boundaries (slide transitions) in a reliable way.

Thus, we need a better method, without an explicit definition of a segment, in order to give users the freedom to browse the lecture and find the desired point in the lecture.

## Indexing

In this section, we describe our solution for the indexing of lecture videos based on the speech transcript. First, we explain our indexing technique, called "chaining." Second, we explain why this enhances the search within lecture videos.

### 3.1 Chaining

Clustering is used to detect cohesive areas—we call them chains—in the transcript. Linguistic research has shown that word repetition in a text is a hint for creating thematic cohesion. A change in the lexical distributions is usually a signal for topic transitions [ 46], [ 47].

The word stream of the raw data can contain all parts of speech, such as a noun, verb, number, etc. A term is any stemmed word within the lecture. From that stream, the distinct terms $T$ are stored in a term list $L$ . In other words, the list $L$ consists of all distinct word stems detected by the SRE of the lecture (without stop words). $n$ is the count of the distinct terms.

$L=\{T_1,T_2,T_3,\ldots,T_n\}.\qquad(1)$

A chain is constructed to consist of all repetitions ranging from the first to the last appearance of the term in the lecture. The chain is divided into subparts when there is a long break (time distance $d$ ) between the terms. The chain is a segment of accumulated appearances of equal terms. The process works as follows:

1. take the term $T_1$ from the term-list $L$ ,
2. build clusters—we call them chains—so that the distance between two adjacent terms $T_i$ is not more than a distance $d$ , count the occurrences (TN), and set the start time and end time for the chain,
3. store the chain data in the database as a "chain index" for the video, and
4. take the next distinct term $T_{i+1}$ from the term-list $L$ and go to 2).

The chains can overlap because they have been separately generated for each term $T_i$ used in the course. For all chains that have been identified for the terms, a weighting scheme is used. Chains containing more repeated terms receive higher scores. The term number TN is the ranking value for the chain; the higher the value of TN, the higher the relevance of the chain. In a preliminary experiment, we had a precision of 88 percent for the first relevant chain [ 48].
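The four steps above can be sketched as follows. The function and variable names are hypothetical; the input is assumed to be a list of (stemmed term, time-in-seconds) pairs produced by the SRE after stop-word removal:

```python
from collections import defaultdict

def build_chain_index(occurrences, d):
    """occurrences: list of (term, time_in_seconds) pairs from the
    stemmed, stop-word-filtered transcript; d: maximum gap (seconds)
    allowed between adjacent occurrences of the same term.
    Returns chains as (term, start, end, TN), ranked by TN."""
    by_term = defaultdict(list)
    for term, t in occurrences:
        by_term[term].append(t)
    chains = []
    for term, times in by_term.items():
        times.sort()
        start = prev = times[0]
        count = 1
        for t in times[1:]:
            if t - prev > d:                      # gap too long: close the chain
                chains.append((term, start, prev, count))
                start, count = t, 0
            prev = t
            count += 1
        chains.append((term, start, prev, count))  # close the final chain
    return sorted(chains, key=lambda c: -c[3])     # higher TN = higher relevance
```

With the example of Table 4, ten occurrences of "topology" between 1600 s and 2400 s and four between 2600 s and 3300 s would yield two separate chains whenever $d$ is smaller than the 200 s gap, with the first chain ranked higher.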

Table 4 and Fig. 3 show an example of the chain index. In the first chain "Topology," the stem "topology" occurs 10 times, and the chain starts at 1600 s and ends at 2400 s in the specific lecture. Further, there is a chain "Topology" that starts at 2600 s and ends at 3300 s, but this chain has a word repetition of only four, so it is not ranked as high as the first chain.

Table 4. Example of the CHAIN INDEX

Fig. 3. Chains in chronological sequence.

### 3.2 Resolving Word Ambiguity

The chain index supports the resolution of word ambiguity. In Fig. 4, the term topology is used in the context of network and the term ring is used in the context of topology; the sense of a word is clear from the context in which it is used. Moreover, the index supports a keyword register for each video segment. In Fig. 4, the search query of the user is IP, and the chain with the highest TN and its inside chains ( Address, UDP, Suffix, Header, TCP) are returned. The inside chains represent the content of the video segment IP. In fact, a chain also has a before, an after, and a superordinate area. These areas, before, inside, after, and superordinate, themselves consist of areas. The user has the opportunity to browse through these areas to find the semantically proper position in the video.

Fig. 4. The inside, before, after, and superordinate areas for the query IP.

Furthermore, the problem of composite words can be solved with the help of the chain index. The query IP Address leads to the intersection of the chains IP and Address being returned to the user.
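The intersection for a composite query can be sketched as follows. This is an illustrative helper, with hypothetical names; representing a chain as a (start, end, TN) triple and ranking the overlaps by the combined TN are our assumptions:

```python
def intersect_chains(chains_a, chains_b):
    """For a composite query such as "IP Address", return the time
    intervals where a chain for the first term overlaps a chain for
    the second. Each chain is a (start, end, TN) triple in seconds."""
    hits = []
    for (sa, ea, tna) in chains_a:
        for (sb, eb, tnb) in chains_b:
            start, end = max(sa, sb), min(ea, eb)
            if start <= end:                      # the intervals overlap
                hits.append((start, end, tna + tnb))
    return sorted(hits, key=lambda h: -h[2])      # rank by combined TN
```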

## Evaluation

In this section, we describe our evaluation of the index. First, we explain our experimental setup. Second, we evaluate the accuracy of the chaining in two tests.

### 4.1 Experimental Setup

The video database consists of the course "Einführung in das WWW," held in German in the first semester of 2006 at the Hasso Plattner Institut in Potsdam. The second row of Table 1 summarizes the features of the data set. The data set consists of 24 lectures; each lecture is approximately 90 minutes in length. The video corpus has an overall length of approximately 1,860 minutes and is stored as RealMedia files. The SRE needs a training phase (10 minutes) for adapting it to the speaker and microphone. The dictionary of the SRE is supplemented with an existing domain lexicon or, if they exist, the keywords from the slides. The domain words (in our case, the keywords from the PowerPoint slides) are trained with a standard tool in 20 minutes. So, the training phase for the SRE is approximately 30 minutes long. Please take into consideration that this training phase is done only once for the lecture series. Our purpose is neither to enhance existing SREs nor to develop a new speaker-independent SRE; our purpose is to use existing SREs practically. If no training is done by the lecturer, the accuracy is lower. Hürst analyzed the dependency between the training phase and word accuracy [ 31].

### 4.2 The Distance $d$

To find the proper break distance $d$ for the chaining, we vary $d$ between 0.5 and 10 minutes and measure the accuracy of 10 keywords. The ten keywords are randomly selected from the domain lexicon. Fig. 5 shows the results of this test. In this figure, C(orrect) means that the retrieved video segment is correct and is within +/- 30 seconds. It is debatable whether this interval (between 2 minutes and 8 minutes) is an acceptable value, in part because only 10 words were tested and the measurement involved only one speaker, but it is a first clue to that value. The question of whether there is a new segment or not does not arise, because in the returned topic the keyword is definitely used by the speaker. The second test provides more details.
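The chaining itself is defined in Section 3.1 (not reproduced here); the following is a plausible sketch under the assumption that a new chain starts whenever the gap between two consecutive occurrences of a term exceeds the break distance $d$. The timestamps are invented for illustration.

```python
def build_chains(timestamps, d):
    """Group the sorted occurrence times (seconds) of one term into chains.
    A new chain starts whenever the gap to the previous occurrence exceeds d."""
    chains = []
    current_start = current_end = timestamps[0]
    for t in timestamps[1:]:
        if t - current_end > d:
            chains.append((current_start, current_end))  # close the chain
            current_start = t
        current_end = t
    chains.append((current_start, current_end))
    return chains

# occurrences of one keyword; with d = 180 s the 800 s gap splits the chains
print(build_chains([10, 50, 200, 1000, 1100], d=180))  # [(10, 200), (1000, 1100)]
```

Varying $d$ as in the experiment then amounts to rerunning `build_chains` with different thresholds and comparing the retrieved chain starts against the known topic positions.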

Fig. 5. Accuracy of the results depending on the distance factor $d$.

### 4.3 Accuracy of the Chaining

One hundred and fifty-three keywords (topic words) are chosen for the evaluation (e.g., XML, DSL, ISDN, ATM, Token, SGML, topology, mpeg, security, exam...). These words occur at least once in the transcript of the course and differ from those in the first test. A detailed study of keyword searches in speech transcripts of lecture recordings is presented in [ 30]. We evaluated whether our generated chain represents the topic for the search word. For this, we took only the chain with the majority of the topic words TN and decided whether this chain represents the subject accurately, whether it is the best section in the whole course, and how long the time shift is between the start of the calculated chain and the start of the topic in the lecture.

We evaluated whether each hit in the result set:

1. is correct and is within +/-30 seconds (C),
2. is correct and is around the area of +/- 120 seconds (CA),
3. is similar to the area of the topic and is within +/- 30 seconds (S),
4. is similar to the area of the topic and is around the area of +/- 120 seconds (SA), and
5. is not correct and not in the area of the beginning of the chain (W).
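One hedged reading of these five classes, as a function of the time shift and of whether the hit is the correct topic or merely a similar one (the actual evaluation decisions were made manually; thresholds are taken from the list above):

```python
def classify_hit(time_shift, correct):
    """Map a hit to the evaluation classes C, CA, S, SA, or W.
    time_shift: seconds between chain start and true topic start;
    correct: True for the correct topic, False for a merely similar one."""
    shift = abs(time_shift)
    if shift <= 30:
        return "C" if correct else "S"
    if shift <= 120:
        return "CA" if correct else "SA"
    return "W"  # outside the area of the beginning of the chain

print(classify_hit(-20, True), classify_hit(90, False), classify_hit(400, True))
# C SA W
```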

Fig. 6 shows this classification in detail. Similar to the area of the topic means that the lecturer speaks about a similar or related topic. For example, if the search keyword is topology, several positions typically exist in the lecture series where it is discussed. The correct position in the lecture series is the description of topology, not a similar area such as ATM topology or anything else related to it. Certainly, deciding whether a position is correct is a subjective task, but given that this is a special kind of lecture series, the task becomes less subjective: the lecture is a basic tutorial on internetworking, so each lesson introduces several new terms and definitions, each of which is presented only once in the whole lecture series. The lecturer mentions the terms on several occasions; however, the only decision is whether this is the position where the lecturer explains the term or definition. A few retrieved positions are hard to decide, for example, Address. The speaker discusses Address in the context of Mobile Address, Internet Address, Mailing Address, etc. In this case, we decided that these positions are all correct.

Fig. 6. Classification of the results.

Table 5 depicts the results of the evaluation. 59.4 percent of all hits are correct and only 13.1 percent are completely wrong. If we sum up all correct hits with all similar hits (similar to the topic), then we obtain a result of 86.8 percent correct hits.

Table 5. Results of the Accuracy of the Chaining with $d=180$ Seconds

Comparing these results with the results of the test from the section Transcriptions of Speech-Recognition Engine is complex. In that section, the segmentation problem leads to very imprecise results (about +/- 2.5 minutes for given segment boundaries; the value is higher when segment boundaries are not given). Additionally, the retrieval within these segments and the keywords representing the segments are not perfect; one error leads to further errors. Our method finds the right time position in the video stream more precisely, although the retrieval performance of the best chain depends on the accuracy of the SRE. This dependency is evaluated in [ 30].

## Browsing

Browsing is a subjective selective process for filtering a large object set for navigation by the user [ 49]. It is the activity of engaging in a series of glimpses, each of which exposes the browser to objects of potential interest. Depending on the user's interest, the browser may or may not examine one or more of the objects more closely, and this may or may not lead the browser to acquire the objects [ 50]. In contrast, retrieval is the process of recalling and finding information and sending it to the user. The system described above, based on the chain index, retrieves topic words from the text stream to identify relevant positions in the lecture video. The user selects the information he/she is searching for from the supplied list and can then browse within the objects.

In this section, we present a schematic overview of our components for the browsing system illustrated by Fig. 7. The system is organized in three basic functional components, described in the following sections.

Fig. 7. Components of the system.

### 5.1 Speech Component

Lectures are recorded in a multimedia form, for example, as RealMedia files (.rm) in our implementation. The conversion of the audio data into text and the preprocessing of that text are the tasks of this component. An out-of-the-box SRE is used to generate text data with a time stamp on each word [ 6], [ 32]. The SRE needs a training phase for adapting it to the speaker's microphone. The dictionary of the SRE is supplemented with an existing domain lexicon or, if they exist, the keywords from the slides. The training phase's usefulness for obtaining accurate transcripts certainly depends on the user's motivation.

The transcript consists of a list of words with the corresponding point in time when the word was spotted in the speaker's flow of words. A preprocessing step deletes all the stop words (i.e., words without semantic relevancy) and transforms all words into their canonical form (stemming) [ 38]. Then, the resulting words with their time stamp—we call them raw data—are stored in a database.
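The preprocessing step can be sketched as follows. The stop word list and the suffix stripper are deliberately toy stand-ins: a real system would use a full stop list and a Porter-style stemmer [ 38].

```python
STOP_WORDS = {"die", "der", "das", "ist", "und", "ein"}  # toy illustrative list

def simple_stem(word):
    # crude suffix stripping as a stand-in for a full Porter-style stemmer
    for suffix in ("ungen", "ung", "en", "er", "e"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(transcript):
    """transcript: list of (word, timestamp) pairs produced by the SRE.
    Returns the raw data: stemmed non-stop words with their time stamps."""
    raw_data = []
    for word, t in transcript:
        w = word.lower()
        if w in STOP_WORDS:
            continue  # drop words without semantic relevancy
        raw_data.append((simple_stem(w), t))
    return raw_data

print(preprocess([("Die", 0.5), ("Topologie", 1.2), ("ist", 1.5), ("Ringe", 2.0)]))
# [('topologi', 1.2), ('ring', 2.0)]
```

The resulting (stem, timestamp) pairs are what the chaining component consumes.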

### 5.2 Chaining Component

The automatic clustering is the task of this component. The input is the raw data from the speech component and the output data is an index of the term chains. This component is presented in Section 3.1.

### 5.3 Web Access

This well-known component consists of a Web server. The Web server uses the chain index to answer requests from Web browsers. The system is implemented with the Django framework. 6 Django implements the Model-View-Controller concept: the model represents the data of the application (the chain index), the view manages the elements of the user interface, and the controller passes user actions on to the model. The out-of-the-box SRE needs approximately 3.5 hours (dual-core 2.4 GHz, 2 GB RAM) to transcribe a lecture (100 minutes, 12,000 words) and less than 3 seconds for the chaining. This calculation is done once after the lesson, and the later search process is not time critical. The search process only accesses the chain index, which is based on integer values.

Fig. 8. User interface with time information.

The user interface ( Fig. 8) consists of three regions (see also [ 51], [ 52]):

1. The first region is set for the input of the search query and, additionally, if the information is available, the restriction of the language and the related lecture.
2. The second region is the result of the first search. It illustrates the results with the time information and exemplifies the results for the most relevant chains for a lecture.
3. The third region shows the summary of the before, after, and inside areas and their chains.

After a click on an adequate chain, an external player plays the video from the starting point and, additionally, the third region of our site is expanded to show the new before, after, and inside areas of the played chain.

## Discussion

It is clear that the results produced by such a search tool depend on the accuracy of the SRE. Some incorrectly detected words and incorrect compound words are not a problem for our system and user interface: a relevant term occurs very often in the relevant chains, so a high redundancy exists. Thus, some wrongly detected words have no influence on the generated and final results [ 8]. The main problem, however, is that even a state-of-the-art SRE cannot recognize words from different languages in the transcription process. Lectures in German, especially lectures in informatics, contain some English words and phrases. An example of this problem is the English word "Source," for which the SRE detects the German word "Soße." To address this, an editor tool was developed for correcting the main chains in the areas before, inside, after, and superordinate.

The editor tool ( Fig. 9) has an input box for selecting the keyword. After the selection, the chains are expanded into an initial timeline and the working area ("Working Frame"). In some cases, the video does not start at the beginning of the sought-after video segment. The user can watch the appropriate area and change the start and end time of the chain. They can also change the word of a chain if it was incorrectly detected by the SRE. The editor also makes it possible to browse inside the chain and examine it in detail. After the editing process, the user can store the changes; the saved state ("Approved Frame") is then depicted under the working area. A further option is the correction of a chain (or a word): in that case, all terms of the raw data are corrected and the indexing (chaining) starts again. The intention is not to correct the whole transcript, which would be too time-consuming, but to correct the main and fundamental chains and words. This editor can be used as a multifunctional annotation tool as well.
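The correction step that propagates an edit through the raw data can be sketched as follows (the function name and data layout are hypothetical; the "Soße"/"Source" misrecognition is the example from the text above):

```python
def correct_word(raw_data, wrong, right):
    """Replace an SRE misrecognition everywhere in the raw data.
    raw_data: list of (word, timestamp) pairs; the chain index
    is rebuilt from the corrected raw data afterwards."""
    return [(right if w == wrong else w, t) for (w, t) in raw_data]

raw = [("soße", 12.0), ("code", 14.5), ("soße", 15.8)]
print(correct_word(raw, "soße", "source"))
# [('source', 12.0), ('code', 14.5), ('source', 15.8)]
```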

Fig. 9. Editor tool for the correction of the chains.

If the speaker-independent SRE had a better performance, then the training phase, the additional domain words, and the editing process would not be necessary, so our system would not necessarily need any additional resources.

## Conclusion and Further Work

In this paper, we have evaluated a system that allows browsing in sections of videos from a multimedia knowledge base. The results show that it is possible to add data to the result set that provides learners with the helpful information they are searching for.

Regardless of the issues presented by imperfect SREs, this user interface allows exact, easy, and fast navigation in the video archive. It also allows the disambiguation of words. The results of the tests demonstrate that browsing and word disambiguation are effective and that learners are indeed able to retrieve the results they are searching for.

Additionally, we are planning a usability study with learners to support the results we have already obtained. The searching process and the user interface will be evaluated during this test. Furthermore, we are working on improving the ranking. The part of speech (noun, verb, number, etc.), the time-duration of the chain, and the sum of chains for a term and the occurrence of the term in different lectures could be helpful parameters for a new ranking algorithm.

We are also planning to embed an MPEG-7 annotation in an MPEG-4 video container [ 53]. With the MPEG-4 Binary Format for Scenes (BIFS), it is possible to create a scene description with navigational elements and special search facilities that require annotation. This is a compact representation of audiovisual information in a single container with all information needed to compose, consume, share, and distribute the data. It offers a wide range of applications in video and audio retrieval, such as semantic search and content classification.

The application of our algorithm is not limited to indexing university lectures or presentations in general. Any material with spoken content, e.g., newscasts, theater plays, other video material, or any kind of speech complemented by textual data, could be analyzed and annotated with the help of the proposed algorithm.

## ACKNOWLEDGMENTS

This project was developed in the context of the Web University project, 7 which aims to explore novel Internet and IT technologies in order to enhance university teaching and research. Special thanks to Carol Ebbert for correcting the language and to the student Johannes Köhler for implementing the interface components.

## REFERENCES

• 1. S. Linckels, S. Repp, N. Karam, and C. Meinel, "The Virtual Tele-Task Professor: Semantic Search in Recorded Lectures," Proc. 38th SIGCSE Technical Symp. Computer Science Education (SIGCSE '07), pp. 50-54, 2007.
• 2. C.-W. Ngo, F. Wang, and T.-C. Pong, "Structuring Lecture Videos for Distance Learning Applications," Proc. Multimedia Software Eng., pp. 215-222, 2003.
• 3. N. Yamamoto, J. Ogata, and Y. Ariki, "Topic Segmentation and Retrieval System for Lecture Videos Based on Spontaneous Speech Recognition," Proc. Eighth European Conf. Speech Comm. and Technology, pp. 961-964, 2003.
• 4. A. Haubold, and J.R. Kender, "Augmented Segmentation and Visualization for Presentation Videos," Proc. 13th ACM Int'l Conf. Multimedia, pp. 51-60, 2005.
• 5. L. Tang, and J.R. Kender, "Designing an Intelligent User Interface for Instructional Video Indexing and Browsing," Proc. Int'l Conf. Intelligent User Interfaces (IUI '06), pp. 318-320, 2006.
• 6. S. Repp, and C. Meinel, "Segmenting of Recorded Lecture Videos - The Algorithm VoiceSeg," Proc. Conf. Signal Processing and Multimedia Applications (SIGMAP '06), pp. 317-322, 2006.
• 7. J. Glass, T.J. Hazen, L. Hetherington, and C. Wang, "Analysis and Processing of Lecture Audio Data: Preliminary Investigations," Proc. Workshop Interdisciplinary Approaches to Speech Indexing and Retrieval (HLT-NAACL '04), pp. 9-12, 2004.
• 8. W. Hürst, T. Kreuzer, and M. Wiesenhütter, "A Qualitative Study towards Using Large Vocabulary Automatic Speech Recognition to Index Recorded Presentations for Search and Access over the Web," Proc. IADIS Int'l Conf. WWW/Internet (ICWI '02), pp. 135-143, 2002.
• 9. G. Tür, A. Stolcke, D. Hakkani-Tür, and E. Shriberg, "Integrating Prosodic and Lexical Cues for Automatic Topic Segmentation," Computational Linguistics, vol. 27, no. 1, pp. 31-57, 2001.
• 10. G. Miller, "An On-Line Lexical Database," Int'l J. Lexicography, vol. 3, no. 4, pp. 235-312, 1990.
• 11. H. Nohr, Grundlagen der automatischen Indexierung. Logos Verlag, 2005.
• 12. R.A. Baeza-Yates, and B.A. Ribeiro-Neto, Modern Information Retrieval. ACM/Addison-Wesley, 1999.
• 13. W.-T. Chu, and H.-Y. Chen, "Cross-Media Correlation: A Case Study of Navigated Hypermedia Documents," Proc. 10th ACM Int'l Conf. Multimedia (MULTIMEDIA '02), pp. 57-66, 2002.
• 14. G.D. Abowd, C.G. Atkeson, J.A. Brotherton, T. Enqvist, P. Gulley, and J. LeMon, "Investigating the Capture, Integration and Access Problem of Ubiquitous Computing in an Educational Setting," Proc. Conf. Human Factors in Computing Systems (SIGCHI '98), pp. 440-447, 1998.
• 15. S. Mukhopadhyay, and B. Smith, "Passive Capture and Structuring of Lectures," Proc. ACM Int'l Conf. Multimedia (Part 1), pp. 477-487, 1999.
• 16. R. Müller, and T. Ottmann, "The Authoring on the Fly System for Automated Recording and Replay of (Tele)presentations," Multimedia System, vol. 8, no. 3, pp. 158-176, 2000.
• 17. H. Ney, and S. Ortmanns, "Progress in Dynamic Programming Search for LVCSR," Proc. IEEE, vol. 88, no. 8, pp. 1224-1240, 1997.
• 18. S. Linckels, and C. Meinel, "Applications of Description Logics to Improve Multimedia Information Retrieval for Efficient Educational Tools," Proc. ACM SIGMM Int'l Conf. Multimedia Information Retrieval (MIR '08), 2008.
• 19. The Description Logic Handbook: Theory, Implementation, and Applications, F. Baader, D. Calvanese, D.L. McGuinness, D. Nardi, and P.F. Patel-Schneider, eds. Cambridge Univ. Press, 2003.
• 20. W3C, OWL Web Ontology Language, http://www.w3.org/TR/owl-features, 2004.
• 21. R.A. Schmidt, "Terminological Representation, Natural Language & Relation Algebra," Proc. 16th German Conf. Artificial Intelligence (GWAI '93), pp. 357-371, 1993.
• 22. S. Linckels, and C. Meinel, "Resolving Ambiguities in the Semantic Interpretation of Natural Language Questions," Proc. Seventh Int'l Conf. Intelligent Data Eng. and Automated Learning (IDEAL '06), pp. 612-619, Sept. 2006.
• 23. E.M. Voorhees, "The TREC-8 Question Answering Track Report," Proc. Text REtrieval Conf. (TREC-8), 1999.
• 24. S. Repp, S. Linckels, and C. Meinel, "Question Answering from Lecture Videos Based on Automatically-Generated Learning Objects," Proc. Seventh Int'l Conf. Web-Based Learning (ICWL '08), pp. 509-520, 2008.
• 25. S. Linckels, C. Dording, and C. Meinel, "Better Results in Mathematics Lessons with a Virtual Personal Teacher," Proc. 34th Ann. ACM SIGUCCS Conf. User Services, pp. 201-209, 2006.
• 26. M.G. Noll, and C. Meinel, "Web Search Personalization via Social Bookmarking and Tagging," The Semantic Web, Proc. Sixth Int'l Semantic Web Conf., Second Asian Semantic Web Conf. (ISWC/ASWC '07), pp. 367-380, 2007.
• 27. S. Golder, and B.A. Huberman, "Usage Patterns of Collaborative Tagging Systems," J. Information Science, vol. 32, no. 2, pp. 198-208, 2006.
• 28. C. Marlow, M. Naaman, D. Boyd, and M. Davis, "Ht06, Tagging Paper, Taxonomy, Flickr, Academic Article, to Read," Proc. 17th ACM Conf. Hypertext and Hypermedia (HYPERTEXT '06), pp. 31-40, 2006.
• 29. H. Sack, and J. Waitelonis, "Integrating Social Tagging and Document Annotation for Content-Based Search in Multimedia Data," Proc. First Semantic Authoring and Annotation Workshop (SAAW '06), 2006.
• 30. W. Hürst, "Multimediale Informationssuche in Vortrags- und Vorlesungsaufzeichnungen," PhD dissertation, Fakultät für Angewandte Wissenschaften, Universität Freiburg, 2005.
• 31. H. Sack, and J. Waitelonis, "Automated Annotations of Synchronized Multimedia Presentations," Proc. Workshop Mastering the Gap: From Information Extraction to Semantic Representation (ESWC '06), 2006.
• 32. S. Repp, J. Waitelonis, H. Sack, and C. Meinel, "Segmentation and Annotation of Audiovisual Recordings Based on Automated Speech Recognition," Proc. Eighth Int'l Conf. Intelligent Data Eng. and Automated Learning (IDEAL '07), pp. 620-629, 2007.
• 33. M.A. Hearst, "Multi-Paragraph Segmentation of Expository Text," Proc. 32nd Ann. Meeting Assoc. for Computational Linguistics, pp. 9-16, 1994.
• 34. D. Beeferman, A. Berger, and J.D. Lafferty, "Statistical Models for Text Segmentation," Machine Learning, vol. 34, nos. 1-3, pp. 177-210, 1999.
• 35. M. Utiyama, and H. Isahara, "A Statistical Model for Domain-Independent Text Segmentation," Proc. Conf. 39th Ann. Meeting of the Assoc. for Computational Linguistic and 10th Conf. European Chapter, pp. 491-498, 2001.
• 36. F.Y.Y. Choi, P. Wiemer-Hastings, and J. Moore, "Latent Semantic Analysis for Text Segmentation," Proc. Conf. Empirical Methods on Natural Language Processing (EMNLP '01), pp. 109-117, 2001.
• 37. S. Repp, and C. Meinel, "Segmentation of Lecture Videos Based on Spontaneous Speech Recognition," Proc. IEEE Int'l Symp. Multimedia Workshops (ISMW '08), pp. 692-697, 2008.
• 38. M. Porter, "An Algorithm for Suffix Stripping," Program, vol. 14, no. 3, pp. 130-137, 1980.
• 39. F.Y.Y. Choi, "Advances in Domain Independent Linear Text Segmentation," Proc. First Conf. North American Chapter of the Assoc. for Computational Linguistics, pp. 26-33, 2000.
• 40. M. Galley, K. McKeown, E. Fosler-Lussier, and H. Jing, "Discourse Segmentation of Multi-Party Conversation," Proc. 41st Ann. Meeting of the Assoc. for Computational Linguistics (ACL '03), pp. 562-569, 2003.
• 41. M.A. Hearst, "TextTiling: Segmenting Text into Multi-Paragraph Subtopic Passages," Computational Linguistics, vol. 23, no. 1, pp. 33-64, 1997.
• 42. M. Lin, J. Nunamaker, M. Chau, and H. Chen, "Segmentation of Lecture Videos Based on Text: A Method Combining Multiple Linguistic Features," Proc. 37th Ann. Hawaii Int'l Conf. System Sciences (HICSS '04)—Track 1, p. 10003c, 2004.
• 43. L. Pevzner, and M.A. Hearst, "A Critique and Improvement of an Evaluation Metric for Text Segmentation," Computational Linguistics, vol. 28, no. 1, pp. 19-36, 2002.
• 44. I. Malioutov, and R. Barzilay, "Minimum Cut Model for Spoken Lecture Segmentation," Proc. 21st Int'l Conf. Computational Linguistics and 44th Ann. Meeting of the Assoc. for Computational Linguistics (ACL '06), pp. 25-32, 2006.
• 45. A. Hauptmann, and H. Wactlar, "Indexing and Search of Multimodal Information," Proc. IEEE Int'l Conf. Acoustics, Speech, and Signal Processing (ICASSP '97), p. 195, 1997.
• 46. M. Halliday, and R. Hasan, Cohesion in English. Longman, 1976.
• 47. J. Reynar, "Topic Segmentation: Algorithm and Applications," PhD dissertation, Univ. of Pennsylvania, 1998.
• 48. S. Repp, and C. Meinel, "Semantic Indexing for Recorded Educational Lecture Videos," Proc. Fourth IEEE Conf. Pervasive Computing and Comm. Workshops (PerCom '06), pp. 240-245, 2006.
• 49. R.V. Cox, B.G. Haskell, Y. LeCun, B. Shahraray, and L. Rabiner, "On the Applications of Multimedia Processing to Telecommunications," Proc. Int'l Conf. Image Processing (ICIP '97), vol. 1, pp. 5-8, 1997.
• 50. M.J. Bates, "What Is Browsing–Really? A Model Drawing from Behavioural Science Research," Information Research, vol. 12, no. 4, http://informationr.net/ir/12-4/paper330.html, Oct. 2007.
• 51. S. Repp, A. Groß, and C. Meinel, "Dynamic Browsing of Audiovisual Lecture Recordings Based on Automated Speech Recognition," Proc. Ninth Int'l Conf. Intelligent Tutoring Systems (ITS '08), pp. 662-664, 2008.
• 52. S. Repp, A. Groß, and C. Meinel, "Webbasierte Suche in Vorlesungsvideos auf Basis der Transkripte eines Spracherkenners," e-Learning Fachtagung Informatik (DeLFI), pp. 427-429, 2008.
• 53. S.F. Chang, T. Sikora, and A. Puri, "Overview of the MPEG-7 Standard," IEEE Trans. Circuits and Systems for Video Technology, vol. 11, no. 6, pp. 688-695, 2001.