Issue No. 3, July-September 1998 (vol. 5)
Music plays an important role in everyday life. With the appearance in the early 1960s of electronic musical instruments (remember the famous Moog synthesizer?), music entered the electronic age. Once the Musical Instrument Digital Interface (MIDI) standard was defined, electronic instruments could exchange data over digital links. With the advent of the audio CD, music recording and distribution crossed the threshold into the digital domain. More and more delivery media are going digital. With the wide use of pulse code modulation (PCM) in the telecommunications industry and of MPEG audio in multimedia systems, the coding, compression, and transmission of digital sound are now well understood. Radio stations have begun converting their archives to online digital disks, and a radio announcer can now play back any piece of music within seconds, based on a large index of titles and artists. So why do we hear so little about the role of music in modern multimedia systems?
Let's recall what multimedia stands for. A widely used definition from Ralf Steinmetz and Klara Nahrstedt (Multimedia Computing, Communications, and Applications, Prentice Hall, Upper Saddle River, N.J., 1995) reads,
A multimedia system is characterized by computer-controlled, integrated production, manipulation, presentation, storage, and communication of independent information, which is encoded at least through a continuous (time-dependent) and a discrete (time-independent) medium.
So when we talk about multimedia and music, the point is really the integration of data streams of digital music and other discrete and continuous media.
While the audio stream has always played a central role in multimedia systems, most research and development work in areas such as videoconferencing, video-on-demand, and multimedia databases has focused on the video stream. In each of these areas, music is of minor interest. The audio track in videoconferencing consists of speech only. Video-on-demand typically contains a mixture of speech, noises, and music, often interleaved with the video stream; little work has been done on extracting and specifically processing the music component. In the area of multimedia databases, research and development emphasizes the indexing and retrieval of still images and video rather than addressing audio issues. Multimedia artists who complain about the lack of interdisciplinary cooperation between themselves and multimedia engineers are well justified.
Thus the editorial board of IEEE MultiMedia felt it appropriate to devote a special issue to the integration of music into multimedia systems. We're pleased to present three innovative articles on this topic. Minami et al. address the question of how the audio track, and in particular the music on it, can help in analyzing the semantics of an audio-video stream and in using music to better index digital AV archives. Fels, Nishimoto, and Mase present a novel way of creating multimedia art: their MusiKalscope combines a musical instrument with a visual art tool. And Borchers and Mühlhäuser introduce a system for creating music based on a highly innovative user interface that uses infrared batons as an input device. Earlier versions of the three articles were presented at IEEE's International Conference on Multimedia Computing and Systems in Ottawa in 1997.
I hope that this special issue stimulates creativity and inspires new research and development in the fascinating and little-explored area of music in multimedia systems.
Wolfgang Effelsberg is a professor of computer science at the University of Mannheim. He received his diploma in electrical engineering and Dr.-Ing. degree in computer science from the Technical University of Darmstadt, Germany, in 1976 and 1981, respectively. He is a member of the ACM and the Gesellschaft für Informatik. Contact Effelsberg at the University of Mannheim, e-mail email@example.com.