The Future of Multimedia on the Internet
Guest Editor’s Introduction • Christian Timmerer • November 2016
Translations by Osvaldo Perez and Tiejun Huang
Audio by Martin Omana, Timothy K. Shih, and Steve Woods
Most of today’s Internet users have at least two personal devices that they use to create, share, and consume multimedia content anywhere and anytime. Both the amount of content and the amount of time people spend viewing it have increased significantly in recent years. It’s estimated that by 2019, it would take a viewer more than 5 million years to watch all the video content generated on the Internet in a single month! According to the June 2016 Sandvine Global Internet Phenomena Report, “streaming audio and video now accounts for 71 percent of evening traffic in North American fixed access networks. Sandvine expects this figure will reach 80 percent by 2020.”
In the coming years, the majority of multimedia Internet traffic will be transmitted wirelessly. Quality issues will become increasingly important as user expectations rise. To address these and other demands from an application perspective, intelligent apps and APIs must reduce complexities for developers and designers. Multimedia standards must also allow for basic interoperability.
The articles in Computing Now’s November 2016 issue highlight the following technologies and considerations related to multimedia on the Internet and discuss how they might look in the future:
- media formats, such as ultra-high-definition (UHD), high frame rate (HFR), and high dynamic range (HDR);
- media processing within the cloud;
- architectures, specifically with respect to information-centric networking (ICN);
- devices and interaction possibilities;
- quality of experience (QoE); and
- security and privacy.
In this Issue
The first article, “Manipulating Ultra-High Definition Video Traffic” by Yan Ye, Yuwen He, and Xiaoyu Xiu, describes UHD video and suggests scalable coding as a solution to support both HD and UHD. The authors present an architecture that aims to enable further scalability features available in the scalable extensions of HEVC/H.265 (SHVC)—as opposed to scalable video coding (SVC), which is based on advanced video coding (AVC/H.264). However, similar arguments surfaced when introducing SVC (for SD and HD), which never reached widespread market deployment.
“KaaS: A Standard Framework Proposal on Video Skimming,” by Lanshan Zhang and his colleagues, addresses the growing amount of video content online by proposing a technique called skimming. Skimming provides summaries that let users quickly get a general idea of the video topic and decide whether they want to view it. The authors present the approach as software-as-a-service (SaaS) and as an extension of an existing adaptive streaming standard. The approach provides additional metadata describing various content sections that appropriate user interfaces can exploit to present the end user with only those sections in which she or he is likely interested.
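To make the idea concrete, here is a minimal sketch of how a client could exploit section-level skimming metadata. The section format and tag names are hypothetical illustrations, not the KaaS schema: each section carries a time range and topic tags, and the player keeps only the sections that match the viewer's declared interests.

```python
def select_sections(sections, interests):
    """Return only the sections whose topic tags intersect the
    viewer's interests (hypothetical metadata format)."""
    wanted = set(interests)
    return [s for s in sections if wanted & set(s["tags"])]

# Hypothetical skimming metadata for one video: start/end in seconds.
sections = [
    {"start": 0,   "end": 45,  "tags": ["intro"]},
    {"start": 45,  "end": 180, "tags": ["demo", "streaming"]},
    {"start": 180, "end": 240, "tags": ["credits"]},
]

# A viewer interested in "streaming" is shown only the matching section.
skim = select_sections(sections, ["streaming"])
print([(s["start"], s["end"]) for s in skim])  # [(45, 180)]
```

In a real deployment, this selection would drive which media segments the adaptive streaming client actually requests, so uninteresting sections are never downloaded.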
In “Service Provisioning in Content-Centric Networking: Challenges, Opportunities, and Promising Directions,” Xiuquan Qiao and his colleagues discuss the evolution of content-centric networking, an instance of information-centric networking. They argue that upper-layer protocols should be the fundamental driving force for future Internet architectures. A proposed service-innovation environment aims to bridge the gap between upper-layer services and the underlying future Internet infrastructure.
Sarah Clinch, Jason Alexander, and Sven Gehring review three significant classes of pervasive display technologies — conventional 2D displays, urban media facades, and bespoke or novel hardware — in “A Survey of Pervasive Displays for Information Presentation.” They identify augmented and virtual reality as potential future directions, as well as examine shape-changing displays, which do not require the use of personal wearables or devices. The article describes three principles for pervasive display technologies: situatedness, personalization, and interactivity.
Quality of experience and user experience are important aspects of future Internet multimedia services. In “Optimizing the Perceptual Quality of Real-Time Multimedia Applications,” Jingxi Xu and Benjamin W. Wah present OptPQ, a systematic method for optimizing perceptual quality using just-noticeable-difference (JND) profiles. The method uses probability theory to combine JND profiles, which are based on the well-known Weber-Fechner law for single quality metrics. Although the article applies the proposed approach to voice over IP, it could also have other multimedia applications in the future Internet.
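The two building blocks the article rests on can be sketched briefly. The Weber-Fechner law says perceived magnitude grows with the logarithm of physical intensity, so equal intensity ratios yield equal perceptual steps; and per-metric JND profiles can be combined probabilistically, for example by multiplying independent "degradation not noticed" probabilities. The constants and the independence assumption below are illustrative simplifications, not the authors' OptPQ implementation.

```python
import math

def weber_fechner(intensity, i0=1.0, k=1.0):
    """Perceived magnitude under the Weber-Fechner law:
    p = k * ln(I / I0), with illustrative constants k and I0."""
    return k * math.log(intensity / i0)

def combined_unnoticed(probs):
    """Probability that a degradation stays below the just-noticeable
    difference across several quality metrics, assuming (for this
    sketch) that the per-metric JND events are independent."""
    result = 1.0
    for p in probs:
        result *= p
    return result

# Equal intensity *ratios* produce equal perceptual steps:
step_small = weber_fechner(20) - weber_fechner(10)    # doubling from 10
step_large = weber_fechner(200) - weber_fechner(100)  # doubling from 100
assert math.isclose(step_small, step_large)

# Combining two per-metric JND profiles (e.g., for delay and loss):
print(round(combined_unnoticed([0.9, 0.8]), 2))  # 0.72
```

The logarithmic law is why, for instance, a 100-ms delay increase is far more noticeable at low baseline delays than at high ones, which is exactly the kind of structure a JND-based optimizer can exploit.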
Finally, security and privacy are sometimes neglected in discussions about deploying multimedia services over the future Internet. “HbbTV Security and Privacy: Issues and Challenges,” by Marco Ghiglieri and Michael Waidner, describes how smart TVs and their apps typically don’t allow users to configure privacy and security. TV viewers are vulnerable to numerous violations, from malicious URLs to behavior-information tracking. The article analyzes these issues and proposes some possible solutions.
Although future networks will have more capabilities and capacities, the amount of multimedia content online will also increase. We must foster innovation in this domain to develop intelligent apps and APIs, as well as multimedia standards (but take care not to over-standardize). This Computing Now issue is certainly not an exhaustive discussion of these topics. Please add your views and feedback in the comments section below.
Christian Timmerer is an associate professor at Alpen-Adria-Universität Klagenfurt, Austria. His research focuses on immersive multimedia communication, streaming, adaptation, and quality of experience. He has published more than 160 papers and was the general conference chair of WIAMIS 2008, QoMEX 2013, and ACM MMSys 2016. He has participated in several EC-funded projects, as well as in ISO/MPEG. Timmerer is CIO and head of Research and Standardization at Bitmovin, a multimedia technology company he cofounded. Follow him on Twitter @timse7, and subscribe to his blog at http://blog.timmerer.com.