Data Compression Standards Committee

Learn more about the Data Compression Standards Committee, its mission, chair, and more.

The IEEE Data Compression Standards Committee (DCSC) is chartered by the IEEE Computer Society. It is responsible for managing the development of standards within the technical area of data compression and its associated applications, including without limitation data compression algorithms, data compression metrics, transmission methods, and cybersecurity issues related to data compression. It currently has seven standards working groups.

Chair: Feng Wu

IEEE Std 2941™-2021: IEEE Standard for Artificial Intelligence (AI) Model Representation, Compression, Distribution, and Management (published)
Chair: Yonghong Tian
Approved by Standards Board: 2021-12-08

Need of the project: The standard satisfies the demand for a unified representation across heterogeneous computing platforms and frameworks, higher efficiency of inference and compression, and secure, credible distribution and management of models, especially on resource-constrained devices and in large-scale distributed model learning. Standardization is essential for improving the efficiency of large-scale AI model distribution, management, and application, and for deployment on resource-constrained devices.
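One common model-compression technique in the standard's scope is weight quantization. The sketch below is purely illustrative and is not drawn from IEEE 2941 itself: it shows symmetric int8 quantization, which shrinks float weights to one byte each plus a scale factor, under the assumption of a simple per-tensor scale.

```python
# Illustrative sketch (not defined by IEEE 2941): symmetric per-tensor
# int8 weight quantization, a basic model-compression technique.

def quantize_int8(weights):
    """Map float weights to int8 values plus a shared scale factor."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized form."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.003, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Each weight is recovered to within half a quantization step, which is the usual accuracy/size trade-off such schemes make.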

Stakeholders: Industry and commerce: AI products (hardware or software) manufacturers or vendors, AI service providers, AI model providers; Government; Consumers; Academic and research bodies; Standards application businesses

IEEE Std 1857.10™-2021: IEEE Standard for Third-Generation Video Coding (published)

Chair: Siwei Ma
Approved by Standards Board: 2021-11-09

Need of the project: Existing specifications serve a similar purpose, but they cannot satisfy the demand for higher coding efficiency for video data, especially for the surging ultra-high-definition (UHD) videos, i.e., 4K and 8K, and virtual reality (VR) video. The committee views standardization as essential for improving coding efficiency for high-volume video applications and low-bandwidth consumer devices.

Stakeholders: Audio and video products (hardware or software) manufacturers or vendors; Video and audio service providers, including broadcasting operators, Internet video service providers; Aural and visual content providers.

IEEE Std 3161™-2022: IEEE Standard for Digital Retina Systems (published)

Chair: Yaowei Wang
Approved by Standards Board: 2022-12-03

Need of the project: Ubiquitous camera networks in today's smart cities generate massive volumes of images and videos across a range of spatial-temporal scales. However, the capabilities of sensing systems often lag behind the fast growth of video applications. Digital retina systems therefore provide a novel visual computing framework designed to align high-efficiency sensing models with video coding, feature coding, and model coding, as well as their joint optimization. In particular, the compressed video stream targets human vision, the compact feature stream targets machine vision, and the model stream incrementally updates deep learning models to improve the performance of human/machine vision tasks. Most existing technology standards define a one-camera-one-stream framework, whereas digital retina systems employ a one-camera-three-streams framework. Consequently, digital retina systems allow comprehensive, intelligent, and efficient interactions between retina-like cameras and edge servers over the cloud through the three streams. Digital retina systems can also achieve a higher compression ratio for visual data while maintaining performance competitive with uncompressed signals on various visual analysis tasks. They can support precise localization and tracking of objects across multiple cameras in different cities, playing a fundamental role in visual big data analysis and retrieval in smart cities. Currently, there is no existing specification for a visual computing architecture based upon the three streams.
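The one-camera-three-streams idea can be pictured as a per-camera bundle of three payloads. The sketch below is an assumption-laden illustration, not a data model from IEEE 3161; the field names are invented for demonstration.

```python
# Illustrative sketch only; stream names and fields are assumptions,
# not definitions from IEEE 3161. One camera emits three streams:
# video for human vision, compact features for machine vision, and
# incremental model updates.
from dataclasses import dataclass

@dataclass
class RetinaOutput:
    camera_id: str
    video_stream: bytes          # compressed video for human viewing
    feature_stream: bytes        # compact features for machine analysis
    model_stream: bytes = b""    # incremental model-update payload

def total_payload(out: RetinaOutput) -> int:
    """Bytes sent from one camera to the edge/cloud per capture unit."""
    return len(out.video_stream) + len(out.feature_stream) + len(out.model_stream)

frame = RetinaOutput("cam-001",
                     video_stream=b"\x00" * 1200,
                     feature_stream=b"\x01" * 64)
```

The point of the structure is that the small feature stream can serve machine-vision tasks without decoding the much larger video stream.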

Stakeholders: Industry and commerce: AI chip manufacturers, equipment manufacturers, AI algorithm or service providers, edge computing vendors, cloud computing vendors, and multimedia-related vendors.

P3122: Standard for Data Processing and Compression Framework for Internet of Things (ongoing)

Chair: Jinghui Lu
PAR Approved by Standards Board: 2021-11-09

Need of the project: The standard responds to the demand for information exchange and application data integration among IoT products. Many kinds of information are transmitted from IoT front-end devices, such as cameras, smart bracelets, and sensors, to the IoT center node, where the IoT data are collected, stored, and analyzed. The IoT data include video, audio, digital labels or characteristic data, sensor data, etc. The standard provides the framework and tools to improve the efficiency of transmission and storage and to protect the security and integrity of the data.
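Sensor telemetry is highly repetitive, which is why framework-level compression pays off in transmission and storage. The sketch below is only a demonstration of that effect; the choice of JSON plus zlib is an assumption for illustration, not a codec specified by P3122.

```python
# Illustrative sketch (codec choice is an assumption, not from P3122):
# repetitive IoT sensor readings compress well, and a lossless round
# trip preserves data integrity.
import json
import zlib

readings = [{"sensor": "temp-01", "t": 1700000000 + i, "value": 21.5}
            for i in range(100)]
raw = json.dumps(readings).encode("utf-8")
packed = zlib.compress(raw, level=9)

# Integrity check: decompression must reproduce the original bytes.
assert zlib.decompress(packed) == raw
ratio = len(raw) / len(packed)   # compression ratio achieved
```

A real framework would also cover framing, encryption, and authentication; this sketch shows only the compression and integrity aspects.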

Stakeholders: Industry and commerce: IoT products (hardware or software) manufacturers or vendors; Consumers

P3184: Standard for Data Framework for Autonomous Driving (ongoing)

Chair: Yanyong Zhang
PAR Approved by Standards Board: 2022-06-16

Need of the project: A standard is needed to define a data-related architecture, to enable the rapid development of highly efficient autonomous driving systems and support growth of the autonomous driving industry. Autonomous driving systems include multiple subsystems such as sensors, actuators, machine learning models, heterogeneous computing platforms, and vehicle-road-cloud collaborative control components. These subsystems impact the level of effective data interaction and data processing that a system can achieve. Today, the data frameworks (data format, data processing flow, data interaction protocol) of all these subsystems are designed individually, which severely limits the efficiency of information exchange among the various subsystems. The lack of common interfaces hinders the fast development of autonomous-driving-related perception, decision-making, and planning algorithms.
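The "common interface" problem can be pictured as a shared message envelope that every subsystem produces and consumes. The sketch below is hypothetical; the field names are invented for illustration and are not drawn from P3184.

```python
# Illustrative sketch; message fields are assumptions, not from P3184.
# A common envelope lets heterogeneous subsystems (sensors, planners,
# cloud components) exchange data on a shared time base.
from dataclasses import dataclass

@dataclass(frozen=True)
class SensorMessage:
    source: str        # e.g. "lidar-front", "camera-left" (invented names)
    timestamp_ns: int  # shared time base across subsystems
    kind: str          # payload type: "pointcloud", "image", ...
    payload: bytes     # codec-specific body

def is_newer(a: SensorMessage, b: SensorMessage) -> bool:
    """Order messages on the shared time base regardless of source."""
    return a.timestamp_ns > b.timestamp_ns

m_old = SensorMessage("lidar-front", 100, "pointcloud", b"")
m_new = SensorMessage("camera-left", 200, "image", b"")
```

With a uniform envelope like this, a planning module can merge and order inputs from any sensor without per-vendor adapters, which is the interoperability gap the paragraph above describes.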

Stakeholders: Autonomous vehicle manufacturers, AI chip manufacturers, sensor and communication product manufacturers, AI algorithm developers, service providers, edge computing vendors, cloud computing vendors, and multimedia vendors.

P3404: Standard for Requirements and Framework for Sharing Data and Models for Artificial Intelligence across Multiple Computing Centers (ongoing)

Chair: Yue Yu
PAR Approved by Standards Board: 2023-09-21

Need of the project: With the emergence of technologies such as large-scale distributed machine learning, Artificial Intelligence (AI) for science, and joint-cloud computing, the need to share raw data, intermediate computing results, and model parameters across geographically distributed computing resources is also increasing rapidly. However, the data and models to be shared are usually heterogeneous, large in size, and subject to different privacy requirements, which hinders the feasibility and efficiency of data and model sharing across multiple computing centers. Thus, it is essential to identify the new requirements and standardize the framework for data- and model-sharing procedures across multiple computing centers so that the interconnection and collaboration of computing centers can be exploited in real-world applications.

Stakeholders: Industry and commerce: AI products (hardware or software) manufacturers or vendors, AI service providers, AI model providers, equipment manufacturers, system developers, computing center owners.

P3366.1: Standard for Geometry Point Cloud Compression (ongoing)

Chair: Shan Liu
PAR Approved by Standards Board: 2023-06-05

Need of the project: With the advance of technology, the demand for more immersive experiences beyond conventional 2-D video viewing continues to rise. A point cloud is a data format that can be used to form 3-D models and provide a six-degree-of-freedom viewing experience. A point cloud usually consists of a huge amount of data, so efficient compression is critical for point clouds to be used in many applications. Standardization of point cloud compression is essential for transmitting high volumes of point cloud data over bandwidth-constrained networks and processing such data on storage-constrained consumer devices.
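A first intuition for geometry compression is voxel-grid quantization: snapping point coordinates to a grid and merging duplicates. The sketch below is only that basic step, not the P3366.1 codec, and the grid size is an arbitrary assumption.

```python
# Illustrative sketch, not the P3366.1 codec: voxel-grid quantization,
# a basic lossy geometry-compression step that snaps (x, y, z) points
# to a grid of integer indices and drops duplicates.

def voxelize(points, voxel=0.05):
    """Quantize float coordinates to grid cells, keeping one point per cell."""
    seen = set()
    out = []
    for x, y, z in points:
        key = (round(x / voxel), round(y / voxel), round(z / voxel))
        if key not in seen:
            seen.add(key)
            out.append(key)
    return out

cloud = [(0.01, 0.02, 0.0), (0.012, 0.018, 0.001), (1.0, 1.0, 1.0)]
grid = voxelize(cloud)   # the first two points collapse into one voxel
```

Real codecs go much further (octree coding, entropy coding of occupancy), but even this step shows why lossy geometry representation trades a small precision loss for a large size reduction.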

Stakeholders: Manufacturers and vendors (hardware or software) with products using point clouds; Service providers, including broadcasting operators, Internet service providers distributing point clouds or models developed from point cloud data; Point cloud or point cloud generated content providers.

Helpful Links

Data Compression Standards Website

P1857 Audio Video Coding Working Group

P2941 AI Model Representation, Compression, Distribution and Management Working Group

Future Video Coding Study Group