Issue No. 11, Nov. 2016 (vol. 22)
ISSN: 1077-2626
pp: 2423-2436
Georges Nader , Université de Lyon, LIRIS UMR 5205 CNRS, France
Kai Wang , CNRS and University Grenoble Alpes, GIPSA-Lab, Grenoble, France
Franck Hetroy-Wheeler , University Grenoble Alpes, LJK, Grenoble, France
Florent Dupont , Université de Lyon, LIRIS UMR 5205 CNRS, France
ABSTRACT
3D meshes commonly undergo lossy operations (e.g., compression, watermarking, and transmission over noisy channels), which can introduce geometric distortion in the form of vertex displacement. Since the end users of 3D meshes are in most cases human beings, it is important to evaluate the visibility of the introduced vertex displacements. In this paper we present a model for computing a Just Noticeable Distortion (JND) profile for flat-shaded 3D meshes. The proposed model is based on an experimental study of the properties of the human visual system while observing a flat-shaded 3D mesh surface, in particular the contrast sensitivity function and contrast masking. We first define appropriate local perceptual properties on 3D meshes. We then detail the results of a series of psychophysical experiments in which we measured the threshold needed for a human observer to detect a change in vertex position. These results allow us to compute the JND profile for flat-shaded 3D meshes. The proposed JND model has been evaluated via a subjective experiment, and applied to guide 3D mesh simplification as well as to determine the optimal vertex coordinate quantization level for a 3D model.
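To illustrate how such a per-vertex JND profile can be used in practice, the sketch below shows two of the applications mentioned above: flagging visible vertex displacements and picking the coarsest coordinate quantization that remains invisible. This is a minimal illustrative sketch, not the authors' implementation; it assumes a precomputed per-vertex threshold array `jnd` (as a model like the one in the paper would provide), and the function names and the simple uniform quantizer are assumptions made here for demonstration only.

```python
import numpy as np

def displacement_visible(original, displaced, jnd):
    """Boolean mask: True where a vertex moved farther than its JND threshold.

    original, displaced: (N, 3) arrays of vertex positions.
    jnd: (N,) array of per-vertex displacement thresholds (hypothetical input,
         assumed to come from a JND model such as the one described above).
    """
    disp = np.linalg.norm(displaced - original, axis=1)
    return disp > jnd

def coarsest_invisible_quantization(vertices, jnd, min_bits=6, max_bits=16):
    """Return the smallest bit depth whose uniform quantization error stays
    below every vertex's JND threshold, or None if none qualifies."""
    lo = vertices.min(axis=0)
    extent = vertices.max(axis=0) - lo
    extent = np.where(extent > 0, extent, 1.0)  # guard against flat axes
    for bits in range(min_bits, max_bits + 1):
        step = extent / (2 ** bits - 1)
        quantized = np.round((vertices - lo) / step) * step + lo
        if not displacement_visible(vertices, quantized, jnd).any():
            return bits
    return None
```

Under these assumptions, the quantization search simply increases the bit depth until the rounding error at every vertex falls below its threshold, which mirrors the idea of selecting the lowest bitrate whose distortion is perceptually invisible.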
INDEX TERMS
Three-dimensional displays, computational modeling, visual systems, distortion, sensitivity, visualization, solid modeling, 3D mesh, just noticeable distortion, human visual system, psychophysical experiments, contrast sensitivity function, contrast masking
CITATION
Georges Nader, Kai Wang, Franck Hetroy-Wheeler, Florent Dupont, "Just Noticeable Distortion Profile for Flat-Shaded 3D Mesh Surfaces", IEEE Transactions on Visualization & Computer Graphics, vol. 22, no. 11, pp. 2423-2436, Nov. 2016, doi:10.1109/TVCG.2015.2507578