Standard Details
As the demand for and supply of 3D technologies grow, accurate quality-assessment techniques are needed to advance the 3D display device and signal-processing engine industries. The underlying principles and statistical characteristics of 3D contents based on the human visual system (HVS) are described in this standard. In addition, a reliable 3D subjective assessment methodology that covers the characteristics of human perception, display mechanisms, and the viewing environment is introduced in this standard.
Status | Active
Board Approval | 2015-03-26
History | Published Date: 2015-07-10
Working Group Details
Working Group | HFVE_WG - Human Factors for Visual Experiences Working Group
Working Group Chair | Sanghoon Lee
Active Projects
This standard defines deep learning-based metrics for content analysis and quality of experience (QoE) assessment of visual contents, extending the Standard for the Quality of Experience (QoE) and Visual-Comfort Assessments of Three-Dimensional (3D) Contents Based on Psychophysical Studies (IEEE Std 3333.1.1) and the Standard for the Perceptual Quality Assessment of Three Dimensional (3D) and Ultra High Definition (UHD) Contents (IEEE Std 3333.1.2). The scope covers the following:
* Deep learning models for QoE assessment (multilayer perceptrons, convolutional neural networks, deep generative models); an illustrative sketch follows this list
* Deep metrics of visual experience from High Definition (HD), UHD, 3D, High Dynamic Range (HDR), Virtual Reality (VR) and Mixed Reality (MR) contents
* Deep analysis of clinical (electroencephalogram (EEG), electrocardiogram (ECG), electrooculography (EOG), and so on) and psychophysical (subjective test and simulator sickness questionnaire (SSQ)) data for QoE assessment
* Deep personalized preference assessment of visual contents
* Building image and video databases for performance benchmarking purposes, if necessary
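As a purely illustrative sketch (not part of the standard's normative text), the snippet below shows the kind of convolutional regressor on which such no-reference QoE metrics are commonly built. The class name `PatchQoENet`, the layer sizes, and the 64x64 patch size are assumptions chosen for brevity, not requirements of the standard.

```python
import torch
import torch.nn as nn

class PatchQoENet(nn.Module):
    """Hypothetical CNN that regresses a QoE score from an RGB image patch."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling -> (N, 64, 1, 1)
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),          # predicted opinion score for the patch
        )

    def forward(self, x):
        return self.regressor(self.features(x))

# Example: score a batch of 64x64 RGB patches (random data as a stand-in).
model = PatchQoENet()
patches = torch.rand(8, 3, 64, 64)
scores = model(patches)               # shape (8, 1)
```

In practice such a model would be trained to regress subjective scores (for example, mean opinion scores) collected under psychophysical protocols, with patch-level predictions pooled into a per-image or per-video QoE estimate.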
This standard establishes various traditional and deep learning-based methods for visual saliency prediction, visual content analysis, and subjective assessment for quantifying the visual discomfort and quality of experience (QoE) of 3D images and videos; a minimal rating-pooling sketch follows.
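As a minimal illustration of subjective-score pooling (the helper name and the 95% confidence level are assumptions, not requirements of the standard), the following Python snippet computes a mean opinion score (MOS) and a confidence interval from raw panel ratings:

```python
import numpy as np

def mean_opinion_score(ratings, z=1.96):
    """Hypothetical helper: MOS and ~95% confidence interval per stimulus.

    ratings: 2D array-like, one row per subject, one column per stimulus.
    """
    ratings = np.asarray(ratings, dtype=np.float64)
    mos = ratings.mean(axis=0)
    # Standard error of the mean across subjects.
    sem = ratings.std(axis=0, ddof=1) / np.sqrt(ratings.shape[0])
    return mos, z * sem

# Example: three subjects rating two stimuli on a 5-point scale.
mos, ci = mean_opinion_score([[5, 2], [4, 3], [5, 2]])
```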
This standard establishes methods for quality assessment of 3D, UHD and HDR contents based on physiological mechanisms such as perceptual quality and visual attention. This standard identifies and quantifies the following:
-- Causes of perceptual quality degradation, and the associated visual attention, for 3D, UHD and HDR image and video contents:
   -- Compression distortion, such as multi-view image and video compression
   -- Interpolation distortion from intermediate view rendering, such as 3D, UHD and HDR warping and view synthesis
   -- Structural distortion, such as bit errors from wireless/wired transmission
   -- Visual attention according to the quality degradation (a minimal saliency-weighted pooling sketch follows below)
-- Deep learning-based models for saliency detection and QoE assessment
Key items are needed to characterize the 3D, UHD and HDR databases in terms of the human visual system. These key factors are constructed in conjunction with the visual factors used for perceptual quality and visual attention assessment.
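As a purely illustrative sketch of saliency-weighted pooling (the function name and the squared-error distortion map are assumptions; the standard does not prescribe this particular metric), the snippet below weights a per-pixel distortion map by a visual-saliency map so that degradations in attended regions dominate the score:

```python
import numpy as np

def saliency_weighted_quality(ref, dist, saliency, eps=1e-8):
    """Hypothetical pooling of a distortion map with a saliency map.

    ref, dist : 2D grayscale arrays in [0, 1]
    saliency  : 2D non-negative saliency map (from any saliency detector)
    Returns a score in (0, 1]; higher means better quality.
    """
    # Per-pixel squared error as a simple distortion map.
    error_map = (np.asarray(ref, float) - np.asarray(dist, float)) ** 2
    # Normalize saliency into weights that sum to 1.
    weights = saliency / (saliency.sum() + eps)
    # Errors in salient (attended) regions contribute more to the score.
    weighted_mse = float((weights * error_map).sum())
    return 1.0 / (1.0 + weighted_mse)
```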
Existing Standards
The world is witnessing a rapid advance in stereoscopic 3D (S3D) and ultra-high-definition (UHD) technology. As a result, accurate quality and visual-comfort assessment techniques are needed to foster the display device industry as well as the signal-processing field. In this standard, thorough assessments with respect to the human visual system (HVS) for S3D and UHD contents are presented. Moreover, several image and video databases are publicly provided for research purposes.