PhD Defense: Chris Morris, Multi-Modal Saliency Fusion for Illustrative Image Enhancement
Computer Science and Electrical Engineering
University of Maryland, Baltimore County
Ph.D. Dissertation Defense
Multi-Modal Saliency Fusion for Illustrative Image Enhancement
Christopher J. Morris
10:30-12:30, Wednesday, 15 January 2014, ITE 365 & 352
Digitally manipulated or augmented images are increasingly prevalent. Multisensor systems produce augmented images that integrate data from multiple sensors into a single context. Mixed-reality images are generated by inserting computer-generated objects into a natural scene. Digital processing for application-specific tasks (e.g., compression or network transmission) can distort images with processing artifacts. Digital image augmentation can therefore introduce artifacts that influence how the image is perceived.
Visual cues (e.g., depth or size cues) may no longer be perceptually consistent in an augmented image. A feature deemed important in its local context may no longer be important in the broader integrated context. Inserted synthetic objects may lack the visual cues needed for proper perception of the overall scene. Finer cues that distinguish critical features may be lost in compressed images. Enhancing augmented images to add or restore these visual cues can improve the image's perceptibility.
This dissertation presents a framework for illustrating images to enhance critical features. The enhancements improve the perception and comprehension of the augmented image. The framework uses a linear combination of image (2D), surface-topology (3D), and task-based saliency measures to identify the critical features in the image. The use of multi-modal saliency allows the visualization designer to adjust the definition of critical features based on the attributes of the scene and the task at hand. Once identified, the features are enhanced using a non-photorealistic rendering (NPR) deferred-illustration technique. The enhancements, inspired by an analysis of artists' techniques, bolster the features' perceptual cues.
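The abstract does not spell out the fusion step, but a minimal sketch of a weighted linear combination of per-pixel saliency maps might look like the following; the weight names, default values, and normalization are illustrative assumptions, not details from the dissertation.

    import numpy as np

    def fuse_saliency(image_sal, surface_sal, task_sal,
                      w_2d=0.4, w_3d=0.3, w_task=0.3):
        """Hypothetical sketch of multi-modal saliency fusion: a weighted
        linear combination of 2D image, 3D surface-topology, and task-based
        saliency maps (weights chosen by the visualization designer)."""
        def normalize(s):
            # Rescale each modality to [0, 1] so the weights are comparable.
            s = s.astype(np.float64)
            lo, hi = s.min(), s.max()
            return (s - lo) / (hi - lo) if hi > lo else np.zeros_like(s)

        fused = (w_2d * normalize(image_sal)
                 + w_3d * normalize(surface_sal)
                 + w_task * normalize(task_sal))
        return normalize(fused)  # combined saliency map, same shape as inputs

Raising or lowering a weight shifts which features count as critical, which is how the designer adapts the definition of saliency to the scene and task.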
To measure how many salient features the enhanced image shares with the original image, the framework defines the Saliency Similarity Metric (SSM). The SSM provides feedback for making informed decisions when tuning the visualization. The benefits of illustrative enhancement are analyzed using objective and subjective evaluations. Under conventional metrics, illustrative enhancements improve the perceptual quality of images distorted by noise or compression artifacts. User-survey results reveal that enhancements must be applied carefully to yield perceptual improvement. The framework can be effectively applied in mobile rendering, augmented reality, and sensor-fusion applications.
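The SSM's exact formulation is not given in the abstract; one plausible sketch, treating it as the overlap between thresholded saliency maps of the original and enhanced images, is shown below. The threshold and the intersection-over-union score are assumptions for illustration only.

    import numpy as np

    def saliency_similarity(original_sal, enhanced_sal, threshold=0.5):
        """Illustrative stand-in for the Saliency Similarity Metric (SSM):
        the fraction of salient pixels shared by the original and enhanced
        images' saliency maps. The dissertation's definition may differ."""
        orig_mask = original_sal >= threshold
        enh_mask = enhanced_sal >= threshold
        union = np.logical_or(orig_mask, enh_mask).sum()
        if union == 0:
            return 1.0  # neither map has pixels above the saliency threshold
        return np.logical_and(orig_mask, enh_mask).sum() / union

A score near 1 indicates that the enhancement preserved the original salient features; a low score signals that the enhancement should be retuned.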
Committee: Drs. Penny Rheingans (chair), Dan Bailey, Jian Chen, Thomas Jackman (Desert Research Institute), Anupam Joshi and Marc Olano
Posted: January 11, 2014, 10:46 AM