Computer Science and Electrical Engineering
University of Maryland, Baltimore County
Acoustic-Tactile Rendering of Visual
Information for the Visually Impaired
Thrasyvoulos N. Pappas
Electrical Engineering and Computer Science
Northwestern University
2:30pm Monday, 11 November 2013, ITE 325B, UMBC
After a brief overview of research in the Department of Electrical Engineering and Computer Science at Northwestern University, we will focus on one particular research problem: the use of hearing and touch for conveying graphical and pictorial information to visually impaired (VI) people. This problem combines visual, acoustic, and tactile signal analysis with an understanding of human perception and interface design. The main idea is that the user, with a finger on a touch screen, actively explores a two-dimensional layout consisting of one or more objects. The objects are displayed via sounds and raised-dot tactile patterns. The finger acts as a pointing device and provides kinesthetic feedback. The touch screen is partitioned into regions, each representing an element of a visual scene or graphical display. A key element of our research is the use of spatial sound to facilitate active exploration of the layout. We use the head-related transfer function (HRTF) for rendering sound directionality, and variations of sound intensity and tempo for rendering proximity. Our research has addressed object shape and size perception, as well as the perception of a 2-D layout of simple objects with identical size and shape. We have also considered the rendering of a simple scene layout consisting of objects in a linear arrangement, each with a distinct tapping sound, which we compare to a "virtual cane." Subjective experiments with visually blocked subjects demonstrate the effectiveness of the proposed approaches. Our research findings are also expected to have an impact in other applications where vision cannot be used, e.g., GPS navigation while driving, fire-fighter operations in thick smoke, and military missions conducted under the cover of darkness.
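The proximity rendering described above (sound intensity and tapping tempo varying with the finger's distance to an object) can be sketched as follows. This is a minimal illustration only; the function name, parameters, and falloff curves are hypothetical and not the actual mappings used in the research:

```python
def proximity_cues(distance, max_distance=1.0,
                   base_gain=1.0, base_interval=1.0):
    """Map finger-to-object distance to two acoustic cues:
    a gain (loudness) and an inter-tap interval (tempo).

    Illustrative sketch: closer objects sound louder and tap
    faster. The quadratic gain falloff and linear tempo scaling
    are assumptions, not the talk's specific mappings.
    """
    # Normalize distance to [0, 1].
    d = min(max(distance, 0.0), max_distance) / max_distance
    # Intensity falls off with distance (quadratic, assumed).
    gain = base_gain * (1.0 - d) ** 2
    # Taps slow down as the finger moves away (linear, assumed).
    interval = base_interval * (0.2 + 0.8 * d)
    return gain, interval
```

For example, at `distance = 0` the object taps at full gain and at its fastest rate, while at `max_distance` it is silent; a rendering loop would call this each time the finger position updates.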
Thrasos Pappas received the Ph.D. degree in electrical engineering and computer science from MIT in 1987. From 1987 until 1999, he was a Member of the Technical Staff at Bell Laboratories, Murray Hill, NJ. He joined the EECS Department at Northwestern in 1999. His research interests are in human perception and electronic media, and in particular, image and video quality and compression, image and video analysis, content-based retrieval, model-based halftoning, and tactile and multimodal interfaces. Prof. Pappas is a Fellow of the IEEE and SPIE. He has served as editor-in-chief of the IEEE Transactions on Image Processing (2010-12), as an elected member of the Board of Governors of the IEEE Signal Processing Society (2004-07), as chair of the IEEE Image and Multidimensional Signal Processing Technical Committee (2002-03), and as technical program co-chair of ICIP-01 and ICIP-09. Since 1997 he has been co-chair of the SPIE/IS&T Conference on Human Vision and Electronic Imaging.
Host: Janet C. Rutledge, Ph.D.
Posted: October 30, 2013, 2:22 PM