GPU Computing
NVIDIA Director of Research says what's next for GPUs
GPU Computing: Past, Present, and Future
Dr. David Luebke, Director of Research, NVIDIA
1:00-2:00pm Friday, Feb. 4, ITE 227
Modern GPUs have outgrown their graphics heritage in many ways to emerge as the world's most successful parallel computing architecture. The GPUs that consumers buy to play video games provide a level of massively parallel computation in a single chip that was once the preserve of supercomputers. The raw computational horsepower of these chips has expanded their reach well beyond graphics. Today's GPUs not only render video game frames; they also accelerate astrophysics, video transcoding, image processing, protein folding, seismic exploration, computational finance, radio astronomy, heart surgery, self-driving cars - the list goes on and on.
When thinking about the future of GPUs, it is important to reflect on the past. How did this peripheral grow into a processing powerhouse found everywhere from medical clinics to radio telescopes to supercomputers? Why the graphics card, and not the modem or the mouse? Have GPUs really outgrown graphics, and will they thus evolve into pure HPC processors? (hint: no)
This talk is intended as a sort of "state of the union" for GPU computing. I'll briefly cover the dual heritage of GPUs, both in supercomputing and in the evolution of fixed-function graphics pipelines. I'll discuss "computational graphics", the evolution of graphics itself into a general-purpose computational problem, and how that shift impacts GPU design and GPU computing. Finally, I'll describe the important problems and research topics facing GPU computing practitioners and researchers.
Posted: January 28, 2011, 3:11 PM