IfA and ITS-CI Host NVIDIA GPU Workshop

Event Details: NVIDIA, the Institute for Astronomy, and ITS Cyberinfrastructure are pleased to organize a two-day High Performance Computing and Programming event.

Event Location: University of Hawaii, Information Technology Center, Room ITC-105

Sign Up Here

Why you should attend: NVIDIA GPUs are the world’s fastest and most efficient accelerators, delivering world-record scientific application performance. NVIDIA’s CUDA technology is the most pervasive parallel computing model, used by over 250 scientific applications and over 150,000 developers worldwide. This programming workshop introduces scientific computing on NVIDIA GPUs to accelerate applications across a diverse set of domains.

Presented by NVIDIA instructor Dr. Jonathan Bentz, the workshop will introduce programming techniques using CUDA and OpenACC paradigms as well as optimization, profiling and debugging methods for GPU programming. An introduction to Deep Learning using GPUs will also be covered.

Who it’s for: Graduate Students, Postdocs, Researchers, and Professors

Agenda, Day 1 (February 4th): 9 AM to 4:30 PM

Introduction to GPU programming

• High-level overview of GPU architecture
• OpenACC: an introduction to compiler directives that mark loops and regions of standard C, C++, and Fortran code for offload from a host CPU to an attached accelerator
• Hands-on examples focusing on data locality
• GPU-accelerated libraries: discussion of AmgX, cuSolver, cuBLAS, and cuDNN
• Basics of GPU programming: an introduction to the CUDA C/C++ language
• Four hands-on examples illustrating simple kernel launches and the use of threads

Agenda, Day 2 (February 5th): 9 AM to 4:30 PM

Performance and Optimization

• Overview of global and shared memory usage
• Hands-on examples illustrating a 1D stencil and a matrix transpose
• Using the NVIDIA profiler to identify performance bottlenecks
• Advanced optimizations using streams and concurrency to overlap communication and computation
• Hands-on example using cuBLAS for matrix multiply
• Conclusion with a deep learning overview

Intro to Deep Learning / Machine Learning

• Overview of GPU-accelerated deep learning frameworks (Caffe, Torch, Theano)
• Deep learning with GPUs and NVIDIA DIGITS
• Live demo using NVIDIA DIGITS

Caffe Lab Examples (time permitting)
Deep learning is a rapidly growing field born from artificial intelligence and machine learning research. It is increasingly used to deliver near-human accuracy in image classification, voice recognition, natural language processing, sentiment analysis, recommendation engines, and more. Application areas include facial recognition, scene detection, advanced medical and pharmaceutical research, and autonomous vehicles. This overview introduces attendees to GPU-accelerated deep learning frameworks running on NVIDIA GPUs for performance and scalability.

Coffee and Lunch will be provided