NVIDIA held "NVIDIA Deep Learning Day 2016 Spring" in Tokyo on April 27 to introduce the latest trends in deep learning. Drawing on the "GPU Technology Conference 2016" (GTC 2016), held April 4 to 7 (US time), the event outlined the rapidly evolving field of deep learning and explained why GPUs are essential to it.

Mana Murakami, Deep Learning Solutions Architect and CUDA Engineer, Platform Business Headquarters, NVIDIA |

Mana Murakami, a deep learning solutions architect and CUDA engineer in NVIDIA's Platform Business Headquarters, explained that the combination of three things matters for deep learning: a well-structured deep neural network (DNN) model, a large quantity of big data to feed the DNN, and GPUs to train it.

Why are GPUs used? Simply put, NVIDIA explains that in the deep learning field, GPUs can deliver roughly ten times the computing performance of CPUs. The training phase of deep learning consists largely of matrix operations, which GPUs have always been good at. "If you try to process many images through a deep hierarchy of layers, the number of parameters becomes enormous. That is the kind of field where GPUs play an active role," says Murakami.

Concretely, during training the network performs forward and backward propagation on each image to determine what it is, then updates its weights to improve accuracy. Even if the time per image is small, it adds up to a considerable amount when the number of images becomes huge. For example, training AlexNet on about 300MB of images takes roughly a day on a workstation equipped with two Titan X GPUs. The same work on a CPU would take about ten days, and development would stall while the computation runs. With GPUs, developers can train more in the same time and improve development efficiency.
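The training loop described above can be sketched in plain NumPy. This is an illustrative toy, not NVIDIA's code: a single weight matrix stands in for a full DNN, and logistic regression stands in for image classification, but the structure is the same one the talk describes: forward propagation, backward propagation, weight update, repeated many times.

```python
# Minimal sketch (assumed, illustrative) of the training loop the article
# describes: repeated forward and backward propagation with weight updates.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 64 samples, 10 features, binary labels.
X = rng.normal(size=(64, 10))
true_w = rng.normal(size=(10, 1))
y = (X @ true_w > 0).astype(float)

# One weight matrix stands in for a full DNN's parameters.
W = np.zeros((10, 1))
lr = 0.1

def forward(X, W):
    # Forward propagation is dominated by matrix multiplication (X @ W),
    # the operation GPUs accelerate.
    return 1.0 / (1.0 + np.exp(-(X @ W)))  # sigmoid activation

losses = []
for step in range(100):
    p = forward(X, W)
    # Cross-entropy loss, averaged over the batch.
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    losses.append(loss)
    # Backward propagation: gradient of the loss w.r.t. W (again a matmul).
    grad = X.T @ (p - y) / len(X)
    W -= lr * grad  # weight update

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Each pass over real image data repeats computations like these at vastly larger scale, which is where the one-day-versus-ten-days gap between GPU and CPU comes from.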

GPUs excel at matrix operations, and most of the training phase of deep learning consists of matrix operations. As the slide at the upper right shows, training mainly repeats forward and backward propagation and updates the weights until the network can identify previously unseen images. This training time can be enormous, and using a GPU shortens it (Source: lecture materials from NVIDIA Deep Learning Day 2016 Spring; the same applies below) |

And NVIDIA doesn't just offer GPUs as hardware. It continues to expand the deep learning libraries in "CUDA", which began as an integrated development environment for GPU computing, and by packaging them as the "Deep Learning SDK" it both shortens the training time deep learning requires and makes development easier.

The Deep Learning SDK is a general term for the libraries and software used by frameworks for application design and development, built so that deep learning applications can be implemented easily and run at high speed. Specifically, it provides "cuDNN", a CUDA library for deep learning; various mathematical libraries for dense and sparse matrices and for Fourier transforms; and "NCCL", a collective-communication library for multi-GPU systems.
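To make concrete what a library like cuDNN accelerates, here is a plain-NumPy sketch (illustrative only, on the CPU) of one of its core primitives: a 2D convolution forward pass. cuDNN implements highly tuned GPU versions of exactly this kind of computation.

```python
# Naive CPU sketch of a 2D convolution forward pass, the kind of
# primitive that cuDNN provides in GPU-optimized form.
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as used in DNNs)."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output element is a small dot product; optimized libraries
            # lower many of these into one large matrix multiplication,
            # which is why GPUs excel at this workload.
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

image = np.arange(16.0).reshape(4, 4)
edge = np.array([[1.0, -1.0]])  # horizontal difference filter
result = conv2d(image, edge)
print(result)  # every horizontal neighbor differs by 1, so all entries are -1
```

A deep network applies thousands of such filters per layer across every image, so moving this inner loop onto the GPU dominates the overall speedup.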

An overview of NVIDIA's Deep Learning SDK. It provides the libraries and software used in designing and developing deep learning applications. |