Project Details

PyTorch Tutorial: Data Parallelism

Learn how to use multiple GPUs with PyTorch

By The Gradient Team

Description

PyTorch uses only one GPU by default. In this tutorial by Soumith Chintala, one of the creators of PyTorch, you'll learn how to use multiple GPUs in PyTorch with the DataParallel class. DataParallel splits each mini-batch of samples into several smaller mini-batches and runs the computation for each of these in parallel, one per GPU.
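As a minimal sketch of the idea (the toy model and names here are illustrative, not from the tutorial): wrapping a module in `nn.DataParallel` is all it takes to have each forward pass split the batch across available GPUs. If only one GPU (or none) is present, the wrapped module simply runs as usual.

```python
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    """A hypothetical single-layer model used only for demonstration."""
    def __init__(self, in_features=10, out_features=5):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)

    def forward(self, x):
        return self.fc(x)

model = ToyModel()
if torch.cuda.device_count() > 1:
    # Each call to model(batch) now scatters the batch across GPUs,
    # replicates the module, and gathers the outputs back.
    model = nn.DataParallel(model)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

batch = torch.randn(32, 10, device=device)  # mini-batch of 32 samples
output = model(batch)
print(output.shape)  # the full batch comes back in one tensor: (32, 5)
```

Note that `DataParallel` returns the gathered output on the default device, so downstream code sees a single tensor of the original batch size regardless of how many GPUs did the work.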