Installing CUDA 10, cuDNN and PyTorch with GPU Support on Windows 10

To install CUDA 10.1, the cuDNN build for CUDA 10.1 and PyTorch with GPU support on Windows 10, follow these steps in order:

Update current GPU driver

  • Download the appropriate updated driver for your GPU from the NVIDIA driver download site.
  • You can display the name of your GPU and select the driver accordingly; run the following command at the command prompt to get the GPU information (a Python alternative is sketched after this list).
    • wmic path win32_VideoController get name
  • In my case the above command displays “NVIDIA Quadro K1200”.
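The Python alternative mentioned above simply wraps the same wmic command (Windows only; it assumes wmic is on the Path, which it is by default) — a minimal sketch, not part of the original steps:

# Query the GPU name from Python by calling wmic (Windows only).
import subprocess

result = subprocess.run(
    ["wmic", "path", "win32_VideoController", "get", "name"],
    stdout=subprocess.PIPE, universal_newlines=True, check=True,
)
# The first output line is the column header "Name"; the remaining lines are GPU names.
gpu_names = [line.strip() for line in result.stdout.splitlines()[1:] if line.strip()]
print(gpu_names)  # e.g. ['NVIDIA Quadro K1200']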

Download and install CUDA and cuDNN

  • Download CUDA Toolkit 10.1 from the NVIDIA CUDA Toolkit download page and install it.
  • The default installation path would be similar to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1
  • Download cuDNN for CUDA 10.1 from the NVIDIA cuDNN download page and extract the downloaded archive.
  • Copy all the folders (bin, include and lib) from the extracted folder and paste them into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1 (a quick check for this step is sketched below).
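The quick check can be a few lines of Python confirming that the cuDNN files actually landed inside the CUDA 10.1 directory. A minimal sketch — the file names below (cudnn.h and lib\x64\cudnn.lib) are the ones shipped in recent cuDNN 7.x archives and may differ for other releases:

# Check that the cuDNN files were copied into the CUDA 10.1 installation directory.
import os

cuda_path = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1"
expected = [
    os.path.join(cuda_path, "include", "cudnn.h"),       # cuDNN header
    os.path.join(cuda_path, "lib", "x64", "cudnn.lib"),  # cuDNN import library
]
for path in expected:
    print(path, "->", "found" if os.path.exists(path) else "MISSING")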

Update/add environment variables

  • Set/Update environment variables as:
    • CUDA_HOME=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1
    • CUDA_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1
    • CUDA_PATH_V10_1=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1
  • Open the Path variable under System variables and make sure it contains the CUDA bin and libnvvp directories, i.e. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin and C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\libnvvp.
  • After completing all the above steps, your CUDA 10.1 and cuDNN installation is complete and the GPU is ready to use; a quick sanity check is sketched below.
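The sanity check can be as simple as printing the CUDA-related environment variables and calling nvcc --version (the CUDA Toolkit installer puts nvcc on the Path). A minimal Python sketch, run from a fresh command prompt so the new variables are picked up:

# Print the CUDA-related environment variables and the nvcc version.
import os
import subprocess

for var in ("CUDA_HOME", "CUDA_PATH", "CUDA_PATH_V10_1"):
    print(var, "=", os.environ.get(var, "<not set>"))

# nvcc is found via the Path entry added by the CUDA installer.
print(subprocess.run(["nvcc", "--version"], stdout=subprocess.PIPE,
                     universal_newlines=True).stdout)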

Install PyTorch

  • Go to the PyTorch official site and select the appropriate command for installing PyTorch. The following command is for:
    • PyTorch Build: Stable (1.0)
    • OS: Windows
    • Package: Conda
    • Language: Python 3.6
    • CUDA 10.0
  • conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
  • Note that the conda package brings its own CUDA 10.0 runtime, so it works even though the locally installed toolkit is CUDA 10.1.

If you want to install using pip, the above command changes to:

pip3 install https://download.pytorch.org/whl/cu100/torch-1.0.1-cp36-cp36m-win_amd64.whl
pip3 install torchvision
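Whichever package manager you use, you can confirm which CUDA version the installed PyTorch build was compiled against; for the cu100 wheel or the cudatoolkit=10.0 package above this should report 10.0:

# Confirm which CUDA version the installed PyTorch build was compiled against.
import torch

print(torch.__version__)   # e.g. 1.0.1
print(torch.version.cuda)  # e.g. 10.0; None would indicate a CPU-only build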

Verify installation

To verify the installation you can execute the following lines. If everything is fine, you will see the GPU device string (“cuda:0”) printed; otherwise you will see “cpu”.

import torch

# Select the GPU if CUDA is available, otherwise fall back to the CPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
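If you also want the actual GPU model name rather than just the device string, a short extension of the snippet above (torch.cuda.get_device_name works whenever CUDA is detected):

# Print the GPU model name when CUDA is available, otherwise note the CPU fallback.
import torch

if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. Quadro K1200
else:
    print("CUDA not available, running on CPU")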