Other respondents have already described which commands can be used to check the CUDA version. Here, I'll describe how to turn the output of those commands into an environment variable of the form "10.2", "11.0", etc.
To recap, you can use
nvcc --version
to find out the CUDA version.
I think this should be your first port of call.
If you have multiple versions of CUDA installed, this command should print out the version for the copy which is highest on your PATH.
The output looks like this:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Thu_Jun_11_22:26:38_PDT_2020
Cuda compilation tools, release 11.0, V11.0.194
Build cuda_11.0_bu.TC445_37.28540450_0
We can pass this output through sed to pick out just the MAJOR.MINOR release version number.
CUDA_VERSION=$(nvcc --version | sed -n -E 's/^.*release ([0-9]+\.[0-9]+).*$/\1/p')
If nvcc isn’t on your path, you should be able to run it by specifying the full path to the default location of nvcc instead.
/usr/local/cuda/bin/nvcc --version
The output of which is the same as above, and it can be parsed in the same way.
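If you want to sanity-check the sed extraction on a machine without nvcc, you can run it over a saved copy of the output shown above. A minimal sketch, using the sample output as a here-string-style variable:

```shell
# Sample nvcc output as shown above; normally you would pipe nvcc --version directly.
nvcc_output='nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Thu_Jun_11_22:26:38_PDT_2020
Cuda compilation tools, release 11.0, V11.0.194
Build cuda_11.0_bu.TC445_37.28540450_0'

# -n plus the /p flag prints only lines where the substitution matched,
# so the copyright and build-date lines produce no output at all.
CUDA_VERSION=$(printf '%s\n' "$nvcc_output" | sed -n -E 's/^.*release ([0-9]+\.[0-9]+).*$/\1/p')
echo "$CUDA_VERSION"   # 11.0
```

Only the "release 11.0" line matches, so the variable ends up holding exactly the MAJOR.MINOR string.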
Alternatively, you can find the CUDA version from the version.txt file.
cat /usr/local/cuda/version.txt
The output of which
CUDA Version 10.1.243
can be parsed using sed to pick out just the MAJOR.MINOR release version number.
CUDA_VERSION=$(sed -n -E 's/.*CUDA Version ([0-9]+\.[0-9]+).*/\1/p' /usr/local/cuda/version.txt)
Note that sometimes the version.txt file refers to a different CUDA installation than nvcc --version does. In this scenario, the nvcc version should be the version you're actually using.
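If you want to detect such a mismatch explicitly, you can compare the MAJOR.MINOR part of both reported versions. A small sketch; the helper names here are our own, not standard tools:

```shell
# Extract the MAJOR.MINOR prefix from a version string like "10.1.243".
major_minor() {
    printf '%s\n' "$1" | sed -n -E 's/^([0-9]+\.[0-9]+).*/\1/p'
}

# Succeed if two CUDA version strings agree on MAJOR.MINOR.
versions_match() {
    [ "$(major_minor "$1")" = "$(major_minor "$2")" ]
}

# Example: version.txt reports 10.1.243, nvcc reports 10.1 -> they agree.
if versions_match "10.1.243" "10.1"; then
    echo "nvcc and version.txt agree"
else
    echo "warning: nvcc and version.txt report different CUDA versions"
fi
```

In a real script you would feed these helpers the values parsed from nvcc and version.txt respectively.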
We can combine these three methods together in order to robustly get the CUDA version as follows:
if nvcc --version > /dev/null 2>&1; then
    # Determine CUDA version using the default nvcc binary
    CUDA_VERSION=$(nvcc --version | sed -n -E 's/^.*release ([0-9]+\.[0-9]+).*$/\1/p')
elif /usr/local/cuda/bin/nvcc --version > /dev/null 2>&1; then
    # Determine CUDA version using the /usr/local/cuda/bin/nvcc binary
    CUDA_VERSION=$(/usr/local/cuda/bin/nvcc --version | sed -n -E 's/^.*release ([0-9]+\.[0-9]+).*$/\1/p')
elif [ -f "/usr/local/cuda/version.txt" ]; then
    # Determine CUDA version using the /usr/local/cuda/version.txt file
    CUDA_VERSION=$(sed -n -E 's/.*CUDA Version ([0-9]+\.[0-9]+).*/\1/p' /usr/local/cuda/version.txt)
else
    CUDA_VERSION=""
fi
This environment variable is useful for downstream installations, such as when pip installing a copy of pytorch that was compiled for the correct CUDA version.
python -m pip install \
    "torch==1.9.0+cu${CUDA_VERSION/./}" \
    "torchvision==0.10.0+cu${CUDA_VERSION/./}" \
    -f https://download.pytorch.org/whl/torch_stable.html
Similarly, you could install the CPU version of pytorch when CUDA is not installed.
if [ "$CUDA_VERSION" = "" ]; then
MOD="+cpu";
echo "Warning: Installing CPU-only version of pytorch"
else
MOD="+cu${CUDA_VERSION/./}";
echo "Installing pytorch with $MOD"
fi
python -m pip install \
    "torch==1.9.0${MOD}" \
    "torchvision==0.10.0${MOD}" \
    -f https://download.pytorch.org/whl/torch_stable.html
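The CPU/GPU choice above can also be factored into a small helper, so the same logic is reusable across install scripts. A minimal sketch; the function name torch_suffix is our own, not a standard tool:

```shell
# Map a CUDA version like "11.0" to a pytorch wheel suffix like "+cu110".
# An empty version means no CUDA, so fall back to the CPU-only build.
torch_suffix() {
    if [ -z "$1" ]; then
        echo "+cpu"
    else
        # tr -d '.' strips the dot, equivalent to ${VERSION/./} for X.Y versions
        echo "+cu$(printf '%s' "$1" | tr -d '.')"
    fi
}

echo "torch==1.9.0$(torch_suffix "${CUDA_VERSION:-}")"
```

With CUDA_VERSION unset or empty this prints the +cpu requirement; with CUDA_VERSION=11.0 it prints the +cu110 one.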
But be careful with this because you can accidentally install a CPU-only version when you meant to have GPU support.
For example, this can happen if you run the install script on a server's login node which doesn't have GPUs, while your jobs will be deployed onto nodes which do have GPUs. In this case, the login node will typically not have CUDA installed.
How to Check CUDA Version Easily
Here you will learn how to check the NVIDIA CUDA version in 3 ways: nvcc from the CUDA toolkit, nvidia-smi from the NVIDIA driver, and simply checking a file. Using one of these methods, you will be able to see the CUDA version regardless of the software you are using, such as PyTorch, TensorFlow, conda (Miniconda/Anaconda), or inside Docker.
Prerequisite
You should have the NVIDIA driver installed on your system, as well as the NVIDIA CUDA toolkit (aka CUDA), before we start. If you haven't, you can install it by running sudo apt install nvidia-cuda-toolkit.
What is CUDA?
CUDA is a general parallel computing architecture and programming model developed by NVIDIA for its graphics cards (GPUs). Using CUDA, PyTorch or TensorFlow developers will dramatically increase the performance of PyTorch or TensorFlow training models, utilizing GPU resources effectively.
In GPU-accelerated workloads, the sequential portion of the task runs on the CPU for optimized single-threaded performance, while the compute-intensive segment, such as PyTorch training, runs in parallel via CUDA across thousands of GPU cores. When using CUDA, developers can add a few basic keywords to common languages such as C, C++, and Python to implement parallelism.
Method 1 — Use nvcc to check CUDA version
If you have installed the cuda-toolkit software, either from the official Ubuntu repositories via sudo apt install nvidia-cuda-toolkit, or by downloading and installing it manually from the official NVIDIA website, you will have nvcc on your path (try echo $PATH) and its location will be /usr/bin/nvcc (check by running which nvcc).
To check CUDA version with nvcc, run
nvcc --version
The last line of the output shows the CUDA version. The version here is 10.1. Yours may vary, and can be 10.0, 10.1, 10.2, or even older versions such as 9.0, 9.1, and 9.2. The full text output follows.
vh@varhowto-com:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
What is nvcc?
nvcc is the NVIDIA CUDA Compiler, hence the name. It is the key wrapper for the CUDA compiler suite. Beyond printing the version, you can use nvcc to compile and link both host and GPU code.
Check out nvcc's manpage for more information.
Method 2 — Check CUDA version by nvidia-smi from the NVIDIA Linux driver
The second way to check the CUDA version is to run nvidia-smi, which comes with the NVIDIA driver, specifically the nvidia-utils package. You can install the NVIDIA driver either from the official repositories of Ubuntu or from the NVIDIA website.
$ which nvidia-smi
/usr/bin/nvidia-smi
$ dpkg -S /usr/bin/nvidia-smi
nvidia-utils-440: /usr/bin/nvidia-smi
To check CUDA version with nvidia-smi
, directly run
nvidia-smi
You can see similar output below. The version is at the top right of the output. Here, my version is CUDA 10.2. You may have 10.0, 10.1, or an even older version such as 9.0, 9.1, or 9.2 installed.
Beyond the CUDA version, there are more details in the nvidia-smi output: the driver version (440.100), GPU name, GPU fan percentage, power consumption/capacity, and memory usage can also be found here. You can also see the processes currently using the GPU. This is helpful if you want to check whether your PyTorch or TensorFlow model is actually using the GPU.
Here is the full text output:
vh@varhowto-com:~$ nvidia-smi
Tue Jul 07 10:07:26 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.100 Driver Version: 440.100 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1070 Off | 00000000:01:00.0 On | N/A |
| 31% 48C P0 35W / 151W | 2807MiB / 8116MiB | 1% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1582 G /usr/lib/xorg/Xorg 262MiB |
| 0 2481 G /usr/lib/xorg/Xorg 1646MiB |
| 0 2686 G /usr/bin/gnome-shell 563MiB |
| 0 3244 G …AAAAAAAAAAAACAAAAAAAAAA= --shared-files 319MiB |
+-----------------------------------------------------------------------------+
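If you also want the nvidia-smi CUDA version as a variable, the header line can be parsed much like the nvcc output. A sketch, run here against the captured header above; note that this value is the maximum CUDA version the driver supports, which may differ from the toolkit version nvcc reports:

```shell
# Header line as printed by nvidia-smi above; on a real system you would
# pipe: nvidia-smi | sed -n -E 's/.*CUDA Version: ([0-9]+\.[0-9]+).*/\1/p'
header='| NVIDIA-SMI 440.100      Driver Version: 440.100    CUDA Version: 10.2     |'

SMI_CUDA_VERSION=$(printf '%s\n' "$header" | sed -n -E 's/.*CUDA Version: ([0-9]+\.[0-9]+).*/\1/p')
echo "$SMI_CUDA_VERSION"   # 10.2
```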
What is nvidia-smi?
nvidia-smi (NVSMI) is the NVIDIA System Management Interface program. It provides monitoring and maintenance capabilities for NVIDIA's Tesla, Quadro, GRID, and GeForce GPUs from the Fermi and later architecture families. For most functions, GeForce Titan series products are supported, with only limited detail given for the rest of the GeForce range.
NVSMI is also a cross-platform application that supports both common NVIDIA driver-supported Linux distros and 64-bit versions of Windows starting with Windows Server 2008 R2. Metrics may be used directly by users via stdout, or stored via CSV and XML formats for scripting purposes.
For more information, check out the man page of nvidia-smi.
Method 3 — cat /usr/local/cuda/version.txt
cat /usr/local/cuda/version.txt
Note that if you install the NVIDIA driver and CUDA from Ubuntu 20.04's own official repository, this approach may not work.
3 ways to check CUDA version
Time Needed: 5 minutes
There are basically three ways to check the CUDA version; one should work if another does not.
- Perhaps the easiest way is to check a file. Run cat /usr/local/cuda/version.txt. Note: this may not work on Ubuntu 20.04.
- Another method is through the cuda-toolkit package command nvcc. Simply run nvcc --version; the CUDA version is in the last line of the output.
- The other way is via the nvidia-smi command from the NVIDIA driver you have installed. Simply run nvidia-smi; the version is in the header of the table printed.
Alternatively, in the NVIDIA Control Panel, go to the Help tab and select System Information. In the Components section, under NVCUDA.DLL, it shows the CUDA version, e.g. NVIDIA CUDA 10.2.
What version of CUDA do I have on Windows?
You can verify that you have a CUDA-capable GPU through the Display Adapters section in the Windows Device Manager. Here you will find the vendor name and model of your graphics card(s). If you have an NVIDIA card that is listed in http://developer.nvidia.com/cuda-gpus, that GPU is CUDA-capable.
How do I know if CUDA is installed?
Verify CUDA Installation
- Verify driver version by looking at: /proc/driver/nvidia/version : …
- Verify the CUDA Toolkit version. …
- Verify running CUDA GPU jobs by compiling the samples and executing the deviceQuery or bandwidthTest programs.
How do I update CUDA drivers on Windows 10?
- Step 1: Check the software you will need to install. …
- Step 2: Download Visual Studio Express. …
- Step 3: Download CUDA Toolkit for Windows 10. …
- Step 4: Download Windows 10 CUDA patches. …
- Step 5: Download and Install cuDNN. …
- Step 6: Install Python (if you don’t already have it) …
- Step 7: Install Tensorflow with GPU support.
Which CUDA version should I install?
For those GPUs, CUDA 6.5 should work. Starting with CUDA 9.x, older CUDA GPUs of compute capability 2.x are also not supported.
How do I check my Nvidia driver version?
A: Right-click on your desktop and select NVIDIA Control Panel. From the NVIDIA Control Panel menu, select Help > System Information. The driver version is listed at the top of the Details window. For more advanced users, you can also get the driver version number from the Windows Device Manager.
Is CUDA only for NVIDIA?
Unlike OpenCL, CUDA-enabled GPUs are only available from Nvidia.
Is my GPU CUDA capable?
CUDA Compatible Graphics
To check if your computer has an NVIDIA GPU and if it is CUDA-enabled: Right-click on the Windows desktop. If you see "NVIDIA Control Panel" or "NVIDIA Display" in the pop-up dialogue, the computer has an NVIDIA GPU. Click on "NVIDIA Control Panel" or "NVIDIA Display" in the pop-up dialogue.
Where does CUDA install?
By default, the CUDA SDK Toolkit is installed under /usr/local/cuda/. The nvcc compiler driver is installed in /usr/local/cuda/bin, and the CUDA 64-bit runtime libraries are installed in /usr/local/cuda/lib64.
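To verify that a given prefix actually contains this layout, a quick sketch; the function name cuda_layout_ok is our own, not a standard tool:

```shell
# Check that a CUDA prefix has the expected layout: an executable nvcc
# under bin/ and a lib64/ directory for the 64-bit runtime libraries.
cuda_layout_ok() {
    [ -x "$1/bin/nvcc" ] && [ -d "$1/lib64" ]
}

if cuda_layout_ok /usr/local/cuda; then
    echo "CUDA toolkit found under /usr/local/cuda"
fi
```

The same helper can be pointed at versioned prefixes such as /usr/local/cuda-11.4 when several toolkits are installed side by side.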
How do I check the cuDNN version?
View CUDA, cuDNN, and Ubuntu versions
Check the cuDNN version: cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2. Check the Ubuntu version: cat /etc/issue.
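The three macros in cudnn.h can be combined into a single X.Y.Z string. A hedged sketch, run here against a stub of the relevant header lines since cudnn.h may not be present on your machine; note that in newer cuDNN releases these macros live in cudnn_version.h instead:

```shell
# Stub of the relevant cudnn.h lines; on a real system substitute
# /usr/local/cuda/include/cudnn.h (or cudnn_version.h in newer releases).
cudnn_header='#define CUDNN_MAJOR 7
#define CUDNN_MINOR 6
#define CUDNN_PATCHLEVEL 5'

# awk collects each macro's value, then prints them joined with dots.
CUDNN_VERSION=$(printf '%s\n' "$cudnn_header" | awk '
    /#define CUDNN_MAJOR/      { ma = $3 }
    /#define CUDNN_MINOR/      { mi = $3 }
    /#define CUDNN_PATCHLEVEL/ { pa = $3 }
    END { print ma "." mi "." pa }')
echo "$CUDNN_VERSION"   # 7.6.5
```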
How do I run a Cuda sample?
Navigate to the CUDA Samples’ nbody directory. Open the nbody Visual Studio solution file for the version of Visual Studio you have installed. Open the “Build” menu within Visual Studio and click “Build Solution”. Navigate to the CUDA Samples’ build directory and run the nbody sample.
What is CUDA and cuDNN?
The NVIDIA CUDA® Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. … It allows them to focus on training neural networks and developing software applications rather than spending time on low-level GPU performance tuning.
What is CUDA 11?
Summary. CUDA 11 provides a foundational development environment for building applications for the NVIDIA Ampere GPU architecture and powerful server platforms built on the NVIDIA A100 for AI, data analytics, and HPC workloads, both for on-premises (DGX A100) and cloud (HGX A100) deployments.
How do I run a Tensorflow GPU?
Steps:
- Uninstall your old tensorflow.
- Install tensorflow-gpu pip install tensorflow-gpu.
- Install Nvidia Graphics Card & Drivers (you probably already have)
- Download & Install CUDA.
- Download & Install cuDNN.
- Verify by simple program.
How do I install CUDA drivers?
- Connect to the VM where you want to install the driver.
- Install the latest kernel package. If needed, this command also reboots the system. …
- If the system rebooted in the previous step, reconnect to the instance.
- Refresh Zypper. sudo zypper refresh.
- Install CUDA, which includes the NVIDIA driver. sudo zypper install cuda.
Swap CUDA Toolkit Versions on Windows
Here I will do a quick run down on how to swap CUDA versions.
For ease, I will be demonstrating switching from CUDA 11.6 to CUDA 11.3; the same method applies to other versions.
Step 0: Check CUDA Version
Check what version of CUDA you have. You can enter the following in any command prompt (cmd, Anaconda, etc.):
nvcc --version
- If you get something like this:
'nvcc' is not recognized as an internal or external command, operable program or batch file.
it means you don't have any CUDA installed. You can download your desired CUDA Toolkit version here (everything default would be fine).
A quick rule of thumb:
- NVIDIA GPU >= 30 series —> CUDA 11.0+
- NVIDIA GPU < 30 series —> CUDA 10.2 (CUDA 10.0 & 10.1 kinda outdated, use 10.2 unless specified)
You can also check your GPU compatibility here for NVIDIA GPU < 30 series. If your GPU has CC >= 3.7, then it supports PyTorch.
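The compute capability threshold can be checked numerically. On reasonably recent drivers, nvidia-smi --query-gpu=compute_cap --format=csv,noheader prints the compute capability directly, but treat that query field as driver-dependent. A sketch of the comparison itself; the function name is our own:

```shell
# Succeed if a compute capability string (e.g. "8.6") meets the
# CC >= 3.7 threshold for PyTorch mentioned above. On recent drivers
# the value can be obtained with:
#   nvidia-smi --query-gpu=compute_cap --format=csv,noheader
cc_supports_pytorch() {
    # awk does the floating-point comparison; exit 0 means "supported"
    awk -v cc="$1" 'BEGIN { exit !(cc + 0 >= 3.7) }'
}

if cc_supports_pytorch "8.6"; then
    echo "CC 8.6: PyTorch supported"
fi
```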
If you just freshly downloaded CUDA, then you do not need the following steps, because you already have the CUDA version you want. You can do another quick check with
nvcc --version
in any command prompt.
- If instead you get normal version output, it means you have CUDA installed. In my case, it's CUDA 11.6, and I will be swapping to CUDA 11.3 in the following steps.
Step 1: Locate System Environment Variables
Open up your environment variables. You can search "env" in the search tab, and it should show "Edit the system environment variables".
Open it, then click "Environment Variables".
A window listing your user and system variables should open up.
Step 2: Change System Variables
Double-click on CUDA_PATH and an edit window should pop up.
Enter the target version of your CUDA there. In my case, it's changing 11.6 to 11.3.
Press OK and proceed to the next step.
Step 3: Change System Paths
Scroll down and find Path, then double-click to open it.
You should see your current version's entry near the very top. You will have to move your desired version's entry to the very top instead.
Press OK, and you may now close all the windows for environment variables and system properties.
Step 4: Check if you succeeded
Close the last command prompt and open a new one. Enter the following command:
nvcc --version
If it outputs your desired version, then you have succeeded in swapping the CUDA version.
How do I get the CUDA version?
Is there a quick command or script to check the version of the installed CUDA?
I found the 4.0 manual in the installation directory, but I am not sure whether it matches the actually installed version.
11 answers
As Jared mentions in a comment, from the command line:
nvcc --version
gives the version of the CUDA compiler (which matches the toolkit version).
From application code, you can query the runtime API version with cudaRuntimeGetVersion()
or the driver API version with cudaDriverGetVersion()
As Daniel points out, deviceQuery is an SDK sample application that queries the above, along with device capabilities.
As others note, you can also check the contents of version.txt (e.g., on Mac or Linux):
cat /usr/local/cuda/version.txt
Sometimes the folder is named "cuda-version" instead.
The result should be similar to: CUDA Version 8.0.61
If you have installed the CUDA SDK, you can run "deviceQuery" to see the CUDA version.
You may find CUDA-Z useful; here is a quote from their site:
"This program was born as a parody of other Z-utilities such as CPU-Z and GPU-Z. CUDA-Z shows some basic information about CUDA-enabled GPUs and GPGPUs. It works with nVIDIA Geforce, Quadro and Tesla cards, and ION chipsets."
On the Support tab there is a URL for the source code: http://sourceforge.net/p/cuda-z/code/ and the download is not actually an installer but an executable file (no installation, hence "quick").
This utility provides a lot of information, and if you need to know how it was obtained, there is source code to look at. There are other, similar utilities you can search for.
After installing CUDA, the versions can be checked with: nvcc -V
I have both 5.0 and 5.5 installed, so it gives
Cuda compilation tools, release 5.5, V5.5.0
This command works on both Windows and Ubuntu.
Besides the approaches mentioned above, your CUDA installation path (if it was not changed during installation) usually contains the version number.
Running which nvcc should give the path, and that will show you the version.
PS: this is a quick and dirty way; the answers above are more elegant and will produce the correct version with considerable effort.
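That path-based trick can be scripted as well. A sketch, run here against a typical versioned install path; the sample path is illustrative:

```shell
# A versioned toolkit install usually lives in /usr/local/cuda-X.Y, with
# /usr/local/cuda symlinked to it. On a real system you would start from:
#   readlink -f "$(which nvcc)"
nvcc_path='/usr/local/cuda-10.1/bin/nvcc'

# Pull MAJOR.MINOR out of the "cuda-X.Y" path component.
PATH_CUDA_VERSION=$(printf '%s\n' "$nvcc_path" | sed -n -E 's#.*/cuda-([0-9]+\.[0-9]+)/.*#\1#p')
echo "$PATH_CUDA_VERSION"   # 10.1
```

If nvcc resolves to an unversioned path like /usr/bin/nvcc, this prints nothing, which is itself a hint to fall back to nvcc --version.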
First you should find where CUDA is installed.
For a default installation, the location should be /usr/local/cuda/.
In that folder there should be a version.txt file.
Open that file with any text editor, or run:
cat /usr/local/cuda/version.txt
Alternatively, you can check the version manually by first finding the installation directory with which nvcc, and then cd-ing into that directory and checking the CUDA version.
For the CUDA version:
nvcc --version
For the cuDNN version: first find the path for the cudnn.h header, then read the version from its CUDNN_MAJOR, CUDNN_MINOR, and CUDNN_PATCHLEVEL defines.
Check CUDA version on Windows
The installation instructions for the CUDA Toolkit on MS-Windows systems.
1. Introduction
CUDA® is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU).
This guide will show you how to install and check the correct operation of the CUDA development tools.
1.1. System Requirements
The next two tables list the currently supported Windows operating systems and compilers.
Table 1. Windows Operating System Support in CUDA 11.4
| Operating System | Native x86_64 | Cross (x86_32 on x86_64) |
| --- | --- | --- |
| Windows 10 | YES | NO |
| Windows Server 2022 | YES | NO |
| Windows Server 2019 | YES | NO |
| Windows Server 2016 | YES | NO |
Table 2. Windows Compiler Support in CUDA 11.4
| Compiler* | IDE | Native x86_64 | Cross (x86_32 on x86_64) |
| --- | --- | --- | --- |
| MSVC Version 192x | Visual Studio 2019 16.x | YES | YES |
| MSVC Version 191x | Visual Studio 2017 15.x (RTW and all updates) | YES | YES |
* Support for Visual Studio 2015 is deprecated in release 11.1.
x86_32 support is limited. See the x86 32-bit Support section for details.
For more information on MSVC versions and Visual Studio product versions, visit https://dev.to/yumetodo/list-of-mscver-and-mscfullver-8nd.
1.2. x86 32-bit Support
Native development using the CUDA Toolkit on x86_32 is unsupported. Deployment and execution of CUDA applications on x86_32 is still supported, but is limited to use with GeForce GPUs. To create 32-bit CUDA applications, use the cross-development capabilities of the CUDA Toolkit on x86_64.
1.3. About This Document
This document is intended for readers familiar with Microsoft Windows operating systems and the Microsoft Visual Studio environment. You do not need previous experience with CUDA or experience with parallel computation.
Basic instructions can be found in the Quick Start Guide. Read on for more detailed instructions.
2.1. Verify You Have a CUDA-Capable GPU
You can verify that you have a CUDA-capable GPU through the Display Adapters section in the Windows Device Manager. Here you will find the vendor name and model of your graphics card(s). If you have an NVIDIA card that is listed in http://developer.nvidia.com/cuda-gpus, that GPU is CUDA-capable. The Release Notes for the CUDA Toolkit also contain a list of supported products.
2.2. Download the NVIDIA CUDA Toolkit
The CUDA Toolkit installs the CUDA driver and tools needed to create, build and run a CUDA application as well as libraries, header files, CUDA samples source code, and other resources.
Download Verification
The download can be verified by comparing the MD5 checksum posted at https://developer.download.nvidia.com/compute/cuda/11.4.2/docs/sidebar/md5sum.txt with that of the downloaded file. If either of the checksums differ, the downloaded file is corrupt and needs to be downloaded again.
To calculate the MD5 checksum of the downloaded file, follow the instructions at http://support.microsoft.com/kb/889768.
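The guide points to Windows instructions for computing the checksum; on a Linux host the equivalent check uses md5sum. A sketch against a small throwaway file, since the installer name and hash here are purely illustrative; for the real download you would compare against the value in NVIDIA's published md5sum.txt:

```shell
# Demonstrate the comparison with a throwaway file whose MD5 is known.
printf 'hello' > /tmp/demo_download.bin
expected='5d41402abc4b2a76b9719d911017c592'   # md5 of the string "hello"
actual=$(md5sum /tmp/demo_download.bin | awk '{print $1}')

if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum mismatch: re-download the file"
fi
rm -f /tmp/demo_download.bin
```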
2.3. Install the CUDA Software
Graphical Installation
Install the CUDA Software by executing the CUDA installer and following the on-screen prompts.
Silent Installation
Table 3. Possible Subpackage Names

| Subpackage Name | Subpackage Description |
| --- | --- |
| Toolkit Subpackages (defaults to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4) | |
| cudart_11.4 | CUDA Runtime libraries. |
| cuobjdump_11.4 | Extracts information from cubin files. |
| cupti_11.4 | The CUDA Profiling Tools Interface for creating profiling and tracing tools that target CUDA applications. |
| cuxxfilt_11.4 | The CUDA cu++filt demangler tool. |
| demo_suite_11.4 | Prebuilt demo applications using CUDA. |
| documentation_11.4 | CUDA HTML and PDF documentation files including the CUDA C++ Programming Guide, CUDA C++ Best Practices Guide, CUDA library documentation, etc. |
| memcheck_11.4 | Functional correctness checking suite. |
| nvcc_11.4 | CUDA compiler. |
| nvdisasm_11.4 | Extracts information from standalone cubin files. |
| nvml_dev_11.4 | NVML development libraries and headers. |
| nvprof_11.4 | Tool for collecting and viewing CUDA application profiling data from the command-line. |
| nvprune_11.4 | Prunes host object files and libraries to only contain device code for the specified targets. |
| nvrtc_11.4 | NVRTC runtime libraries. |
| nvtx_11.4 | NVTX on Windows. |
| visual_profiler_11.4 | Visual Profiler. |
| sanitizer_11.4 | Compute Sanitizer API. |
| thrust_11.4 | CUDA Thrust. |
| cublas_11.4 | cuBLAS runtime libraries. |
| cufft_11.4 | cuFFT runtime libraries. |
| curand_11.4 | cuRAND runtime libraries. |
| cusolver_11.4 | cuSOLVER runtime libraries. |
| cusparse_11.4 | cuSPARSE runtime libraries. |
| npp_11.4 | NPP runtime libraries. |
| nvjpeg_11.4 | nvJPEG libraries. |
| nsight_compute_11.4 | Nsight Compute. |
| nsight_nvtx_11.4 | Older v1.0 version of NVTX. |
| nsight_systems_11.4 | Nsight Systems. |
| nsight_vse_11.4 | Installs the Nsight Visual Studio Edition plugin in all VS. |
| visual_studio_integration_11.4 | Installs CUDA project wizard and builds customization files in VS. |
| occupancy_calculator_11.4 | Installs the CUDA_Occupancy_Calculator.xls tool. |
| Samples Subpackages (defaults to C:\ProgramData\NVIDIA Corporation\CUDA Samples\v11.4) | |
| samples_11.4 | Source code for many example CUDA applications using supported versions of Visual Studio. |
| Driver Subpackages | |
| Display.Driver | The NVIDIA Display Driver. Required to run CUDA applications. |

Note: C:\ProgramData is a hidden folder. It can be made visible within the Windows Explorer options at (Tools | Options).
Extracting and Inspecting the Files Manually
Sometimes it may be desirable to extract or inspect the installable files directly, such as in enterprise deployment, or to browse the files before installation. The full installation package can be extracted using a decompression tool which supports the LZMA compression method, such as 7-zip or WinZip.
2.3.1. Uninstalling the CUDA Software
All subpackages can be uninstalled through the Windows Control Panel by using the Programs and Features widget.
2.4. Using Conda to Install the CUDA Software
This section describes the installation and configuration of CUDA when using the Conda installer. The Conda packages are available at https://anaconda.org/nvidia.
2.4.1. Conda Overview
2.4.2. Installation
To perform a basic install of all CUDA Toolkit components using Conda, run the following command:
2.4.3. Uninstallation
To uninstall the CUDA Toolkit using Conda, run the following command:
2.5. Use a Suitable Driver Model
On Windows 7 and later, the operating system provides two driver models under which the NVIDIA Driver may operate:
The TCC driver mode provides a number of advantages for CUDA applications on GPUs that support this mode. For example:
2.6. Verify the Installation
Before continuing, it is important to verify that the CUDA toolkit can find and communicate correctly with the CUDA-capable hardware. To do this, you need to compile and run some of the included sample programs.
2.6.1. Running the Compiled Examples
Start > All Programs > Accessories > Command Prompt
This assumes that you used the default installation directory structure. If CUDA is installed and configured correctly, the output should look similar to Figure 1.
The exact appearance and the output lines might be different on your system. The important outcomes are that a device was found, that the device(s) match what is installed in your system, and that the test passed.
If a CUDA-capable device and the CUDA Driver are installed but deviceQuery reports that no CUDA-capable devices are present, ensure the device and driver are properly installed.
Running the bandwidthTest program, located in the same directory as deviceQuery above, ensures that the system and the CUDA-capable device are able to communicate correctly. The output should resemble Figure 2.
The device name (second line) and the bandwidth numbers vary from system to system. The important items are the second line, which confirms a CUDA device was found, and the second-to-last line, which confirms that all necessary tests passed.
If the tests do not pass, make sure you do have a CUDA-capable NVIDIA GPU on your system and make sure it is properly installed.
3. Pip Wheels
NVIDIA provides Python Wheels for installing CUDA through pip, primarily for using CUDA with Python. These packages are intended for runtime use and do not currently include developer tools (these can be installed separately).
Please note that with this installation method, CUDA installation environment is managed via pip and additional care must be taken to set up your host environment to use CUDA outside the pip environment.
4. Compiling CUDA Programs
4.1. Compiling Sample Projects
The bandwidthTest project is a good sample project to build and run. It is located in the NVIDIA Corporation\CUDA Samples\v11.4\1_Utilities\bandwidthTest directory.
4.2. Sample Projects
The sample projects come in two configurations, debug and release (where release contains no debugging information), with separate Visual Studio project files for the supported Visual Studio versions.
A few of the example projects require some additional setup.
4.3. Build Customizations for New Projects
When creating a new CUDA application, the Visual Studio project file must be configured to include CUDA build customizations. To accomplish this, go to File > New > Project > NVIDIA > CUDA, then select a template for your CUDA Toolkit version. For example, selecting the "CUDA 11.4 Runtime" template will configure your project for use with the CUDA 11.4 Toolkit. The new project is technically a C++ project (.vcxproj) that is preconfigured to use NVIDIA's Build Customizations. All standard capabilities of Visual Studio C++ projects will be available.
To specify a custom CUDA Toolkit location, under CUDA C/C++, select Common, and set the CUDA Toolkit Custom Dir field as desired. Note that the selected toolkit must match the version of the Build Customizations.
4.4. Build Customizations for Existing Projects
While Option 2 will allow your project to automatically use any new CUDA Toolkit version you may install in the future, selecting the toolkit version explicitly as in Option 1 is often better in practice, because if there are new CUDA configuration options added to the build customization rules accompanying the newer toolkit, you would not see those new options using Option 2.
5. Additional Considerations
A number of helpful development tools are included in the CUDA Toolkit or are available for download from the NVIDIA Developer Zone to assist you as you develop your CUDA programs, such as NVIDIA® Nsight™ Visual Studio Edition, NVIDIA Visual Profiler, and cuda-memcheck.
For technical support on programming questions, consult and participate in the developer forums at http://developer.nvidia.com/cuda/.
Notices
Notice
This document is provided for information purposes only and shall not be regarded as a warranty of a certain functionality, condition, or quality of a product. NVIDIA Corporation (“NVIDIA”) makes no representations or warranties, expressed or implied, as to the accuracy or completeness of the information contained in this document and assumes no responsibility for any errors contained herein. NVIDIA shall have no liability for the consequences or use of such information or for any infringement of patents or other rights of third parties that may result from its use. This document is not a commitment to develop, release, or deliver any Material (defined below), code, or functionality.
NVIDIA reserves the right to make corrections, modifications, enhancements, improvements, and any other changes to this document, at any time without notice.
Customer should obtain the latest relevant information before placing orders and should verify that such information is current and complete.
NVIDIA products are sold subject to the NVIDIA standard terms and conditions of sale supplied at the time of order acknowledgement, unless otherwise agreed in an individual sales agreement signed by authorized representatives of NVIDIA and customer (“Terms of Sale”). NVIDIA hereby expressly objects to applying any customer general terms and conditions with regards to the purchase of the NVIDIA product referenced in this document. No contractual obligations are formed either directly or indirectly by this document.
NVIDIA products are not designed, authorized, or warranted to be suitable for use in medical, military, aircraft, space, or life support equipment, nor in applications where failure or malfunction of the NVIDIA product can reasonably be expected to result in personal injury, death, or property or environmental damage. NVIDIA accepts no liability for inclusion and/or use of NVIDIA products in such equipment or applications and therefore such inclusion and/or use is at customer’s own risk.
NVIDIA makes no representation or warranty that products based on this document will be suitable for any specified use. Testing of all parameters of each product is not necessarily performed by NVIDIA. It is customer’s sole responsibility to evaluate and determine the applicability of any information contained in this document, ensure the product is suitable and fit for the application planned by customer, and perform the necessary testing for the application in order to avoid a default of the application or the product. Weaknesses in customer’s product designs may affect the quality and reliability of the NVIDIA product and may result in additional or different conditions and/or requirements beyond those contained in this document. NVIDIA accepts no liability related to any default, damage, costs, or problem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this document or (ii) customer product designs.
No license, either expressed or implied, is granted under any NVIDIA patent right, copyright, or other NVIDIA intellectual property right under this document. Information published by NVIDIA regarding third-party products or services does not constitute a license from NVIDIA to use such products or services or a warranty or endorsement thereof. Use of such information may require a license from a third party under the patents or other intellectual property rights of the third party, or a license from NVIDIA under the patents or other intellectual property rights of NVIDIA.
Reproduction of information in this document is permissible only if approved in advance by NVIDIA in writing, reproduced without alteration and in full compliance with all applicable export laws and regulations, and accompanied by all associated conditions, limitations, and notices.
THIS DOCUMENT AND ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, “MATERIALS”) ARE BEING PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE. TO THE EXTENT NOT PROHIBITED BY LAW, IN NO EVENT WILL NVIDIA BE LIABLE FOR ANY DAMAGES, INCLUDING WITHOUT LIMITATION ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, PUNITIVE, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARISING OUT OF ANY USE OF THIS DOCUMENT, EVEN IF NVIDIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. Notwithstanding any damages that customer might incur for any reason whatsoever, NVIDIA’s aggregate and cumulative liability towards customer for the products described herein shall be limited in accordance with the Terms of Sale for the product.
In this chapter, we will learn how to install CUDA.
For installing the CUDA toolkit on Windows, you’ll need −
- A CUDA enabled Nvidia GPU.
- A supported version of Microsoft Windows.
- A supported version of Visual Studio.
- The latest CUDA toolkit.
Note that CUDA natively supports only 64-bit applications. That is, you cannot develop 32-bit CUDA applications natively (exception: 32-bit applications are supported only on GeForce series GPUs). 32-bit applications can, however, be built on an x86_64 host using the cross-development capabilities of the CUDA toolkit. To compile CUDA programs as 32-bit, follow these steps −
Step 1 − Add <installpath>\bin to your path.
Step 2 − Add -m32 to your nvcc options.
Step 3 − Link with the 32-bit libs in <installpath>\lib (instead of <installpath>\lib64).
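Put together, the three steps might look as follows. This is a sketch only: <installpath> remains a placeholder and kernel.cu is a hypothetical source file; the final command is echoed rather than executed, so no toolkit is required to try it.

```shell
# Step 1: put the toolkit's bin directory on PATH (placeholder path).
INSTALLPATH="<installpath>"
PATH="$INSTALLPATH/bin:$PATH"

# Steps 2 and 3: pass -m32 to nvcc and link against the 32-bit libs in
# lib rather than lib64. Echoed for illustration instead of being run.
echo "nvcc -m32 kernel.cu -o kernel.exe -L$INSTALLPATH/lib"
```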
You can download the latest CUDA toolkit from the NVIDIA developer website.
Compatibility
Windows version | Native x86_64 support | x86_32 support on x86_64 (cross)
---|---|---
Windows 10 | YES | YES
Windows 8.1 | YES | YES
Windows 7 | YES | YES
Windows Server 2016 | YES | NO
Windows Server 2012 R2 | YES | NO

Visual Studio Version | Native x86_64 support | x86_32 support on x86_64 (cross)
---|---|---
2017 | YES | NO
2015 | YES | NO
2015 Community edition | YES | NO
2013 | YES | YES
2012 | YES | YES
2010 | YES | YES
As can be seen from the above tables, support for x86_32 is limited. Presently, only the GeForce series is supported for 32-bit CUDA applications. If you have a supported version of Windows and Visual Studio, then proceed. Otherwise, first install the required software.
Verifying that your system has a CUDA-capable GPU − Open a Run window, run the command control /name Microsoft.DeviceManager, and check the display adapters listed. If you do not have a CUDA-capable GPU, stop here.
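The same check can be made from the command line. The sketch below parses a stand-in for the adapter list (on a real Windows system such a list could be produced by, for example, wmic path win32_VideoController get name) and looks for an NVIDIA entry:

```shell
# Illustrative adapter list; on a real system replace this with the
# actual command output, e.g.:
#   wmic path win32_VideoController get name > adapters.txt
adapters='Name
NVIDIA GeForce GTX 1060
Intel(R) HD Graphics 630'

if printf '%s\n' "$adapters" | grep -qi 'NVIDIA'; then
  echo "NVIDIA GPU found"
else
  echo "no NVIDIA GPU detected"
fi
```

Note that this only confirms an NVIDIA adapter is present; whether it is CUDA-capable must still be checked against NVIDIA's list of supported GPUs.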
Installing the Latest CUDA Toolkit
In this section, we will see how to install the latest CUDA toolkit.
Step 1 − Visit − https://developer.nvidia.com and select the desired operating system.
Step 2 − Select the type of installation that you would like to perform. The network installer is a very small executable that downloads the required files when run. The standalone installer downloads all required files up front and does not require an Internet connection later to install.
Step 3 − Download the base installer.
The CUDA toolkit will also install the required GPU drivers, along with the libraries and header files needed to develop CUDA applications. It also installs some sample code to help beginners get started. If you run the executable by double-clicking on it, just follow the on-screen directions and the toolkit will be installed. This is the graphical method of installation; its downside is that you have no control over which packages are installed. That can be avoided by installing the toolkit from the command line. Here is a list of packages whose installation you can control −
nvcc_9.1 | cuobjdump_9.1 | nvprune_9.1 | cupti_9.1 |
demo_suite_9.1 | documentation_9.1 | cublas_9.1 | gpu-library-advisor_9.1 |
curand_dev_9.1 | nvgraph_9.1 | cublas_dev_9.1 | memcheck_9.1 |
cusolver_9.1 | nvgraph_dev_9.1 | cudart_9.1 | nvdisasm_9.1 |
cusolver_dev_9.1 | npp_9.1 | cufft_9.1 | nvprof_9.1 |
cusparse_9.1 | npp_dev_9.1 | cufft_dev_9.1 | visual_profiler_9.1 |
For example, to install only the compiler and the occupancy calculator, use the following command −
<PackageName>.exe -s nvcc_9.1 occupancy_calculator_9.1
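Since -s simply takes a space-separated list of package names, the selection can be assembled in a script. In this sketch, <PackageName>.exe remains a placeholder for the actual installer executable, and the command is echoed rather than executed:

```shell
# Choose the components to install, using names from the package table
# above (compiler, runtime, and profiler in this example).
PACKAGES="nvcc_9.1 cudart_9.1 nvprof_9.1"

# <PackageName>.exe stands for the actual installer executable.
echo "<PackageName>.exe -s $PACKAGES"
```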
Verifying the Installation
Follow these steps to verify the installation −
Step 1 − Check the CUDA toolkit version by typing nvcc -V in the command prompt.
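The version string printed by nvcc -V can be reduced to just the MAJOR.MINOR number with sed, in the same spirit as the environment-variable recipe earlier in this document. The sample text below stands in for real nvcc output, so the sketch runs without a toolkit installed:

```shell
# Illustrative nvcc -V output for a 9.1 toolkit; in practice pipe the
# real command instead:
#   CUDA_VERSION=$(nvcc -V | sed -n 's/^.*release \([0-9]\+\.[0-9]\+\).*$/\1/p')
sample='nvcc: NVIDIA (R) Cuda compiler driver
Cuda compilation tools, release 9.1, V9.1.85'

CUDA_VERSION=$(printf '%s\n' "$sample" | sed -n 's/^.*release \([0-9]\+\.[0-9]\+\).*$/\1/p')
echo "$CUDA_VERSION"
```

The -n flag together with the trailing p prints only lines where the substitution matched, so unrelated lines of the banner are discarded.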
Step 2 − Run deviceQuery.exe, located at C:\ProgramData\NVIDIA Corporation\CUDA Samples\v9.1\bin\win64\Release, to view your GPU card information. The output will look like −
Step 3 − Run the bandwidthTest program, located at C:\ProgramData\NVIDIA Corporation\CUDA Samples\v9.1\bin\win64\Release. This ensures that the host and the device are able to communicate properly with each other. The output will look like −
If any of the above tests fail, it means the toolkit has not been installed properly. Re-install by following the above instructions.
Uninstalling
CUDA can be uninstalled without any fuss from the ‘Control Panel’ of Windows.
At this point, the CUDA toolkit is installed. You can get started by running the sample programs provided in the toolkit.
Setting-up Visual Studio for CUDA
For doing development work using CUDA on Visual Studio, it needs to be configured. To do this, go to File > New > Project > NVIDIA > CUDA, then select a template for your CUDA Toolkit version (we are using 9.1 in this tutorial). To specify a custom CUDA Toolkit location, under CUDA C/C++, select Common, and set the CUDA Toolkit Custom Dir field.