
GPU Vendor Overview

Each manufacturer has its own architectures and programming models for its GPUs. The most popular of these is NVIDIA's CUDA, the company's proprietary platform for programming and using its GPUs. AMD has ROCm, a software stack for programming its GPUs, and Intel has been developing oneAPI, which it intends to work across different kinds of hardware.

The main difference between CUDA and the other software stacks is that CUDA is proprietary, unlike the open-source ROCm and oneAPI. There is also the OpenCL project, which aims to provide a single API for programming many kinds of devices, including GPUs from different manufacturers and even CPUs.

For this book, we are choosing CUDA as our vehicle for exploring GPU programming, as it is the most fully featured and best supported of these software stacks. Many modern HPC clusters have NVIDIA GPUs available, and NVIDIA holds the vast majority of the desktop GPU market.

The skills you gain from learning CUDA should carry over to the other software stacks, but be aware that there is a learning curve when transitioning from one platform to another.
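
To give a sense of what the CUDA programming model looks like before we dive in, here is a minimal sketch of a vector-addition program. The kernel name addVectors and the use of managed (unified) memory are illustrative choices for this preview, not a convention the rest of the book depends on; the core ideas of writing a kernel and launching it over many threads apply regardless.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A kernel is a function that runs on the GPU; __global__ marks it as
// callable from the host. Each thread handles one pair of elements.
__global__ void addVectors(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Managed memory is accessible from both the CPU and the GPU,
    // which keeps this preview short (illustrative choice).
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);

    for (int i = 0; i < n; ++i) {
        a[i] = 1.0f;
        b[i] = 2.0f;
    }

    // Launch enough blocks of 256 threads to cover all n elements.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    addVectors<<<blocks, threadsPerBlock>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0

    cudaFree(a);
    cudaFree(b);
    cudaFree(c);
    return 0;
}
```

Even in this small example you can see the pattern that recurs throughout GPU programming: allocate memory the GPU can reach, describe the per-element work as a kernel, and launch that kernel across many threads at once. ROCm and oneAPI express the same ideas with different syntax, which is why the skills transfer between platforms.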