Run Cerebras Model Zoo on a GPU#

Overview#

You can run models in the Cerebras Model Zoo on GPUs as well. However, specific packages must be installed to run the model code on a GPU. Make sure to install these packages in a virtual environment (virtualenv) or a conda environment.

Prerequisites#

A CUDA-capable GPU is required to run models from the Cerebras Model Zoo on a GPU.
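
If you are unsure whether a suitable GPU is visible to the system, a quick check is to query the NVIDIA driver. The snippet below is a minimal sketch, assuming the nvidia-smi utility that ships with the NVIDIA driver is on your PATH; it is not part of the Cerebras Model Zoo.

import shutil
import subprocess

# Minimal GPU presence check (assumes the NVIDIA driver and its
# nvidia-smi utility are installed; not part of the Model Zoo).
if shutil.which("nvidia-smi") is None:
    print("nvidia-smi not found -- is the NVIDIA driver installed?")
else:
    # Prints each GPU's name and driver version, one per line.
    subprocess.run(
        ["nvidia-smi", "--query-gpu=name,driver_version", "--format=csv,noheader"],
        check=True,
    )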

Procedure#

Once the CUDA requirements are installed, create a virtualenv on your system with Python 3.8 or newer, activate it, and install PyTorch using the following commands:

virtualenv -p python3.8 /path/to/venv_gpu
source /path/to/venv_gpu/bin/activate
pip install -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cu117

The requirements.txt file is located in the Cerebras Model Zoo repository. The CUDA-related packages are installed automatically along with PyTorch 2.0.
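
As an optional sanity check after the install, you can confirm which CUDA and cuDNN builds were pulled in with the PyTorch wheel. This is a minimal sketch using standard PyTorch attributes; the exact version strings depend on the wheel you installed.

import torch

# CUDA runtime version the wheel was built against (e.g. '11.7').
print("CUDA build:", torch.version.cuda)
# cuDNN version bundled with the wheel, if available.
print("cuDNN build:", torch.backends.cudnn.version())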

To verify that PyTorch can access the GPU, start a Python session and run the following commands:

>>> import torch
>>> torch.__version__
'2.0.1+cu117-with-pypi-cudnn' # Confirm that the PyTorch version is 2.0.1
>>> torch.cuda.is_available()
True # Should return `True`
>>> torch.cuda.device_count()
1 # Number of devices present
>>> torch.cuda.get_device_name(0)
# Should return the name of the GPU
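
Beyond these checks, you can run a small computation on the GPU to confirm the installation end to end. The following is a minimal sketch using only standard PyTorch calls; it is not a Model Zoo script.

import torch

# Simple GPU smoke test: multiply two random matrices on the GPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
a = torch.randn(1024, 1024, device=device)
b = torch.randn(1024, 1024, device=device)
c = a @ b
print("Ran matmul on:", c.device)  # Expect 'cuda:0' on a working GPU setup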

Conclusion#

Running models from the Cerebras Model Zoo on a GPU requires a CUDA-capable GPU and the necessary CUDA-related packages. By creating a dedicated virtual environment and installing PyTorch with the appropriate CUDA version, you can take advantage of GPU acceleration for model training and inference. Verifying that PyTorch can access the GPU before launching a run helps catch environment issues early.