# Debian, CUDA, PyTorch: a simple way to set up an AI playground on Debian

Created: 2025-03-14
This is a very simple and straightforward way to set up PyTorch with CUDA support on Debian, with the intention of using it for LLM experiments.
Everything below was executed on a fresh install of Debian 12 stable and tested on an RTX 3090.
Be sure to add `contrib non-free` to the apt sources list before starting:

```sh
sudo nano /etc/apt/sources.list /etc/apt/sources.list.d/*
```
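For reference, a typical Debian 12 (bookworm) entry with the extra components enabled looks like this (your mirror URL may differ; bookworm also splits firmware into a separate `non-free-firmware` component, which you may want as well):

```text
deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware
```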
Then we can install CUDA following the instructions from the NVIDIA website:
```sh
wget https://developer.download.nvidia.com/compute/cuda/12.8.1/local_installers/cuda-repo-debian12-12-8-local_12.8.1-570.124.06-1_amd64.deb
sudo dpkg -i cuda-repo-debian12-12-8-local_12.8.1-570.124.06-1_amd64.deb
sudo cp /var/cuda-repo-debian12-12-8-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda-toolkit-12-8
```
Update paths (add to your profile or `~/.bashrc`):

```sh
export PATH=/usr/local/cuda-12.8/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-12.8/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
```
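One way to persist these is to append them to `~/.bashrc` (a sketch; adjust if you keep your exports elsewhere):

```sh
# Append the CUDA paths to ~/.bashrc so every new shell picks them up
# (the quoted 'EOF' keeps the lines literal; they expand at shell startup)
cat >> ~/.bashrc <<'EOF'
export PATH=/usr/local/cuda-12.8/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-12.8/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
EOF
```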
I additionally ran `sudo apt-get -y install cuda` as a simple way to install the NVIDIA driver; this is not needed if you already have the driver installed. Then `sudo reboot` and you are done with CUDA.
Verify the GPU setup:

```sh
nvidia-smi
nvcc --version
```
Compile & run the NVIDIA samples (the nbody example is enough) to verify the CUDA setup:

```sh
sudo apt-get -y install build-essential cmake
sudo apt-get -y install freeglut3-dev libx11-dev libxmu-dev libxi-dev libgl1-mesa-glx libglu1-mesa libglu1-mesa-dev libglfw3-dev libgles2-mesa-dev libglx-dev libopengl-dev
git clone https://github.com/nvidia/cuda-samples
cd cuda-samples/Samples/5_Domain_Specific/nbody
cmake . && make
./nbody -benchmark && ./nbody -fullscreen
```
If the example runs on the GPU, you're done.
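If you'd rather not build the whole samples repo, a minimal hand-written kernel also works as a smoke test (a sketch, assuming `nvcc` is on your PATH; save as `check.cu` and build with `nvcc check.cu -o check`):

```cuda
#include <cstdio>

// Trivial kernel: each thread writes its global index into out[]
__global__ void fill(int *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = i;
}

int main() {
    const int n = 256;
    int host[n];
    int *dev = nullptr;

    cudaMalloc(&dev, n * sizeof(int));
    fill<<<(n + 127) / 128, 128>>>(dev, n);
    cudaMemcpy(host, dev, n * sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    // If the kernel actually ran on the GPU, host[i] == i for all i
    bool ok = true;
    for (int i = 0; i < n; ++i)
        if (host[i] != i) { ok = false; break; }
    printf(ok ? "GPU kernel OK\n" : "GPU kernel FAILED\n");
    return ok ? 0 : 1;
}
```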
Create a `pyproject.toml` file:

```toml
[project]
name = "playground"
version = "0.0.1"
requires-python = ">=3.13"
dependencies = [
    "transformers",
    "torch>=2.6.0",
    "accelerate>=1.4.0",
]

[[tool.uv.index]]
name = "pytorch-cu128"
url = "https://download.pytorch.org/whl/nightly/cu128"
explicit = true
```
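Note that with `explicit = true`, uv only consults this index for packages explicitly pinned to it, so you will likely also want a `[tool.uv.sources]` entry (a sketch based on my understanding of uv's index semantics):

```toml
# Pin torch to the PyTorch cu128 index so uv doesn't resolve it from PyPI
[tool.uv.sources]
torch = { index = "pytorch-cu128" }
```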
Before starting to set up the Python environment, make sure the system is detecting the NVIDIA GPU(s) and that CUDA is set up. Verify the CUDA version corresponds to the one in the pyproject (at the time of writing, "pytorch-cu128"):

```sh
nvidia-smi
nvcc --version
```
Then set up the venv with uv:

```sh
uv sync --dev
source .venv/bin/activate
```
and test the transformers and PyTorch install:

```sh
python -c "import torch;print('CUDA available to pytorch: ', torch.cuda.is_available())"
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"
```
> [!TIP]
> The huggingface cache dir will get BIG if you download models etc. You can change the cache dirs. I have this set in my bashrc:
>
> ```sh
> export HF_HOME=$HOME/huggingface/misc
> export HF_DATASETS_CACHE=$HOME/huggingface/datasets
> export TRANSFORMERS_CACHE=$HOME/huggingface/models
> ```
You can also change the default location by exporting from the script each time you use the library (i.e. before importing it):

```python
import os
os.environ['HF_HOME'] = '/blabla/cache/'
```