CUDA Libraries & Packages | Generated by AI
Here’s a quick rundown of each package you’re installing:
- nvidia-cusparselt-cu12: NVIDIA’s cuSPARSELt library for structured-sparsity matrix multiplication (including low-precision data types) on CUDA 12.
- nvidia-cusparse-cu12: cuSPARSE library providing general sparse linear-algebra routines (sparse matrix-vector/matrix-matrix products, triangular solves, etc.) on CUDA 12.
- nvidia-cudnn-cu12: cuDNN library offering highly optimized primitives (convolutions, pooling, activations, etc.) for deep neural networks on CUDA 12.
- nvidia-cublas-cu12: cuBLAS library implementing high-performance dense linear-algebra (BLAS) routines on CUDA 12.
- nvidia-cufft-cu12: cuFFT library for computing fast Fourier transforms on CUDA 12 devices.
- nvidia-cusolver-cu12: cuSOLVER library providing dense and sparse direct solvers, eigenvalue/SVD routines, and related factorizations on CUDA 12.
- nvidia-curand-cu12: cuRAND library for high-quality random number generation on CUDA 12.
- nvidia-cufile-cu12: cuFile library (GPUDirect Storage) enabling direct, asynchronous file I/O between storage and GPU memory on CUDA 12.
- nvidia-nvtx-cu12: NVTX (NVIDIA Tools Extension) for annotating code ranges so profilers such as Nsight Systems can display them, on CUDA 12 (a small annotation sketch follows this list).
- nvidia-nvjitlink-cu12: nvJitLink library for JIT-linking compiled device code at runtime on CUDA 12.
- nvidia-cuda-nvrtc-cu12: NVRTC runtime compiler for compiling CUDA C++ kernels on the fly under CUDA 12.
- nvidia-cuda-cupti-cu12: CUPTI (CUDA Profiling Tools Interface) for collecting fine-grained profiling and tracing data on CUDA 12.
- nvidia-cuda-runtime-cu12: The core CUDA runtime library for managing devices and memory and for launching kernels on CUDA 12.
- nvidia-nccl-cu12: NCCL library providing efficient multi-GPU and multi-node collective communication primitives on CUDA 12.
- torch: The main PyTorch library for tensor operations, automatic differentiation, and building deep-learning models (see the usage sketch after this list).
- networkx: A Python package for creating, manipulating, and analyzing complex networks and graph structures (short example below).
- mpmath: A pure-Python library for arbitrary-precision real and complex arithmetic (short example below).
- sympy: A Python library for symbolic mathematics (algebra, calculus, equation solving, etc.); short example below.
- triton: A language and compiler for writing custom, high-performance GPU kernels in Python, more easily than in raw CUDA (a minimal kernel sketch follows this list).
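Most of the CUDA wheels above (cuBLAS, cuDNN, cuRAND, the CUDA runtime, etc.) are not called directly from Python; they are pulled in as dependencies of torch, which dispatches to them under the hood. A minimal usage sketch, assuming a CUDA-capable GPU is visible (it falls back to CPU otherwise):

```python
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(1024, 512, device=device, requires_grad=True)  # random tensor on the chosen device
w = torch.randn(512, 256, device=device)

loss = (x @ w).relu().sum()   # on CUDA devices the matmul is dispatched to cuBLAS
loss.backward()               # autograd computes gradients with respect to x

print(device, loss.item(), x.grad.shape)
```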
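The NVTX package is usually consumed indirectly by profilers; one way to emit ranges from Python is PyTorch's torch.cuda.nvtx wrapper. A minimal sketch, assuming a CUDA build of PyTorch and a visible GPU (the range name "matmul_region" is just an illustrative label); the range then shows up on an Nsight Systems timeline:

```python
import torch

x = torch.randn(2048, 2048, device="cuda")

torch.cuda.nvtx.range_push("matmul_region")  # open a named NVTX range
y = x @ x                                    # GPU work captured inside the range
torch.cuda.synchronize()                     # wait so the work finishes within the range
torch.cuda.nvtx.range_pop()                  # close the range
```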
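For triton, a condensed version of the standard vector-addition example from the Triton tutorials gives a feel for the programming model; it assumes a CUDA GPU and a Triton build compatible with the installed torch:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                            # which block this program instance handles
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                            # guard against the ragged last block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.randn(4096, device="cuda")
y = torch.randn(4096, device="cuda")
out = torch.empty_like(x)
grid = lambda meta: (triton.cdiv(x.numel(), meta["BLOCK_SIZE"]),)  # one program per block
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
assert torch.allclose(out, x + y)
```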
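networkx is a pure-Python dependency unrelated to CUDA. A tiny illustrative example (the graph and node names are made up):

```python
import networkx as nx

# Build a small undirected graph and query it.
G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")])

print(nx.shortest_path(G, "a", "d"))  # ['a', 'c', 'd']
print(G.degree("c"))                  # 3
```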
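mpmath backs sympy's numerical evaluation; on its own it does arbitrary-precision arithmetic. A short sketch:

```python
from mpmath import mp

mp.dps = 50                   # work with 50 significant decimal digits
print(mp.sqrt(2))             # square root of 2 to ~50 digits
print(mp.exp(mp.pi) - mp.pi)  # evaluates to 19.9990999... (a well-known near-integer)
```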
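And a short sympy sketch showing symbolic differentiation and equation solving:

```python
import sympy as sp

x = sp.symbols("x")

print(sp.diff(sp.sin(x) * sp.exp(x), x))  # exp(x)*sin(x) + exp(x)*cos(x)
print(sp.solve(x**2 - 2, x))              # [-sqrt(2), sqrt(2)]
```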