This guide should help fellow researchers and hobbyists easily automate and accelerate their deep learning training with their own Kubernetes GPU cluster.
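Once such a cluster is up and the NVIDIA device plugin is running, a quick way to confirm that GPUs are actually schedulable is to query the `nvidia.com/gpu` resource on each node. A minimal sketch (node names and GPU counts will of course depend on your setup):

```sh
# List each node and the number of allocatable NVIDIA GPUs it advertises.
# The nvidia.com/gpu resource only appears once the NVIDIA device plugin
# (or GPU operator) is installed on the cluster.
kubectl get nodes \
  "-o=custom-columns=NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu"
```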
Graphics Processing Unit (GPU) Architecture Guide
Run GPU inference and training jobs on serverless infrastructure that scales with you.
Docker build scripts for TornadoVM on GPUs: https://github.com/beehive-lab/TornadoVM
This project shows how to add a GPU-enabled node pool to an existing AKS cluster and how to autoscale and monitor GPU-enabled worker nodes
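For reference, adding an autoscaled GPU node pool to an existing AKS cluster boils down to an Azure CLI call along these lines; the resource group, cluster name, pool name, node counts, and VM size below are placeholders, not values from the project:

```sh
# Add a GPU node pool to an existing AKS cluster and enable the cluster
# autoscaler for it. Pick a GPU-capable VM size available in your region.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name gpunp \
  --node-count 1 \
  --node-vm-size Standard_NC6s_v3 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 3
```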
GPU-accelerated guppy basecalling and demultiplexing on Linux
Protecting Real-Time GPU Kernels on Integrated CPU-GPU SoC Platforms
RTX 5090 & RTX 5060 Docker container with PyTorch + TensorFlow. First fully-tested Blackwell GPU support for ML/AI. CUDA 12.8, Python 3.11, Ubuntu 24.04. Works with RTX 50-series (5090/5080/5070/5060) and RTX 40-series.
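A quick sanity check for a container like this is to run `nvidia-smi` and a PyTorch CUDA probe inside it; the image name below is a placeholder for whatever tag the repository actually publishes, and the host needs the NVIDIA Container Toolkit installed:

```sh
# Confirm the driver sees the card and that PyTorch was built with CUDA
# support. <pytorch-blackwell-image> is a placeholder image name.
docker run --rm --gpus all <pytorch-blackwell-image> \
  sh -c 'nvidia-smi && python -c "import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0))"'
```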
🚀 Automated deployment stack for AMD MI300 GPUs with optimized ML/DL frameworks and HPC-ready configurations
You can run passively cooled, single-slot Tesla GPU/AI cards in an HP ProLiant with a modified iLO ROM that takes care of the fans and, consequently, temperature control.
Step-by-step guide to install and configure AlphaFold 3 using a Conda Python 3.11 environment. No system-wide installations required. ✅ Miniconda setup & dependencies ✅ Repository cloning & model setup ✅ Database configuration & execution script 🔹 Requirements: Linux, NVIDIA GPU (Ampere+), CUDA, ~700GB disk space.
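The Conda portion of such a setup typically reduces to a sketch like the following; the environment name is arbitrary, and the database configuration and execution-script steps from the guide are omitted here:

```sh
# Create an isolated Conda environment with Python 3.11 (no system-wide
# installs) and clone the AlphaFold 3 repository into it.
conda create -n alphafold3 python=3.11 -y
conda activate alphafold3
git clone https://github.com/google-deepmind/alphafold3.git
cd alphafold3
```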
A Hadoop version with GPU support for better AI job scheduling
My homelab on TalosOS and a variety of other hardware.
Scripts to install vGPU drivers on Proxmox: 1. Install NVIDIA GRID vGPU on Proxmox VE. 2. Install Intel SR-IOV Graphics on Proxmox VE.
A minimal docker-compose setup for deploying gpu-computing environments.
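With the NVIDIA Container Toolkit installed on the host, a minimal docker-compose setup that reserves one GPU for a service can look roughly like this; the image tag and service name are examples, not taken from the repository:

```sh
# Write a minimal compose file that reserves one NVIDIA GPU for a service
# and runs nvidia-smi to verify the GPU is visible inside the container.
cat > compose.yaml <<'EOF'
services:
  gpu-check:
    image: nvidia/cuda:12.2.0-base-ubuntu22.04
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
EOF
docker compose up
```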