Stars
Z80-μLM is a 2-bit quantized language model small enough to run on an 8-bit Z80 processor. Train conversational models in Python, export them as CP/M .COM binaries, and chat with your vintage computer.
[NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards
Open-source, self-hosted sandboxes for AI agents
A Python module to bypass Cloudflare's anti-bot page.
A TTS model capable of generating ultra-realistic dialogue in one pass.
prime is a framework for efficient, globally distributed training of AI models over the internet.
Cryptographically verifiable GPU matmul challenge
A peer-to-peer compute and intelligence network that enables decentralized AI development at scale
Solidity contracts for the decentralized Prime Network protocol
OpenDiLoCo: An Open-Source Framework for Globally Distributed Low-Communication Training
SGLang is a high-performance serving framework for large language models and multimodal models.
High-speed Large Language Model Serving for Local Deployment
A series of large language models trained from scratch by developers @01-ai
Get up and running with Kimi-K2.5, GLM-5, MiniMax, DeepSeek, gpt-oss, Qwen, Gemma and other models.
Letta is the platform for building stateful agents: AI with advanced memory that can learn and self-improve over time.
The dataset and code for paper: TheoremQA: A Theorem-driven Question Answering dataset
Cross-Platform, GPU Accelerated Whisper 🏎️
Generative Agents: Interactive Simulacra of Human Behavior
🦜🔗 The platform for reliable agents.
A binary lifter and analysis framework for Ethereum smart contracts