Projects with this topic
- https://github.com/vllm-project/vllm: A high-throughput and memory-efficient inference and serving engine for LLMs (see the usage sketch after this list)
- https://github.com/sgl-project/sglang: SGLang is a fast serving framework for large language models and vision language models.
- https://github.com/andrewkchan/yalm: Yet Another Language Model: LLM inference in C++/CUDA, no libraries except for I/O
- https://github.com/janhq/nitro.git (now https://github.com/janhq/cortex.git): Drop-in, local AI alternative to the OpenAI stack. Multi-engine (llama.cpp, TensorRT-LLM, ONNX). Powers 👋 Jan
- https://github.com/flashinfer-ai/debug-print: Debug print operator for cudagraph debugging
- Real-time inference for Stable Diffusion with 0.88s latency. Covers AITemplate, nvFuser, TensorRT, FlashAttention. Join our Discord community: https://discord.com/invite/TgHXuSJEk6
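As a rough illustration of what these serving engines look like in use, here is a minimal offline-inference sketch following vLLM's documented quickstart. The model name is just the quickstart's small example model and the sampling values are arbitrary; treat this as a sketch, not a definitive integration.

```python
from vllm import LLM, SamplingParams

# Load a small model; any Hugging Face model ID supported by vLLM works here.
llm = LLM(model="facebook/opt-125m")

# Sampling settings are illustrative, not recommendations.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = ["The key idea behind paged attention is"]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    # Each RequestOutput carries the prompt and one or more completions.
    print(output.prompt, output.outputs[0].text)
```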