Projects with this topic
https://github.com/vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
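For context, a minimal offline-inference sketch using vLLM's documented Python API (`LLM` and `SamplingParams`); the model name below is an arbitrary small placeholder, not something specified by this listing.

```python
# Minimal vLLM offline-inference sketch. Assumes `pip install vllm`
# and a GPU environment; the model is a small placeholder for illustration.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

llm = LLM(model="facebook/opt-125m")  # loads the model into the engine
outputs = llm.generate(prompts, sampling_params)  # batched generation

for output in outputs:
    print(output.prompt, output.outputs[0].text)
```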