Projects with this topic
https://github.com/vllm-project/vllm A high-throughput and memory-efficient inference and serving engine for LLMs
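For context on how vLLM is typically used for offline batch inference, here is a minimal sketch using its LLM and SamplingParams classes; the model name and sampling values are illustrative placeholders, not recommendations.

```python
# Minimal vLLM offline-inference sketch (model and sampling values are placeholders).
from vllm import LLM, SamplingParams

prompts = ["Explain paged attention in one sentence."]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="facebook/opt-125m")              # load the model once
outputs = llm.generate(prompts, sampling_params)  # batched generation

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```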
https://github.com/tensorzero/tensorzero TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM gateway, observability, optimization, evaluation, and experimentation.
https://github.com/vllm-project/vllm-ascend Community maintained hardware plugin for vLLM on Ascend
https://github.com/BerriAI/litellm Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
docs.litellm.ai/docs/
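As a sketch of what "OpenAI format" means here: the same litellm completion() call can target different providers purely via the model string. The model names are examples only, and provider API keys are assumed to be set in the environment.

```python
# litellm sketch: one OpenAI-style call signature across providers.
# Assumes OPENAI_API_KEY / ANTHROPIC_API_KEY are exported in the environment.
from litellm import completion

messages = [{"role": "user", "content": "Say hello in one word."}]

openai_resp = completion(model="gpt-4o-mini", messages=messages)
claude_resp = completion(model="anthropic/claude-3-haiku-20240307", messages=messages)

# Both responses come back in the OpenAI response shape.
print(openai_resp.choices[0].message.content)
print(claude_resp.choices[0].message.content)
```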
Hamilton helps data scientists and engineers define testable, modular, self-documenting dataflows that encode lineage and metadata. Runs and scales everywhere Python does.
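A rough sketch of Hamilton's programming model, assuming the standard hamilton.driver Builder API: each function is a node named after its output, and its parameters declare its upstream dependencies. The column names below are made up for illustration.

```python
# my_dataflow.py -- illustrative Hamilton dataflow: function name = output,
# parameters = upstream dependencies.
import pandas as pd

def spend_per_signup(spend: pd.Series, signups: pd.Series) -> pd.Series:
    return spend / signups

def avg_spend_per_signup(spend_per_signup: pd.Series) -> float:
    return float(spend_per_signup.mean())
```

```python
# run.py -- build a driver over the module and request only the outputs you need.
import pandas as pd
from hamilton import driver
import my_dataflow

dr = driver.Builder().with_modules(my_dataflow).build()
result = dr.execute(
    ["spend_per_signup", "avg_spend_per_signup"],
    inputs={
        "spend": pd.Series([10.0, 20.0, 30.0]),
        "signups": pd.Series([1, 2, 3]),
    },
)
print(result)
```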
A Blazing Fast AI Gateway. Route to 100+ LLMs with 1 fast & friendly API. https://github.com/Portkey-AI/gateway
https://github.com/langfuse/langfuse-js 🪢 Langfuse JS/TS SDKs - Instrument your LLM app and get detailed tracing/observability. Works with any LLM or framework
https://github.com/langfuse/langfuse 🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with LlamaIndex, Langchain, OpenAI SDK, LiteLLM, and more.
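A minimal sketch of instrumenting a pipeline with the Langfuse Python SDK's observe decorator (the import path differs between SDK v2 and v3, hence the fallback; credentials are assumed to be set via LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_HOST):

```python
# Langfuse tracing sketch: the decorated top-level call becomes a trace,
# nested decorated calls become spans. Credentials come from the environment.
try:
    from langfuse import observe             # SDK v3-style import
except ImportError:
    from langfuse.decorators import observe  # SDK v2-style import

@observe()
def retrieve_context(question: str) -> str:
    return "some retrieved context for: " + question

@observe()
def answer(question: str) -> str:
    context = retrieve_context(question)
    # ...call your LLM of choice here; inputs/outputs are recorded by the decorator
    return f"answer based on: {context}"

print(answer("What does Langfuse trace?"))
```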
https://github.com/distantmagic/paddler Stateful load balancer custom-tailored for llama.cpp
OpenLIT is an open-source LLM Observability tool built on OpenTelemetry.
📈🔥 Monitor GPU performance, LLM traces with input and output metadata, and metrics like cost, tokens, and user interactions along with complete APM for LLM Apps.
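OpenLIT's usual pattern is a single init call that auto-instruments supported LLM clients and exports OpenTelemetry data. A sketch, assuming the documented openlit.init() entry point, a local OTLP endpoint, and the OpenAI Python SDK with OPENAI_API_KEY set:

```python
# OpenLIT sketch: initialize once, then LLM calls made with supported SDKs
# are traced and exported over OTLP (the endpoint below is a placeholder).
import openlit
from openai import OpenAI

openlit.init(otlp_endpoint="http://127.0.0.1:4318")

client = OpenAI()  # subsequent calls through this client are auto-instrumented
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```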
Build applications that make decisions (chatbots, agents, simulations, etc.). Monitor, persist, and execute on your own infrastructure. burr.dagworks.io
https://github.com/harishsg993010/neural-state-manipulator A tool for manipulating the internal neural activations of language models
https://github.com/ComposioHQ/composio Composio equips your AI agents & LLMs with 100+ high-quality integrations via function calling
🤖 Learn for free how to build an end-to-end production-ready LLM & RAG system using LLMOps best practices: ~ source code + 11 hands-on lessons https://github.com/decodingml/llm-twin-course
https://github.com/langfuse/mcp-server-langfuse Model Context Protocol (MCP) Server for Langfuse Prompt Management. This server allows you to access and manage your Langfuse prompts through MCP.
https://github.com/langfuse/oss-llmops-stack Modular, open source LLMOps stack that separates concerns: LiteLLM unifies LLM APIs, manages routing and cost controls, and ensures high availability.
https://github.com/p3nGu1nZz/Commando Commando is AI Manager software that enables you to use commands to manage and deploy AI systems locally or remotely.
https://github.com/topoteretes/PromethAI-Backend Open-source framework that gives you AI Agents that help you navigate decision-making, get personalized goals and execute them