Projects with this topic
MiniRAG
🔧 🔗 https://github.com/HKUDS/MiniRAG "MiniRAG: Making RAG Simpler with Small and Free Language Models"
Obsidian Local LLM Helper
🔧 🔗 https://github.com/manimohans/obsidian-local-llm-helper Seamlessly integrate your local LLM with Obsidian. Process large text chunks, transform content with AI, chat with your notes, and maintain data privacy, all without leaving your notes.
Mini-RAG
🔧 🔗 https://github.com/jjwheatley/mini-rag Local Retrieval-Augmented Generation for your Obsidian notes.
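The note-focused tools above all follow the same retrieve-then-generate loop. Below is a minimal, generic sketch of that pattern; it is not Mini-RAG's actual code, and NOTES, score, retrieve, and generate are hypothetical stand-ins for a real note vault, embedding model, retriever, and local LLM.

```python
# Hypothetical sketch of a local RAG loop over personal notes (not Mini-RAG's code).

NOTES = [
    "Meeting notes: decided to ship the plugin beta next Friday.",
    "Reading list: papers on retrieval-augmented generation.",
    "Recipe: overnight oats with berries.",
]

def score(query: str, note: str) -> float:
    """Toy relevance score: word overlap standing in for embedding similarity."""
    q, n = set(query.lower().split()), set(note.lower().split())
    return len(q & n) / max(len(q), 1)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k notes most relevant to the query."""
    return sorted(NOTES, key=lambda note: score(query, note), reverse=True)[:k]

def generate(query: str, context: list[str]) -> str:
    """Stand-in for a local LLM call; a real system would prompt the model
    with the retrieved notes plus the user's question."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answering '{query}' using these notes:\n{joined}"

question = "when does the plugin beta ship"
print(generate(question, retrieve(question)))
```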
Ragaman
🔧 🔗 https://github.com/npiv/ragaman A RAG (Retrieval-Augmented Generation) system for notes with support for a REST API and the Model Context Protocol (MCP).
OpenScholar
🔧 🔗 https://github.com/AkariAsai/OpenScholar The official implementation of "OpenScholar: Synthesizing Scientific Literature with Retrieval-augmented LMs."
llmware
🔧 🔗 https://github.com/llmware-ai/llmware Unified framework for building enterprise RAG pipelines with small, specialized models.
kotaemon
🔧 🔗 https://github.com/Cinnamon/kotaemon An open-source RAG-based tool for chatting with your documents.
🕸️ 🔗 https://cinnamon.github.io/kotaemon/
alphaxiv-open
🔧 🔗 https://github.com/AsyncFuncAI/alphaxiv-open Open-source alternative to AlphaXiv: chat with any arXiv paper.
SurfSense
🔧 🔗 https://github.com/MODSetter/SurfSense Open-source alternative to NotebookLM / Perplexity / Glean, connected to external sources such as search engines (Tavily, Linkup), Slack, Linear, Notion, YouTube, GitHub, and more.
production-llm-rag-course
🔧 🔗 https://github.com/decodingml/production-llm-rag-course Second Brain Semantic AI Engine: powered by LLMs & RAG.
Gurubase
🔧 🔗 https://github.com/Gurubase/gurubase Gurubase lets you add an "Ask AI" button to your technical docs, turning your content into an AI assistant. It uses web pages, PDFs, YouTube videos, and GitHub repos as sources to generate instant, accurate answers with references. Deploy it via Slack, Discord, GitHub, or a web widget.
llm-twin-course
🔧 🔗 https://github.com/decodingml/llm-twin-course 🤖 Learn for free how to build an end-to-end production-ready LLM & RAG system using LLMOps best practices: source code + 11 hands-on lessons.
oarc-rag
🔧 🔗 https://github.com/Ollama-Agent-Roll-Cage/oarc-rag An ultra-fast, lightweight vector database for augmented data retrieval in AI/ML.
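The core operation a vector store like this provides is a nearest-neighbor lookup over embeddings. Below is a minimal sketch of that lookup with NumPy; it does not use oarc-rag's actual API, and random vectors stand in for embeddings from a real model.

```python
# Hypothetical sketch of the top-k similarity lookup a vector store performs
# (illustration only; this is not oarc-rag's API).
import numpy as np

def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, docs: list[str], k: int = 3):
    """Return the k documents whose embeddings have the highest cosine
    similarity to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                       # cosine similarities, shape (n_docs,)
    best = np.argsort(scores)[::-1][:k]  # indices of the k highest scores
    return [(docs[i], float(scores[i])) for i in best]

# Toy usage: random vectors stand in for embeddings from a real model.
rng = np.random.default_rng(0)
docs = ["note on RAG", "note on vector search", "unrelated note"]
doc_vecs = rng.normal(size=(len(docs), 384))
query_vec = rng.normal(size=384)
print(top_k(query_vec, doc_vecs, docs, k=2))
```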
Sample code demonstrating how to implement a verified semantic cache with Amazon Bedrock Knowledge Bases to prevent hallucinations in Large Language Model (LLM) responses while improving latency and reducing costs.
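For context, a semantic cache embeds each incoming question, reuses a stored answer when a sufficiently similar question has already been answered, and only calls the LLM on a miss. The sketch below shows that general shape only: it is not the AWS sample's implementation, it omits the verification step against the knowledge base that the sample adds, and embed and call_llm are hypothetical placeholders.

```python
# Hypothetical sketch of a semantic cache in front of an LLM (illustration only;
# not the AWS sample's implementation, and it omits the answer-verification step).
import hashlib
import numpy as np

class SemanticCache:
    def __init__(self, threshold: float = 0.95):
        self.threshold = threshold  # minimum cosine similarity to count as a hit
        self.entries: list[tuple[np.ndarray, str]] = []  # (query embedding, answer)

    def lookup(self, query_vec: np.ndarray) -> str | None:
        """Return a cached answer if a previously seen query is similar enough."""
        for vec, answer in self.entries:
            sim = float(vec @ query_vec / (np.linalg.norm(vec) * np.linalg.norm(query_vec)))
            if sim >= self.threshold:
                return answer
        return None

    def store(self, query_vec: np.ndarray, answer: str) -> None:
        self.entries.append((query_vec, answer))

def answer_with_cache(question: str, cache: SemanticCache, embed, call_llm) -> str:
    """Serve from the cache when possible; otherwise call the LLM and cache the result."""
    vec = embed(question)
    cached = cache.lookup(vec)
    if cached is not None:
        return cached            # cache hit: lower latency, no extra LLM cost
    answer = call_llm(question)  # cache miss: pay for one LLM call, then remember it
    cache.store(vec, answer)
    return answer

# Toy usage: a deterministic fake embedder and fake LLM so the sketch runs end to end.
def embed(text: str) -> np.ndarray:
    seed = int(hashlib.sha256(text.lower().encode()).hexdigest(), 16) % (2**32)
    return np.random.default_rng(seed).normal(size=64)

def call_llm(question: str) -> str:
    return f"(LLM answer to: {question})"

cache = SemanticCache()
print(answer_with_cache("What is Amazon Bedrock?", cache, embed, call_llm))  # miss -> LLM call
print(answer_with_cache("What is Amazon Bedrock?", cache, embed, call_llm))  # hit -> cached answer
```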
zepcli
🔧 🔗 https://github.com/getzep/zepcli A command line tool for managing the Zep service.
OmAgent
🔧 🔗 https://github.com/om-ai-lab/OmAgent Build multimodal language agents for fast prototyping and production.
sec-insights
🔧 🔗 https://github.com/run-llama/sec-insights A real-world full-stack application using LlamaIndex.
Self-RAG
🔧 🔗 https://github.com/AkariAsai/self-rag The original implementation of "SELF-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection."
learning_to_retrieve_reasoning_paths
🔧 🔗 https://github.com/AkariAsai/learning_to_retrieve_reasoning_paths The official implementation of the ICLR 2020 paper "Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering."
MeMemo
🔧 🔗 https://github.com/poloclub/mememo A JavaScript library that brings vector search and RAG to your browser!