Unverified Commit d02013fd authored by Sean Hatfield's avatar Sean Hatfield Committed by GitHub

[FIX] Document pinning does not count in query mode (#1250)


* if document is pinned, do not give queryRefusalResponse message

* forgot embed.js patch

---------

Co-authored-by: timothycarambat <rambat1010@gmail.com>
parent 244ce2e3
@@ -131,7 +131,11 @@ async function streamChatWithForEmbed(

   // If in query mode and no sources are found, do not
   // let the LLM try to hallucinate a response or use general knowledge
-  if (chatMode === "query" && sources.length === 0) {
+  if (
+    chatMode === "query" &&
+    sources.length === 0 &&
+    pinnedDocIdentifiers.length === 0
+  ) {
     writeResponseChunk(response, {
       id: uuid,
       type: "textResponse",
@@ -140,9 +140,13 @@ async function chatWithWorkspace(
   contextTexts = [...contextTexts, ...vectorSearchResults.contextTexts];
   sources = [...sources, ...vectorSearchResults.sources];

-  // If in query mode and no sources are found, do not
+  // If in query mode and no sources are found from the vector search and no pinned documents, do not
   // let the LLM try to hallucinate a response or use general knowledge and exit early
-  if (chatMode === "query" && sources.length === 0) {
+  if (
+    chatMode === "query" &&
+    vectorSearchResults.sources.length === 0 &&
+    pinnedDocIdentifiers.length === 0
+  ) {
     return {
       id: uuid,
       type: "textResponse",
@@ -160,9 +160,13 @@ async function streamChatWithWorkspace(
   contextTexts = [...contextTexts, ...vectorSearchResults.contextTexts];
   sources = [...sources, ...vectorSearchResults.sources];

-  // If in query mode and no sources are found, do not
+  // If in query mode and no sources are found from the vector search and no pinned documents, do not
   // let the LLM try to hallucinate a response or use general knowledge and exit early
-  if (chatMode === "query" && sources.length === 0) {
+  if (
+    chatMode === "query" &&
+    sources.length === 0 &&
+    pinnedDocIdentifiers.length === 0
+  ) {
     writeResponseChunk(response, {
       id: uuid,
       type: "textResponse",