From 09ec97cdc108777a7982381d9fff36c84284ecb1 Mon Sep 17 00:00:00 2001
From: Jerry Liu <jerryjliu98@gmail.com>
Date: Thu, 23 Nov 2023 08:41:39 -0800
Subject: [PATCH] add llamapacks to docs  (#9109)

---
 docs/community/integrations.md                |  11 +-
 docs/community/llama_packs/root.md            |  64 +++
 .../llama_hub/llama_pack_resume.ipynb         | 394 ++++++++++++++++++
 3 files changed, 464 insertions(+), 5 deletions(-)
 create mode 100644 docs/community/llama_packs/root.md
 create mode 100644 docs/examples/llama_hub/llama_pack_resume.ipynb

diff --git a/docs/community/integrations.md b/docs/community/integrations.md
index dc95de32a1..1ddeb179fa 100644
--- a/docs/community/integrations.md
+++ b/docs/community/integrations.md
@@ -6,14 +6,15 @@ LlamaIndex has a number of community integrations, from vector stores, to prompt
 
 LlamaHub hosts a full suite of LlamaPacks -- templates for features that you can download, edit, and try out! This offers a quick way to learn about new features and try new techniques.
 
-Just run the following command in your terminal (this terminal command is installed with the `llama-index` python package!):
+The full set of LlamaPacks is available on [LlamaHub](https://llamahub.ai/). Check out our dedicated page below.
 
-```bash
-llamaindex-cli download-llamapack ZephyrQueryEnginePack --download-dir ./zephyr_pack
+```{toctree}
+---
+maxdepth: 1
+---
+llama_packs/root.md
 ```
 
-The full set of LlamaPacks is available on [LlamaHub](https://llamahub.ai/). Here's a [sample notebook on how to use a LlamaPack](/examples/llama_hub/llama_packs_example.ipynb).
-
 ## Data Loaders
 
 The full set of data loaders are found on [LlamaHub](https://llamahub.ai/)
diff --git a/docs/community/llama_packs/root.md b/docs/community/llama_packs/root.md
new file mode 100644
index 0000000000..1ec38bd60f
--- /dev/null
+++ b/docs/community/llama_packs/root.md
@@ -0,0 +1,64 @@
+# Llama Packs 🦙📦
+
+## Concept
+
+Llama Packs are a community-driven hub of **prepackaged modules/templates** you can use to kickstart your LLM app.
+
+This directly tackles a big pain point in building LLM apps: every use case requires cobbling together custom components and spending a lot of tuning/dev time. Our goal is to accelerate that process through a community-led effort.
+
+They can be used in two ways:
+
+- On the one hand, they are **prepackaged modules** that can be initialized with parameters and run out of the box to achieve a given use case (whether that's a full RAG pipeline, an application template, or something else). You can also import submodules (e.g. LLMs, query engines) to use directly.
+- On the other hand, Llama Packs are **templates** that you can inspect, modify, and build on.
+
+**All packs are found on [LlamaHub](https://llamahub.ai/).** Go to the dropdown menu and select "LlamaPacks" to filter by packs.
+
+**Please check the README of each pack for details on how to use it**. [Example pack here](https://llamahub.ai/l/llama_packs-voyage_query_engine).
+
+See our [launch blog post](https://blog.llamaindex.ai/introducing-llama-packs-e14f453b913a) for more details.
+
+## Usage Pattern
+
+You can use Llama Packs through either the CLI or Python.
+
+CLI:
+
+```bash
+llamaindex-cli download-llamapack <pack_name> --download-dir <pack_directory>
+```
+
+Python:
+
+```python
+from llama_index.llama_pack import download_llama_pack
+
+# download and install dependencies
+pack_cls = download_llama_pack("<pack_name>", "<pack_directory>")
+```
+
+You can use the pack in different ways: inspect its modules, run it end-to-end, or customize its templates.
+
+```python
+# every pack is initialized with different args
+pack = pack_cls(*args, **kwargs)
+
+# get modules
+modules = pack.get_modules()
+display(modules)
+
+# run (every pack will have different args)
+output = pack.run(*args, **kwargs)
+```
+
+Importantly, you can (and should) also go into `pack_directory` to inspect and customize the source files. That's part of the point!
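To make the generic snippet above concrete, here is a self-contained toy class that mimics the pack interface described in this section. It is NOT a real LlamaHub pack; the name and behavior are invented purely to illustrate the `__init__` / `get_modules()` / `run()` pattern:

```python
# A toy stand-in (NOT a real LlamaHub pack) illustrating the interface
# described above: a pack is initialized with parameters, exposes its
# internal components via get_modules(), and is executed via run().
class ToyEchoPack:
    def __init__(self, greeting: str = "hello"):
        self.greeting = greeting

    def get_modules(self) -> dict:
        # real packs return their internal components here
        # (e.g. LLMs, retrievers, query engines)
        return {"greeting": self.greeting}

    def run(self, name: str) -> str:
        # real packs execute their pipeline end-to-end here
        return f"{self.greeting}, {name}!"


pack = ToyEchoPack(greeting="hi")
print(pack.get_modules())
print(pack.run("llama"))
```

Real packs follow this same shape, which is what makes them easy to inspect and customize after downloading.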
+
+## Module Guides
+
+Some example module guides are given below. Remember, head to [LlamaHub](https://llamahub.ai) to access the full range of packs.
+
+```{toctree}
+---
+maxdepth: 1
+---
+/examples/llama_hub/llama_packs_example.ipynb
+```
diff --git a/docs/examples/llama_hub/llama_pack_resume.ipynb b/docs/examples/llama_hub/llama_pack_resume.ipynb
new file mode 100644
index 0000000000..f1badaf6a1
--- /dev/null
+++ b/docs/examples/llama_hub/llama_pack_resume.ipynb
@@ -0,0 +1,394 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "id": "92dad89e-d84a-4d85-85e1-6beaed293605",
+   "metadata": {},
+   "source": [
+    "# Llama Pack - Resume Screener 📄\n",
+    "\n",
+    "<a href=\"https://colab.research.google.com/github/jerryjliu/llama_index/blob/main/docs/examples/llama_hub/llama_pack_resume.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
+    "\n",
+    "This example shows you how to use the Resume Screener Llama Pack.\n",
+    "You can find all packs on https://llamahub.ai\n",
+    "\n",
+    "The resume screener is designed to analyze a candidate's resume according to a set of criteria, and decide whether the candidate is a fit for the job.\n",
+    "\n",
+    "In this example, we'll evaluate a sample resume (Jerry's old resume)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "e4722b0b-ff5e-4e71-990b-94ec65f8b359",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "!pip install llama-index llama-hub"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "0221e488-0d1f-4890-b081-27530fcac5f3",
+   "metadata": {},
+   "source": [
+    "### Setup Data\n",
+    "\n",
+    "We'll load some sample Wikipedia data for OpenAI, Sam Altman, Mira Murati, and Emmett Shear. Why? No reason in particular :) "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "3abe6c66-5107-4952-b670-e60153ff916a",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from llama_index.readers import WikipediaReader\n",
+    "\n",
+    "loader = WikipediaReader()\n",
+    "documents = loader.load_data(\n",
+    "    pages=[\"OpenAI\", \"Sam Altman\", \"Mira Murati\", \"Emmett Shear\"],\n",
+    "    auto_suggest=False,\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "98bb38b7-0235-406c-9954-ab46809eef17",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# we'll use a sentence splitter to chunk each document\n",
+    "from llama_index.node_parser import SentenceSplitter"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "a9700e6b-525d-46e2-940d-1768a42291b2",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "sentence_splitter = SentenceSplitter(chunk_size=1024)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "29d74a5e-6326-462c-a6f5-0694ad6388cb",
+   "metadata": {},
+   "source": [
+    "We get the first chunk from each Wikipedia page."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "d3539635-e328-42fa-b712-43616841c959",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# get the first 1024 tokens for each entity\n",
+    "openai_node = sentence_splitter.get_nodes_from_documents([documents[0]])[0]\n",
+    "sama_node = sentence_splitter.get_nodes_from_documents([documents[1]])[0]\n",
+    "mira_node = sentence_splitter.get_nodes_from_documents([documents[2]])[0]\n",
+    "emmett_node = sentence_splitter.get_nodes_from_documents([documents[3]])[0]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "471e36b8-81d9-4afa-b53b-c810fbc84627",
+   "metadata": {},
+   "source": [
+    "We'll also download Jerry's resume from 2019."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "2a8aec6b-239c-45d5-a919-05cc52600fa1",
+   "metadata": {},
+   "source": [
+    "## Download Resume Screener Pack from LlamaHub\n",
+    "\n",
+    "Here we download the resume screener pack class from LlamaHub.\n",
+    "\n",
+    "We'll use it for a few use cases:\n",
+    "- whether the candidate is a good fit for a machine learning engineering role.\n",
+    "- whether the candidate is a good fit for a front-end engineering role.\n",
+    "- whether the candidate is a good fit for the CEO of OpenAI."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "d40792de-2518-40a2-8468-c020d0decf18",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from llama_index.llama_pack import download_llama_pack"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "fc1819c2-0e8c-4a55-8e4b-aa17619b25b8",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "ResumeScreenerPack = download_llama_pack(\n",
+    "    \"ResumeScreenerPack\", \"./resume_screener_pack\"\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "7e957c52-8ae0-48aa-9041-641efa0de774",
+   "metadata": {},
+   "source": [
+    "### Screen Candidate for MLE Role\n",
+    "\n",
+    "We take a job description for an MLE role from Meta's website."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "d44f76f0-ac12-4f8c-ba8c-e6d1680a26d8",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "meta_jd = \"\"\"\\\n",
+    "Meta is embarking on the most transformative change to its business and technology in company history, and our Machine Learning Engineers are at the forefront of this evolution. By leading crucial projects and initiatives that have never been done before, you have an opportunity to help us advance the way people connect around the world.\n",
+    " \n",
+    "The ideal candidate will have industry experience working on a range of recommendation, classification, and optimization problems. You will bring the ability to own the whole ML life cycle, define projects and drive excellence across teams. You will work alongside the world’s leading engineers and researchers to solve some of the most exciting and massive social data and prediction problems that exist on the web.\\\n",
+    "\"\"\""
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "a018abe4-8e0b-4f5d-aee2-9f2873fb245d",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "resume_screener = ResumeScreenerPack(\n",
+    "    job_description=meta_jd,\n",
+    "    criteria=[\n",
+    "        \"2+ years of experience in one or more of the following areas: machine learning, recommendation systems, pattern recognition, data mining, artificial intelligence, or related technical field\",\n",
+    "        \"Experience demonstrating technical leadership working with teams, owning projects, defining and setting technical direction for projects\",\n",
+    "        \"Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience.\",\n",
+    "    ],\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "bc406e7c-1639-4315-b627-495c18ebb47d",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "response = resume_screener.run(resume_path=\"jerry_resume.pdf\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "acfeed3c-dc05-4e69-af9c-bb4cbe409798",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "### CRITERIA DECISION\n",
+      "Jerry Liu has more than 2 years of experience in machine learning and artificial intelligence. He worked as a Machine Learning Engineer at Quora Inc. for a year and has been an AI Research Scientist at Uber ATG since 2018. His work involves deep learning, information theory, and 3D geometry, among other areas.\n",
+      "True\n",
+      "### CRITERIA DECISION\n",
+      "Jerry Liu has demonstrated technical leadership in his roles at Uber ATG and Quora Inc. He has led and mentored multiple projects on multi-agent simulation, prediction, and planning. He also researched and productionized GBDT’s for new users at Quora, contributing to a 5% increase in new user active usage.\n",
+      "True\n",
+      "### CRITERIA DECISION\n",
+      "Jerry Liu has a Bachelor of Science in Engineering (B.S.E.) in Computer Science from Princeton University. He graduated Summa Cum Laude and was a member of Phi Beta Kappa, Tau Beta Pi, and Sigma Xi.\n",
+      "True\n",
+      "#### OVERALL REASONING ##### \n",
+      "Jerry Liu meets all the screening criteria for the Machine Learning Engineer position at Meta. He has the required experience in machine learning and artificial intelligence, has demonstrated technical leadership, and has a relevant degree.\n",
+      "True\n"
+     ]
+    }
+   ],
+   "source": [
+    "for cd in response.criteria_decisions:\n",
+    "    print(\"### CRITERIA DECISION\")\n",
+    "    print(cd.reasoning)\n",
+    "    print(cd.decision)\n",
+    "print(\"#### OVERALL REASONING ##### \")\n",
+    "print(str(response.overall_reasoning))\n",
+    "print(str(response.overall_decision))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "4cd4d149-c862-4591-bd98-9d8f55278c7c",
+   "metadata": {},
+   "source": [
+    "### Screen Candidate for FE / Typescript roles"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "d94efb7f-b170-4833-be2a-eb5911fed816",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "resume_screener = ResumeScreenerPack(\n",
+    "    job_description=\"We're looking to hire a front-end engineer\",\n",
+    "    criteria=[\n",
+    "        \"The individual needs to be experienced in front-end / React / Typescript\"\n",
+    "    ],\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "08db7399-d43b-4139-8809-f8f493329f76",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "response = resume_screener.run(resume_path=\"jerry_resume.pdf\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "7669250b-d0d0-4825-a429-c1d1c4f34ee1",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "The candidate does not meet the specific criteria of having experience in front-end, React, or Typescript.\n",
+      "False\n"
+     ]
+    }
+   ],
+   "source": [
+    "print(str(response.overall_reasoning))\n",
+    "print(str(response.overall_decision))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "e31684e4-61ef-4796-8f13-5d7291d2c5dd",
+   "metadata": {},
+   "source": [
+    "### Screen Candidate for CEO of OpenAI\n",
+    "\n",
+    "Jerry can't write Typescript, but can he be CEO of OpenAI?"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "57efbd93-7249-4b04-b5be-d74e5953d004",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "job_description = f\"\"\"\\\n",
+    "We're looking to hire a CEO for OpenAI.\n",
+    "\n",
+    "Instead of listing a set of specific criteria, each \"criterion\" is a short biography of a previous CEO.\\\n",
+    "\n",
+    "For each criterion/bio, outline whether the candidate's experience matches or surpasses that of the previous CEO.\n",
+    "\n",
+    "Also, here's a description of OpenAI from Wikipedia: \n",
+    "{openai_node.get_content()}\n",
+    "\"\"\"\n",
+    "\n",
+    "profile_strs = [\n",
+    "    f\"Profile: {n.get_content()}\" for n in [sama_node, mira_node, emmett_node]\n",
+    "]\n",
+    "\n",
+    "\n",
+    "resume_screener = ResumeScreenerPack(\n",
+    "    job_description=job_description, criteria=profile_strs\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "409b0a27-f550-471d-83a3-c5b39e6def71",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "response = resume_screener.run(resume_path=\"jerry_resume.pdf\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "35384368-bcb5-4422-a92d-1a1cf7aab853",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "### CRITERIA DECISION\n",
+      "The candidate, Jerry Liu, has a strong background in AI research and has led multiple projects in this field. However, he does not have the same level of executive leadership experience as Samuel Harris Altman, who served as CEO of OpenAI and president of Y Combinator. Altman also has experience leading an advanced AI research team at Microsoft, which Liu does not have.\n",
+      "False\n",
+      "### CRITERIA DECISION\n",
+      "While Jerry Liu has a strong background in AI and machine learning, his experience does not match or surpass that of Mira Murati. Murati served as the chief technology officer of OpenAI and briefly as its interim CEO. She led the company's work on several major projects and oversaw multiple teams. Liu does not have the same level of leadership or executive experience.\n",
+      "False\n",
+      "### CRITERIA DECISION\n",
+      "Jerry Liu's experience does not match or surpass that of Emmett Shear. Shear co-founded Justin.tv and served as the CEO of Twitch, demonstrating significant entrepreneurial and executive leadership experience. He also served as a part-time partner at venture capital firm Y Combinator and briefly as interim CEO of OpenAI. Liu, while having a strong background in AI research, does not have the same level of leadership or executive experience.\n",
+      "False\n",
+      "#### OVERALL REASONING ##### \n",
+      "While Jerry Liu has a strong background in AI research and has led multiple projects in this field, his experience does not match or surpass that of the previous CEOs in terms of executive leadership and entrepreneurial experience.\n",
+      "False\n"
+     ]
+    }
+   ],
+   "source": [
+    "for cd in response.criteria_decisions:\n",
+    "    print(\"### CRITERIA DECISION\")\n",
+    "    print(cd.reasoning)\n",
+    "    print(cd.decision)\n",
+    "print(\"#### OVERALL REASONING ##### \")\n",
+    "print(str(response.overall_reasoning))\n",
+    "print(str(response.overall_decision))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "681bde1f-c254-4398-b65f-6ac1aacbf067",
+   "metadata": {},
+   "source": [
+    "...sadly not"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "llama_index_v2",
+   "language": "python",
+   "name": "llama_index_v2"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
-- 
GitLab