Commit 38646b06 authored by Sanyam Bhutani

Update README.md

Requirements: GPU server or an API provider for using 70B, 8B and 1B Llama models.
Note: For our GPU-poor friends, you can also use the 8B and lower models for the entire pipeline. There is no strong recommendation; the pipeline below is simply what worked best in the first few tests. Try it and see what works best for you!
Here is the step-by-step (pun intended) outline for the task:
- Step 1: Pre-process PDF: Use `Llama-3.2-1B` to pre-process and save a PDF
- Step 2: Transcript Writer: Use `Llama-3.1-70B` model to write a podcast transcript from the text
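Before the 1B model can clean the extracted PDF text in Step 1, the text has to be split into pieces small enough for its context window. A minimal sketch of word-bounded chunking, assuming a character budget; the function name and the 1000-character default are illustrative, not necessarily the notebook's actual code:

```python
# Sketch: split raw PDF text into roughly fixed-size chunks, breaking only
# on word boundaries, so each chunk fits in the pre-processing model's
# context window. The 1000-character budget is an assumed default.
def create_word_bounded_chunks(text: str, target_chunk_size: int = 1000) -> list[str]:
    chunks, current, current_len = [], [], 0
    for word in text.split():
        # +1 accounts for the space that will rejoin the words
        if current_len + len(word) + 1 > target_chunk_size and current:
            chunks.append(" ".join(current))
            current, current_len = [], 0
        current.append(word)
        current_len += len(word) + 1
    if current:
        chunks.append(" ".join(current))
    return chunks
```

Each chunk can then be sent through the 1B model independently and the cleaned pieces concatenated back together.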
### Steps to run the notebooks:
- Install the requirements from [here]() by running inside the folder:
```
git clone <repo-url>      # replace with this repository's URL
cd <repo-folder>          # replace with the cloned folder's name
pip install -r requirements.txt
```
- Decide on a PDF to use for Notebook 1. It can be any link, but please remember to update the first cell of the notebook with the link you chose.
Note: Right now there is one known issue: Parler needs transformers 4.43.3 or earlier, while the generation steps need the latest version, so the notebooks switch transformers versions on the fly.
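Until that is resolved, it can help to guard the Parler cells with a quick version check so you notice the mismatch before generation fails. A small sketch, assuming the 4.43.3 bound stated above; `parler_compatible` is an illustrative helper, not part of the repo:

```python
# Sketch: warn before running the Parler TTS cells if the installed
# transformers version is newer than Parler supports (<= 4.43.3, per the
# note above). `parler_compatible` is an illustrative helper.
from importlib.metadata import version, PackageNotFoundError

def parler_compatible(ver: str) -> bool:
    # Compare numeric (major, minor, patch) tuples against 4.43.3.
    parts = tuple(int(p) for p in ver.split(".")[:3])
    return parts <= (4, 43, 3)

try:
    installed = version("transformers")
    if not parler_compatible(installed):
        print(f"transformers {installed} is too new for Parler; "
              "run `pip install transformers==4.43.3` first")
except PackageNotFoundError:
    print("transformers is not installed")
```

The reverse direction (reinstalling the latest transformers before the generation cells) is just `pip install -U transformers` run from the notebook.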
TODO-MORE