GitLab — mirrored_repos / MachineLearning / meta-llama / Llama Recipes
Repository graph
Selected commit: ba447971f0e743c2d01b4a74e1aa1cdf9224a728
Branches (20):
- +refs/pull/572/head
- Adding_open_colab
- Fix-broken-format-in-preview-for-RAG-chatbo-example
- Getting_to_know_Llama
- LG3-nits-after-launch
- MM-Rag-fresh
- Multi-Modal-RAG-Demo
- Remove-Tokenizer-onPrem-vllm-InferenceThroughput
- Tool_Calling_Demos
- add-FAQ
- add-promptguard-to-safety-checkers
- adding_examples_with_aws
- albertodepaola-patch-1
- amitsangani-patch-1
- archive-main
- aws-do-fsdp
- azure-api-example
- benchmark-inference-throughput-cloud-api
- chat-completion-fix
- chat_pipeline

Tags (4):
- v0.0.4.post1
- v0.0.4
- v0.0.3
- v0.0.2
[Repository graph: commit activity from May through 14 Oct]

Commits:
Fix issues with fake_llama
[WIP]add fake tokenizer
Fix src/tests/test_train_utils.py
Fix test_finetuning
Fix tests for custom dataset, grammar, batching, chat_completion
Remove trust_remote_code in favor of setting env variable
Fix test_grammar_dataset.py
Fix test_custom_dataset.py
fix Colab link in quickstart_peft_finetuning.ipynb (#720)
Fix link to LLM finetuning overview (#719)
fix Colab link in quickstart_peft_finetuning.ipynb
Fix link to overview
Updated spell check word list to include Crusoe terms and referenced libraries.
Update README.md
Update README.md
Added passing of the Hugging Face token from the arguments
Changed the UI from a textbox to a chatbox with max_tokens, top_k, temperature, and top_p sliders
Updated README for multimodal inference with the gradio UI, passing the Hugging Face token from the arguments
Added passing of the Hugging Face token from the arguments
Change Gradio -> gradio
Merge branch 'meta-llama:main' into main
Added instructions in README.md for using the Gradio UI
Fix a bug when continuing PEFT training (#717)
Modified requirements.txt by adding the gradio dependency
Implemented memory management to release GPU resources after inference
Added basic LlamaInference class structure, model loading functionality, image processing and text generation
Added a file to start inference on Llama 3.2 Vision using the gradio UI
Removed unused files and cleaned up terraform.
Initial commit for Crusoe recipes, beginning with vLLM tutorial on benchmarking fp8.
Add high-level README
Fix Methods and fix prompts
Add methods
add drop down menu
Add Gradio app and refactor Part 3 notebook
added missing word and corrected spelling (#707)
added missing word and corrected spelling
add docs headers
Update README.md
add modal many llama human eval example
Update requirements.txt (#664)
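One commit above implements memory management to release GPU resources after inference. A minimal sketch of that general technique, assuming PyTorch; the `release_gpu_resources` name and the holder-dict pattern are illustrative, not the repository's actual code:

```python
import gc


def release_gpu_resources(holder: dict) -> None:
    """Drop the model reference and reclaim memory after inference.

    `holder` is any mapping that owns the model object (an assumed
    pattern for this sketch, not the repo's API).
    """
    holder.pop("model", None)  # release the last Python reference
    gc.collect()               # collect the now-unreferenced objects
    try:
        import torch
        if torch.cuda.is_available():
            # Return cached CUDA blocks to the driver so other
            # processes can use the freed VRAM.
            torch.cuda.empty_cache()
    except ImportError:
        pass  # PyTorch not installed; nothing GPU-side to free
```

The key point is ordering: the Python reference must be dropped and collected first, since `torch.cuda.empty_cache()` only returns memory that is no longer held by live tensors.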