
Repository graph

Git revisions (branches and tags):
  • +refs/pull/572/head
  • Adding_open_colab
  • Fix-broken-format-in-preview-for-RAG-chatbo-example
  • Getting_to_know_Llama
  • LG3-nits-after-launch
  • MM-Rag-fresh
  • Multi-Modal-RAG-Demo
  • Remove-Tokenizer-onPrem-vllm-InferenceThroughput
  • Tool_Calling_Demos
  • add-FAQ
  • add-promptguard-to-safety-checkers
  • adding_examples_with_aws
  • albertodepaola-patch-1
  • amitsangani-patch-1
  • archive-main
  • aws-do-fsdp
  • azure-api-example
  • benchmark-inference-throughput-cloud-api
  • chat-completion-fix
  • chat_pipeline
  • v0.0.4.post1
  • v0.0.4
  • v0.0.3
  • v0.0.2
Commit graph (May – 14 Oct), most recent first:
  • Fix issues with fake_llama
  • [WIP]add fake tokenizer
  • Fix src/tests/test_train_utils.py
  • Fix test_finetuning
  • Fix tests for custom dataset, grammar, batching, chat_completion
  • Remove trust_remote_code in favor of setting env variable
  • Fix test_grammar_dataset.py
  • Fix test_custom_dataset.py
  • fix Colab link in quickstart_peft_finetuning.ipynb (#720)
  • Fix link to LLM finetuning overview (#719)
  • fix Colab link in quickstart_peft_finetuning.ipynb
  • Fix link to overview
  • Updated spell check word list to include Crusoe terms and referenced libraries.
  • Update README.md
  • Update README.md
  • added the passing of hugging-face token from the argument
  • Changes the UI from textbox to chatbox with max_tokens, rop_k, temperature and top_p sliders there.
  • Changed readme for usage of multimodal inferencing of gradio UI by passsing hugigng-face token from the arguments
  • Added passing of Hugging-face token from the arguments
  • Change Gradio -> gradio
  • Merge branch 'meta-llama:main' into main
  • Added instructions in README.md for using the Gradio UI
  • Fix the bug when continue the peft. (#717)
  • Modified requirements.txt by adding the gradio dependency
  • Implemented memory management to release GPU resources after inference
  • Added basic LlamaInference class structure, model loading functionality, image processing and text generation
  • added a file to start with Inferencing on llama3.2 vision using gradio UI
  • Removed unused files and cleaned up terraform.
  • Initial commit for Crusoe recipes, beginning with vLLM tutorial on benchmarking fp8.
  • Add high level ReadMe
  • Fix Methods and fix prompts
  • Add methods
  • add drop down menu
  • Add Gradio App and Re-fact Part 3 nb
  • added missing word and corrected spelling (#707)
  • added missing word and corrected spelling
  • add docs headers
  • Update README.md
  • add modal many llama human eval example
  • Update requirements.txt (#664)