Skip to content
Snippets Groups Projects
Commit bc569731 authored by Suraj's avatar Suraj
Browse files

update rai readme

parent 3516a5ce
No related branches found
No related tags found
No related merge requests found
...@@ -145,6 +145,7 @@ Contains examples are organized in folders by topic: ...@@ -145,6 +145,7 @@ Contains examples are organized in folders by topic:
[use_cases](./recipes/use_cases)|Scripts showing common applications of Meta Llama3 [use_cases](./recipes/use_cases)|Scripts showing common applications of Meta Llama3
[3p_integrations](./recipes/3p_integrations)|Partner owned folder showing common applications of Meta Llama3 [3p_integrations](./recipes/3p_integrations)|Partner owned folder showing common applications of Meta Llama3
[responsible_ai](./recipes/responsible_ai)|Scripts to use PurpleLlama for safeguarding model outputs [responsible_ai](./recipes/responsible_ai)|Scripts to use PurpleLlama for safeguarding model outputs
[experimental](./experimental)|Meta Llama implementations of experimental LLM techniques
### `src/` ### `src/`
......
# Meta Llama Guard # Trust and Safety with Llama
Meta Llama Guard models provide input and output guardrails for LLM inference. For more details, please visit the main [repository](https://github.com/meta-llama/PurpleLlama/). The [Purple Llama](https://github.com/meta-llama/PurpleLlama/) project provides tools and models to improve LLM security. This folder contains examples to get started with PurpleLlama tools.
**Note** Please find the right model on HF side [here](https://huggingface.co/meta-llama/Llama-Guard-3-8B). | Tool/Model | Description | Get Started
|---|---|---|
[Llama Guard](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama-guard-3) | Provide guardrailing on inputs and outputs | [Inference](./llama_guard/inference.py), [Finetuning](./llama_guard/llama_guard_customization_via_prompting_and_fine_tuning.ipynb)
[Prompt Guard](https://llama.meta.com/docs/model-cards-and-prompt-formats/prompt-guard) | Model to safeguards against jailbreak attempts and embedded prompt injections | [Notebook](./prompt_guard/prompt_guard_tutorial.ipynb)
[Code Shield](https://github.com/meta-llama/PurpleLlama/tree/main/CodeShield) | Tool to safeguard against insecure code generated by the LLM | [Notebook](https://github.com/meta-llama/PurpleLlama/blob/main/CodeShield/notebook/CodeShieldUsageDemo.ipynb)
### Running locally
The [llama_guard](llama_guard) folder contains the inference script to run Meta Llama Guard locally. Add test prompts directly to the [inference script](llama_guard/inference.py) before running it.
### Running on the cloud
The notebooks [Purple_Llama_Anyscale](Purple_Llama_Anyscale.ipynb) & [Purple_Llama_OctoAI](Purple_Llama_OctoAI.ipynb) contain examples for running Meta Llama Guard on cloud hosted endpoints. ### Running on hosted APIs
The notebooks [input_output_guardrails.ipynb](./input_output_guardrails_with_llama.ipynb), [Purple_Llama_Anyscale](Purple_Llama_Anyscale.ipynb) & [Purple_Llama_OctoAI](Purple_Llama_OctoAI.ipynb) contain examples for running Meta Llama Guard on cloud hosted endpoints.
0% Loading or .
You are about to add 0 people to the discussion. Proceed with caution.
Finish editing this message first!
Please register or to comment