<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/examples/callbacks/LangfuseCallbackHandler.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
%% Cell type:markdown id:c0d8b66c tags:
# Langfuse Callback Handler
[Langfuse](https://langfuse.com/docs) is an open-source LLM engineering platform that helps teams collaboratively debug, analyze, and iterate on their LLM applications.
The `LangfuseCallbackHandler` lets you track and monitor the performance, traces, and metrics of your LlamaIndex application. Detailed traces of LlamaIndex's context-augmentation and LLM-querying processes are captured and can be inspected directly in the Langfuse UI.
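As a minimal sketch of the setup (the credentials below are placeholders; substitute your own project keys from the Langfuse UI, and note this assumes the `langfuse` package is installed alongside LlamaIndex):
%% Cell type:code id:2f9a4b1e tags:
``` python
import os

from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager
from langfuse.llama_index import LlamaIndexCallbackHandler

# Placeholder credentials; use your own Langfuse project keys.
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com"  # or your self-hosted URL

# Register the handler globally so all LlamaIndex operations are traced.
langfuse_callback_handler = LlamaIndexCallbackHandler()
Settings.callback_manager = CallbackManager([langfuse_callback_handler])
```
%% Cell type:markdown id:8d3c7e2a tags: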
The Langfuse SDK queues and batches events in the background to reduce the number of network requests and improve overall performance. Before exiting your application, make sure all queued events have been flushed to the Langfuse servers.
%% Cell type:code id:4e28876c tags:
``` python
# ... your LlamaIndex calls here ...

# Block until all queued events have been sent to Langfuse.
langfuse_callback_handler.flush()
```
%% Cell type:markdown id:6b86f1b5 tags:
Done! ✨ Traces and metrics from your LlamaIndex application are now automatically tracked in Langfuse. If you construct a new index or query an LLM with your documents in context, your traces and metrics are immediately visible in the Langfuse UI. Next, let's take a look at how traces appear in Langfuse.
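For example, a minimal index-and-query run is enough to produce a trace. This sketch assumes a default LLM and embedding model are configured (e.g. via `OPENAI_API_KEY`), and the sample document is purely illustrative:
%% Cell type:code id:5b1e9c4f tags:
``` python
from llama_index.core import Document, VectorStoreIndex

# Illustrative in-memory document; your own data loaders work the same way.
doc = Document(text="Langfuse is an open-source LLM engineering platform.")

# Index construction is captured as a trace in Langfuse.
index = VectorStoreIndex.from_documents([doc])

# The query, including the underlying LLM call, is traced as well.
response = index.as_query_engine().query("What is Langfuse?")
print(response)
```
%% Cell type:markdown id:7c2d5a9b tags: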
Check out the full [Langfuse documentation](https://langfuse.com/docs) for more details on Langfuse's tracing and analytics capabilities and how to make the most of this integration.