RAGLight integrates with Langfuse to give you full visibility into your RAG pipeline. Every call to generate() or generate_streaming() produces a structured trace showing exactly what happened at each step.
- **Retrieve** — see which documents were retrieved, from which collection, and with which query.
- **Rerank** — inspect the reranking step when a CrossEncoder is active.
- **Generate** — trace the LLM call: prompt, model, latency, and token counts.
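A trace for a single generate() call can be pictured as one span per step above. The sketch below is purely illustrative: the field names and values are assumptions for clarity, not Langfuse's actual schema.

```python
# Illustrative shape of one trace for a single generate() call.
# Span names mirror the steps above; all field names are assumptions,
# not Langfuse's actual schema.
trace = {
    "session_id": "1f2e4d6c-...",  # groups all turns of one conversation
    "spans": [
        {"name": "retrieve", "collection": "docs",
         "query": "How do I enable tracing?", "n_documents": 5},
        {"name": "rerank", "reranker": "CrossEncoder", "kept": 3},
        {"name": "generate", "model": "llama3", "latency_ms": 840,
         "tokens": {"input": 512, "output": 128}},
    ],
}

print([span["name"] for span in trace["spans"]])
```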
Tracing is configured via LangfuseConfig, a dataclass that holds your Langfuse credentials.
```python
from raglight.config.langfuse_config import LangfuseConfig

langfuse_config = LangfuseConfig(
    public_key="pk-lf-...",
    secret_key="sk-lf-...",
    host="http://localhost:3000",  # or your Langfuse Cloud URL
)
```
Pass this config to your pipeline — the rest is automatic.
By default, a UUID is generated once per RAG instance and reused for every generate() call, so all turns of the same conversation are grouped under a single Langfuse session. You can also pin a custom session ID.
When LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_HOST (or LANGFUSE_BASE_URL) are all set, tracing is enabled automatically. If any of these are missing, RAGLight disables Langfuse entirely — no connection attempt is made to localhost:3000.