
Instrumentation

Langfuse SDKs build on OpenTelemetry, so you can mix native integrations, wrappers, and fully custom spans. The examples below show the Python patterns; the JS/TS SDK offers equivalent APIs.

Native integrations

Langfuse supports native integrations for popular LLM and agent libraries. For the full catalog, see the Integrations gallery.

Custom instrumentation patterns

You can also create custom instrumentation patterns using the Langfuse SDK.

All custom patterns are interoperable: you can nest a decorator-created observation inside a context manager or mix manual spans with native integrations.

Observe decorator

Use the @observe() decorator to automatically capture inputs, outputs, timings, and errors of a wrapped function.

from langfuse import observe
 
@observe()
def my_data_processing_function(data, parameter):
    return {"processed_data": data, "status": "ok"}
 
@observe(name="llm-call", as_type="generation")
async def my_async_llm_call(prompt_text):
    return "LLM response"

Parameters: name, as_type, capture_input, capture_output, transform_to_string. Special kwargs such as langfuse_trace_id or langfuse_parent_observation_id let you stitch into existing traces.

Capturing large inputs/outputs may add overhead. Disable IO capture per decorator (capture_input=False, capture_output=False) or via the LANGFUSE_OBSERVE_DECORATOR_IO_CAPTURE_ENABLED env var.
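Conceptually, @observe() behaves like a plain Python decorator that records a function's inputs, outputs, duration, and errors. The following is a minimal illustrative sketch of that mechanism, not the actual Langfuse implementation:

```python
import functools
import time

def observe_sketch(name=None):
    """Illustrative stand-in for @observe(): records input, output, timing, and errors."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            record = {"name": name or fn.__name__, "input": {"args": args, "kwargs": kwargs}}
            start = time.monotonic()
            try:
                result = fn(*args, **kwargs)
                record["output"] = result  # captured output (disabled via capture_output=False in the real SDK)
                return result
            except Exception as exc:
                record["error"] = repr(exc)  # errors are recorded before re-raising
                raise
            finally:
                record["duration_s"] = time.monotonic() - start

        return wrapper
    return decorator

@observe_sketch(name="demo")
def add(a, b):
    return a + b
```

Because the wrapper re-raises exceptions and returns the original result, decorating a function does not change its behavior for callers.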

Context managers & callbacks

Context managers ensure spans are started, nested, and ended automatically.

from langfuse import get_client, propagate_attributes
 
langfuse = get_client()
 
with langfuse.start_as_current_observation(
    as_type="span",
    name="user-request-pipeline",
    input={"user_query": "Tell me a joke"},
) as root_span:
    with propagate_attributes(user_id="user_123", session_id="session_abc"):
        with langfuse.start_as_current_observation(
            as_type="generation",
            name="joke-generation",
            model="gpt-4o",
        ) as generation:
            generation.update(output="Why did the span cross the road?")
 
    root_span.update(output={"final_joke": "..."})

Manual observations

Use start_span() / start_generation() when you need manual control without changing the active context.

from langfuse import get_client
 
langfuse = get_client()
 
span = langfuse.start_span(name="manual-span")
span.update(input="Data for side task")
child = span.start_span(name="child-span")
child.end()
span.end()
⚠️ Spans created via start_span() / start_generation() must be ended explicitly via .end().
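A common way to guarantee .end() runs even when intermediate work raises is a try/finally block. Sketched here with a minimal stand-in span class rather than the SDK object, to keep the example self-contained:

```python
class ManualSpan:
    """Minimal stand-in for a manually managed span, to illustrate the try/finally pattern."""
    def __init__(self, name):
        self.name = name
        self.ended = False

    def update(self, **fields):
        self.fields = fields

    def end(self):
        self.ended = True

def risky_work(span):
    span.update(input="Data for side task")
    raise ValueError("boom")

span = ManualSpan("manual-span")
try:
    risky_work(span)
except ValueError:
    pass  # handle the failure as appropriate
finally:
    span.end()  # runs even when risky_work raises, so the span is never left open
```

The same pattern applies verbatim to spans returned by langfuse.start_span().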

Update observations

Update observation objects directly or use context-aware helpers.

from langfuse import get_client
 
langfuse = get_client()
 
with langfuse.start_as_current_observation(as_type="generation", name="llm-call") as gen:
    gen.update(
        input={"prompt": "Why is the sky blue?"},
        output="Rayleigh scattering",
        usage_details={"input_tokens": 5, "output_tokens": 50},
    )
 
with langfuse.start_as_current_observation(as_type="span", name="data-processing"):
    langfuse.update_current_span(metadata={"step1_complete": True})

Add attributes to observations

from langfuse import get_client, propagate_attributes
 
langfuse = get_client()
 
with langfuse.start_as_current_observation(as_type="span", name="user-workflow"):
    with propagate_attributes(
        user_id="user_123",
        session_id="session_abc",
        metadata={"experiment": "variant_a"},
        version="1.0",
    ):
        with langfuse.start_as_current_observation(as_type="generation", name="llm-call"):
            pass

Note on attribute propagation

propagate_attributes propagates specific attributes (user_id, session_id, version, tags, metadata) to all observations in the current execution context. Langfuse aggregates observations carrying these attributes to compute attribute-level metrics, so keep the following in mind:
  • Values must be strings of at most 200 characters
  • Metadata keys may contain only alphanumeric characters (no whitespace or special characters)
  • Call propagate_attributes early in the trace so every observation is covered and metrics stay accurate
  • Invalid values are dropped with a warning
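The constraints above can be expressed as a small validator. This is an illustrative sketch of the rules, not Langfuse's internal validation code:

```python
import warnings

MAX_LEN = 200  # propagated values must be strings of at most 200 characters

def validate_propagated_value(key, value):
    """Return the value if valid, else drop it with a warning (mirroring the rules above)."""
    if not isinstance(value, str) or len(value) > MAX_LEN:
        warnings.warn(f"dropping invalid propagated attribute {key!r}")
        return None
    return value

def is_valid_metadata_key(key):
    """Metadata keys: alphanumeric characters only, no whitespace or special characters."""
    return key.isalnum()
```

Validating attributes up front in your own code avoids silently losing values at export time.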

Cross-service propagation

from langfuse import get_client, propagate_attributes
import requests
 
langfuse = get_client()
 
with langfuse.start_as_current_observation(as_type="span", name="api-request"):
    with propagate_attributes(
        user_id="user_123",
        session_id="session_abc",
        as_baggage=True,
    ):
        requests.get("https://service-b.example.com/api")
⚠️ When baggage propagation is enabled, the attributes are added as headers to all outbound HTTP requests. Only use it for non-sensitive values needed for distributed tracing.
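Under the hood, this style of propagation rides on the W3C Baggage header: comma-separated key=value pairs with percent-encoded values. A sketch of the header format (the key names here are illustrative, not Langfuse's actual baggage keys):

```python
from urllib.parse import quote

def build_baggage_header(attributes: dict) -> str:
    """Build a W3C Baggage header value: comma-separated key=value pairs, values percent-encoded."""
    return ",".join(f"{key}={quote(str(value), safe='')}" for key, value in attributes.items())

# Hypothetical attributes a caller might propagate to a downstream service
header = build_baggage_header({"user_id": "user_123", "session_id": "session_abc"})
```

A receiving service that understands the same keys can attach these values to its own spans, which is how attributes survive service boundaries.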

Trace-level metadata & inputs/outputs

Trace-level input and output default to those of the root observation. Override them explicitly when needed (e.g., for evaluations).

from langfuse import get_client
 
langfuse = get_client()
 
with langfuse.start_as_current_observation(as_type="span", name="complex-pipeline") as root_span:
    root_span.update(input="Step 1 data", output="Step 1 result")
    root_span.update_trace(
        input={"original_query": "User question"},
        output={"final_answer": "Complete response", "confidence": 0.95},
    )

A decorated function can also update its own trace via the client:

from langfuse import observe, get_client
 
langfuse = get_client()
 
@observe()
def process_user_query(user_question: str):
    answer = call_llm(user_question)  # call_llm: your application's LLM helper (illustrative)
    langfuse.update_current_trace(
        input={"question": user_question},
        output={"answer": answer},
    )
    return answer

Trace and observation IDs

Langfuse uses W3C Trace Context IDs. Access current IDs or create deterministic ones.

from langfuse import get_client, Langfuse
 
langfuse = get_client()
 
with langfuse.start_as_current_observation(as_type="span", name="my-op") as current_op:
    trace_id = langfuse.get_current_trace_id()
    observation_id = langfuse.get_current_observation_id()
    print(trace_id, observation_id)
 
external_request_id = "req_12345"
deterministic_trace_id = Langfuse.create_trace_id(seed=external_request_id)
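The value of a deterministic ID is that the same seed always yields the same trace ID, so external systems can locate a trace without storing its ID. A sketch of the idea, producing a W3C-compliant 32-hex-character trace ID (the actual derivation used by Langfuse.create_trace_id may differ):

```python
import hashlib

def deterministic_trace_id(seed: str) -> str:
    """Derive a stable 32-hex-char (16-byte) trace ID from a seed by hashing it."""
    return hashlib.sha256(seed.encode("utf-8")).hexdigest()[:32]

trace_id = deterministic_trace_id("req_12345")
```

Hashing rather than truncating the seed keeps the ID well distributed regardless of the seed's format.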

To attach new observations to an existing trace, pass a trace_context:

from langfuse import get_client
 
langfuse = get_client()
 
existing_trace_id = "abcdef1234567890abcdef1234567890"
existing_parent_span_id = "fedcba0987654321"
 
with langfuse.start_as_current_observation(
    as_type="span",
    name="process-downstream-task",
    trace_context={
        "trace_id": existing_trace_id,
        "parent_span_id": existing_parent_span_id,
    },
):
    pass

Client lifecycle & flushing

Flush or shut down the client to ensure all buffered data is delivered, especially in short-lived jobs.

from langfuse import get_client
 
langfuse = get_client()
# ... create traces ...
langfuse.flush()
langfuse.shutdown()
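The reason flushing matters: spans are buffered in memory and exported in background batches, so a process that exits early can lose the tail of the buffer. A minimal sketch of that buffering behavior (not the SDK's actual exporter):

```python
class BufferedExporter:
    """Illustrative buffered exporter: events accumulate in memory until flush() delivers them."""
    def __init__(self):
        self.buffer = []
        self.delivered = []

    def record(self, event):
        self.buffer.append(event)  # fast, in-memory; nothing is sent yet

    def flush(self):
        self.delivered.extend(self.buffer)  # deliver everything buffered so far
        self.buffer.clear()

exporter = BufferedExporter()
exporter.record({"name": "my-span"})
before_flush = list(exporter.delivered)  # still empty: the span is only buffered
exporter.flush()
```

This is why short-lived scripts and serverless handlers should call flush() (or shutdown()) before returning.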