LangSmith makes it easy to attach feedback to traces. This feedback can come from users, annotators, automated evaluators, etc., and is crucial for monitoring and evaluating applications.

Use create_feedback() / createFeedback()

Here we’ll walk through how to log feedback using the SDK.
Child runs: You can attach user feedback to ANY child run of a trace, not just the trace (root run) itself. This is useful for critiquing specific steps of the LLM application, such as the retrieval step or generation step of a RAG pipeline.
Non-blocking creation (Python only): The Python client will automatically background feedback creation if you pass trace_id= to create_feedback(). This is essential in low-latency environments, where you want to make sure your application isn't blocked on feedback creation.
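For instance, the only difference is whether trace_id= is supplied. In this minimal sketch, run_id and trace_id are placeholders for IDs you would capture from a real run, as in the full example below:

from langsmith import Client

client = Client()
run_id = "..."    # placeholder: the ID of a traced run
trace_id = "..."  # placeholder: the ID of that run's trace

# Blocks on the feedback API call:
client.create_feedback(key="thumbs_up", score=1, run_id=run_id)

# Queued and ingested in the background, because trace_id= is provided:
client.create_feedback(key="thumbs_up", score=1, run_id=run_id, trace_id=trace_id)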
from langsmith import trace, traceable, Client

@traceable
def foo(x):
    return {"y": x * 2}

@traceable
def bar(y):
    return {"z": y - 1}

client = Client()

inputs = {"x": 1}
with trace(name="foobar", inputs=inputs) as root_run:
    result = foo(**inputs)
    result = bar(**result)
    root_run.outputs = result
    trace_id = root_run.id
    child_runs = root_run.child_runs

# Provide feedback for a trace (a.k.a. a root run)
client.create_feedback(
    key="user_feedback",
    score=1,
    trace_id=trace_id,
    comment="the user said that ..."
)

# Provide feedback for a child run
foo_run_id = [run for run in child_runs if run.name == "foo"][0].id
client.create_feedback(
    key="correctness",
    score=0,
    run_id=foo_run_id,
    # trace_id= is optional but recommended to enable batched and backgrounded 
    # feedback ingestion.
    trace_id=trace_id,
)
You can even log feedback for in-progress runs using create_feedback() / createFeedback(); see this guide for how to get the run ID of an in-progress run.

To learn more about how to filter traces based on various attributes, including user feedback, see this guide.
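Here's a minimal sketch of what that can look like, using the SDK's get_current_run_tree() helper to look up the current run from inside a @traceable function (the feedback key, score, and "generation" step are placeholders):

from langsmith import Client, traceable, get_current_run_tree

client = Client()

@traceable
def generate_answer(question: str) -> dict:
    # Look up the run for this call while it is still executing.
    run = get_current_run_tree()
    # Attach feedback before the run (or its parent trace) completes.
    # Passing trace_id= keeps this non-blocking, as described above.
    client.create_feedback(
        key="flagged",  # placeholder feedback key
        score=0,
        run_id=run.id,
        trace_id=run.trace_id,
    )
    return {"answer": question.upper()}  # stand-in for a real generation step

generate_answer("what does langsmith do?")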