`default` project (though you can easily change that).
`LANGCHAIN_*` in other places. These are all equivalent; however, the best practice is to use `LANGSMITH_TRACING`, `LANGSMITH_API_KEY`, and `LANGSMITH_PROJECT`. The `LANGSMITH_PROJECT` flag is only supported in JS SDK versions >= 0.2.16; use `LANGCHAIN_PROJECT`
instead if you are using an older version.

Import `wrap_openai` (`from langsmith.wrappers import wrap_openai`) and use it to wrap the OpenAI client (`openai_client = wrap_openai(OpenAI())`).
What happens if you call it in the following way?
Import `traceable` (`from langsmith import traceable`) and use it to decorate the overall function (`@traceable`).
What happens if you call it in the following way?
`Metadata` tab when inspecting the run. It should look something like this:
`@traceable(metadata={"llm": "gpt-4o-mini"})` to the `rag` function.
Keeping track of metadata in this way assumes that it is known ahead of time. This is fine for LLM types, but less desirable for other types of information - like a User ID. In order to log information like that, we can pass it in at run time, along with the run ID.
`Monitor` tab in a project, you will see a series of monitoring charts. Here we track many LLM-specific statistics - number of traces, feedback, time-to-first-token, etc. You can view these over time across a few different time bins.
`llm`. We can group the monitoring charts by *any* metadata attribute and instantly get grouped charts over time. This allows us to experiment with different LLMs (or prompts, or other settings) and track their performance over time.
In order to do this, we just need to click on the `Metadata` button at the top. This will give us a drop-down of attributes to group by: