The traced function's inputs must contain the key messages with a list of dictionaries/objects. Each dictionary/object must contain the keys role and content with string values. The output must return an object that, when serialized, contains the key choices with a list of dictionaries/objects. Each must contain the key message with a dictionary/object that contains the keys role and content with string values.
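As a minimal sketch of these shapes (my_chat_model and its echo reply are hypothetical placeholders; in a real application the function would call an actual model and be traced by LangSmith):

```python
# A hypothetical chat-style wrapper. The input/output shapes below are the
# ones described above; the echo reply stands in for a real model call.
def my_chat_model(messages: list) -> dict:
    # Inputs: each message is a dict with string "role" and "content" values.
    last_user_message = messages[-1]["content"]
    reply = f"You said: {last_user_message}"  # placeholder response
    # Outputs: a "choices" list of dicts, each holding a "message" object.
    return {
        "choices": [
            {"message": {"role": "assistant", "content": reply}}
        ]
    }

result = my_chat_model([{"role": "user", "content": "Hello"}])
print(result["choices"][0]["message"]["content"])  # You said: Hello
```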
To make your custom LLM traces appear well-formatted in the LangSmith UI, your trace inputs and outputs must conform to a format LangSmith recognizes:
Inputs must be a list of messages in OpenAI-compatible format, represented as dictionaries/objects. Each message must contain the keys role and content. Messages with the "assistant" role may optionally contain tool_calls. These tool_calls may be in OpenAI format or LangChain’s format. Alternatively, inputs can be a dictionary/object with a "messages" key whose value is a list of messages in the above format. Inputs may also contain a key tools with a list of tools for the model to call.

Outputs are accepted in any of the following formats:
- A dictionary/object that contains the key choices with a value that is a list of dictionaries/objects. Each dictionary/object must contain the key message, which maps to a message object with the keys role and content.
- A dictionary/object that contains the key message with a value that is a message object with the keys role and content.
- A dictionary/object that contains the keys role and content.

You can also provide the following metadata
fields to help LangSmith identify the model, which, if recognized, LangSmith will use to automatically calculate costs. To learn more about how to use the metadata fields, see this guide.
- ls_provider: The provider of the model, e.g. “openai”, “anthropic”, etc.
- ls_model_name: The name of the model, e.g. “gpt-4o-mini”, “claude-3-opus-20240307”, etc.

If ls_model_name is not present in extra.metadata, other fields from extra.metadata might be used for estimating token counts. The following fields are used, in order of precedence:
1. metadata.ls_model_name
2. inputs.model
3. inputs.model_name

LangSmith calculates token counts automatically using the ls_model_name provided. It also calculates costs automatically by using the model pricing table. To learn how LangSmith calculates token-based costs, see this guide.
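The precedence order above can be sketched as a small helper (resolve_model_name is illustrative only, not LangSmith's actual implementation):

```python
# Illustrative sketch of the documented precedence for determining which
# model name is used for token counting: metadata.ls_model_name first,
# then inputs.model, then inputs.model_name.
def resolve_model_name(metadata: dict, inputs: dict):
    if metadata.get("ls_model_name"):
        return metadata["ls_model_name"]
    if inputs.get("model"):
        return inputs["model"]
    return inputs.get("model_name")  # may be None if nothing is set

name = resolve_model_name({"ls_model_name": "gpt-4o-mini"}, {"model": "gpt-4o"})
# metadata.ls_model_name wins over inputs.model
```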
However, many models already include exact token counts as part of the response. If you have this information, you can override the default token calculation in LangSmith in one of two ways:
1. Set the usage_metadata field on the run’s metadata.
2. Set the usage_metadata field in your traced function outputs.

The second approach requires langsmith>=0.3.43 (Python) and langsmith>=0.3.30 (JS/TS). Add a usage_metadata key to the function’s response to set manual token counts and costs.
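A sketch of the second approach, assuming the provider reports exact token counts in its response (the wrapper name and the counts shown are illustrative; the input_tokens/output_tokens/total_tokens key names are an assumption about the expected usage_metadata shape):

```python
# Hypothetical chat wrapper that passes the provider's token counts through
# in a usage_metadata key, so they override the default estimation.
def my_chat_model(messages: list) -> dict:
    reply = "Hello!"  # placeholder for a real model call
    return {
        "choices": [
            {"message": {"role": "assistant", "content": reply}}
        ],
        # Counts are illustrative; use the exact values the provider returns.
        "usage_metadata": {
            "input_tokens": 27,
            "output_tokens": 4,
            "total_tokens": 31,
        },
    }

out = my_chat_model([{"role": "user", "content": "Hi"}])
```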
For completion-style models, inputs must contain the key prompt with a string value. Other inputs are also permitted. The output must return an object that, when serialized, contains the key choices with a list of dictionaries/objects. Each must contain the key text with a string value. The same rules for metadata and usage_metadata apply as for chat-style models.
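For example (my_completion_model and its canned continuation are hypothetical stand-ins for a real completion call):

```python
# Hypothetical completion-style wrapper: a string "prompt" in, and an
# OpenAI-style "choices" list of {"text": ...} objects out.
def my_completion_model(prompt: str) -> dict:
    completion = f"{prompt}... and so on."  # placeholder for a real model call
    return {"choices": [{"text": completion}]}

out = my_completion_model("Once upon a time")
print(out["choices"][0]["text"])  # Once upon a time... and so on.
```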