This tutorial uses the SDK for prompt engineering. If you are interested in using the UI instead, read this guide.
1. Setup
First, install the required packages:
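A minimal setup sketch, assuming the Python SDK; langchain and langchain-openai are included here because the later steps use ChatPromptTemplate and an OpenAI chat model, but your required packages may differ:

```bash
pip install -U langsmith langchain langchain-openai
```

2. Create a prompt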
To create a prompt in LangSmith, define the list of messages you want in your prompt and then wrap them in a ChatPromptTemplate (available in both Python and TypeScript). Then all you have to do is call push_prompt (Python) or pushPrompt (TypeScript) to send your prompt to LangSmith!
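Here is a minimal Python sketch; the prompt name "my-prompt" and the "{question}" input variable are placeholders, not part of the tutorial:

```python
from langchain_core.prompts import ChatPromptTemplate
from langsmith import Client

# Assumes LANGSMITH_API_KEY is set in your environment.
client = Client()

# Define the list of messages; "{question}" is a template variable.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{question}"),
])

# Send the prompt to LangSmith under an identifier of your choice.
client.push_prompt("my-prompt", object=prompt)
```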
3. Test a prompt
To test a prompt, pull it from LangSmith, invoke it with the input values your LLM or application expects, and then call the model with the formatted result.
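A minimal Python sketch, continuing with the placeholder "my-prompt" prompt from above and assuming an OpenAI chat model via langchain-openai:

```python
from langchain_openai import ChatOpenAI
from langsmith import Client

# Assumes LANGSMITH_API_KEY and OPENAI_API_KEY are set in your environment.
client = Client()

# Pull the latest version of the prompt from LangSmith.
prompt = client.pull_prompt("my-prompt")

# Fill in the input values you want to test.
formatted = prompt.invoke({"question": "What is LangSmith?"})

# Call the model with the formatted messages.
model = ChatOpenAI(model="gpt-4o-mini")  # placeholder model name
response = model.invoke(formatted)
print(response.content)
```

4. Iterate on a prompt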
LangSmith makes it easy to iterate on prompts with your entire team. Members of your workspace can select a prompt to iterate on, and once they are happy with their changes, they can simply save it as a new commit. To improve your prompts:
- We recommend referencing the documentation provided by your model provider for best practices in prompt creation, such as Best practices for prompt engineering with the OpenAI API and Gemini's Introduction to prompt design.
- To help with iterating on your prompts in LangSmith, we've created Prompt Canvas, an interactive tool to build and optimize your prompts. Learn how to use Prompt Canvas.
Once you've made your changes, save the new version of the prompt using the same push_prompt (Python) or pushPrompt (TypeScript) methods as when you first created the prompt.
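For example, pushing an updated template to the same prompt identifier saves the changes as a new commit (again a sketch using the placeholder "my-prompt"):

```python
from langchain_core.prompts import ChatPromptTemplate
from langsmith import Client

# Assumes LANGSMITH_API_KEY is set in your environment.
client = Client()

# A hypothetical revision of the prompt.
updated_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Answer concisely."),
    ("user", "{question}"),
])

# Pushing to an existing identifier creates a new commit of that prompt.
client.push_prompt("my-prompt", object=updated_prompt)
```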
5. Next steps
- Learn more about how to store and manage prompts using the Prompt Hub in these how-to guides
- Learn more about how to use the playground for prompt engineering in these how-to guides