Class PromptTemplate<RunInput, PartialVariableName>

Schema to represent a basic prompt for an LLM.

Example

import { PromptTemplate } from "langchain/prompts";

const prompt = new PromptTemplate({
  inputVariables: ["foo"],
  template: "Say {foo}",
});

Type Parameters

  • RunInput extends InputValues = any

  • PartialVariableName extends string = any


Properties

PromptValueReturnType: StringPromptValue
inputVariables: Extract<keyof RunInput, string>[]

A list of variable names the prompt template expects

partialVariables: PartialValues<PartialVariableName>

Partial variables

template: string

The prompt template

templateFormat: "f-string" = "f-string"

The format of the prompt template. Currently only 'f-string' is supported

Default Value

'f-string'

validateTemplate: boolean = true

Whether to try validating the template on initialization

Default Value

true

outputParser?: BaseOutputParser<unknown>

How to parse the output of calling an LLM on this formatted prompt

Methods

  • format: Formats the prompt template with the provided values.

    Parameters

    • values: TypedPromptInputValues<RunInput>

      The values to be used to format the prompt template.

    Returns Promise<string>

    A promise that resolves to a string which is the formatted prompt.
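
    With the "f-string" template format, formatting amounts to brace substitution. The following is a minimal, self-contained sketch of that behavior, not the library's implementation; formatFString is a hypothetical helper:

    ```typescript
    // Sketch of f-string style substitution as performed by format().
    // Hypothetical helper; not the library's actual code.
    function formatFString(template: string, values: Record<string, string>): string {
      return template.replace(/\{(\w+)\}/g, (_match, name) => {
        if (!(name in values)) {
          throw new Error(`Missing value for input variable "${name}"`);
        }
        return values[name];
      });
    }

    formatFString("Say {foo}", { foo: "hello" }); // "Say hello"
    ```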

  • mergePartialAndUserVariables: Merges partial variables and user variables.

    Parameters

    • userVariables: TypedPromptInputValues<RunInput>

      The user variables to merge with the partial variables.

    Returns Promise<InputValues<PartialVariableName | Extract<keyof RunInput, string>>>

    A Promise that resolves to an object containing the merged variables.
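
    As a rough sketch of the merge semantics (assumed, not the library's exact code): partial variables may be plain values or zero-argument functions resolved at format time, and user-supplied variables win on key collisions:

    ```typescript
    // Hypothetical sketch of merging partial and user variables.
    // Partials may be plain strings or zero-argument (possibly async) functions.
    type PartialValue = string | (() => string | Promise<string>);

    async function mergeVariables(
      partials: Record<string, PartialValue>,
      userVariables: Record<string, string>
    ): Promise<Record<string, string>> {
      const resolved: Record<string, string> = {};
      for (const [name, value] of Object.entries(partials)) {
        resolved[name] = typeof value === "function" ? await value() : value;
      }
      // User-supplied values take precedence on key collisions.
      return { ...resolved, ...userVariables };
    }
    ```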

  • streamLog: Streams all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state.

    Parameters

    • input: RunInput
    • Optional options: Partial<BaseCallbackConfig>
    • Optional streamOptions: Omit<LogStreamCallbackHandlerInput, "autoClose">

    Returns AsyncGenerator<RunLogPatch, any, unknown>
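
    The patch-folding idea can be illustrated with a small, self-contained sketch. This is simplified to top-level "add" ops only; real RunLogPatch ops follow the jsonpatch format (RFC 6902), and this helper is hypothetical:

    ```typescript
    // Sketch: folding a stream of jsonpatch "add" ops into a state object.
    // Simplified to top-level paths; not the library's code.
    type AddOp = { op: "add"; path: string; value: unknown };

    function applyPatches(ops: AddOp[]): Record<string, unknown> {
      const state: Record<string, unknown> = {};
      for (const { path, value } of ops) {
        state[path.slice(1)] = value; // strip the leading "/"
      }
      return state;
    }

    applyPatches([{ op: "add", path: "/final_output", value: "Say hello" }]);
    // { final_output: "Say hello" }
    ```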

  • fromExamples: Take examples in list format with prefix and suffix to create a prompt.

    Intended to be used as a way to dynamically create a prompt from examples.

    Parameters

    • examples: string[]

      List of examples to use in the prompt.

    • suffix: string

      String to go after the list of examples. Should generally set up the user's input.

    • inputVariables: string[]

      A list of variable names the final prompt template will expect

    • exampleSeparator: string = "\n\n"

      The separator to use in between examples

    • prefix: string = ""

      String that should go before any examples. Generally includes instructions.

    Returns PromptTemplate<any, any>

    The final prompt template generated.
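
    The assembled template is, roughly, the prefix, the examples, and the suffix joined by the example separator. A sketch under that assumption (buildExamplesTemplate is a hypothetical helper, not the library's code):

    ```typescript
    // Sketch of how fromExamples might assemble its template string,
    // mirroring the documented parameters and defaults.
    function buildExamplesTemplate(
      examples: string[],
      suffix: string,
      exampleSeparator = "\n\n",
      prefix = ""
    ): string {
      return [prefix, ...examples, suffix].join(exampleSeparator);
    }

    buildExamplesTemplate(
      ["Q: What is 2+2?\nA: 4"],
      "Q: {question}\nA:",
      "\n\n",
      "Answer the question."
    );
    // "Answer the question.\n\nQ: What is 2+2?\nA: 4\n\nQ: {question}\nA:"
    ```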

  • fromTemplate: Load a prompt template from an f-string template.

    Type Parameters

    • RunInput extends InputValues = Symbol

    • T extends string = string

    Parameters

    • template: T
    • __namedParameters: Omit<PromptTemplateInput<RunInput, string>, "template" | "inputVariables"> = {}

    Returns PromptTemplate<RunInput extends Symbol
        ? ParamsFromFString<T>
        : RunInput, any>
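
    fromTemplate infers the input variables from the template string itself, which is what ParamsFromFString<T> captures at the type level. A runtime sketch of that inference, assuming variables are single-brace {name} placeholders:

    ```typescript
    // Sketch: inferring input variable names from an f-string template.
    // Hypothetical helper; duplicates appear once, in order of first use.
    function extractInputVariables(template: string): string[] {
      const names = new Set<string>();
      for (const match of template.matchAll(/\{(\w+)\}/g)) {
        names.add(match[1]);
      }
      return [...names];
    }

    extractInputVariables("Tell me a {adjective} joke about {topic}");
    // ["adjective", "topic"]
    ```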

Generated using TypeDoc