Class SimpleSequentialChain

A simple chain in which the single string output of one chain is fed directly into the next.

Example

import { SimpleSequentialChain, LLMChain } from "langchain/chains";
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";

// This is an LLMChain to write a synopsis given a title of a play.
const llm = new OpenAI({ temperature: 0 });
const template = `You are a playwright. Given the title of a play, it is your job to write a synopsis for that title.

Title: {title}
Playwright: This is a synopsis for the above play:`;
const promptTemplate = new PromptTemplate({ template, inputVariables: ["title"] });
const synopsisChain = new LLMChain({ llm, prompt: promptTemplate });


// This is an LLMChain to write a review of a play given a synopsis.
const reviewLLM = new OpenAI({ temperature: 0 });
const reviewTemplate = `You are a play critic from the New York Times. Given the synopsis of a play, it is your job to write a review for that play.

Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:`;
const reviewPromptTemplate = new PromptTemplate({ template: reviewTemplate, inputVariables: ["synopsis"] });
const reviewChain = new LLMChain({ llm: reviewLLM, prompt: reviewPromptTemplate });

const overallChain = new SimpleSequentialChain({ chains: [synopsisChain, reviewChain], verbose: true });
const review = await overallChain.run("Tragedy at sunset on the beach");
// The `review` variable now contains the resulting play review.
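The behavior the example relies on can be sketched in plain TypeScript, with no LangChain dependency: each chain maps a single string to a single string, and each output becomes the next chain's input. This is a minimal illustration of the pattern, not the library's actual implementation; `runSequence` and the stand-in chains are hypothetical names.

```typescript
// A chain here is just an async string-to-string function.
type StringChain = (input: string) => Promise<string>;

// Feed the output of each chain into the next, in array order,
// optionally trimming intermediate outputs (cf. the trimOutputs option).
async function runSequence(
  chains: StringChain[],
  input: string,
  trimOutputs = false
): Promise<string> {
  let current = input;
  for (const chain of chains) {
    current = await chain(current);
    if (trimOutputs) current = current.trim();
  }
  return current;
}

// Stand-ins for the synopsis and review LLMChains above.
const synopsis: StringChain = async (title) => `Synopsis of "${title}"`;
const review: StringChain = async (syn) => `Review of: ${syn}`;

async function main() {
  const result = await runSequence([synopsis, review], "Tragedy at sunset on the beach");
  console.log(result); // Review of: Synopsis of "Tragedy at sunset on the beach"
}
main();
```

Because every link in the sequence is a single-string-in, single-string-out function, the chains compose with no key mapping; that constraint is exactly what distinguishes `SimpleSequentialChain` from the more general `SequentialChain`.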

Properties

chains: BaseChain[]

Array of chains to run as a sequence. The chains are run in the order they appear in the array.

inputKey: string = "input"
outputKey: string = "output"
trimOutputs: boolean

Whether or not to trim the intermediate outputs.

verbose: boolean

Whether to print out response text.

callbacks?: Callbacks
memory?: BaseMemory
metadata?: Record<string, unknown>
tags?: string[]

Methods

  • streamLog — Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, retrievers, tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops describing how the state of the run has changed at each step, along with the final state of the run. Applying the jsonpatch ops in order reconstructs that state.

    Parameters

    Returns AsyncGenerator<RunLogPatch, any, unknown>
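Folding the streamed patches into an accumulated state can be sketched as follows. This is a simplified illustration that handles only top-level "add"/"replace" ops, not the full RFC 6902 JSON Patch spec that the real `RunLogPatch` ops follow; `applyOps` is a hypothetical helper.

```typescript
// A drastically simplified jsonpatch op: top-level path, add/replace only.
type JsonPatchOp = { op: "add" | "replace"; path: string; value: unknown };

// Fold a batch of ops into an accumulated state object.
function applyOps(
  state: Record<string, unknown>,
  ops: JsonPatchOp[]
): Record<string, unknown> {
  const next = { ...state };
  for (const { path, value } of ops) {
    // e.g. "/final_output" -> "final_output"
    next[path.replace(/^\//, "")] = value;
  }
  return next;
}

// Consuming the stream would then look roughly like this
// (hypothetical usage, assuming `overallChain` from the example above):
//
// let state: Record<string, unknown> = {};
// for await (const patch of overallChain.streamLog("Tragedy at sunset on the beach")) {
//   state = applyOps(state, patch.ops as JsonPatchOp[]);
// }
```

Each yielded patch describes only what changed, so applying the ops in arrival order is what reconstructs the full run state described above.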

Generated using TypeDoc