n8n Basic LLM Chain Node: Your First Step Into AI Workflows

The Basic LLM Chain is one of n8n’s foundational AI nodes — a clean, direct way to send a prompt to a language model and get a response back, without any of the routing and decision-making complexity of an AI agent. If you want to add AI-generated text to a workflow — summarize content, classify data, rewrite copy, extract structured information, or answer a question — the Basic LLM Chain is often the right tool.

In this guide we cover how the Basic LLM Chain works, how to configure it, prompt design tips, chaining it with other nodes, and real examples where it fits perfectly in production workflows.

What Is the Basic LLM Chain?

The Basic LLM Chain node sends a prompt to a connected language model and returns the model’s text response as output. Unlike the AI Agent node — which can decide to call tools, loop through reasoning steps, and maintain memory across turns — the Basic LLM Chain is stateless and single-shot: one prompt in, one response out. This simplicity is exactly what makes it the right choice for many automation tasks.

You connect a language model (via a sub-node like the OpenAI Chat Model, Anthropic Claude, or any other supported model) to the Basic LLM Chain. The chain node handles formatting the prompt, calling the model API, and returning the output as a standard n8n item that flows to the next node. No agent loop, no tool calling, no memory management — just clean prompt-to-response processing.

Setting Up the Basic LLM Chain

Add the Basic LLM Chain node to your workflow, then connect a Chat Model sub-node to it by clicking the model connection point. The Chat Model sub-node is where you configure which AI provider and model to use (GPT-4o, Claude 3.5, Gemini, etc.) and authenticate with your API key. The Basic LLM Chain node itself is where you write your prompt.

The prompt field supports n8n expressions, so you can dynamically insert values from earlier nodes in the workflow. For example, if a previous node fetched a customer email, you can reference it in the prompt as {{ $json.email_body }} to have the model process that specific content. This dynamic prompt construction is what makes the Basic LLM Chain powerful — you’re not sending the same static prompt every time; the prompt adapts to each item flowing through the workflow.
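To make the substitution mechanics concrete, here is a small Python sketch that mimics how a `{{ $json.field }}` placeholder gets resolved against the current item. This is an illustration only, not n8n's actual expression engine, which supports a much richer syntax:

```python
import re

def render_prompt(template: str, item: dict) -> str:
    """Replace {{ $json.field }} placeholders with values from the current
    item, mimicking (in simplified form) how n8n resolves expressions
    per item flowing through the workflow."""
    def resolve(match: re.Match) -> str:
        field = match.group(1)
        return str(item.get(field, ""))
    return re.sub(r"\{\{\s*\$json\.(\w+)\s*\}\}", resolve, template)

template = "Summarize this customer email in two sentences:\n\n{{ $json.email_body }}"
item = {"email_body": "Hi, my order #4512 arrived damaged..."}
print(render_prompt(template, item))
```

The key point: the template is written once, but every item that passes through produces a different, fully resolved prompt.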

Prompt Design for the Basic LLM Chain

The quality of the Basic LLM Chain’s output depends almost entirely on prompt quality. A few principles make a significant difference. First, be specific about the task — “summarize this” produces generic results; “summarize this in 3 bullet points, focusing on action items and decisions, suitable for someone who wasn’t at the meeting” produces exactly what you need. Second, specify the output format — if downstream nodes expect JSON, tell the model to output JSON. If you need a plain string, say so. Consistent output format makes the next node’s job much easier.

Third, include examples when the task is nuanced — a prompt that shows one or two input-output pairs (few-shot prompting) dramatically improves accuracy on classification and extraction tasks. Fourth, use the system message field (if your model supports it) for persistent instructions about role and behavior, and reserve the human message for the dynamic, per-item content. This separation keeps prompts clean and reusable.
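The few-shot and system/human separation ideas can be sketched as a message-building function. The labels and example tickets below are hypothetical; the structure (persistent instructions in the system message, worked examples as prior turns, the dynamic per-item content last) is the part that carries over:

```python
SYSTEM_MESSAGE = (
    "You are a support-ticket classifier. "
    "Reply with exactly one label: billing, technical, or general."
)

# One or two worked input->output pairs (few-shot examples) anchor the
# model's behavior far better than instructions alone.
FEW_SHOT = [
    ("I was charged twice this month.", "billing"),
    ("The app crashes when I open settings.", "technical"),
]

def build_messages(ticket_text: str) -> list[dict]:
    """Assemble a chat-style message list: reusable instructions and
    examples first, the dynamic per-item content as the final turn."""
    messages = [{"role": "system", "content": SYSTEM_MESSAGE}]
    for text, label in FEW_SHOT:
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": ticket_text})
    return messages
```

Because the system message and examples are fixed, only the last message changes per item, which keeps the prompt cheap to maintain and easy to reuse across workflows.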

Chaining Multiple LLM Calls

One of the most powerful patterns with the Basic LLM Chain is chaining multiple calls together — the output of one chain becomes input to the next. For example: a first chain extracts structured data from raw text, a second chain validates and corrects that extraction, and a third chain formats the result for a specific audience. Each step is simple and focused, but together they accomplish complex multi-stage AI processing.

This is more reliable than trying to do everything in one massive prompt. Smaller, focused prompts with clear output formats are easier to debug, easier to test, and more consistent. When a chain produces bad output, you know exactly which step failed and what the input was, rather than having to untangle a complex single prompt that tried to do too much at once.
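The extract-validate-format pipeline above can be sketched as three small functions composed in sequence. The `call_model` parameter stands in for whatever chat-model call your workflow uses (a placeholder, not a real API), and the prompts are illustrative:

```python
from typing import Callable

# call_model is a stand-in for a real chat-model API call.
ModelFn = Callable[[str], str]

def extract(raw_text: str, call_model: ModelFn) -> str:
    """Stage 1: pull structured fields out of raw text."""
    return call_model("Extract name, date, and amount as JSON:\n" + raw_text)

def validate(extracted_json: str, call_model: ModelFn) -> str:
    """Stage 2: a second, focused pass that checks the first one."""
    return call_model(
        "Fix any missing or malformed fields in this JSON and "
        "return the corrected version:\n" + extracted_json
    )

def format_summary(clean_json: str, call_model: ModelFn) -> str:
    """Stage 3: reshape the validated data for its audience."""
    return call_model("Write a one-line summary of this record:\n" + clean_json)

def pipeline(raw_text: str, call_model: ModelFn) -> str:
    # Each stage is a small, testable prompt; when output goes wrong,
    # you can inspect exactly which stage failed and what it received.
    return format_summary(validate(extract(raw_text, call_model), call_model), call_model)
```

Each stage can be tested in isolation by passing a fake `call_model`, which is exactly the debugging advantage the chained approach buys you.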

When to Use Basic LLM Chain vs. AI Agent

The choice between Basic LLM Chain and AI Agent comes down to whether the task requires dynamic decision-making. If you know exactly what the model needs to do — transform this text, classify this input, extract these fields — use the Basic LLM Chain. It’s faster, cheaper (one API call per item instead of an agent’s multi-step loop), more predictable, and easier to debug.

Use the AI Agent when the task requires the model to decide what steps to take, call external tools based on the situation, or handle open-ended conversations where the right action depends on context. Many sophisticated AI workflows use both: an agent for high-level orchestration and decision-making, calling Basic LLM Chain nodes as sub-components for specific, well-defined processing steps within those decisions.

Practical Use Cases

The Basic LLM Chain earns its place in many real workflows. In a customer support triage pipeline: incoming support tickets pass through a chain that classifies the issue type and urgency, with the classification used to route the ticket to the right team automatically. In a content enrichment workflow: product descriptions fetched from a database pass through a chain that rewrites them in brand voice before publishing to a website.

In a data extraction pipeline: unstructured email bodies pass through a chain that extracts key fields (names, dates, amounts, action items) as structured JSON for database insertion. In a translation workflow: customer feedback collected in multiple languages passes through a chain that translates each entry to English before sentiment analysis. In a report generator: raw metric data from an analytics API passes through a chain that writes a human-readable summary paragraph, which then gets included in an automated weekly report email. Each of these is a clean, single-purpose LLM chain doing one job well.

Join Our AI Community

Get access to the JSON workflow files from this article, weekly live sessions, and a community of builders working through the same challenges. Everything is free and the community is active.
