n8n Nodes List: Every Core AI Node You Should Know

The n8n Nodes List Most Beginners Ignore

Most n8n users over-rely on the AI agent node. You see it everywhere on YouTube, in community workflows, and across social media — someone wants to classify text, so they reach for the AI agent. They want to extract a name from a document, so they spin up an AI agent. The problem is that n8n has a full library of purpose-built AI nodes, and understanding the complete n8n nodes list can dramatically simplify your workflows while making them more reliable and accurate.

In this guide, we walk through the most important n8n nodes for AI tasks — the core n8n nodes that every serious automation builder should understand. These are not obscure utilities buried in the documentation. They are production-ready nodes with specific jobs, and using the right one for the right task is what separates clean workflows from messy ones.

The AI Agent Node: Powerful But Overused

The AI agent is the first node on any n8n nodes list for a reason — it is the Swiss Army knife of AI automation. You can attach a chat model, plug in memory, connect dozens of tools, and define a custom output format. For complex, multi-step orchestration where the model needs to reason across multiple steps or use external tools, the AI agent is the right choice.

Where it goes wrong is in simpler scenarios. When someone just needs a language model to process text and produce output — no memory, no tools — the AI agent adds unnecessary complexity. The agent also requires a system message and a prompt for best results, though n8n hides the system message field by default. If you are using an AI agent and you are not filling out the system message field, you are leaving accuracy on the table.

The AI agent also now integrates with Chat Hub, n8n’s latest feature for building chat interfaces directly inside a workflow. For any chat-based use case, you will likely see the AI agent combined with a chat trigger and a streaming response.

Basic LLM Chain: The Overlooked Alternative

Directly below the AI agent in terms of complexity is the basic LLM chain. This is the stripped-down version — you get a model, a prompt, and an optional output parser, but no memory and no tools. That is exactly the point.

If you are building a workflow where you send text to a language model, get a response, and move on, the basic LLM chain is the correct choice from the n8n nodes list. Using an AI agent in this scenario is like bringing a forklift to move a chair. The basic LLM chain is leaner, clearer, and less likely to produce unexpected behavior. Choose the node that matches what you actually need.
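To make the "model, prompt, optional output parser" shape concrete, here is a minimal sketch of that three-step pipeline in Python. The `fake_model` function is a stand-in assumption — a real chain would call an actual LLM at that step — and all names here are illustrative, not n8n internals.

```python
import json

def prompt_template(text: str) -> str:
    # Prompt construction: the only configuration a basic chain really needs
    return f"Summarize the following text in one sentence:\n\n{text}"

def fake_model(prompt: str) -> str:
    # Stand-in for the chat model call; echoes part of the input as JSON
    body = prompt.split("\n\n", 1)[1]
    return json.dumps({"summary": body[:60]})

def output_parser(raw: str) -> dict:
    # The optional structured-output step: parse the reply into a dict
    return json.loads(raw)

def basic_llm_chain(text: str) -> dict:
    # Prompt -> model -> parser, with no memory and no tools in between
    return output_parser(fake_model(prompt_template(text)))

result = basic_llm_chain("n8n ships many purpose-built AI nodes.")
```

The point of the sketch is what is absent: no conversation memory, no tool registry, no agent loop — just a straight line from input to parsed output.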

Text Classifier: Replace the Agent for Classification Tasks

The text classifier is one of the most underused core n8n nodes. Its job is exactly what the name says — it classifies incoming text into predefined categories. You define the categories, write a description for each one, and the node routes your workflow accordingly.

A practical example: a support ticket comes in. Is it a bug report or a feature request? The text classifier handles this in a single node. You can define as many categories as needed, though accuracy decreases as categories multiply. The node outputs directly to a branching structure, similar to a switch statement, so your workflow can take different paths based on the classification result.
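The routing behavior described above can be sketched as follows. The `classify` function here is a crude keyword stub standing in for the LLM call — on the real node, the model reads your category descriptions and decides — but the category-to-branch mapping mirrors the switch-like output structure.

```python
CATEGORIES = {
    # Category name -> description, as you would define them on the node
    "bug_report": "The user describes something broken or failing",
    "feature_request": "The user asks for new functionality",
}

def classify(text: str) -> str:
    # Stand-in for the LLM call: the real node sends the text plus the
    # category descriptions to a chat model and returns its choice
    lowered = text.lower()
    if any(w in lowered for w in ("crash", "error", "broken", "fails")):
        return "bug_report"
    return "feature_request"

def route(ticket: str) -> str:
    # Each category maps to a separate output branch, like a switch
    handlers = {
        "bug_report": "escalate-to-engineering",
        "feature_request": "add-to-roadmap",
    }
    return handlers[classify(ticket)]
```

A call like `route("The export button crashes on click")` would land on the bug-report branch, while a feature wish takes the roadmap branch.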

One important note from the n8n nodes list: if you need higher accuracy for a specific domain — medical text, legal documents, sentiment-heavy content — Hugging Face provides fine-tuned models that outperform general-purpose LLMs for classification. You would connect those through an HTTP request node, but the performance improvement can be significant.

Sentiment Analysis: Positive, Negative, or Neutral at Scale

Sentiment analysis works similarly to the text classifier but focuses specifically on the emotional tone of text. By default it provides positive, neutral, and negative branches, though you can customize the categories. This node is particularly useful for monitoring brand mentions, analyzing customer feedback, or filtering support messages before routing them to different queues.

Like text classification, the sentiment analysis node runs on top of a large language model, which means the system prompt matters. Writing a clear, specific system prompt dramatically improves accuracy. And again, if you are working in a domain with specialized vocabulary, Hugging Face models tuned for sentiment can outperform the default LLM approach.

Information Extractor: Structured Data Without Regex

The information extractor is one of the most practical entries in the n8n nodes list. It pulls specific data points from unstructured text and returns them in a structured format. You define a schema — field names, types, and descriptions of what the model should look for — and the node does the extraction.

Before AI-powered extraction, pulling a salary figure or a person’s name from a block of text required complex regular expressions. The information extractor handles this naturally. You simply define the field, set the type, write a brief description, and the node returns clean, structured output. This is especially useful in document processing pipelines — pulling patient data from medical notes, extracting contract terms, or parsing email content into structured fields.
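As a rough illustration of the contract — a schema in, structured fields out — here is a sketch. The schema dictionary mimics the field/type/description triples you define on the node; the regex-based `extract` is an assumption that fakes what the model does, precisely the brittle approach the node lets you avoid.

```python
import re

# The shape of what you define on the node: field name, type, description
SCHEMA = {
    "name": {"type": "string", "description": "Full name of the person"},
    "salary": {"type": "number", "description": "Annual salary in USD"},
}

def extract(text: str) -> dict:
    # Stand-in extractor: the real node hands the schema and text to an
    # LLM; these regexes fake the behaviour for one narrow input shape
    name = re.search(r"(?:Mr\.|Ms\.|Dr\.)\s+([A-Z][a-z]+ [A-Z][a-z]+)", text)
    salary = re.search(r"\$([\d,]+)", text)
    return {
        "name": name.group(1) if name else None,
        "salary": int(salary.group(1).replace(",", "")) if salary else None,
    }

record = extract("Dr. Jane Smith accepted an offer of $120,000 per year.")
```

Notice how narrow the regexes are — they break on "Jane Smith, MD" or a salary written in words. The LLM-backed node absorbs that variation, which is exactly its value.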

Summarization Chain: Map Reduce and Refine Strategies

The summarization chain handles one of the most common AI automation tasks: taking a large body of text and producing a concise summary. What makes this node distinctive in the n8n nodes list is that it exposes different summarization strategies depending on the size and nature of your content.

The simplest strategy, “stuff,” passes all the text to the model at once. This works for shorter documents. For longer content, you have two more powerful options. Map reduce summarizes each chunk of the document individually and then summarizes those summaries together — efficient and scalable. Refine takes a different approach: it summarizes the first chunk, then reads the next chunk and decides whether to update the summary, continuing through the full document. Refine tends to produce more coherent results for narrative content, while map reduce is faster for factual documents. The n8n team recommends map reduce as the default, with refine as a quality upgrade when you have time to spare.
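The difference between the two strategies is easiest to see in code. This is a minimal sketch, assuming a stubbed `summarize` function in place of the LLM call; the control flow is the point, not the stub.

```python
def summarize(text: str) -> str:
    # Stand-in for the LLM summarization call: keep the first sentence
    return text.split(". ")[0].rstrip(".") + "."

def map_reduce(chunks: list[str]) -> str:
    # Map: summarize every chunk independently (parallelizable)
    partials = [summarize(c) for c in chunks]
    # Reduce: summarize the concatenated partial summaries
    return summarize(" ".join(partials))

def refine(chunks: list[str]) -> str:
    # Fold each new chunk into the running summary, one sequential
    # call at a time -- slower, but each step sees prior context
    summary = summarize(chunks[0])
    for chunk in chunks[1:]:
        summary = summarize(summary + " " + chunk)
    return summary
```

Map reduce makes one independent call per chunk plus a final reduce, so chunks can run in parallel; refine is strictly sequential, which is why it costs more time but keeps narrative threads intact.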

Q&A Chain and Guardrails: RAG and Security

The Q&A chain is n8n’s built-in approach to retrieval-augmented generation. Rather than answering from model knowledge alone, the Q&A chain pulls relevant content from a vector store — Pinecone, for example — and grounds the model’s response in that retrieved context. This is the right node when you need answers tied to a specific corpus of documents rather than general LLM knowledge. Setting it up requires a few additional nodes: a vector store, an embeddings model, and a document loader. The n8n team provides example templates to get started.
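The retrieve-then-ground pattern behind the Q&A chain can be sketched in a few lines. This toy version scores relevance by word overlap — a loud simplification; the real setup uses an embeddings model and a vector store such as Pinecone — but the prompt-assembly step at the end is the essence of RAG.

```python
DOCUMENTS = [
    "n8n workflows are stored as JSON and can be exported.",
    "The Q&A chain grounds answers in retrieved documents.",
]

def score(query: str, doc: str) -> int:
    # Toy relevance score: shared words between query and document.
    # A real pipeline compares embedding vectors instead.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str) -> str:
    # Pick the single most relevant document from the corpus
    return max(DOCUMENTS, key=lambda d: score(query, d))

def grounded_prompt(query: str) -> str:
    # Inject the retrieved context ahead of the question, so the model
    # answers from the corpus rather than from general knowledge
    context = retrieve(query)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = grounded_prompt("How are n8n workflows stored?")
```

Everything the model needs arrives inside the prompt — that is what "grounding" means in practice.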

Guardrails serve a completely different purpose — they protect your workflow from misuse. You can place a guardrails node before or after other AI nodes to screen for PII, check for jailbreak attempts, filter NSFW content, enforce topical alignment, or reject specific keywords. For any production chatbot, guardrails are not optional. A customer-facing automation without input and output validation is a security risk. The node offers both sanitization (cleaning the input) and violation detection (stopping the workflow if a rule is triggered), with pass and fail branches for handling each outcome.
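The two behaviors — sanitization versus violation detection with pass/fail branches — can be sketched like this. The patterns and the jailbreak marker list are deliberately crude assumptions; a production guardrail uses far more robust detection than a keyword check.

```python
import re

# Crude jailbreak marker -- a real guardrail uses stronger detection
BLOCKED = ("ignore previous instructions",)

def sanitize(text: str) -> str:
    # Sanitization branch: scrub likely PII instead of rejecting input
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)
    return re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)

def check(text: str) -> dict:
    # Violation branch: stop the workflow when a rule is triggered,
    # mirroring the node's pass and fail outputs
    violated = any(marker in text.lower() for marker in BLOCKED)
    return {"branch": "fail" if violated else "pass",
            "text": sanitize(text)}

result = check("Contact me at jane@example.com about ticket 42")
```

Clean input flows down the pass branch with PII masked; a flagged input takes the fail branch, where your workflow can log, alert, or refuse.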

Evaluation Nodes and Model-Specific Integrations

The evaluation suite in n8n is a set of nodes for measuring workflow accuracy. You can evaluate outputs based on correctness, helpfulness, string similarity, categorization accuracy, or custom metrics. This is particularly useful for classification workflows where you want to track how often the model gets the right answer across a test set. Setting up evaluation requires a trigger node specific to evaluations and a data source — either a data table or Google Sheets — to pull your test cases from.
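Two of the metrics mentioned above are simple enough to sketch directly. The function names are illustrative, not n8n's, but the math is what any such evaluation boils down to: a normalized similarity score and a plain accuracy ratio over a test set.

```python
from difflib import SequenceMatcher

def string_similarity(expected: str, actual: str) -> float:
    # Normalized similarity in [0, 1] between expected and actual output
    return SequenceMatcher(None, expected, actual).ratio()

def categorization_accuracy(cases: list[tuple[str, str]]) -> float:
    # cases: (expected, predicted) category pairs from a test set, e.g.
    # rows pulled from a data table or a Google Sheet
    correct = sum(1 for expected, predicted in cases if expected == predicted)
    return correct / len(cases)

acc = categorization_accuracy([
    ("bug_report", "bug_report"),
    ("feature_request", "bug_report"),   # one misclassification
    ("feature_request", "feature_request"),
    ("bug_report", "bug_report"),
])
```

With one miss out of four cases, `acc` comes out to 0.75 — the kind of number you track across prompt revisions to see whether accuracy is actually improving.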

Beyond the model-agnostic nodes in the core n8n nodes list, there are also model-specific integration nodes for Anthropic, Google Gemini, and OpenAI. These give you direct access to capabilities that go beyond basic text generation. The Anthropic node includes document analysis, file upload, image analysis, and prompt improvement. Gemini adds audio analysis, video analysis, and image generation. OpenAI offers sixteen distinct actions. These nodes are worth knowing when you need to go beyond what an LLM chain can do — for example, analyzing a PDF with Claude directly through the Anthropic node rather than building a custom HTTP pipeline.

HTTP Request: Connecting to APIs Not in the n8n Nodes List

The final entry worth discussing is not an AI node at all, but it belongs in any complete review of the n8n nodes list: the HTTP request node. Not every API has a native n8n integration. When you need to connect to a service that does not appear in the node library — a niche AI provider, a proprietary data source, a specialized model endpoint — the HTTP request node is how you get there.

Mastering the HTTP request node is essential for any serious n8n builder. There are dedicated guides covering the basics, REST API calls, and pagination. If a service has an API, the HTTP request node can reach it. This is particularly relevant as new AI tools and model providers launch continuously — waiting for a native node integration is not always an option.
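Of the three skills mentioned, pagination trips people up most often, so here is the standard loop in sketch form. The `fetch_page` stub stands in for a real HTTP GET (something like `requests.get(url, params={"page": page})`); the loop structure is what carries over to the node's pagination settings.

```python
# Fake paginated API: three pages of results, then nothing
PAGES = {1: ["a", "b"], 2: ["c", "d"], 3: ["e"]}

def fetch_page(page: int) -> list[str]:
    # Stand-in for the actual HTTP request to a paginated endpoint
    return PAGES.get(page, [])

def fetch_all() -> list[str]:
    # The standard pagination loop: request pages until one comes back
    # empty, accumulating results along the way
    results, page = [], 1
    while True:
        batch = fetch_page(page)
        if not batch:
            break
        results.extend(batch)
        page += 1
    return results

items = fetch_all()
```

Whether the API paginates by page number, offset, or cursor token, the shape is the same: request, append, advance, stop on an empty (or token-less) response.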

Which n8n Nodes Should You Learn First?

If you are building your understanding of the n8n nodes list from scratch, start with the nodes that replace the AI agent in common scenarios. The basic LLM chain handles most simple generation tasks. The text classifier and sentiment analysis cover most classification and routing needs. The information extractor handles structured extraction. Together, these four core n8n nodes replace the AI agent in the majority of real-world workflows.

From there, add the summarization chain for document processing, the Q&A chain for RAG-based applications, and guardrails for anything customer-facing. Learn the model-specific nodes for tasks that require direct integration with Anthropic, Gemini, or OpenAI capabilities. And make sure the HTTP request node is always in your toolkit for everything else.

The n8n nodes list is longer than most people realize — and using the right node for each job is what makes workflows maintainable, accurate, and production-ready.

Join Our AI Community

Get access to the JSON workflow files from this article, weekly live sessions, and a community of builders working through the same challenges. Everything is free and the community is active.
