n8n Sentiment Analysis Node: Classify Text Tone in Your Workflows
The n8n sentiment analysis node lets you take any piece of text and automatically classify the emotional tone behind it — positive, negative, or neutral — without writing a single line of code. It is one of the most practical AI nodes in n8n for real-world automation: monitoring brand mentions, filtering support tickets, routing customer feedback, or analyzing social media comments at scale.
This guide covers the full range of settings and use cases for the sentiment analysis node, including the advanced approach using Hugging Face models for domain-specific accuracy that the default LLM-based approach cannot match.
How the n8n Sentiment Analysis Node Works
The sentiment analysis node takes an input text field and classifies it into sentiment categories. By default you get three branches: positive, neutral, and negative. Every item processed flows into exactly one branch based on the model’s classification, and your workflow can take different actions depending on which branch it lands in.
The node runs on top of a large language model, which means you can improve its accuracy by writing a specific system prompt. The default behavior without a system prompt works adequately for general text, but for any production use case — customer support messages, product reviews, social media posts — writing a clear system prompt that describes the context dramatically improves classification quality.
Key Settings: Categories, Detailed Results, and System Prompt
Three settings in the sentiment analysis node are worth understanding before you build with it. First, categories: the default positive, neutral, and negative can be customized or expanded. You can add additional sentiment categories beyond the default three, though accuracy decreases as you add more nuanced distinctions.
Second, include detailed results: when enabled, this adds two additional fields to each output — strength and confidence. Strength represents the intensity of the sentiment (a mildly positive response vs. a strongly positive one), while confidence reflects how certain the model is about its classification. Both are estimated values generated by the LLM and described in n8n’s documentation as rough indicators rather than precise measurements, but they are useful for filtering edge cases or flagging low-confidence classifications for human review.
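With detailed results enabled, an output item carries a structure along these lines. The field names below are illustrative, not guaranteed; check the actual output of the node in your n8n version before building expressions against it:

```json
{
  "text": "Loved the onboarding flow, setup took five minutes.",
  "sentimentAnalysis": {
    "category": "positive",
    "strength": 0.9,
    "confidence": 0.95
  }
}
```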
Third, system prompt: this is the single highest-leverage setting for improving accuracy. Describe the type of text you are analyzing, the domain it comes from, and any context the model needs to classify correctly.
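As an illustration, a system prompt for a support-ticket workflow might read something like this (the wording is a sketch; adapt it to your own product and audience):

```text
You are analyzing customer support messages for a B2B accounting SaaS.
Messages are often terse and technical. Treat bug reports written in a
neutral, factual tone as neutral, not negative. Classify a message as
negative only when the customer expresses frustration, disappointment,
or intent to cancel.
```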
Neutral, Positive, and Negative: What Flows Into Each Branch
The neutral branch handles text that is ambiguous, mixed, or neither clearly positive nor negative. A review that says “the video was okay, I wish it had gone into more detail” is a good example — it is not negative, but it is not enthusiastic either. The node will typically assign this a strength around 0.5 and a high confidence score because the neutrality is clear even if the sentiment is not strong.
Strong language in either direction produces high confidence scores. The node is more likely to struggle with sarcasm, irony, or domain-specific jargon: a phrase that is positive in one industry context might be neutral or negative in another. This is exactly where Hugging Face models become valuable.
Customizing Sentiment Categories
The default three categories cover most use cases, but you can add more by editing the categories field directly. Common extensions include “very positive” and “very negative” for finer-grained routing, or domain-specific labels like “frustrated,” “satisfied,” or “confused” for customer service pipelines.
The practical limit depends on your use case. Two to four categories work well with LLM-based classification. Beyond that, accuracy decreases noticeably — the distinctions between “slightly negative” and “moderately negative” are genuinely hard for a general LLM to make consistently. For fine-grained classification at scale, a specialized Hugging Face model trained for your specific domain will reliably outperform a general LLM juggling many custom categories.
Advanced: Hugging Face Models for Domain-Specific Sentiment
Hugging Face hosts thousands of sentiment analysis models fine-tuned on specific domains and data types. A model trained on Twitter data handles informal language, abbreviations, and internet slang far better than a general-purpose LLM. A model trained on financial news understands terms like “bullish” and “bearish” in context. A model trained on product reviews understands e-commerce language patterns.
In n8n, you access Hugging Face models through the HTTP request node. The process involves making a POST request to the Hugging Face Inference API with your text as the payload. The endpoint format is standard across models, so once you have set up the HTTP request node for one model, switching to a different Hugging Face model is just a URL change. The Twitter-specific sentiment model is a classic starting point — well-tested and widely used. For financial content or any specialized domain, searching Hugging Face with your topic plus “sentiment” and sorting by download count will surface the most popular, actively maintained options.
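The request the HTTP node makes can be sketched in Python like this. The model name is one commonly used Twitter-tuned example, and the response-parsing assumes the typical Inference API shape of a list of label/score pairs; verify both against the model card you actually pick:

```python
import json
import urllib.request

# Any hosted sentiment model slots in here; swapping models is a URL change.
# This Twitter-tuned model is a common starting point (assumption, not the
# only option).
HF_MODEL = "cardiffnlp/twitter-roberta-base-sentiment-latest"
HF_URL = f"https://api-inference.huggingface.co/models/{HF_MODEL}"


def classify(text: str, api_token: str) -> list[dict]:
    """POST the text to the Inference API and return the label/score list."""
    req = urllib.request.Request(
        HF_URL,
        data=json.dumps({"inputs": text}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        # Typical response shape: [[{"label": ..., "score": ...}, ...]]
        return json.loads(resp.read())[0]


def top_label(scores: list[dict]) -> str:
    """Pick the highest-scoring sentiment label from the API response."""
    return max(scores, key=lambda s: s["score"])["label"]
```

In n8n itself, the same thing is configured declaratively: the HTTP request node holds the URL and bearer token, and a small expression or Code node picks the top label from the JSON response.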
When to Use LLM Sentiment vs. Hugging Face
The n8n sentiment analysis node — using the built-in LLM approach — is the right starting point for most workflows. It is fast to set up, requires no additional API credentials beyond your existing language model connection, and works well for general-purpose text. For prototyping, internal tools, and use cases where absolute accuracy is not critical, it is the practical choice.
Hugging Face models are worth the extra setup when you have a specific domain, when you are processing high volumes of text where small accuracy improvements compound, or when you need performance that a general LLM cannot match. Test both approaches on a sample of your actual data before committing — the performance difference varies significantly depending on the domain and the specific models compared.
Building a Sentiment Routing Workflow in n8n
A typical sentiment analysis workflow follows a simple pattern: data source → sentiment analysis node → branch-specific actions. The data source might be a webhook receiving incoming messages, a schedule trigger pulling from a database, or an HTTP request fetching social media mentions. The node classifies each item and routes it to the appropriate branch. Each branch then takes the action appropriate to that sentiment — logging negative feedback to Slack, storing positive reviews for testimonials, or queuing neutral responses for human review.
For higher-volume or more critical workflows, adding a confidence threshold filter before the branch actions is worth considering. Items where the model’s confidence is below a threshold — say, 0.7 — can be routed to a manual review queue rather than being processed automatically. This gives you the speed of automation while keeping a human in the loop for the cases the model is genuinely uncertain about.
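The confidence-gated routing described above reduces to a few lines of logic. The field names here ("category", "confidence") are assumptions about the node's detailed-results output, used only to make the pattern concrete:

```python
REVIEW_THRESHOLD = 0.7  # below this, the item goes to a human


def route(item: dict) -> str:
    """Return the branch name an item should flow into."""
    if item["confidence"] < REVIEW_THRESHOLD:
        return "manual_review"
    return item["category"]  # "positive", "neutral", or "negative"


# A confident negative goes straight to its branch; an uncertain
# positive is held for manual review instead.
tickets = [
    {"category": "negative", "confidence": 0.95},
    {"category": "positive", "confidence": 0.55},
]
branches = [route(t) for t in tickets]
```

In a real workflow the equivalent check would live in an IF or Switch node placed between the sentiment analysis node and the branch actions.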
