How to Use OpenRouter with n8n: Model Selector, Fallback Models, and More

If you’re running every AI task in your n8n workflow through the same model, you’re probably overpaying and taking on unnecessary risk. Simple tasks like extracting a date from text don’t need the same heavyweight model as complex document analysis. And if you’re locked into a single provider, a rate limit or outage can bring your entire workflow to a halt.

In this tutorial, we’ll cover three practical approaches to using multiple AI models in n8n — starting with OpenRouter as a unified gateway, then building a model selector to route tasks dynamically, and finally setting up fallback models so your workflows stay resilient even when a provider goes down.

What Is OpenRouter and Why Should n8n Users Care?

OpenRouter is a unified API interface that gives you access to hundreds of AI models — including Claude, GPT, Gemini, and many others — through a single API key. Instead of managing separate credentials for Anthropic, OpenAI, and Google, you authenticate once with OpenRouter and get access to everything.

The pitch is straightforward: better prices, better uptime, and no subscriptions. For n8n users specifically, this means you can switch between models without reconfiguring credentials every time, and you gain access to a massive catalog of models you might not have explored yet.

Setting Up Your OpenRouter Account and API Key

Getting started with OpenRouter is quick. Head to the OpenRouter website and click Get API Key. You can sign in with Google or GitHub, or create an account with an email address. After accepting the legal consent screen, you’ll be prompted to create your first API key.

Give your key a descriptive name (useful if you create separate keys for different projects), set a credit limit if you want a hard spending cap, and choose an expiration if needed. Once created, copy the key immediately — OpenRouter won’t show it to you again.

Back in n8n, go to your credentials and create a new OpenRouter credential by pasting in your API key. Label it clearly so you can identify it later, especially if you’re creating temporary keys for testing. Once saved, you’re ready to connect it to any workflow.

Approach 1: Using OpenRouter as Your Chat Model

The simplest way to use OpenRouter in n8n is to connect it directly as the chat model for an AI agent. To find it, open any AI agent node, click on the Chat Models option, and search for "OpenRouter." You'll see it listed under the advanced usage category.

Once connected, you can browse and select from hundreds of models in the settings panel. For example, Claude Opus 4.6 appears under the Anthropic section, GPT models show up under OpenAI, and Google Gemini models are available as well. The key advantage is that all of these models are accessible through the single API key you set up — no additional credentials required.

Each model also exposes configuration options like frequency penalty, max tokens, presence penalty, sampling temperature, and top-p. Keep in mind that not all options apply to every model — as model capabilities evolve rapidly, some settings may have no effect depending on which model you’ve selected.
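Under the hood, these settings map onto the fields of an OpenAI-compatible chat-completions request, which is the format OpenRouter accepts. The sketch below builds such a request body; the model slug and parameter values are illustrative, not recommendations.

```python
import json

# Sketch of an OpenRouter chat-completions request body (OpenAI-compatible
# schema). Swapping providers means changing only the "model" slug — the
# credential and the rest of the payload stay the same.
payload = {
    "model": "anthropic/claude-3.5-sonnet",  # illustrative slug from the catalog
    "messages": [
        {"role": "user", "content": "Extract the date from: 'Meet on 2024-06-01.'"}
    ],
    "temperature": 0.2,        # sampling temperature
    "top_p": 0.9,              # nucleus sampling
    "max_tokens": 256,         # cap on response length
    "frequency_penalty": 0.0,  # discourage verbatim repetition
    "presence_penalty": 0.0,   # discourage revisiting the same topics
}

print(json.dumps(payload, indent=2))
```

As noted above, some of these fields are silently ignored by models that don't support them, so treat them as hints rather than guarantees.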

The Power of a Single API Key

One of the most practical benefits of OpenRouter is credential simplification. In a standard n8n setup, each AI provider requires its own credential: Anthropic credentials for Claude, OpenAI credentials for GPT, Google credentials for Gemini. If you’re building workflows that might need to swap providers, that’s a lot of setup overhead.

With OpenRouter, you configure one credential and gain access to the entire catalog. Switching from Claude to Gemini to GPT is just a matter of selecting a different model from the dropdown — no new credentials, no new API keys, no additional configuration. For teams managing multiple workflows or experimenting with different models, this alone is a significant time saver.

OpenRouter also provides a model explorer on their website where you can filter by input type (text, images, audio, video), output type, context window size, and pricing tier. This is a great starting point when you’re trying to choose the right model for a specific task before committing to it in your workflow.
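The same kind of filtering can be done programmatically. The sketch below filters a hand-written sample catalog by context window and prompt price; the entries are made-up stand-ins for what OpenRouter's models endpoint returns, and the field names are assumptions for illustration.

```python
# Made-up sample of a model catalog; real catalog entries carry richer
# metadata (modalities, per-token pricing, context length, and so on).
catalog = [
    {"id": "anthropic/claude-3.5-sonnet", "context_length": 200_000, "prompt_price": 3.0},
    {"id": "openai/gpt-4o-mini", "context_length": 128_000, "prompt_price": 0.15},
    {"id": "google/gemini-flash-1.5", "context_length": 1_000_000, "prompt_price": 0.075},
]

def pick_models(catalog, min_context=0, max_prompt_price=float("inf")):
    """Return the ids of models matching a context-window and price filter."""
    return [
        m["id"]
        for m in catalog
        if m["context_length"] >= min_context and m["prompt_price"] <= max_prompt_price
    ]

# Models that can hold a large document in context:
print(pick_models(catalog, min_context=150_000))
# Cheap models for high-volume tasks:
print(pick_models(catalog, max_prompt_price=0.2))
```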

Approach 2: Using the Model Selector for Dynamic Routing

The model selector is a built-in n8n node that lets you choose which AI model to use based on the data flowing through your workflow. Instead of hardcoding a single model for every task, you define logic that routes different types of inputs to different models — all within the same AI agent.

To find it, open an AI agent node and scroll to the very bottom of the Chat Models list. You’ll see Model Selector at the bottom with the description: “Use this node to select one of the connected models to this node based on the workflow data.”

When you add a model selector, it appears as a larger oval connector on your canvas — similar to the human-in-the-loop node — visually indicating that it’s handling a decision rather than a direct connection. You can configure it to support anywhere from 2 to 10 different models simultaneously.

Configuring Model Selector Logic

Each model slot in the selector has its own condition set. If the conditions for model 1 are met, that model runs. If not, it falls through to model 2, and so on. The conditions support string, number, datetime, boolean, array, and object comparisons, and you can combine them with AND or OR logic.

A simple example from the video: route SQL queries to Claude (Anthropic) and everything else to GPT (OpenAI). The condition checks whether the input contains both “select” and “from” — basic indicators of a SQL query. If matched, Anthropic handles the request; otherwise OpenAI takes over.
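In code, that fall-through condition amounts to a simple check. The function below mimics the selector's logic; the model slugs are illustrative placeholders, not the exact ones from the video.

```python
def select_model(text: str) -> str:
    """Mimic the model-selector condition: SQL-looking input goes to an
    Anthropic model, everything else falls through to an OpenAI model."""
    lowered = text.lower()
    # Both keywords must appear — an AND condition, as configured in the node.
    if "select" in lowered and "from" in lowered:
        return "anthropic/claude-3.5-sonnet"
    return "openai/gpt-4o"

print(select_model("SELECT name FROM users WHERE id = 1"))  # routes to Anthropic
print(select_model("Summarize this meeting transcript"))    # routes to OpenAI
```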

In practice, your routing logic will depend on your specific use case. You might route code-related tasks to a model known for coding accuracy, summarization tasks to a faster, cheaper model, and complex reasoning tasks to a flagship model. The model selector makes this kind of tiered approach easy to implement without splitting your workflow into separate branches.

Approach 3: Fallback Models for Resilient Workflows

The third approach is the simplest to implement and one of the most valuable for production workflows: fallback models. The idea is straightforward — if your primary model provider experiences an outage or hits rate limits, your workflow automatically switches to a backup model instead of failing entirely.

To enable this in n8n, open an AI agent node and look for the Enable Fallback Model toggle. Once enabled, a second model slot appears below your primary chat model. Set your preferred model as the primary and your backup as the fallback.

For example, you might set Claude Sonnet 4.5 as your primary model and GPT-5 as your fallback. Under normal conditions, all requests go to Anthropic. If Anthropic is unavailable, n8n automatically routes to OpenAI without any manual intervention — your workflow continues running and your automations keep delivering results.
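The behavior behind the toggle is essentially try-the-primary, catch-the-failure, retry-on-the-fallback. The sketch below simulates it with a stubbed API call and a pretend Anthropic outage; the exception type and stub are invented for illustration.

```python
class ProviderDown(Exception):
    """Stand-in for a provider outage or rate-limit error."""

def call_model(model: str, prompt: str) -> str:
    # Stub for a real API call. For this demo we pretend every Anthropic
    # model is unavailable, so the fallback path gets exercised.
    if model.startswith("anthropic/"):
        raise ProviderDown(f"{model} unavailable")
    return f"[{model}] ok"

def run_with_fallback(prompt: str, primary: str, fallback: str) -> str:
    """Try the primary model; on failure, reroute to the fallback — the
    same behavior n8n's Enable Fallback Model toggle provides."""
    try:
        return call_model(primary, prompt)
    except ProviderDown:
        return call_model(fallback, prompt)

print(run_with_fallback("ping", "anthropic/claude-sonnet-4.5", "openai/gpt-5"))
# With the simulated outage above, the OpenAI fallback answers.
```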

This is especially valuable for workflows that run on schedules or are triggered by external events. A monitoring workflow that fails silently because a provider was down for 20 minutes is far worse than one that transparently switched to a backup and kept running.

When to Use Each Approach

Use OpenRouter when you want access to a broad catalog of models without managing multiple API credentials. It’s the right choice when you’re experimenting with different providers, when your team uses multiple models across workflows, or when you want the flexibility to switch models quickly without reconfiguring credentials.

Use the Model Selector when different parts of your workflow have meaningfully different AI requirements. If you know that code generation tasks perform better with one model and summarization tasks perform better with another, the model selector lets you optimize each task type without duplicating your workflow logic.

Use Fallback Models whenever you’re running workflows in production. The setup takes less than a minute and dramatically improves reliability. Even if your primary model is available 99.9% of the time, that remaining 0.1% can cause real problems for automated workflows. A fallback eliminates that single point of failure.

These three approaches aren’t mutually exclusive. You can use OpenRouter as both your primary and fallback model provider, and combine that with a model selector to build workflows that are flexible, cost-efficient, and resilient.

Homework: Apply These Concepts to an Existing Workflow

Take a workflow you’ve already built and add at least one of these concepts to it. The easiest starting point is the fallback model — it takes about two seconds to enable and immediately improves your workflow’s error handling. From there, consider whether a model selector could help you optimize costs or performance for different task types, and explore whether OpenRouter makes sense as a unified credential layer for your setup.

Share your results in the Ryan & Matt Data Science Skool community as a post or bring it to the Wednesday group call. Seeing how others apply these concepts to real workflows is one of the best ways to solidify the learning.

Join Our AI Community

Get access to the JSON workflow files from this article, weekly live sessions, and a community of builders working through the same challenges. Everything is free and the community is active.

