n8n AI Workflow Automation: Self-Hosted, Visual, Code-First

The complete technical guide to n8n -- the fair-code workflow automation platform with native AI capabilities. From visual workflow building and custom JavaScript/Python code nodes to AI agent orchestration with tool calling and memory, MCP integration, self-hosting with Docker Compose, 400+ integrations, enterprise governance, and production scaling.

What Is n8n?

n8n (short for "nodemation") is a fair-code workflow automation platform that combines visual workflow building with full code extensibility. Unlike purely no-code tools, n8n gives you a visual canvas for designing workflows while allowing you to drop into JavaScript or Python at any node. It is self-hostable, meaning you can run it on your own infrastructure with complete control over your data -- or use n8n Cloud for a managed experience. The project has amassed over 100,000 GitHub stars and a community of 200,000+ members, making it one of the most popular open-source automation platforms in existence.

n8n operates on a fair-code license (Sustainable Use License), which means the source code is publicly available, you can self-host it freely, and you can modify it for your own use. The key distinction from traditional open-source is that fair-code restricts commercial redistribution -- you cannot resell n8n as a service. For individual developers, teams, and enterprises running it internally, it behaves like open-source software with no execution limits, no workflow limits, and no artificial restrictions.

The platform includes 400+ built-in integrations (nodes) covering APIs, databases, communication tools, cloud services, and AI providers. With n8n 2.0 (January 2026), native LangChain integration was introduced, adding 70+ AI-specific nodes, persistent agent memory, and support for self-hosted LLM backends. n8n workflows execute as directed graphs where each node processes data and passes it to the next, supporting branching, merging, loops, error handling, and sub-workflow delegation.

Key Features

AI Agent Nodes

Built-in AI Agent node implements LangChain's tool-calling interface with a ReAct-style reasoning loop (think, act, observe, iterate). Connect any LLM -- OpenAI, Anthropic, Google Gemini, local Ollama models -- and attach tool nodes that the agent can invoke autonomously. Supports multi-agent setups and RAG-augmented workflows.
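The reason-act-observe loop the agent runs can be sketched in a few lines of JavaScript. This is a toy illustration, not n8n's internal implementation: callModel is a deterministic stub standing in for a real LLM call, and the tool names are invented for the example.

```javascript
// Illustrative tools the agent can invoke (names are invented, not n8n's API).
const tools = {
  add: ({ a, b }) => a + b,
  lookupCapital: ({ country }) => ({ France: "Paris" }[country] ?? "unknown"),
};

// Stub "LLM": decides the next action from the task and prior observations.
// A real agent gets this decision from a model completion each iteration.
function callModel(task, observations) {
  if (observations.length === 0) {
    return { action: "lookupCapital", input: { country: task.country } };
  }
  return { finalAnswer: `The capital is ${observations[0]}.` };
}

// The ReAct loop: reason -> act -> observe, until a final answer emerges.
function runAgent(task, maxSteps = 5) {
  const observations = [];
  for (let step = 0; step < maxSteps; step++) {
    const decision = callModel(task, observations);
    if (decision.finalAnswer) return decision.finalAnswer;
    observations.push(tools[decision.action](decision.input));
  }
  throw new Error("Agent did not converge");
}

console.log(runAgent({ country: "France" })); // → The capital is Paris.
```

In n8n this loop runs inside the AI Agent node; you only configure the model, the attached tools, and the iteration limit.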

Memory Nodes

Modular memory system with multiple backends: Simple Memory for basic session context, Window Buffer Memory for sliding conversation windows, and persistent storage via PostgreSQL or Redis for production deployments. Chat Memory Manager nodes maintain conversation state across workflow executions for stateful AI agents.

Visual Editor + Code Nodes

Drag-and-drop visual canvas for designing workflows, with the ability to insert JavaScript or Python code at any point. Code nodes have full access to npm packages and Python libraries. The visual editor provides real-time execution previews, debugging tools, and data inspection between nodes.
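As a sketch of what typically goes into a Code node, here is a filter-and-normalize transform. Inside n8n you would read items with $input.all() and return the result directly; the simulated input below just makes the snippet runnable standalone, and the field names are illustrative.

```javascript
// Sketch of logic you might paste into a Code node ("Run Once for All Items"
// mode). n8n items wrap their payload in a `json` property.
function transform(items) {
  return items
    .filter((item) => item.json.status === "active")
    .map((item) => ({
      json: {
        id: item.json.id,
        email: item.json.email.toLowerCase().trim(),
      },
    }));
}

// Simulated input, shaped like the output of $input.all().
const items = [
  { json: { id: 1, email: " Ada@Example.COM ", status: "active" } },
  { json: { id: 2, email: "off@example.com", status: "disabled" } },
];

console.log(transform(items)); // keeps only the active item, email normalized
// Inside the Code node itself you would write: return transform($input.all());
```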

Sub-Workflow Chaining

Break complex automations into reusable sub-workflows called via the Execute Workflow node. Sub-workflows can be triggered with parameters, return results, and run in parallel. This enables modular workflow architecture where common patterns (data validation, notification, logging) are defined once and reused across workflows.

Self-Hosting with Docker

Deploy n8n on any infrastructure using Docker and Docker Compose. A single YAML file defines n8n, its database (PostgreSQL), and optional services (Redis for scaling). Self-hosted deployments have zero execution limits and full data sovereignty. The Docker image supports ARM and x86 architectures.

400+ Integrations

Native nodes for OpenAI, Anthropic, Google Gemini, Slack, PostgreSQL, MySQL, MongoDB, GitHub, GitLab, Stripe, Shopify, HubSpot, Salesforce, Notion, Airtable, Google Sheets, AWS S3, and hundreds more. The HTTP Request node connects to any REST or GraphQL API. Webhook nodes accept inbound requests.

MCP Integration

Two dedicated nodes for Model Context Protocol integration: the MCP Server Trigger exposes n8n workflows as tools that external AI agents can discover and call, and the MCP Client Tool connects n8n agents to external MCP servers for tool discovery and execution during workflow runs.

Enterprise Features

SSO via SAML and LDAP, role-based access control (RBAC), audit logs, encrypted credential stores, Git-based version control for workflows, multi-environment promotion (dev/staging/prod), workflow history with rollback, and SOC 2 compliance. Available on self-hosted Enterprise or n8n Cloud Business plans.

Error Handling and Retry

Built-in error workflows trigger when any workflow fails. Per-node retry policies support configurable exponential backoff, try/catch patterns are available through the Error Trigger node, and execution logs retain full input/output data for debugging.
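The per-node retry behavior amounts to the classic retry-with-exponential-backoff pattern, sketched here as generic JavaScript (the helper and its defaults are illustrative, not n8n's API):

```javascript
// Generic retry with exponential backoff: delays double on each failed
// attempt until the retry budget is exhausted.
async function withRetry(fn, { retries = 3, baseDelayMs = 100 } = {}) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === retries) throw err; // budget exhausted: bubble up
      const delay = baseDelayMs * 2 ** attempt; // 100ms, 200ms, 400ms, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

// Usage: a flaky operation that fails twice, then succeeds.
let calls = 0;
withRetry(async () => {
  calls++;
  if (calls < 3) throw new Error("transient failure");
  return "ok";
}, { baseDelayMs: 10 }).then((result) =>
  console.log(result, "after", calls, "calls") // → prints: ok after 3 calls
);
```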

Webhook and Cron Triggers

Workflows trigger on HTTP webhooks, cron schedules, email receipts, file changes, database events, message queue messages, or manual execution. Webhook nodes generate unique URLs that accept POST/GET requests with automatic payload parsing. Cron expressions support second-level granularity.

RAG and Vector Stores

Retrieval Augmented Generation nodes connect AI agents to vector databases (Pinecone, Qdrant, Supabase pgvector, Weaviate). Document loaders ingest PDFs, web pages, and text files. Text splitters chunk content for embedding. The agent searches external knowledge bases during reasoning for grounded, accurate responses.
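Text splitting before embedding is conceptually simple; a minimal fixed-size splitter with overlap looks like this (the sizes are illustrative, and production splitters also respect sentence and token boundaries):

```javascript
// Fixed-size text splitter with overlap: each chunk shares `overlap`
// characters with its predecessor so context is not cut mid-thought.
function splitText(text, chunkSize = 200, overlap = 50) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - overlap; // advance by the non-overlapping stride
  }
  return chunks;
}

const doc = "a".repeat(450);
const chunks = splitText(doc, 200, 50);
console.log(chunks.length, chunks.map((c) => c.length)); // 3 chunks: 200, 200, 150
```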

Community and Marketplace

Thousands of shared workflow templates in the n8n community. Custom node packages published via npm. Active community forums with 200,000+ members. Official documentation with interactive examples. Weekly new node releases and regular platform updates.

Visual Editor vs Code-First: When to Use Each

n8n gives you both paradigms in one platform. The visual editor is ideal for workflows where the logic is primarily about connecting services -- receiving a webhook, enriching data from an API, sending a notification. You drag nodes onto a canvas, draw connections between them, and configure each node through forms. The visual representation makes it easy to understand flow at a glance, debug execution paths, and onboard non-technical team members.

Code-first mode is the right choice when your workflow involves complex data transformations, custom business logic, or operations that native nodes do not cover. n8n's Code node supports full Node.js (JavaScript/TypeScript) and Python with access to external packages. You can import npm modules, call APIs directly, perform matrix operations, run regex parsing, or implement any algorithm. Code nodes integrate seamlessly with visual nodes -- a workflow can mix drag-and-drop nodes with code blocks freely.

The practical recommendation: start with the visual editor for rapid prototyping and use code nodes only where native nodes fall short. Most production workflows end up as a hybrid -- 80% visual nodes for standard integrations and 20% code nodes for custom logic. This gives you the speed of no-code with the power of full programming when you need it.

Self-Hosting with Docker Compose

The recommended way to self-host n8n is with Docker Compose. A single docker-compose.yml file defines n8n, a PostgreSQL database for workflow storage, and optionally Redis for queue-based scaling. The Community Edition is free with no execution limits -- you only pay for your infrastructure (a $10-20/month VPS is sufficient for most workloads).

# docker-compose.yml - Production n8n deployment

services:
  n8n:
    image: n8nio/n8n:latest
    restart: always
    ports:
      - "5678:5678"
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=${N8N_USER}
      - N8N_BASIC_AUTH_PASSWORD=${N8N_PASSWORD}
      - N8N_HOST=${N8N_HOST}
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://${N8N_HOST}/
      - GENERIC_TIMEZONE=America/Bogota
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy

  postgres:
    image: postgres:16-alpine
    restart: always
    environment:
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=n8n
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U n8n"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  n8n_data:
  postgres_data:

For production deployments, place n8n behind a reverse proxy (Nginx, Caddy, or Traefik) with TLS termination. Set the WEBHOOK_URL environment variable to your public domain so that webhook triggers generate correct callback URLs. Use environment files (.env) for secrets and never commit credentials to version control.
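As a sketch of the reverse-proxy step, here is a minimal Nginx server block, assuming n8n listens on localhost:5678 and certbot-issued certificates at their default paths. The domain and file paths are placeholders.

```nginx
# Minimal reverse-proxy sketch for n8n (domain and cert paths are placeholders).
server {
    listen 443 ssl;
    server_name n8n.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/n8n.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/n8n.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:5678;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        # The n8n editor pushes live execution updates over websockets,
        # so the Upgrade/Connection headers must be forwarded.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        # Allow long-running workflow executions to complete.
        proxy_read_timeout 300s;
    }
}
```

The websocket headers are the part most often forgotten; without them the editor loads but live execution previews stall.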

# .env - Secrets (never commit this file)
POSTGRES_PASSWORD=your-secure-password-here
N8N_USER=admin
N8N_PASSWORD=your-n8n-password-here
N8N_HOST=n8n.yourdomain.com

# Start the stack
docker compose up -d

# View logs
docker compose logs -f n8n

# Backup the database
docker compose exec postgres pg_dump -U n8n n8n > backup.sql

# Update n8n to latest version
docker compose pull n8n
docker compose up -d n8n

Building an AI Agent Workflow: Step by Step

n8n's AI Agent node implements a ReAct (Reasoning + Acting) loop where the model reasons about the task, selects a tool to call, observes the result, and iterates until it reaches a final answer. Here is how to build a complete AI agent workflow from scratch.

Step 1: Create the trigger. Start with a Webhook node or Chat Trigger node. The Chat Trigger provides a built-in chat interface for testing. For production, use a Webhook node that receives requests from your application, Slack bot, or any HTTP client.

Step 2: Add the AI Agent node. Drag an AI Agent node onto the canvas. This is the orchestrator -- it receives the user message, reasons about it, and decides which tools to invoke. Configure it with an LLM (OpenAI GPT-4o, Anthropic Claude, Google Gemini, or a self-hosted model via Ollama).

Step 3: Attach tools. Connect tool nodes to the agent. Tools are sub-workflows or built-in nodes that the agent can call: a Calculator tool for math, an HTTP Request tool for API calls, a Code tool for custom logic, or a database query tool. Each tool has a name and description that the agent reads to decide when to use it.

Step 4: Add memory. Connect a memory node to persist conversation context. Simple Memory stores in-session history. For production, use a PostgreSQL-backed or Redis-backed memory node so conversations survive restarts and can be shared across workflow instances.

Step 5: Configure output. The agent's final response flows to output nodes -- a Respond to Webhook node to return the answer to the caller, a Slack node to post to a channel, an Email node to send a formatted response, or any combination. Add error handling nodes to catch failures gracefully.

// Example: AI Agent workflow structure in n8n
//
// [Chat Trigger] --> [AI Agent] --> [Respond to Webhook]
//                        |
//                   +----+----+----+
//                   |    |    |    |
//              [Memory] [Tool1] [Tool2] [Tool3]
//              (Redis)  (HTTP)  (Code)  (DB Query)
//
// The AI Agent node receives the user message,
// uses its connected LLM to reason about which tools
// to call, executes them, and returns the final answer.
// Memory persists conversation context across calls.

MCP Integration Capabilities

n8n supports the Model Context Protocol (MCP) through two dedicated nodes that bridge workflow automation with the AI tool ecosystem. This integration means n8n workflows can both consume and provide tools in the MCP standard, making n8n a first-class participant in the emerging agent interoperability layer.

MCP Server Trigger: This node exposes n8n workflows as MCP-compatible tools. You add the MCP Server Trigger to a workflow, connect tool nodes to it, and external AI agents (Claude Code, Claude Desktop, or any MCP client) can discover and execute those tools via the standard MCP protocol. This effectively turns any n8n workflow into a tool that AI agents can call -- a database lookup, a multi-step data pipeline, a notification workflow, or any custom business logic.

MCP Client Tool: This node connects n8n AI agents to external MCP servers. Configure it with an SSE or Streamable HTTP endpoint, and your n8n agent workflows can discover and call tools from any MCP server during execution. This means an n8n agent can access the full ecosystem of MCP servers -- filesystem operations, Git, Slack, databases, browser automation -- without building custom integrations for each.

The combination is powerful: n8n can act as both an MCP server (exposing workflow capabilities to external AI agents) and an MCP client (consuming tools from external MCP servers). This bidirectional integration positions n8n as a workflow orchestration layer within the broader MCP ecosystem, where complex multi-step operations are packaged as callable tools for any AI agent.

400+ Integrations

AI and LLM Providers

OpenAI (GPT-4o, o1, o3), Anthropic (Claude Opus, Sonnet, Haiku), Google Gemini, Cohere, Hugging Face, Ollama (self-hosted), Groq, Mistral, and Azure OpenAI. Native nodes for chat completions, embeddings, text classification, and image generation.

Developer Tools

GitHub, GitLab, Bitbucket, Jira, Linear, Sentry, PagerDuty, Datadog, and custom webhook receivers. Automate code reviews, deploy pipelines, incident response, and project management workflows directly from version control events.

Databases and Storage

PostgreSQL, MySQL, MongoDB, Redis, SQLite, Microsoft SQL Server, Snowflake, BigQuery, Supabase, Firebase, Airtable, Google Sheets, and S3-compatible object storage. Full CRUD operations, schema inspection, and bulk data processing.

Communication

Slack, Microsoft Teams, Discord, Telegram, WhatsApp (via Twilio), Gmail, Outlook, SendGrid, Mailchimp, and custom SMTP. Send messages, manage channels, process incoming messages as triggers, and build chatbot interfaces.

Enterprise Features and Pricing

n8n offers a tiered pricing model designed around deployment preferences. The Community Edition (self-hosted) is entirely free with unlimited workflow executions, unlimited workflows, and no artificial restrictions. You deploy it on your own infrastructure using Docker, Kubernetes, or any server, and you own your data completely. This is the entry point for most developers and small teams.

The n8n Cloud plans provide a managed experience: the Starter plan begins at a low monthly cost with included executions, the Pro plan adds team collaboration and increased limits, and the Enterprise plan adds SSO, audit logs, and dedicated support. Cloud plans eliminate infrastructure management but introduce per-execution pricing that can scale with volume.

The Self-Hosted Enterprise plan combines the data sovereignty of self-hosting with enterprise features: SAML/LDAP SSO, role-based access control with granular permissions, audit logging for compliance, Git-based workflow version control, multi-environment promotion (dev to staging to production), encrypted credential stores, workflow history with rollback, and priority support. Contact n8n sales for license pricing. Self-hosting breaks even versus cloud at roughly 20,000 executions per month -- above that threshold, self-hosted savings compound significantly.
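The break-even claim is straightforward arithmetic. The sketch below models a flat self-hosting cost against a cloud plan with a base fee plus per-execution overage; every price in it is an assumption chosen for illustration, not n8n's actual rate card.

```javascript
// Illustrative break-even model: flat self-hosting cost vs. a cloud plan
// with a base fee, an included execution quota, and per-execution overage.
// All prices below are assumptions, not n8n's actual pricing.
function cloudMonthlyCost(executions, { base, included, perExecution }) {
  const overage = Math.max(executions - included, 0);
  return base + overage * perExecution;
}

// Assumed costs: $50/mo all-in self-hosting (VPS plus ops time), a cloud
// plan at $24/mo with 2,500 included executions, $0.0015 per extra run.
const plan = { base: 24, included: 2500, perExecution: 0.0015 };
const selfHostMonthly = 50;

// Walk up in 500-execution steps until cloud cost overtakes self-hosting.
let executions = 0;
while (cloudMonthlyCost(executions, plan) < selfHostMonthly) {
  executions += 500;
}
console.log(executions); // → 20000 under these assumptions
```

With these assumed numbers the crossover lands at 20,000 executions per month; your own break-even shifts with infrastructure cost and plan pricing.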

Comparison: n8n vs Zapier vs Make vs Custom Code

Each platform serves a different audience and use case. The right choice depends on your technical depth, data sensitivity, budget, and AI integration needs.

| Feature | n8n | Zapier | Make | Custom Code Agents |
| --- | --- | --- | --- | --- |
| License / Source | Fair-code (source available) | Proprietary SaaS | Proprietary SaaS | Your code, your license |
| Self-Hosting | Yes (Docker, K8s, VPS) | No | No | Yes (full control) |
| Integrations | 400+ native + HTTP node | 7,000+ native | 1,500+ native | Unlimited (manual) |
| AI Agent Support | Native LangChain, 70+ AI nodes | Basic AI actions | AI modules, agent flows | Full framework control |
| Code Execution | JS/Python + npm packages | Limited (Code by Zapier) | Limited JavaScript | Any language / framework |
| Pricing Model | Free self-hosted; cloud from ~$24/mo | Per-task pricing; from $29.99/mo | Per-operation; from $10.59/mo | Infrastructure + dev time |
| MCP Support | Native (Server + Client nodes) | No native support | No native support | Via SDK integration |
| Best For | Developers, regulated industries, AI workflows | Non-technical teams, fast setup | SMBs, visual multi-step logic | Maximum flexibility, unique needs |

Production Deployment and Scaling

For production deployments, n8n supports horizontal scaling through queue mode. In this architecture, a main instance handles the UI, webhook reception, and scheduling, while separate worker instances process workflow executions from a Redis-backed queue. This decouples workflow triggering from execution, allowing you to scale workers independently based on load.

# docker-compose.yml - Queue mode for horizontal scaling
services:
  n8n-main:
    image: n8nio/n8n:latest
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      # ... other env vars
    command: n8n start

  n8n-worker:
    image: n8nio/n8n:latest
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
    command: n8n worker
    deploy:
      replicas: 3  # Scale workers as needed

  redis:
    image: redis:7-alpine
    restart: always

  postgres:
    image: postgres:16-alpine
    # ... same config as before

Monitoring and observability: n8n exposes Prometheus-compatible metrics at /metrics for execution counts, durations, queue depths, and error rates. Integrate with Grafana dashboards for real-time visibility. Set up alerts for queue backlog growth, execution failures, and memory consumption. Workflow execution logs are stored in the database and accessible through the UI or API.
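A Prometheus alert rule for the queue-backlog case might look like the following; the metric name here is illustrative, so confirm the exact names exposed by your instance's /metrics endpoint before deploying it.

```yaml
# Example Prometheus alert rule for n8n queue backlog.
# The metric name is illustrative -- check your instance's /metrics output.
groups:
  - name: n8n
    rules:
      - alert: N8nQueueBacklogGrowing
        expr: n8n_queue_waiting_jobs > 100
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "n8n queue backlog above 100 jobs for 10 minutes"
```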

Backup strategy: Your n8n state lives in two places -- the PostgreSQL database (workflow definitions, credentials, execution history) and the filesystem volume (encryption keys, custom nodes). Back up both regularly. Use pg_dump for database snapshots and volume snapshots for the n8n data directory. For disaster recovery, test restoring from backups monthly.

Security hardening: Enable basic auth or SSO for the web UI. Run n8n behind a reverse proxy with TLS. Restrict the /webhook/ path to expected IP ranges if possible. Use encrypted credential storage (the default since n8n 0.200+). Regularly update the n8n Docker image for security patches. In Kubernetes deployments, use network policies to isolate n8n pods and limit egress to required services only.

Why n8n Stands Out

100k+ GitHub stars

One of the most popular open-source automation platforms, with a vibrant community of 200,000+ members contributing nodes, templates, and workflow patterns. New integrations and features ship weekly.

Zero execution limits

Self-hosted Community Edition runs unlimited workflows with unlimited executions for free. No per-task pricing, no artificial throttling, no vendor lock-in. Your data stays on your infrastructure.

Native AI + MCP

70+ AI nodes with LangChain integration, persistent agent memory, RAG support, and bidirectional MCP integration. Build AI agents that can call any MCP tool and expose n8n workflows as MCP tools for external agents.

Visual + Code hybrid

One of the few major automation platforms that pairs a visual drag-and-drop editor with full JavaScript/Python code execution and npm package access in the same workflow. Best of both worlds.

n8n 2.15: Python, Agent Delegation, and Vector Memory

n8n 2.15 (April 2026) makes Python execution in Code nodes a first-class native feature, removing the constraints that previously pushed Python-heavy teams to external services. Python Code nodes run in sandboxed sub-processes with access to pip packages configured at the instance level. This makes n8n viable for data science workflows, ML model inference, and pandas-based transformations directly inside the visual editor.

Agent-to-agent delegation enables AI Agent nodes to invoke other AI Agent sub-workflows as tools. A supervisor agent can delegate sub-tasks -- research, code generation, data analysis -- to specialized agent workflows, each with their own LLM configuration, tools, and memory. This brings multi-agent orchestration patterns into the visual workflow paradigm without requiring external frameworks like LangGraph.

Two new memory backends expand production options: Redis Chat Memory provides fast, persistent conversation history with configurable TTL for high-throughput chatbot deployments, while Postgres/Supabase Vector Memory stores embeddings alongside conversation context, enabling RAG-augmented agents that retrieve relevant past interactions using semantic search. Both integrate as drop-in memory nodes in existing agent workflows.
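The retrieval step behind vector memory is cosine similarity over embeddings. A miniature sketch, with toy 3-dimensional vectors standing in for real embedding output and pgvector/Redis handling the indexing in production:

```javascript
// Cosine similarity: dot product of the vectors divided by the product
// of their magnitudes. Higher score = more semantically similar.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy "memory" entries with 3-dimensional embeddings (illustrative values;
// real embeddings have hundreds or thousands of dimensions).
const memory = [
  { text: "User prefers metric units", embedding: [0.9, 0.1, 0.0] },
  { text: "User's deploy target is Kubernetes", embedding: [0.1, 0.9, 0.2] },
];

// A query embedding close to the second memory entry.
const query = [0.0, 1.0, 0.1];
const best = memory
  .map((m) => ({ ...m, score: cosineSimilarity(query, m.embedding) }))
  .sort((a, b) => b.score - a.score)[0];
console.log(best.text); // → User's deploy target is Kubernetes
```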
