How to Run n8n Locally with Docker and Expose It via ngrok

Whether you're experimenting with AI agents or integrating with tools like WhatsApp and Slack, here's how to set up your local AI dev environment using the n8n Self-hosted AI Starter Kit.
GeraDeluxer

May 9, 2025

Running n8n, a powerful low-code automation tool, on your local machine is simple with Docker. But what if you want to test webhooks or integrations that need a public URL? That’s where ngrok comes in.

This guide shows you the essential steps to get started quickly, using a pre-built Docker Compose file that also integrates PostgreSQL, Ollama, and Qdrant.

🧰 Prerequisites

  • Docker and Docker Compose installed

  • A free ngrok account (optional but recommended)

  • Git

N8N Step-by-Step Setup

Clone the Starter Kit

```shell
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
```

Create Your .env File

```shell
cp .env.example .env
```

You’ll need to set values like:

```plaintext
POSTGRES_USER=root
POSTGRES_PASSWORD=password
POSTGRES_DB=n8n

N8N_ENCRYPTION_KEY=super-secret-key
N8N_USER_MANAGEMENT_JWT_SECRET=even-more-secret
N8N_DEFAULT_BINARY_DATA_MODE=filesystem

WHATSAPP_VERIFY_TOKEN=n8n-whatsapp-verification-token-2025
```
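The encryption key and JWT secret should be strong random values rather than the placeholders above. One way to generate them — a minimal sketch, assuming `openssl` is installed:

```shell
# Generate a random 32-byte hex string, suitable for N8N_ENCRYPTION_KEY
openssl rand -hex 32

# Generate a separate one for N8N_USER_MANAGEMENT_JWT_SECRET
openssl rand -hex 32
```

Paste each value into the corresponding line of your `.env` file.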

Start ngrok

Register for a free account, follow the installation instructions for Linux, Windows, or Mac, then connect your account with `ngrok config add-authtoken <your-token>`.

Use ngrok to expose your local n8n instance to the internet:

```shell
ngrok http 5678
```

Copy the generated HTTPS URL, e.g.:

```plaintext
https://1ced-2806-2f0-4880.ngrok-free.app
```

Set this in your .env file under WEBHOOK_URL:

```plaintext
WEBHOOK_URL=https://your-ngrok-url.ngrok-free.app
```
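Instead of copying the URL by hand, you can read it from ngrok's local inspection API at http://127.0.0.1:4040. A minimal sketch, assuming ngrok is already running and `python3` is available (`ngrok_url` is a hypothetical helper, not part of the starter kit):

```shell
# Extract the first tunnel's public URL from ngrok's local inspection API
ngrok_url() {
  curl -s http://127.0.0.1:4040/api/tunnels \
    | python3 -c 'import sys, json; print(json.load(sys.stdin)["tunnels"][0]["public_url"])'
}

# Usage (with ngrok running), to rewrite the WEBHOOK_URL line in .env:
# sed -i "s|^WEBHOOK_URL=.*|WEBHOOK_URL=$(ngrok_url)|" .env
```

On macOS, use `sed -i ''` instead of `sed -i`.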

Run Docker Compose

Launch all services:

```shell
# pull latest images
docker compose --profile cpu pull

# create and start containers
docker compose create && docker compose --profile cpu up

# stop containers
docker compose --profile cpu down

# warning: this will remove all volumes and delete all data
docker compose --profile cpu down -v
```

✅ With the containers running and ngrok forwarding, you now have a public HTTPS URL to test webhooks or share your local automation setup.
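The first startup can take a while as images and models are downloaded. A small helper can poll n8n until it answers — a sketch, where `wait_for_url` is a hypothetical helper and `/healthz` is n8n's health endpoint on the default port:

```shell
# wait_for_url: poll a URL with curl until it responds or attempts run out
wait_for_url() {
  url=$1
  attempts=${2:-30}
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -sf "$url" > /dev/null; then
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  return 1
}

# Usage after `docker compose up`:
# wait_for_url http://localhost:5678/healthz && echo "n8n is ready"
```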

Extra: Llama 3.2 and nomic-embed-text

The compose file pulls both models automatically with this command entry:

```yaml
- "sleep 3; OLLAMA_HOST=ollama:11434 ollama pull llama3.2; OLLAMA_HOST=ollama:11434 ollama pull nomic-embed-text"
```
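Once the pull finishes, you can confirm both models are available through Ollama's HTTP API. A sketch assuming port 11434 is published to the host and `python3` is available (`ollama_models` is a hypothetical helper):

```shell
# List the model names Ollama has downloaded, one per line
ollama_models() {
  curl -s http://localhost:11434/api/tags \
    | python3 -c 'import sys, json; [print(m["name"]) for m in json.load(sys.stdin)["models"]]'
}

# Usage once the stack is up — llama3.2 and nomic-embed-text should appear:
# ollama_models
```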

Access n8n

Launch n8n in your browser at:
👉 http://localhost:5678
Or use your public ngrok URL to test external webhooks!

👉 https://<your-ngrok-subdomain>.ngrok-free.app

🧾 Full Docker Compose File

The full configuration is available on GitHub:
👉 docker-compose.yaml