Deploy InfluxDB with n8n & Docker – PUQ Template
This guide explains how to use the PUQ “Docker InfluxDB” n8n workflow template to fully automate the lifecycle of InfluxDB containers using Docker, SSH, and a secured webhook. You will learn how the template works, what each part does, and how to connect it to WHMCS or WISECP for multi-tenant InfluxDB hosting.
What you will learn
By the end of this tutorial-style article, you should be able to:
- Understand the overall architecture of the PUQ Docker InfluxDB n8n template
- Configure the webhook and SSH connection used by the workflow
- Use the template to create, start, stop, suspend, and terminate InfluxDB containers
- Customize resource limits and storage for each tenant
- Integrate the workflow with WHMCS or WISECP using simple JSON API requests
- Apply security and operational best practices for running this automation in production
Why this n8n template is useful
If you host InfluxDB for multiple customers, managing containers manually can quickly become painful. This PUQ template turns those repetitive tasks into a consistent, API-driven process that you can call from your billing system.
With this n8n workflow in place, you can:
- Receive authenticated API calls from WHMCS, WISECP, or any other system that can send HTTP POST requests
- Automatically generate Docker Compose files and nginx configuration for each customer
- Create and mount per-tenant disk images for persistent InfluxDB data
- Run key management actions like:
- Start and stop containers
- Inspect containers, view logs, and collect stats
- Change passwords and handle ACL-related operations
- Change package (resources) and handle suspend / unsuspend
In practice, this gives you a production-ready automation layer that connects your billing platform to your Docker infrastructure with minimal custom coding.
Architecture overview
The template is built around three main components that work together:
1. API entry point (Webhook)
A Basic Auth protected n8n webhook receives JSON POST requests. It acts as the public API endpoint that WHMCS or WISECP calls when a customer is created, suspended, unsuspended, or when a service action is triggered.
2. SSH Executor
An n8n SSH credential is used to run bash scripts on your Docker host. These scripts perform the actual system-level work, including:
- Creating and mounting disk images
- Running Docker and Docker Compose commands
- Updating nginx-proxy configuration and reloading nginx
3. Template Logic in n8n
The workflow contains a set of n8n nodes that:
- Interpret incoming commands from the webhook
- Generate docker-compose.yml content dynamically
- Create nginx vhost files for each tenant domain
- Manage the full lifecycle of each InfluxDB container
Think of the webhook as the “front door,” the SSH executor as the “hands-on operator” on your server, and the n8n logic as the “brain” that decides what to do next.
Prerequisites
Before using the template, make sure the following are in place:
- An n8n instance where you can:
- Create a webhook
- Configure SSH credentials
- Import and edit the PUQ template
- A Docker host that has:
- Docker and Docker Compose v2 or newer
- An nginx-proxy container
- A letsencrypt companion container for certificates
- Basic familiarity with:
- Docker and Docker Compose
- nginx and reverse proxy concepts
- Linux filesystem tools like fallocate, mkfs.ext4, and fstab
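If you want to double-check these prerequisites on the Docker host before importing the template, a quick sanity check along the following lines can help. This is a minimal sketch; the container names nginx-proxy and letsencrypt are assumptions and may differ in your setup:

```bash
#!/usr/bin/env bash
# Quick sanity check of the prerequisites on the Docker host.
# The container names "nginx-proxy" and "letsencrypt" are assumptions; adjust to your setup.
set -euo pipefail

docker --version
docker compose version   # Compose v2 ships as the "compose" plugin

for name in nginx-proxy letsencrypt; do
  if docker ps --format '{{.Names}}' | grep -qx "$name"; then
    echo "OK: container '$name' is running"
  else
    echo "WARNING: container '$name' not found" >&2
  fi
done
```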
Core concepts and key nodes
To understand how the template works, it helps to look at the main nodes and what each is responsible for.
Parameters node
This node centralizes important variables that are reused throughout the workflow. Typical parameters include:
- server_domain: The main domain of your server
- clients_dir: Directory where per-client data is stored, for example /opt/docker/clients
- mount_dir: Mount point for loopback images, for example /mnt
The template also includes screen_left and screen_right values used to safely format docker stats output. These should not be changed, since other nodes rely on them for parsing and presenting container statistics correctly.
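To make these parameters concrete, the sketch below shows how such values typically surface in the scripts the workflow generates on the Docker host. The variable names and the per-tenant paths are illustrative assumptions, not the template's internal names:

```bash
# Illustrative values only; set these to match your Parameters node.
server_domain="srv1.example.com"       # main domain of the server
clients_dir="/opt/docker/clients"      # per-client data and compose files
mount_dir="/mnt"                       # mount point for loopback images

# A tenant's files would then live under paths such as (assumed layout):
#   ${clients_dir}/customer.example.com/docker-compose.yml
#   ${clients_dir}/customer.example.com/data.img
#   ${mount_dir}/customer.example.com        (mounted loopback image)
echo "Clients: ${clients_dir}, mounts under: ${mount_dir}, server: ${server_domain}"
```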
API (Webhook) node
The webhook node is the entry point for external systems:
- It expects a JSON body in the HTTP request
- It requires HTTP Basic Authentication using an httpBasicAuth credential configured in n8n
- It reads the command field from the JSON payload and routes the request to the appropriate logic
Internally, the workflow uses this command to decide between two main branches:
- Container Actions (for lower-level container control)
- Service Actions (for full lifecycle events like create, suspend, terminate)
Container Actions branch
This branch handles direct operations on an existing container, such as:
- Start and stop
- Mount and unmount storage
- Inspect container details
- Fetch logs
- ACL-related operations and similar management tasks
For each action, an n8n node prepares a shell script in a field often named sh. That script is then executed on the Docker host by the SSH node. The scripts include defensive checks, for example verifying that:
- The container exists
- The mount point is available
- Required files or directories are present
Each script returns either a clear success status or a JSON-formatted error explaining what went wrong. This makes it easier for the calling system (like WHMCS) to understand and display meaningful error messages.
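As a rough illustration of that pattern, a generated container-action script might look like the sketch below. This is not the template's exact code; it simply mirrors the idea of defensive checks followed by a JSON-formatted result:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of a "container_start"-style script; not the template's literal code.
domain="customer.example.com"
compose_dir="/opt/docker/clients/${domain}"

# Defensive checks before doing anything
if [ ! -f "${compose_dir}/docker-compose.yml" ]; then
  echo "{\"status\":\"error\",\"error\":\"compose file not found for ${domain}\"}"
  exit 1
fi

if ! docker compose -f "${compose_dir}/docker-compose.yml" ps >/dev/null 2>&1; then
  echo "{\"status\":\"error\",\"error\":\"compose project for ${domain} is not available\"}"
  exit 1
fi

# Perform the action and report success as JSON
docker compose -f "${compose_dir}/docker-compose.yml" start
echo "{\"status\":\"success\",\"message\":\"container for ${domain} started\"}"
```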
Service Actions branch
Service actions are higher-level operations that affect the entire lifecycle of a tenant’s InfluxDB service. These typically include:
- test_connection – Check that the infrastructure and SSH access work correctly
- create (deploy) – Provision a new InfluxDB container and its storage
- suspend – Disable or stop the service without destroying data
- unsuspend – Reactivate a previously suspended service
- terminate – Remove the container and associated configuration
- change_package – Adjust resources like CPU, RAM, or disk allocation
During a create operation, the workflow:
- Builds a docker-compose manifest using the Deploy-docker-compose node
- Creates a loopback disk image, formats it as ext4, and mounts it for persistent storage
- Writes nginx vhost configuration files into a per-client directory so that nginx-proxy can route traffic to the container
Deploy-docker-compose node
This node is responsible for generating the docker-compose.yml file for each InfluxDB tenant. The template:
- Defines an InfluxDB container with environment variables for:
- Initial username
- Initial password
- Organization
- Bucket
- Applies CPU and memory limits according to the API payload (ram, cpu)
- Mounts directories from the per-tenant image into:
- /var/lib/influxdb2
- /etc/influxdb2
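The generated manifest might look roughly like the following, written here as a shell heredoc because the workflow materializes the file through a script on the host. The image tag, environment variable values, labels, and volume paths are illustrative assumptions rather than the template's literal output:

```bash
# Sketch only: in practice these values come from the API payload (domain, username, password, ram, cpu).
domain="customer.example.com"
cat > "/opt/docker/clients/${domain}/docker-compose.yml" <<EOF
services:
  influxdb:
    image: influxdb:2
    container_name: ${domain}
    restart: always
    environment:
      DOCKER_INFLUXDB_INIT_MODE: setup
      DOCKER_INFLUXDB_INIT_USERNAME: customer1
      DOCKER_INFLUXDB_INIT_PASSWORD: S3cureP@ss
      DOCKER_INFLUXDB_INIT_ORG: customer-org
      DOCKER_INFLUXDB_INIT_BUCKET: default
      VIRTUAL_HOST: ${domain}        # assumed: picked up by nginx-proxy
      LETSENCRYPT_HOST: ${domain}    # assumed: picked up by the letsencrypt companion
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: 1G
    volumes:
      - /mnt/${domain}/data:/var/lib/influxdb2
      - /mnt/${domain}/config:/etc/influxdb2
EOF
```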
Because this node centralizes the docker-compose template, it is also the main place you will modify if you want to extend the workflow to other services later.
Step-by-step: how the workflow runs
1. Billing system sends a request
Your billing system (for example, WHMCS or WISECP) sends an HTTP POST request to the n8n webhook URL, for example:
/webhook/docker-influxdb
The request must:
- Use Basic Auth with the credentials configured in n8n
- Include a JSON body with at least a command field (see the example request below)
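For example, a create request could be sent with curl as shown below; the hostname and credentials are placeholders for whatever you configured in n8n:

```bash
curl -X POST "https://n8n.example.com/webhook/docker-influxdb" \
  -u "api-user:strong-password" \
  -H "Content-Type: application/json" \
  -d '{"command": "create", "domain": "customer.example.com", "username": "customer1", "password": "S3cureP@ss", "disk": 10, "ram": 1, "cpu": 0.5}'
```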
2. Webhook validates and routes the command
The webhook node checks authentication and parses the JSON. Based on the command, it routes the flow:
- Commands like container_start and container_stop go to the Container Actions branch
- Commands like create, suspend, and terminate go to the Service Actions branch
3. n8n builds the required shell script
For the chosen action, n8n nodes assemble a bash script string that will perform the necessary steps. Examples:
- For create:
- Create a disk image with fallocate
- Format it with mkfs.ext4
- Update /etc/fstab and run mount -a
- Write docker-compose.yml and nginx config
- Run docker compose up -d
- For container_start:
- Check that the container and compose file exist
- Run docker compose start or docker start as appropriate
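For the create case, the assembled script follows roughly the shape below. This is a simplified sketch that assumes the default directories mentioned earlier; the real template adds more checks and also writes the compose and nginx files at the marked step:

```bash
#!/usr/bin/env bash
# Simplified sketch of a "create"-style provisioning script; not the template's literal code.
set -euo pipefail

domain="customer.example.com"
disk_gb=10
client_dir="/opt/docker/clients/${domain}"
image="${client_dir}/data.img"
mountpoint="/mnt/${domain}"

mkdir -p "${client_dir}" "${mountpoint}"

# 1. Create and format the per-tenant loopback image
fallocate -l "${disk_gb}G" "${image}"
mkfs.ext4 -q -F "${image}"

# 2. Make the mount persistent and activate it
echo "${image} ${mountpoint} ext4 loop,defaults 0 0" >> /etc/fstab
mount -a

# 3. Write docker-compose.yml and the nginx vhost file here (omitted), then start the stack
docker compose -f "${client_dir}/docker-compose.yml" up -d
```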
4. SSH node executes the script on the Docker host
The SSH Executor node connects to your Docker host using the configured n8n SSH credential. It then runs the generated script. The script is designed to:
- Exit with clear messages
- Write logs and error information where needed
- Return structured output that n8n can send back to the caller
5. Workflow returns a structured response
When the script finishes, the workflow returns a JSON response to the original HTTP request. Typically this includes:
- A status field such as success or error
- Any additional data like container stats, logs, or disk usage information
Example API payloads
Create a new InfluxDB tenant
Send a POST request with Basic Auth to your webhook path, for example /webhook/docker-influxdb, with a JSON body like:
{ "command": "create", "domain": "customer.example.com", "username": "customer1", "password": "S3cureP@ss", "disk": 10, "ram": 1, "cpu": 0.5
}
Fields:
- command: Action to perform, here it is create
- domain: The customer’s domain that will be used in nginx and docker labels
- username / password: Initial InfluxDB credentials
- disk: Disk size in GB for the loopback image
- ram: Memory limit in GB
- cpu: CPU limit in cores, for example 0.5 for half a core
Start an existing container
To start a tenant’s container, send:
{ "command": "container_start", "domain": "customer.example.com"
}
Other commands such as container_stop, suspend or terminate follow the same pattern, with the command field indicating the desired action.
Security considerations
Because this workflow controls containers and runs commands over SSH, security is critical. Keep these points in mind:
- Protect the webhook:
- Use strong Basic Auth credentials
- Limit access to known IP addresses from your billing system if possible
- Always expose the webhook over HTTPS
- Limit SSH permissions:
- Create a dedicated SSH user for n8n on the Docker host
- Grant only the required permissions
- Use sudo rules in sudoers to allow specific commands without a password, not full root access (see the sketch after this list)
- Control resource parameters:
- Validate or cap ram and cpu values from incoming API requests
- Use sane defaults to avoid noisy neighbor issues
- Handle credentials securely:
- Store InfluxDB passwords and other secrets securely
- Use TLS for the webhook endpoint to protect credentials in transit
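As an illustration of the sudo recommendation above, a restricted rule for a dedicated automation user might look like the sketch below. The username, file paths, and allowed commands are assumptions; tailor them to what your scripts actually run:

```bash
# Hypothetical example: write a restricted sudoers drop-in for an "n8n-deploy" user.
# Command paths are assumptions; verify them with "command -v" on your host.
cat <<'EOF' | sudo tee /etc/sudoers.d/n8n-deploy
n8n-deploy ALL=(root) NOPASSWD: /usr/bin/docker, /usr/bin/fallocate, /usr/sbin/mkfs.ext4, /usr/bin/mount, /usr/bin/umount
EOF
sudo visudo -c   # validate the syntax of all sudoers files before relying on them
```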
Operational best practices
- Back up tenant data:
- Per-tenant image files live under clients_dir, for example /opt/docker/clients/customer.example.com/data.img
- Include these images in your backup strategy
- Monitor disk usage:
- Set alerts on the host filesystem where images and mounts are stored
- Use the template’s built-in commands that report image and mount sizes
- Consider a jump host:
- If your Docker hosts are on a private network, run the SSH executor against a jump host that can reach them
- Test in staging first:
- Run create and terminate flows in a non-production environment
- Verify:
- /etc/fstab entries are correct
- Mount and unmount operations behave as expected
- nginx vhosts are created and reloaded successfully
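A few quick commands on the Docker host can confirm those points after a test create. This is a minimal sketch assuming the default paths and container names used earlier in this guide:

```bash
domain="customer.example.com"

grep "${domain}" /etc/fstab                        # fstab entry for the tenant image exists
mountpoint -q "/mnt/${domain}" && echo "mounted"   # loopback image is mounted (assumed mount path)
mount | grep "/mnt/${domain}"                      # show the active mount

docker ps --filter "name=${domain}"                # tenant container is running
docker exec nginx-proxy nginx -t                   # nginx-proxy config is valid (container name assumed)
```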
