
n8n Travel Itinerary Builder Template – Technical Reference

Automate travel planning with a production-ready n8n workflow template that combines webhooks, text splitting, vector embeddings, a Supabase vector store, LangChain agent orchestration, and Google Sheets logging. This reference explains the architecture of the Travel Itinerary Builder template, how each node participates in the data flow, and how to configure and extend it for advanced use cases.

1. Workflow Overview

The Travel Itinerary Builder is an n8n workflow that transforms a structured travel request into a personalized, day-by-day itinerary. It is designed for travel startups, agencies, and technical hobbyists who want to:

  • Collect user preferences programmatically via an HTTP endpoint
  • Persist contextual travel content in a Supabase vector store
  • Use Cohere embeddings and an OpenAI-backed LangChain agent to generate itineraries
  • Log all requests and responses in Google Sheets for analytics and review

The workflow is fully event-driven. A POST request to an n8n Webhook node initiates a sequence that includes text splitting, embedding, vector storage, retrieval, agent reasoning, and final logging.

2. Architecture & Data Flow

At a high level, the workflow coordinates the following components:

  • Webhook node – Ingests incoming JSON payloads with travel preferences
  • Text Splitter node – Segments long text into overlapping chunks for embedding
  • Cohere Embeddings node – Encodes text chunks into high-dimensional vectors
  • Supabase Insert node – Writes embeddings and metadata to a vector-enabled table
  • Supabase Query + Tool nodes – Expose the vector store as a retriever tool to LangChain
  • Memory node – Maintains short-term conversational context for the agent
  • Chat (OpenAI) node – Provides the core large language model for itinerary generation
  • Agent (LangChain) node – Orchestrates tools, memory, and the LLM with a tailored prompt
  • Google Sheets node – Appends each request and generated itinerary to a logging sheet

The end-to-end flow is:

  1. Client sends POST request to /travel_itinerary_builder
  2. Workflow parses the payload and prepares any text content for embedding
  3. Text is split, embedded with Cohere, and stored in Supabase under the index travel_itinerary_builder
  4. When generating, the agent queries Supabase via a Tool node for relevant chunks
  5. Agent uses retrieved context, memory, and business rules to construct a structured itinerary
  6. Result plus metadata is appended to Google Sheets and returned to the client

3. Node-by-Node Breakdown

3.1 Webhook Node – Inbound Request Handling

Purpose: Entry point for external clients to trigger itinerary generation.

Endpoint: /travel_itinerary_builder (HTTP POST)

Expected JSON payload structure (example):

{  "user_id": "123",  "destination": "Lisbon, Portugal",  "start_date": "2025-10-10",  "end_date": "2025-10-14",  "travelers": 2,  "interests": "food, historical sites, beaches",  "budget": "moderate"
}

Key fields:

  • user_id – Identifier for the requester, used for logging and potential personalization
  • destination – City or region for the trip
  • start_date, end_date – ISO-8601 dates defining the travel window
  • travelers – Number of travelers, used to inform recommendations
  • interests – Free-text description of preferences (e.g. food, museums, beaches)
  • budget – Qualitative budget level (e.g. low, moderate, high)

Configuration notes:

  • Method should be set to POST.
  • Make sure the Webhook URL is reachable from your client (use a tunnel like ngrok for local development).
  • Validate that Content-Type: application/json is set by the caller.

Edge cases & error handling:

  • If required fields are missing or malformed, handle validation either in the Webhook node or a subsequent Function node before proceeding to embeddings.
  • Consider returning explicit HTTP error codes (4xx) when validation fails.
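
If you validate in a Function or Code node rather than in the Webhook node itself, the check can be as simple as the sketch below (plain TypeScript, independent of n8n internals; field names match the example payload above):

interface ItineraryRequest {
  user_id: string;
  destination: string;
  start_date: string;   // ISO-8601
  end_date: string;     // ISO-8601
  travelers: number;
  interests: string;
  budget: string;
}

// Returns a list of validation errors; an empty list means the payload is usable.
function validatePayload(body: Partial<ItineraryRequest>): string[] {
  const errors: string[] = [];
  const required: (keyof ItineraryRequest)[] = [
    "user_id", "destination", "start_date", "end_date", "travelers", "interests", "budget",
  ];
  for (const field of required) {
    if (body[field] === undefined || body[field] === null || body[field] === "") {
      errors.push(`Missing field: ${field}`);
    }
  }
  const isoDate = /^\d{4}-\d{2}-\d{2}$/;
  if (body.start_date && !isoDate.test(body.start_date)) errors.push("start_date must be YYYY-MM-DD");
  if (body.end_date && !isoDate.test(body.end_date)) errors.push("end_date must be YYYY-MM-DD");
  if (body.travelers !== undefined && (!Number.isInteger(body.travelers) || body.travelers < 1)) {
    errors.push("travelers must be a positive integer");
  }
  return errors;
}

If the returned list is non-empty, respond with a 4xx status and the error messages instead of continuing to the embedding steps.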

3.2 Text Splitter Node

Purpose: Segment long text inputs into smaller chunks suitable for embedding and retrieval.

Typical input sources:

  • Extended notes from the user (e.g. special constraints or detailed preferences)
  • Pre-loaded travel guides or descriptions associated with the destination

Key parameters:

  • chunkSize: 400
  • chunkOverlap: 40

Behavior:

  • Splits long text into chunks of approximately 400 characters.
  • Overlaps consecutive chunks by 40 characters to preserve continuity and local context.
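
For intuition, the behavior above corresponds roughly to the following TypeScript sketch (an illustration of the chunking logic, not the node's actual implementation):

// Split text into chunks of roughly `chunkSize` characters,
// with `chunkOverlap` characters repeated at the start of each new chunk.
function splitText(text: string, chunkSize = 400, chunkOverlap = 40): string[] {
  const chunks: string[] = [];
  const step = chunkSize - chunkOverlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}

The overlap means the last 40 characters of one chunk reappear at the start of the next, which is what preserves continuity across chunk boundaries.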

Configuration tips:

  • Increase chunkSize if context feels too fragmented or the LLM is missing cross-sentence relationships.
  • Decrease chunkSize if you hit embedding size limits or latency becomes an issue.
  • Adjust chunkOverlap to balance redundancy against storage and query cost.

3.3 Cohere Embeddings Node

Purpose: Convert each text chunk into a dense vector representation suitable for similarity search.

Input: Chunked text from the Text Splitter node.

Output: An array of numeric vectors, one per chunk.

Configuration:

  • Credentials: Cohere API key configured in n8n credentials.
  • Model: Any Cohere embedding model that supports your language and cost constraints.

Performance tips:

  • Select an embedding model that balances cost and accuracy for typical travel content.
  • Batch multiple chunks in a single request when possible to reduce overhead and latency.
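
As an illustration of batching, the sketch below embeds a whole list of chunks in one request; the endpoint, model name, and response shape follow Cohere's commonly documented v1 embed API, but verify them against your account and SDK version:

// Embed a batch of chunks in a single Cohere API call.
async function embedChunks(chunks: string[], apiKey: string): Promise<number[][]> {
  const response = await fetch("https://api.cohere.com/v1/embed", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "embed-english-v3.0",   // assumption: use any embedding model you have access to
      input_type: "search_document", // documents being indexed, as opposed to search queries
      texts: chunks,                 // one request for the whole batch
    }),
  });
  if (!response.ok) {
    throw new Error(`Cohere embed failed: ${response.status} ${await response.text()}`);
  }
  const data = await response.json();
  return data.embeddings as number[][]; // one vector per input chunk
}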

Debugging:

  • Inspect the shape and length of the returned vectors if you encounter Supabase insertion errors.
  • Review Cohere error messages for rate limits or invalid credentials.

3.4 Supabase Vector Store – Insert Node

Purpose: Persist embeddings and their associated metadata in a Supabase vector-enabled table.

Index name: travel_itinerary_builder

Input:

  • Embedding vectors from the Cohere node
  • Metadata such as chunk text, user ID, destination, and timestamps

Configuration:

  • Credentials: Supabase project URL and API key configured as n8n credentials.
  • Vector extension: Ensure the Supabase project has the vector extension enabled.
  • Table or index: Point the Insert node to the table used as your vector store, aligned with the index name travel_itinerary_builder.

Recommended metadata fields:

  • user_id – For traceability and personalization
  • destination – To filter or shard by location
  • source – E.g. “user_input” or “guide_document”
  • created_at – Timestamp for lifecycle management
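
With the supabase-js client, persisting one chunk together with this metadata might look like the sketch below; the table layout (content, metadata, and embedding columns on a table named travel_itinerary_builder) is an assumption and should match your actual vector store schema:

import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_KEY!);

// Persist one chunk plus its embedding and metadata (assumed table and column names).
async function storeChunk(chunkText: string, embedding: number[], userId: string, destination: string) {
  const { error } = await supabase.from("travel_itinerary_builder").insert({
    content: chunkText,
    embedding, // pgvector column
    metadata: {
      user_id: userId,
      destination,
      source: "user_input",
      created_at: new Date().toISOString(),
    },
  });
  if (error) throw new Error(`Supabase insert failed: ${error.message}`);
}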

Operational notes:

  • Monitor table size and query performance as the index grows.
  • Implement cleanup or archiving strategies if the vector store becomes very large.

3.5 Supabase Query & Tool Node (Retriever)

Purpose: Retrieve the most relevant chunks from Supabase to inform itinerary generation, and expose this retrieval as a LangChain tool.

Behavior:

  • At generation time, the agent issues a query that is translated into a vector similarity search against the travel_itinerary_builder index.
  • The Tool node wraps this query capability so the LangChain agent can call it dynamically during reasoning.

Configuration notes:

  • Set the number of results to retrieve according to how much context the LLM can handle without becoming overwhelmed.
  • Optionally filter by destination, user ID, or other metadata to narrow down relevant documents.

Debugging tips:

  • Test the Supabase query in isolation to confirm that you get sensible matches for a given destination.
  • Inspect tool output in the agent logs to ensure the retriever is returning the expected chunks.
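
Testing the similarity search in isolation is easiest with a small script; the sketch below assumes the common pattern of a Postgres function (hypothetically named match_documents) exposed via Supabase RPC that takes a query embedding, a result count, and a metadata filter:

import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_KEY!);

// Retrieve the top-k chunks most similar to a query embedding (assumed RPC name and arguments).
async function retrieveContext(queryEmbedding: number[], destination: string, k = 5) {
  const { data, error } = await supabase.rpc("match_documents", {
    query_embedding: queryEmbedding,
    match_count: k,
    filter: { destination }, // narrow results to the requested destination
  });
  if (error) throw new Error(`Supabase similarity search failed: ${error.message}`);
  return data; // typically rows with content, metadata, and a similarity score
}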

3.6 Memory Node

Purpose: Provide short-term conversational memory for the LangChain agent.

Usage in this template:

  • Stores the recent conversation or input context so the agent can reference prior steps within the same workflow run.
  • Helps the agent maintain consistency about user preferences, constraints, and previous tool calls.

Configuration considerations:

  • Configure memory window size so it captures relevant context without exceeding token limits.
  • Ensure memory is scoped to a single request to avoid cross-user data leakage.
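
Conceptually, a per-request window memory behaves like the sketch below; this is only an illustration of windowing and request scoping, not how the n8n memory node is implemented:

// Keep only the last `windowSize` messages per request, so memory never
// grows unbounded and is never shared across users.
class WindowMemory {
  private store = new Map<string, string[]>();
  constructor(private windowSize = 6) {}

  add(requestId: string, message: string): void {
    const history = this.store.get(requestId) ?? [];
    history.push(message);
    this.store.set(requestId, history.slice(-this.windowSize));
  }

  get(requestId: string): string[] {
    return this.store.get(requestId) ?? [];
  }

  clear(requestId: string): void {
    this.store.delete(requestId); // drop everything once the run finishes
  }
}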

3.7 Chat (OpenAI) Node

Purpose: Provide the core LLM that generates natural language itinerary content.

Input:

  • Prompt content constructed by the Agent node
  • Retrieved context from the Supabase Tool
  • Memory state with recent exchanges

Configuration:

  • Credentials: OpenAI API key (or an alternative supported LLM provider configured in n8n).
  • Model: Choose a chat-optimized model suitable for multi-step reasoning and structured output.

Behavior:

  • Generates the final itinerary text, including a day-by-day breakdown that respects user preferences and constraints.

Cost control:

  • Use smaller or cheaper models for prototyping and scale up only if quality is insufficient.
  • Limit maximum tokens per response to control usage.
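
Both cost levers map directly onto request parameters; the sketch below calls OpenAI's chat completions endpoint with an explicit token ceiling, and the model name is only an assumption for prototyping:

// Call the chat model with a hard token ceiling to cap per-request cost.
async function generateItinerary(systemPrompt: string, userPrompt: string, apiKey: string) {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // assumption: start with a cheaper model, upgrade only if quality is lacking
      max_tokens: 1200,     // hard ceiling on response length
      temperature: 0.7,
      messages: [
        { role: "system", content: systemPrompt },
        { role: "user", content: userPrompt },
      ],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content as string;
}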

3.8 Agent (LangChain) Node

Purpose: Orchestrate the LLM, memory, and tools (including the Supabase retriever) to build a coherent itinerary under explicit business rules.

Core responsibilities:

  • Define the system prompt and instructions for how to use retrieved context.
  • Instruct the LLM to respect user constraints such as budget, accessibility, and trip pace.
  • Structure the output in a predictable format, typically day-by-day.

Prompt design recommendations:

  • Explicitly instruct the agent to:
    • Use retrieved chunks as factual context.
    • Respect budget levels and avoid suggesting activities that conflict with constraints.
    • Balance different interest categories across days (e.g. food, historical sites, beaches).
  • Specify a clear output schema, for example:
    • Day 1: Morning, Afternoon, Evening
    • Day 2: …
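
A system prompt that follows these recommendations might be assembled like the sketch below; the wording and schema are illustrative, not the template's exact prompt:

// Build a system prompt that encodes the business rules and output schema.
function buildSystemPrompt(req: {
  destination: string; start_date: string; end_date: string;
  travelers: number; interests: string; budget: string;
}): string {
  return [
    "You are a travel-planning assistant.",
    "Use ONLY the retrieved context chunks as factual information about the destination.",
    `Plan for ${req.travelers} traveler(s) visiting ${req.destination} from ${req.start_date} to ${req.end_date}.`,
    `Respect a ${req.budget} budget; do not suggest activities that conflict with it.`,
    `Balance these interests across the days: ${req.interests}.`,
    "Output format, one block per day:",
    "Day N:",
    "  Morning: ...",
    "  Afternoon: ...",
    "  Evening: ...",
  ].join("\n");
}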

Debugging:

  • Log intermediate tool calls and the memory state to verify that the agent is using the retriever correctly.
  • Iterate on the prompt template if the agent ignores constraints or produces inconsistent structure.

3.9 Google Sheets Node – Logging

Purpose: Persist each itinerary generation event for analytics, auditing, and manual review.

Configuration:

  • Credentials: Google Sheets API credentials configured in n8n.
  • Sheet ID: Target spreadsheet identifier.
  • Tab name: Log
  • Operation: Append row

Typical logged fields:

  • User ID
  • Destination and dates
  • Interests and budget
  • Generated itinerary text
  • Timestamps and any internal run identifiers
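
Keeping the appended row consistent is easier if the record is built as a single object before mapping it to spreadsheet columns; a sketch of its shape (column names are assumptions matching the list above):

// Shape of one log row appended to the "Log" tab (assumed column names).
interface ItineraryLogRow {
  user_id: string;
  destination: string;
  start_date: string;
  end_date: string;
  interests: string;
  budget: string;
  itinerary: string; // full generated text
  run_id: string;    // internal workflow execution identifier
  logged_at: string; // ISO-8601 timestamp
}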

Operational tip: Maintain separate sheets for development and production to avoid mixing test data with real analytics.

4. Configuration Checklist

Before enabling the workflow in n8n, verify the following prerequisites:

  • An active n8n instance (self-hosted or n8n cloud) with access to the internet.
  • A Supabase project:
    • Vector extension enabled.
    • Table configured as a vector store with an index name travel_itinerary_builder.
    • API keys created and stored as n8n credentials.
  • A Cohere account:
    • API key configured in n8n for the Embeddings node.
  • An OpenAI API key (or another supported LLM provider) for the Chat node.
  • A Google account with:
    • Sheets API credentials configured in n8n.
    • Target Sheet ID and a tab named Log.
  • A reachable Webhook URL:
    • For local development, use a tunneling solution like ngrok to expose the Webhook endpoint.
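
Once every item on the checklist is in place, a quick end-to-end test is to POST the example payload from section 3.1 to the webhook; a minimal client sketch (the base URL is a placeholder for your n8n instance or tunnel):

// Smoke-test the deployed workflow end to end (placeholder base URL).
async function testWorkflow() {
  const response = await fetch("https://your-n8n-host/webhook/travel_itinerary_builder", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      user_id: "123",
      destination: "Lisbon, Portugal",
      start_date: "2025-10-10",
      end_date: "2025-10-14",
      travelers: 2,
      interests: "food, historical sites, beaches",
      budget: "moderate",
    }),
  });
  console.log(response.status, await response.text());
}

testWorkflow();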

5. Node-Specific Guidance & Tuning

5.1 Text Splitter Node

  • Increase chunkSize if the LLM needs more context per chunk.
  • Decrease chunkSize if embedding calls become too large or slow.
  • Adjust chunkOverlap to reduce duplicated information while still preserving continuity between chunks.

5.2 Cohere Embeddings Node

  • Select a model optimized for semantic similarity tasks over descriptive travel content.
  • Use batching when embedding many chunks in one run to reduce network overhead.

5.3 Supabase Vector Store

  • Keep the index name consistent (travel_itinerary_builder) across Insert and Query operations.
  • Persist rich metadata:
    • Chunk source (user input vs. guide)
    • User ID
    • Destination and language
    • Timestamps
  • Monitor storage and query costs as the dataset grows and adjust retention policies if required.

5.4 Agent & Prompting

  • Make constraints explicit:
    • Budget tiers (low, moderate, high)
    • Accessibility needs