Sync Discord Scheduled Events to Google Calendar with n8n
On a Tuesday night, just before another community meetup, Alex stared at three different calendars and sighed.
As the community manager for a fast-growing Discord server, Alex lived in a world of scheduled events, livestreams, office hours, and workshops. Discord showed one list, Google Calendar showed another, and nobody was ever completely sure which one was right. People missed events, joined at the wrong time, or pinged Alex with the same question again and again:
“Is this still happening?”
The problem was simple but painful. Events lived in Discord, while the team and wider community lived in Google Calendar. Every new Discord scheduled event meant another round of manual copy-paste into Google Calendar, plus updates if anything changed. One mistake could mean a no-show speaker or a confused audience.
Alex needed a way to sync Discord scheduled events to Google Calendar automatically, without babysitting two systems all day.
The problem: Two calendars, one overwhelmed community manager
Alex’s pain points will sound familiar to anyone running an active Discord community:
Events were created in Discord, but the team relied on a shared Google Calendar.
Every new event required double data entry, plus manual updates if times or details changed.
Reminders and integrations were all tied to Google Calendar, not Discord.
Even with the best intentions, things slipped through the cracks. Sometimes the Discord event looked perfect, but the Google version was missing a description or had the wrong time. Other times, Alex forgot to update Google Calendar after tweaking a Discord event. The more the community grew, the more fragile this system felt.
Then, during a late-night search for “sync Discord scheduled events to Google Calendar,” Alex discovered something promising: an n8n workflow template built specifically for this problem.
Discovering the n8n workflow template
Alex already knew n8n as a flexible automation tool that could connect APIs and apps without writing full-blown backend code. But this template was different. It was designed to do exactly what Alex needed:
Periodically pull scheduled events from a Discord server.
Use each Discord event id as the Google Calendar event ID.
Check if the event already existed in Google Calendar, then either create or update it.
Only sync new or changed events, not everything every time.
If it worked, Alex could finally centralize everything in Google Calendar while still letting the community team create and manage events directly in Discord. No more copy-paste, no more guessing which calendar was right.
What Alex needed to get started
Before importing the template, Alex gathered the essentials:
An n8n instance, running in the cloud.
A Discord bot token, with the bot already invited to the server and allowed to view scheduled events.
A Google account with OAuth credentials that had the Calendar scope enabled.
The n8n workflow template that would connect Discord scheduled events with Google Calendar.
With that checklist complete, it was time to wire everything together.
Inside the workflow: How the automation actually works
As Alex opened the template in n8n, the pieces started to click into place. The workflow was made up of several nodes, each with a clear role:
On schedule – triggered the workflow at regular intervals so the sync ran automatically.
Set (Configure) – stored the Discord guild_id (server ID) as a variable for later use.
HTTP Request (List scheduled events from Discord) – called the Discord API to fetch scheduled events from the server.
Google Calendar (Get events) – checked if a Google Calendar event already existed with the same ID.
If (Create or update?) – decided whether to create a new event or update an existing one.
Google Calendar (Create event / Update event details) – actually wrote the events into the selected Google Calendar.
It was like watching a well-organized assembly line. The only question was whether Alex could configure each part correctly.
Rising action: Setting up Discord, Google, and n8n
1. Bringing the Discord bot into the story
The first step was to make sure Discord would actually talk to n8n.
Alex opened the Discord Developer Portal, created a new bot, and invited it to the community guild. The bot was granted permissions to view scheduled events, nothing more. Security mattered, and there was no reason to give it extra powers.
The bot token was copied carefully and stored somewhere safe. This token would later be used as an HTTP header credential in n8n so the workflow could authenticate with the Discord API.
2. Adding credentials inside n8n
Next, Alex moved into n8n’s Credentials section and created two connections:
Header Auth for Discord: Alex configured a header with:
Name: Authorization
Value: Bot <your_token>
For example: Bot MTEzMTgw...uQdg
Google Calendar OAuth2: Using Google’s client ID and client secret, Alex granted the workflow the required scope: https://www.googleapis.com/auth/calendar
With both credentials in place, n8n was ready to bridge Discord and Google Calendar.
3. Configuring the template nodes
Inside the workflow template, Alex opened the Set (Configure) node and replaced the placeholder with the actual Discord server ID:
guild_id = <your Discord server ID>
Then, in the Google Calendar nodes, Alex selected the target calendar from the list, ensuring all events would land in the same shared place the team already used.
The heart of the Discord side was the HTTP Request node, which called this endpoint:
GET https://discord.com/api/guilds/{{ $json.guild_id }}/scheduled-events?with_user_count=true
Alex made sure this node used the previously created Header Auth credential so the Authorization header included:
Bot <token>
With that, the workflow could now list scheduled events from Discord and pass them down the line.
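Before mapping anything, it helped Alex to see the shape of the payload. Here is a trimmed example of one scheduled-event object; the field names follow Discord’s documented schema, but the values are invented for illustration:
{
  "id": "1153243858916745216",
  "guild_id": "290926798626357250",
  "name": "Community Town Hall",
  "description": "Monthly community call with Q&A.",
  "scheduled_start_time": "2024-05-21T18:00:00+00:00",
  "scheduled_end_time": "2024-05-21T19:00:00+00:00",
  "entity_metadata": { "location": "https://example.com/stream" },
  "user_count": 42
}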
The turning point: Mapping Discord events into Google Calendar
The real magic came when Alex started to map Discord’s event fields into Google Calendar using n8n expressions.
Each event from Discord had data like start time, end time, title, location, and description. The template guided Alex to connect those fields to the Google Calendar create and update nodes.
Inside the Google Calendar nodes, Alex used expressions to map values from the HTTP response:
Start: {{$node["List scheduled events from Discord"].item.json.scheduled_start_time}}
End: {{$node["List scheduled events from Discord"].item.json.scheduled_end_time}}
Summary / Title: {{$node["List scheduled events from Discord"].item.json.name}}
Location: {{$node["List scheduled events from Discord"].item.json.entity_metadata.location}}
Description: {{$node["List scheduled events from Discord"].item.json.description}}
Event ID (Google Calendar event ID): {{$node["List scheduled events from Discord"].item.json.id}}
That last mapping was the clever one. By using the Discord event ID as the Google Calendar event ID, Alex made sure that:
The workflow could detect whether an event already existed in Google Calendar.
Future runs would update the same event instead of creating duplicates.
The If node then checked if a Google event with that ID existed. If it did, the workflow updated it. If not, it created a new one.
Handling real-world details: Timezones, null fields, and errors
Before trusting the workflow with live events, Alex needed to make sure it could handle real-world messiness.
Timezone and date formats
Discord’s scheduled event timestamps arrived as ISO 8601 strings, which worked well with Google Calendar. Still, Alex double-checked that the Google Calendar nodes were receiving ISO timestamps and set the timezone explicitly where needed.
If there had been any mismatch, Alex could have used an n8n Function node to transform the timestamps before passing them on.
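A minimal sketch of such a Function node, assuming the classic API where `items` comes in and an array of items goes out:
// Re-serialize Discord timestamps as UTC ISO strings and guarantee an end time.
return items.map(item => {
  const start = new Date(item.json.scheduled_start_time);
  // Voice-channel events may lack an end time; default to one hour after the start.
  const end = item.json.scheduled_end_time
    ? new Date(item.json.scheduled_end_time)
    : new Date(start.getTime() + 60 * 60 * 1000);
  item.json.scheduled_start_time = start.toISOString();
  item.json.scheduled_end_time = end.toISOString();
  return item;
});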
Handling null or missing fields
Not every Discord event had a location or description. Some were simple voice chats, others were quick ad-hoc sessions. To keep those from causing errors, Alex added simple checks so that missing fields would default to safe values.
Using Set or Function nodes, Alex could supply fallback text like “Online event” or leave fields blank in a controlled way, rather than letting null values break the workflow.
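For example, the mappings in the Google Calendar nodes can be wrapped in fallback expressions like these (a sketch; adjust the default strings to taste):
// Fallbacks for optional Discord fields
Description ={{ $json.description || "No description provided" }}
Location ={{ ($json.entity_metadata || {}).location || "Online event" }}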
Error handling and rate limits
Alex knew that Discord’s API had rate limits, so setting the schedule to run every few seconds would not be smart. Instead, the workflow was scheduled at a reasonable interval, such as every 5 to 15 minutes, depending on how often new events were created.
For extra robustness, Alex enabled n8n’s error handling features, like:
Continue On Fail for non-critical nodes.
An error workflow (via the Error Trigger node) to log and manage failures gracefully.
Optional retry logic for temporary network or API issues.
Security practices
Alex kept security in mind throughout the setup:
All sensitive data, such as the Discord bot token and Google OAuth credentials, lived inside n8n credentials, never in plain text inside nodes.
The Discord bot’s permissions were limited to the minimum needed to list scheduled events, nothing more.
Troubleshooting along the way
Not everything worked perfectly on the first run. Alex hit a few bumps, each of which pointed back to common misconfigurations.
401 errors from Discord: When the HTTP Request node briefly returned a 401, Alex discovered that the Authorization header was missing the Bot prefix. Fixing it to Bot <token> resolved the issue.
Events not appearing in Google Calendar: On another run, nothing showed up in the calendar. The problem turned out to be an incorrect Calendar ID and a missing authorization step in the Google credential. Once Alex re-authorized the credential and selected the right calendar, events began to appear.
Field path confusion: To verify the JSON structure from Discord, Alex used n8n’s debug output. Inspecting the raw API response helped confirm that expressions like .scheduled_start_time and .entity_metadata.location were pointing to the right fields.
Life after automation: A calm, synced calendar
After a few successful test runs, Alex scheduled the workflow to run every 10 minutes. The next time someone created a scheduled event in Discord, it quietly appeared in the team’s Google Calendar with the correct title, time, and description.
When an event’s time was updated in Discord, the Google Calendar entry shifted too. No duplicates, no forgotten edits, no frantic last-minute pings.
Alex’s world changed in subtle but powerful ways:
The community could see all upcoming events in a single shared Google Calendar.
The team could rely on their existing Google Calendar reminders and integrations.
Discord remained the source of truth for event creation, while n8n handled the syncing behind the scenes.
Instead of fighting with calendars, Alex could finally focus on what mattered: growing the community and running great events.
Ideas for extending the workflow
Once the core sync was stable, Alex started to think about what else could be automated around Discord events and Google Calendar.
Notifications: Send a message to a Slack channel or a Discord text channel whenever an event is created or updated in Google Calendar.
Attendee invites: Map Discord usernames or roles to email addresses and add them as attendees in Google Calendar, using a clear permissions model.
Filtering events: Sync only certain events, such as those in a specific channel or with a particular status, instead of mirroring everything.
Key n8n expressions Alex used
Here is a quick reference of the core expressions that powered Alex’s workflow, so you can adapt or copy them into your own n8n setup:
// Calendar event ID (use Discord event id)
={{ $('List scheduled events from Discord').item.json.id }}
// Start and end timestamps
={{ $('List scheduled events from Discord').item.json.scheduled_start_time }}
={{ $('List scheduled events from Discord').item.json.scheduled_end_time }}
// Title, location, description
={{ $('List scheduled events from Discord').item.json.name }}
={{ $('List scheduled events from Discord').item.json.entity_metadata.location }}
={{ $('List scheduled events from Discord').item.json.description }}
Resolution: From chaos to a clean, automated calendar
What started as a constant headache for Alex turned into a quiet, reliable automation. By using an n8n workflow template to sync Discord scheduled events to Google Calendar, the gap between community tooling and team visibility finally closed.
The core pieces that made it work were simple but powerful:
Using the Discord event ID as the Google Calendar event ID.
Scheduling the workflow to run automatically on a regular interval.
Mapping Discord fields directly into Google Calendar using n8n expressions.
Handling timezones, null fields, and API errors with care.
Now, when someone on the team asks, “Is this event on the calendar?”, Alex can confidently say, “If it is scheduled in Discord, it is already there.”
Take the next step
If you are running a Discord community and juggling separate calendars, you do not have to stay in that loop. You can follow the same path Alex did:
Import the n8n workflow template.
Add your Discord bot and Google Calendar credentials.
Set your guild_id and target Calendar ID.
Run a test with a single event, then schedule it to run automatically.
If you want help customizing the workflow for filters, notifications, or timezone handling, reach out or subscribe to our tutorials for more step-by-step automation guides.
Call-to-action: Copy the template, set your credentials, and run the workflow. If you get stuck, share your setup details and we will walk you through troubleshooting so your Discord events and Google Calendar stay perfectly in sync.
Use n8n, the Notion API, and an AI language model to deliver a fast, reliable knowledge base assistant on top of your existing documentation. This guide explains the architecture of the n8n template, its key nodes and integrations, and the configuration steps required to deploy an AI chat assistant that answers questions directly from your Notion workspace.
Why automate a Notion knowledge base with n8n?
Notion has become a standard repository for internal documentation, product specs, and operational runbooks. As these spaces grow, manual search and human support do not scale. An n8n-powered assistant built on top of Notion enables you to:
Provide fast, consistent answers from canonical documentation
Reduce repetitive questions to support, IT, and operations teams
Deliver context-aware responses that reference specific Notion pages
Expose the same knowledge layer across Slack, email, or web chat
By orchestrating Notion queries and an AI model through n8n, you get a controllable, auditable automation workflow instead of a black-box chatbot.
Solution architecture at a glance
The template implements a deterministic pipeline that receives a user question, retrieves relevant content from Notion, and returns an AI-generated answer grounded in that content. At a high level, the workflow consists of:
A public chat webhook that triggers the workflow
Metadata retrieval from the target Notion database
Schema normalization for consistent downstream processing
An AI agent node that coordinates tool calls and reasoning
Dedicated tools for searching the Notion database and reading page content
The following sections walk through these components in a logical sequence, then cover setup, optimization, and troubleshooting.
Core workflow components in n8n
1. Chat trigger – entry point for user questions
The workflow begins with a public chat webhook. This node exposes a URL that can be called from a frontend chat widget, Slack command, or any internal tool. It receives the raw user input and passes it into the automation pipeline as the primary question payload.
2. Notion database metadata retrieval
The next stage queries Notion for database metadata. The Get database details node fetches information such as:
Database ID
Available properties and tags
Structural details that inform filtering strategies
This metadata allows the AI agent to understand which fields it can filter on and how the knowledge base is organized. Although this step can be cached for performance, the template runs it on each execution to ensure up-to-date context.
3. Schema formatting and normalization
Before the agent receives the request, a transformation step standardizes the data structure. The Format schema logic ensures that fields such as:
Session ID
Action type
User message text
Database ID
Tag options and other metadata
are normalized into a predictable schema. This reduces complexity in the agent node and makes the overall workflow more robust to future changes or additional integrations.
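As an illustration only, a normalization step along these lines could be implemented in a Function node; the property names below are assumptions, not the template’s exact keys:
// Normalize incoming chat payloads into a predictable schema (illustrative sketch).
return items.map(item => ({
  json: {
    sessionId: item.json.sessionId || 'anonymous',
    action: item.json.action || 'sendMessage',
    chatInput: item.json.chatInput || '',
    databaseId: item.json.databaseId,
    tagOptions: item.json.tagOptions || [],
  },
}));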
4. AI agent – orchestration and reasoning layer
The central component of the template is an AI agent node that manages when and how to call Notion tools. It receives:
The normalized user question
Database metadata and tag information
Access to two tools:
Search Notion database
Search inside database record
The agent is instructed via system prompts to:
Only answer using content retrieved from Notion
Be concise and fact-based
Avoid hallucinating or inventing information
Include the source Notion page URL when relevant
Based on the question, the agent decides when to search the database, which records to inspect, and how to synthesize the final answer from the retrieved content.
5. Searching the Notion database
The Search Notion database tool interacts with the Notion API to locate candidate pages. In the template, the search strategy typically includes:
Keyword matching on the question text
Tag-based filters derived from the database metadata
An OR relationship between text and tags to allow partial matches
The search returns a ranked or sorted list of relevant pages. The agent then selects the most promising candidates for deeper inspection.
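Expressed as a Notion API query body, such a strategy could look like the following sketch; {database_id} is a placeholder, and the property names “Name” and “Tags” stand in for whatever your own database schema uses:
POST https://api.notion.com/v1/databases/{database_id}/query
{
  "filter": {
    "or": [
      { "property": "Name", "title": { "contains": "hardware" } },
      { "property": "Tags", "multi_select": { "contains": "IT" } }
    ]
  },
  "sorts": [
    { "timestamp": "last_edited_time", "direction": "descending" }
  ]
}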
6. Retrieving content from a specific Notion page
Once a candidate record is identified, the Search inside database record tool fetches the full page content, including blocks and child elements. The agent uses this rich text to:
Extract precise steps or policies
Build an evidence-based answer
Attach the direct Notion URL to the response
This approach ensures that the AI output is traceable back to a specific source document, which is critical in enterprise environments.
Step-by-step setup in n8n
To deploy the template in your environment, follow these steps:
Create a Notion integration: In Notion, create a new integration and grant it access to the database that will serve as your knowledge base. Store the integration token securely, as it will be required for n8n credentials.
Prepare or duplicate the Notion database: If you are using the provided example, duplicate the Notion knowledge base template into your workspace. Confirm that the integration created in step 1 has explicit access to this duplicated database.
Import the n8n workflow template: In your n8n instance, import the Notion knowledge base assistant workflow. Configure Notion credentials for the following nodes:
Get database details
Search Notion database
Search inside database record
Ensure that all these nodes reference the correct Notion integration and database.
Connect the AI model: Add an OpenAI or compatible LLM credential to the chat model node used by the agent. The template is designed for GPT-style models. Adjust:
Timeout values to accommodate expected latency
Temperature to control determinism and creativity (lower values are recommended for knowledge base use cases)
Test the full chat flow: Use the test chat functionality in n8n to send sample questions. Validate that:
Notion database searches return relevant pages
Page content is retrieved correctly
Responses include Notion page URLs when they are used as sources
Activate and integrate the webhook: Once validated, activate the workflow. Copy the public chat URL from the webhook node and integrate it into your preferred interface (a request sketch follows this list), for example:
A custom web chat UI
Slack or Microsoft Teams bots
Internal portals or tools
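For instance, a frontend or script could call the webhook with a plain HTTP POST. In this hypothetical sketch, the URL is a placeholder for your webhook’s public path, and the payload keys follow n8n’s chat conventions, which may vary by version:
// Hypothetical client-side call to the n8n chat webhook
(async () => {
  const res = await fetch('https://your-n8n-host/webhook/your-chat-path', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      sessionId: 'user-123', // keeps conversation context per user
      chatInput: 'How do I request hardware?',
    }),
  });
  console.log(await res.json());
})();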
Configuration strategies and best practices
Controlling hallucination and enforcing source grounding
For a production-ready knowledge base assistant, controlling hallucination is essential. In this template:
System prompts are written to explicitly forbid inventing facts
The agent is instructed to answer only using content retrieved from Notion
Responses should always reference the underlying Notion page when a page is used
Periodically review prompts and logs, and refine the system messages if you observe off-source or speculative answers.
Optimizing Notion search filters
Search configuration has a direct impact on answer quality. Recommended practices include:
Start with broad keyword matching on the question text
Layer in tag filters or updated date ranges for precision
Use an OR combination of text and tags, as in the template, to handle partial or imperfect queries
Iterate filters based on real query logs and missed results
The template provides a baseline search implementation that you can adapt to your specific schema and naming conventions.
Managing performance and cost-sensitive steps
Certain operations can introduce latency or additional cost. In particular:
Get database details runs on every execution and typically adds around 250-800 ms
Notion API calls for large pages can be relatively slow
LLM calls are often the most expensive and time-consuming step
To optimize performance (a metadata-caching sketch follows this list):
Cache database metadata if live tag updates are not required
Consider a scheduled workflow to periodically refresh metadata into a static store
Use lower temperature and appropriate timeouts on the LLM
Limit the number of pages the agent inspects per query
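One way to cache metadata, sketched as an n8n Code node using workflow static data; note that n8n persists static data only for active (production) executions, not manual test runs:
// Cache database metadata and refresh it at most once per hour (sketch).
const staticData = $getWorkflowStaticData('global');
const oneHour = 60 * 60 * 1000;
if (!staticData.dbMeta || Date.now() - staticData.dbMetaFetchedAt > oneHour) {
  // Assumes this node runs right after "Get database details".
  staticData.dbMeta = $input.first().json;
  staticData.dbMetaFetchedAt = Date.now();
}
return [{ json: staticData.dbMeta }];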
Example: how the agent responds to a real query
Consider a user asking: “How do I request hardware?”
The agent triggers a Notion database search for the term “hardware” and any related tags.
The search tool returns candidate pages, such as a procurement or IT equipment policy document.
The agent selects the top result and calls the page content tool to retrieve the detailed steps.
Using the retrieved text, the agent summarizes the process and includes a direct link to the relevant Notion page.
Example (shortened) response:
To request hardware, follow the steps outlined in our procurement process: Notion: Procurement Process. The request form is linked under “Request Hardware” on that page. If needed, I can guide you through the form fields or help you contact the procurement team.
Common errors and troubleshooting
“Resource not found” in Get database details
This error typically indicates that the Notion integration does not have permission to access the target database. To resolve:
Open the database in Notion
Share it with the integration used in n8n
Re-run the workflow and confirm the node can read metadata
No matching records returned from Notion
If the agent cannot find relevant pages:
Test alternative keywords, including plural and singular forms
Add synonyms or alternative phrasing to your Notion content and tags
Review the database filters configured in the search tool
The agent includes fallback strategies, such as trying closely related terms, but high-quality tagging and content structure remain critical.
Slow or inconsistent response times
When performance issues arise, identify which node contributes most to the latency:
If Get database details is slow, consider caching metadata
If Notion API calls are slow, review page size and structure
If the LLM is slow, adjust timeout settings or use a more performant model tier
Monitoring execution times per node in n8n will help you target optimizations effectively.
Extending and hardening the assistant
Once the baseline assistant is stable, you can extend it to cover more use cases and governance requirements:
Multi-channel access: Connect the chat webhook to Slack, Microsoft Teams, or a custom web interface.
Role-based access control: Incorporate user identity and permissions so that responses only reference pages the user is allowed to see.
Analytics and observability: Log queries, response times, top search terms, and unanswered questions to guide documentation improvements.
Rich content support: Include images or file attachments from Notion pages in responses where appropriate.
Content and operations best practices
Keep Notion pages concise and structured using headings, lists, and clear sections to improve extractability.
Apply tags consistently across the database to improve search relevance and filtering.
Use explicit versioning and updated dates so the agent can prioritize the most recent information.
Continuously log queries and collect human feedback to refine prompts, filters, and documentation quality.
Conclusion
By combining Notion, n8n, and an AI language model, you can deliver a practical, extensible knowledge base assistant that answers questions with documented facts and verifiable links. The template described here provides a production-ready foundation, including a chat webhook trigger, Notion search tools, schema formatting, and an AI agent that composes grounded responses.
To get started quickly, duplicate the Notion database template if available, connect your Notion and OpenAI credentials in n8n, and test the chat webhook with real queries. Over time, iterate on prompts, search filters, and your Notion structure based on analytics and user feedback.
Next steps
Ready to deploy your Notion AI assistant in production?
Import the n8n workflow template into your environment
Connect your Notion and LLM credentials
Activate the chat webhook and integrate it into your preferred channels
If you require support customizing prompts, adding new integrations such as Slack or email, or implementing analytics, reach out to your automation team or subscribe to ongoing workflow tutorials and best practices.
Ever said “yes” to a Discord event, only to forget about it because it never made it onto your Google Calendar? If you live in your calendar but your community lives in Discord, that gets old fast.
In this guide, we’ll walk through a ready-to-use n8n workflow template that quietly keeps your Discord scheduled events and Google Calendar in sync. No more copy-pasting, no more “What time was that again?” Just one source of truth in your calendar.
We’ll cover what the template does, when you’d want to use it, and exactly how to set it up step by step. Grab a coffee and let’s get your automation running.
At a high level, this workflow acts like a bridge between your Discord server and a Google Calendar. On a schedule that you choose, it:
Calls the Discord API to list all scheduled events in a specific server
Checks Google Calendar to see if each Discord event already exists there (using the Discord event id)
Creates new Google Calendar events when needed
Updates existing Google Calendar events if any details have changed
The end result: every scheduled event on your Discord server appears in your Google Calendar with matching details, and stays updated over time.
When to use this Discord-to-Google Calendar sync
This template is perfect if you:
Run a community, guild, or server that schedules events in Discord
Rely on Google Calendar to plan your day or share availability
Want your team or community to see Discord events in a shared calendar
Are tired of manually recreating every Discord event in Google Calendar
In other words, if Discord is where you organize events but Google Calendar is where you actually look, this workflow saves you from juggling both.
Why this approach works so reliably
The magic here is in how we match Discord events to Google Calendar events. Discord gives each scheduled event a stable id. Instead of trying to match on names or times (which can change), we simply reuse that id as the Google Calendar eventId.
That means:
Each Discord event maps to exactly one Google Calendar event
The workflow can easily tell if an event already exists in Calendar
We avoid duplicates and weird mismatches when details are edited
The logic stays simple: if an event with this ID exists, update it. If not, create it.
What you’ll build in n8n
Here is the core node flow you’ll end up with:
On schedule – Triggers the sync every X minutes
Set / Configure – Stores your Discord guild_id (server ID)
HTTP Request – Lists scheduled events from Discord
Google Calendar (Get events) – Tries to fetch a matching event using the Discord event id
If – Decides whether to create or update in Google Calendar
Google Calendar (Create event) – Creates a new calendar event when needed
Google Calendar (Update event) – Updates an existing calendar event if it already exists
Let’s go through how to set this up from start to finish.
Before you start: prerequisites
You’ll need a few things ready to go:
An n8n instance, either cloud or self-hosted
A Discord bot token with permission to read guild scheduled events
A Google account with OAuth credentials that allow access to Google Calendar
The Google Calendar you want to sync events into, plus its Calendar ID (you’ll select it inside n8n)
Once you have these, you’re ready to wire everything together.
Step 1 – Create and configure your Discord bot
First, you need a Discord bot that can see scheduled events in your server.
Invite the bot to your server with permissions to view scheduled events.
Copy the bot token. You’ll use this in n8n to authenticate API calls.
In n8n, open the HTTP Request node that lists scheduled Discord events and set up authentication:
Use Header Auth
Add a header: Authorization with value Bot YOUR_BOT_TOKEN
In production, you’ll want to store that token in n8n credentials, not directly in the node. We’ll touch on security best practices later.
Step 2 – Configure the schedule trigger
Next up is deciding how often you want the sync to run.
In the On schedule node, choose a cadence that fits your use case, for example:
Every 5 minutes if you want near real-time sync
Every 15 or 30 minutes if you prefer lighter API usage
More frequent runs give faster updates, but also mean more calls to both Discord and Google APIs. Pick a balance that feels right for your server size and activity.
Step 3 – Add your Discord server ID (guild_id)
The workflow needs to know which Discord server to pull events from. That’s where the guild_id comes in.
In n8n:
Add a Set node (you might name it “Configure”).
Create a field called guild_id.
Paste in your Discord server ID.
Not sure where to find the server ID? In Discord:
Go to User Settings > Advanced.
Enable Developer Mode.
Right click your server name and select Copy ID.
Step 4 – List scheduled events from Discord
Now we’ll fetch the actual events from your server using the Discord API.
In your HTTP Request node, configure it to call:
GET https://discord.com/api/guilds/{{ $json.guild_id }}/scheduled-events?with_user_count=true
Key details:
Method: GET
URL: use the URL above, with {{ $json.guild_id }} resolving from your Set node
Headers:
Authorization: Bot <your_token>
Content-Type: application/json (n8n usually adds this automatically when needed)
Query parameter: set with_user_count=true if you want attendee counts in the response
This node will output an array of Discord scheduled event objects, each with fields like id, name, scheduled_start_time, scheduled_end_time, and so on.
Step 5 – Check Google Calendar for each Discord event
With the Discord events in hand, the next step is to see if each one already exists in your Google Calendar.
In n8n, add a Google Calendar node and use the Get operation. For each incoming Discord event, map:
eventId to {{ $json.id }} (the Discord event id)
If the Get operation finds a matching event, you’ll get the event data back. If it fails or returns nothing, that means this Discord event has not been synced to Google Calendar yet.
Step 6 – Decide: create or update the event?
Now we need to branch the workflow based on whether the Google Calendar event already exists.
Add an If node and point it at the output of the Google Calendar Get node. A common condition is to check whether the result has an id value, for example:
{{ $json.id }} isNotEmpty
Interpretation:
If that condition is true, the event exists in Google Calendar, so you’ll update it.
If it’s false, no event was found, so you’ll create a new one.
This simple check keeps the logic clean and avoids messy duplicate handling.
Step 7 – Create new Google Calendar events
On the “create” branch of the If node, add a Google Calendar node with the Create operation. This is where you map Discord fields to Google Calendar fields.
Typical mappings:
Start: {{ $json.scheduled_start_time }}
End: {{ $json.scheduled_end_time }} (or calculate a duration if end time is missing)
Summary (title): {{ $json.name }}
Location: {{ $json.entity_metadata.location }}
Description: {{ $json.description }}
ID / eventId: explicitly set to {{ $json.id }}
That last step is crucial. By assigning the Google Calendar event ID to the Discord event id, future runs of the workflow will always be able to find and update the same event.
Step 8 – Update existing Google Calendar events
On the “update” branch of the If node, add another Google Calendar node, this time using the Update operation.
You’ll again map the Discord event fields to the Google Calendar ones, similar to the Create operation:
eventId: use the existing ID from the Get node (which matches the Discord id)
Start: {{ $json.scheduled_start_time }}
End: {{ $json.scheduled_end_time }}
Summary: {{ $json.name }}
Location: {{ $json.entity_metadata.location }}
Description: {{ $json.description }}
This way, if you edit the time, title, description, or location in Discord, the corresponding Google Calendar event will be updated on the next run.
Tips, gotchas, and troubleshooting
Once the core sync is working, a few details are worth paying attention to.
Time zones
Discord scheduled times are ISO 8601 strings.
Google Calendar also expects proper date-time formats with timezone info.
If you see events at the wrong time, normalize times in n8n with a Date/Time node or a small Function node to adjust timezones.
Event IDs
Reusing the Discord id as the Google Calendar eventId keeps matching simple.
Google Calendar requires custom event IDs to be unique within the calendar and to use only base32hex characters (lowercase letters a to v and digits 0 to 9); Discord snowflake IDs are purely numeric, so they qualify, but run a quick test with your account to be sure.
Permissions
Make sure your Discord bot has permission to view guild scheduled events and is actually in the server.
For Google, your OAuth credentials must include the proper Calendar scopes.
Rate limits
Discord and Google both enforce rate limits.
If you have a large number of events or a very frequent schedule, consider backing off a bit.
You can add retry or backoff logic in n8n if you start hitting rate limit errors.
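If you prefer to handle that last point in code rather than node settings, the pattern looks roughly like this generic sketch (not the template’s implementation):
// Generic retry-with-exponential-backoff wrapper (a sketch).
async function withBackoff(fn, retries = 3, delayMs = 1000) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err;
      // Wait 1s, 2s, 4s, ... before retrying.
      await new Promise(resolve => setTimeout(resolve, delayMs * 2 ** attempt));
    }
  }
}
// Usage: const events = await withBackoff(() => callDiscordApi());
// callDiscordApi is a placeholder for whatever request you are wrapping.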
Edge cases
Deleted or canceled Discord events are not automatically removed from Google Calendar in the basic flow.
If you want strict one-way or two-way sync, add extra logic to handle deletions or cancellations (a sketch follows this list), such as:
Periodically checking for events that no longer exist in Discord
Marking or deleting the matching Google Calendar event
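To make that concrete, here is a hedged Code-node sketch of the comparison step; the “Get all calendar events” node name is hypothetical, standing in for a Google Calendar node that lists previously synced events:
// Flag calendar events whose Discord counterpart is gone (sketch).
const discordIds = new Set(
  $('List scheduled events from Discord').all().map(i => i.json.id)
);
const orphaned = $('Get all calendar events').all()
  .filter(i => !discordIds.has(i.json.id));
// Route these items to a Google Calendar delete (or "mark cancelled") node downstream.
return orphaned;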
Security best practices
You’re dealing with tokens and credentials here, so a few precautions help keep things safe.
Store the Discord bot token and Google OAuth credentials in n8n credentials, not as plain text in nodes.
Give your Discord bot only the permissions it actually needs.
Enable OAuth refresh token handling in n8n so your Google credentials stay valid over time.
How to test your workflow
Before you let the schedule run on its own, it’s worth testing the setup end to end.
In n8n, run the workflow manually or trigger the On schedule node once.
Create a scheduled event in Discord and wait for the workflow to run.
Check your Google Calendar and confirm:
The event appears
The details match (title, time, description, location)
The event ID is the same as the Discord event id
Edit the event in Discord (for example, change the time or name) and run the workflow again.
Verify that the Google Calendar event updates accordingly.
If something fails, inspect the n8n execution logs and API responses to fix any field mappings or permission issues.
Ideas for advanced enhancements
Once the basic sync is humming along, you can add extra logic to make it even smarter.
Change detection: Compare “last modified” timestamps to avoid unnecessary updates when nothing has changed.
Notifications: Send a Slack message or email whenever a new event is created or an existing one is updated.
Cancellation handling: Detect deleted or canceled Discord events and either remove or mark the corresponding Google Calendar events.
Wrapping up
Using n8n to sync Discord scheduled events to Google Calendar is a simple way to keep your community events visible where you actually plan your life. The core pattern is straightforward: list Discord events, look them up in Google Calendar by ID, then create or update as needed.
You can start with the template as-is, then tweak field mappings, time handling, or notification logic to match how your server runs events.
If you’d like to go further, for example handling cancellations or adding reminders, you can extend the same workflow with a few extra nodes.
Call to action: Clone the template into your n8n instance, create a test scheduled event in Discord, and watch it appear in your Google Calendar automatically. Once you see it working, you’ll never want to do this manually again.
Sync Discord scheduled events to Google Calendar with n8n
If you live in Discord all day but still rely on Google Calendar to keep your life organized, you’ve probably felt that annoying gap between the two. Events get scheduled in Discord, but your calendar stays clueless. This n8n workflow template fixes exactly that problem by automatically syncing Discord scheduled events to Google Calendar, so everything ends up in one clean, central place.
In this guide, we’ll walk through what the workflow does, when you’d want to use it, how to set it up, and what to watch out for. Think of it as having a friend show you around n8n while you sip your coffee, rather than a dry technical manual.
What this n8n workflow actually does
At a high level, this workflow acts like a bridge between your Discord server and your Google Calendar. On a schedule you choose, it:
Calls the Discord API to fetch all scheduled events in a specific server (guild).
Looks in Google Calendar to see if each Discord event already has a matching calendar event.
Creates a new Google Calendar event if it doesn’t exist yet.
Updates the existing Google Calendar event if details have changed in Discord.
The clever trick behind all this is that the workflow uses the Discord event ID as the Google Calendar event ID. That way, n8n can instantly tell whether an event is new or already synced.
Why bother syncing Discord events to Google Calendar?
If your community, team, or audience hangs out in Discord, you’re probably using Discord scheduled events to promote:
Community calls or town halls
Streams, live sessions, or watch parties
Workshops, office hours, or recurring meetups
The problem is, many people still rely on Google Calendar for their day-to-day planning. By syncing Discord events to Google Calendar, you:
Centralize your schedule in one place.
Make it easy for teammates or community members to see events on their phones, tablets, and desktop calendar apps.
Reduce “I forgot” moments because events show up alongside everything else in their calendar.
So if you’ve ever had to manually copy event details from Discord into Google Calendar, this workflow is about to save you a lot of repetitive clicking.
What you’ll need before you start
Before you import the template or build the workflow, make sure you have:
An n8n instance: You can use n8n Cloud or a self-hosted setup.
A Discord bot token: Create a bot in the Discord Developer Portal, and keep the token handy.
A Google account with Calendar access: You’ll need OAuth2 credentials set up in n8n so it can read and write events.
The n8n workflow template: You can import the template from the example or from the link at the end of this article.
How the workflow is structured
Let’s quickly outline the main pieces of the workflow so the setup steps make more sense:
On schedule – runs the workflow at a fixed interval, like every 5 minutes or once an hour.
Configure (Set) – stores your guild_id so the workflow knows which Discord server to query.
List scheduled events from Discord (HTTP Request) – calls the Discord API to fetch all scheduled events from that server.
Get events (Google Calendar – get) – tries to find a Google Calendar event whose ID matches the Discord event ID.
Create or update? (If) – checks if the Google event exists and decides whether to create or update.
Create event (Google Calendar – create) – creates a brand new event in Google Calendar using data from Discord.
Update event details (Google Calendar – update) – updates an existing Google Calendar event when something changes in Discord.
With that mental map in place, let’s walk through the setup step by step.
Step-by-step setup in n8n
1. Create and configure your Discord bot
Head over to the Discord Developer Portal and create a new application, then add a bot to it. Once the bot exists:
Copy the bot token and store it somewhere safe. You’ll need it for the n8n HTTP Request node.
Invite the bot to your Discord server with permissions that allow it to read scheduled events. The bot must actually be in the guild you want to sync.
Grab your server (guild) ID: Enable Developer Mode in Discord, right-click your server name, and choose Copy ID. This is the guild_id you’ll use in the workflow.
2. Set up header authentication for Discord in n8n
In the HTTP Request node that calls the Discord API, you’ll configure header-based authentication so Discord knows your bot is allowed to make the request.
Add this header:
Authorization: Bot <your_token>
Replace <your_token> with your actual bot token. This value is sent in the Authorization header every time the workflow fetches scheduled events from Discord.
3. Connect Google Calendar with OAuth2
Next, in n8n, create or select your Google Calendar OAuth2 credentials:
Use the standard OAuth2 flow in n8n to connect your Google account.
Make sure the account has access to the calendar where you want the events to appear.
Confirm that the scopes allow both reading and writing events.
Once this is set, the Google Calendar nodes in the workflow can create and update events without you needing to touch anything.
4. Import or build the workflow
You can either import the ready-made template or recreate it manually. Either way, here’s how the key nodes fit together:
On schedule: Configure this node to run on a sensible interval, like every 5, 15, or 60 minutes. Running it too frequently can hit rate limits, so start conservatively unless you really need near real-time updates.
Configure (Set): Use a Set node to store your guild_id. This keeps things flexible if you want to switch servers later without touching the HTTP Request URL.
List scheduled events from Discord (HTTP Request): Point this node to:
GET https://discord.com/api/guilds/{guild_id}/scheduled-events?with_user_count=true
Replace {guild_id} with the value from your Set node. The with_user_count=true parameter includes user counts if you want that data.
Get events (Google Calendar – get): For each Discord event, this node tries to fetch a Google Calendar event by ID. The ID used here is the Discord event ID, which is what makes the whole create-or-update logic work so smoothly.
Create or update? (If): This If node checks whether the Google Calendar get operation found a matching event. If it did, the workflow follows the update path. If not, it goes down the create path.
Create event (Google Calendar – create): When no event exists yet, this node creates a fresh calendar entry. It sets key fields like start, end, summary, location, and description, and most importantly, it explicitly sets the Google event ID to the Discord event ID.
Update event details (Google Calendar – update): If the Google event already exists, this node updates it based on any changes in Discord, such as a new time, title, or description.
How to map Discord fields to Google Calendar
The magic of the sync comes from good field mapping. Here’s the typical mapping used in the example workflow:
summary <= Discord name
start <= Discord scheduled_start_time (ISO 8601)
end <= Discord scheduled_end_time (ISO 8601)
location <= Discord entity_metadata.location
description <= Discord description
id <= Discord id
That last one is especially important. Using the Discord event ID as the Google Calendar event ID keeps everything aligned and makes updates painless.
Why use the Discord event ID as the Google event ID?
You could try to match events by name or time, but that gets messy fast. Titles change, times shift, and you can easily end up with duplicates.
By reusing the Discord event ID as the Google Calendar event ID:
The workflow can reliably check if an event already exists in Google Calendar.
The get operation becomes a simple yes-or-no check based on ID.
Updates become straightforward, since each event has a unique, stable identifier.
In practice, this means (a conceptual sketch follows this list):
If the Google Calendar get node finds an event with that ID, the workflow knows it should update it.
If no event is found, the workflow knows it needs to create a new one.
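If it helps to see the whole decision as code, here is a conceptual sketch in plain JavaScript. calendarGet, calendarCreate, and calendarUpdate are stand-ins for the three Google Calendar nodes, not real n8n APIs:
// Conceptual create-or-update loop over Discord events (sketch).
async function syncEvents(discordEvents, { calendarGet, calendarCreate, calendarUpdate }) {
  for (const ev of discordEvents) {
    // Field mapping mirrors the list above.
    const fields = {
      start: ev.scheduled_start_time,
      end: ev.scheduled_end_time,
      summary: ev.name,
      location: (ev.entity_metadata || {}).location,
      description: ev.description,
    };
    const existing = await calendarGet(ev.id); // expected to return null when not found
    if (existing) await calendarUpdate(ev.id, fields);
    else await calendarCreate({ id: ev.id, ...fields });
  }
}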
Handling time zones and date formats
Time zones can be sneaky. Discord sends scheduled event times in ISO 8601 format, which is great, because Google Calendar also accepts ISO 8601.
Still, you should:
Check that the times show up correctly in your Google Calendar client.
Verify the calendar’s default time zone matches what you expect.
Optionally transform the timestamps in n8n if you need to convert between time zones.
It is worth creating a couple of test events in Discord and confirming that they appear at the correct time in Google Calendar before you rely on this workflow for important events.
Staying within rate limits and keeping things reliable
Both Discord and Google Calendar APIs have rate limits, so it is a good idea to design your workflow with that in mind.
Choose a reasonable schedule: Running the workflow every few seconds is usually unnecessary and can hit limits quickly. Every 5 to 60 minutes works well for most communities.
Use n8n error handling: Configure retries or error workflows for transient API issues, so a temporary blip does not break your sync.
Respect Discord rate limit headers: If you manage multiple guilds or a large number of events, consider adding throttling or delays to avoid hitting Discord’s limits.
Troubleshooting common issues
1. Discord returns an empty list of events
If the HTTP Request node comes back with no events when you know there should be some:
Confirm that your bot is actually in the correct Discord server.
Make sure the bot has permission to read scheduled events.
Double-check the guild_id you set in the workflow.
Verify the Authorization header is exactly Bot <token> with your real token.
2. Google Calendar “get” does not find an event
In this workflow, a missing event is not necessarily a problem. It is what triggers the “create” path. However, if you expected an existing event to be found:
Check that your Google Calendar credentials are correct and authorized.
Ensure the OAuth token has the right scopes for reading and writing events.
Confirm that the Google Calendar event ID is actually set to the Discord event ID.
3. Times look wrong or show up in the wrong time zone
If events appear at unexpected times in Google Calendar:
Check the time zone settings for both your Discord server and your Google Calendar.
Verify that the scheduled_start_time and scheduled_end_time from Discord are correctly passed through.
If needed, add a transformation step in n8n to adjust the timestamps into the desired time zone before sending them to Google.
Security and privacy best practices
Since this workflow deals with API tokens and credentials, it is worth taking a moment to lock things down:
Never hard-code your Discord bot token or Google secrets directly in your workflow JSON.
Use n8n’s built-in credential store to keep tokens and OAuth details secure.
Do not commit tokens or credentials to source control or share them in screenshots.
Restrict access to your n8n instance to trusted users only.
Ideas for next steps and improvements
Once the basic sync is working, you can start tailoring it to your specific use case. For example, you might:
Filter events by type or name so only certain Discord events are synced.
Add logging or alerts (via Slack, email, or another channel) whenever a sync fails.
Experiment with a two-way sync, where changes in Google Calendar can update Discord events. This usually requires webhooks or more frequent polling and some careful conflict handling.
Wrapping up
This n8n workflow template gives you a simple, reliable way to sync Discord scheduled events into Google Calendar. It:
Runs on a schedule you control.
Fetches events from your Discord server.
Uses the Discord event ID as the Google Calendar event ID.
Creates new events or updates existing ones automatically.
Ready to try it out? Import the template into your n8n instance, plug in your Discord bot token and Google OAuth credentials, set your guild_id, and run a quick test with a sample event.
If this workflow helps simplify your event management, feel free to share it with your community or teammates, and keep an eye out for more n8n automation ideas to streamline the rest of your stack.
Call to action: Grab the template below, connect your accounts, and let n8n handle the busywork so you can focus on running your community.
Sync Discord Scheduled Events to Google Calendar with n8n
Ever told your friends, “Yeah, I’ll be there!” to a Discord event, then completely forgot because it never made it into your calendar? If your life is run by Google Calendar but your community lives on Discord, manually copying events back and forth gets old fast.
Good news: n8n can do that boring copy-paste work for you, quietly in the background, without complaining or getting distracted by memes.
What this n8n workflow actually does (in plain English)
This workflow connects your Discord server and Google Calendar so scheduled events in Discord automatically appear (and stay updated) in your calendar.
Here is the basic idea:
On a schedule, n8n asks Discord, “Hey, got any scheduled events for this server?”
For each event it finds, it checks Google Calendar to see if that event already exists.
If the event is new, n8n creates it in Google Calendar.
If the event already exists, n8n updates it so time, title, or description changes stay in sync.
The Discord event ID is used as the Google Calendar event ID so matching them later is simple and reliable.
Result: no more double entry, no more “Wait, what time is that event again?” and a lot fewer calendar-related facepalms.
Why bother syncing Discord events to Google Calendar?
Discord’s Scheduled Events feature is great for visibility inside a server, but most people still live inside their calendar when it comes to planning their day.
Automating the sync:
Prevents you from manually retyping event details like a spreadsheet goblin
Makes sure people see Discord events alongside work, personal, and other commitments
Keeps updates consistent so nobody shows up at the wrong time because only Discord got updated
Eliminates duplicate data entry, which everybody hates but pretends is “fine for now”
What you will build with this template
You will create an n8n workflow that:
Periodically fetches all scheduled events from a specific Discord server (guild).
For each event, calls Google Calendar and tries to get an event with the same ID.
Updates the event in Google Calendar if it already exists.
Creates a new event in Google Calendar if it does not exist.
Stores the Discord event ID as the Google Calendar event ID, so future updates are easy and consistent.
It is a lightweight, reliable sync that you can schedule to run as often as you like, as long as you respect Discord’s rate limits.
What you need before you start
Before diving into n8n, make sure you have:
An n8n instance (cloud or self-hosted).
A Discord bot token with permission to list scheduled events in the target server.
A Google account with a calendar and Google Calendar OAuth2 credentials configured in n8n.
The Discord server ID (guild_id) for the server whose events you want to sync.
Once those are ready, the rest is mostly clicking, mapping, and feeling smug about your new automated life.
Workflow overview: the n8n nodes involved
The template workflow is built from a simple left-to-right chain of nodes:
On schedule – triggers the workflow at a set interval.
Set (Configure) – stores the target Discord guild_id.
HTTP Request – lists scheduled events from Discord.
Google Calendar (Get events) – checks if an event with that ID exists in your calendar.
If – decides whether to create or update the event.
Google Calendar (Create event) – creates a new event when needed.
Google Calendar (Update event details) – updates existing events when they change on Discord.
Let’s walk through how to configure each part without losing our sanity.
Step-by-step setup in n8n
1. Set up the schedule trigger
Start with the On schedule node.
Choose how often you want n8n to poll Discord, for example every 15 minutes.
Pick a frequency that balances freshness with Discord rate limits. For many servers, 10 to 30 minutes is a good starting point.
2. Store the Discord server ID
Add a Set node, usually named something like Configure.
Create a field called guild_id.
Set its value to the Discord server (guild) ID you want to sync.
This way you only have to change the server ID in one place instead of hunting through multiple nodes later.
3. Pull scheduled events from Discord
Next, add an HTTP Request node to fetch events from Discord’s API. This is where your bot actually does some work.
Point it at GET https://discord.com/api/guilds/{{ $json.guild_id }}/scheduled-events?with_user_count=true and attach your Header Auth credential so the Authorization header carries Bot <your_token>. This tells Discord, “Hi, I am a bot, please let me see the scheduled events.”
4. Check for an existing Google Calendar event
Now add a Google Calendar node configured with the Get operation.
Set the Calendar field to the calendar where you want events to appear.
For eventId, use the Discord event ID:
eventId: ={{ $json.id }}
This step tries to find a Google Calendar event whose ID matches the Discord event ID. If it exists, great. If not, we will create one.
5. Decide whether to create or update
Add an If node to determine which path to follow.
You want to check whether the Google Calendar Get operation succeeded or not. A simple approach is to:
Use Continue On Fail on the Google Calendar Get node if you expect 404s when events are missing.
In the If node, test whether {{ $json.id }} is present or not, and use that to route to either Create or Update branches.
This is the logic that prevents duplicate events and keeps everything neatly aligned.
6. Create a new Google Calendar event
On the “create” branch, add a Google Calendar node with the Create operation.
Map the Discord fields to Google Calendar fields using n8n expressions. For example:
Start: {{ $('List scheduled events from Discord').item.json.scheduled_start_time }}
End: {{ $('List scheduled events from Discord').item.json.scheduled_end_time }}
Summary (event title): {{ $('List scheduled events from Discord').item.json.name }}
Location: {{ $('List scheduled events from Discord').item.json.entity_metadata.location }}
Description: {{ $('List scheduled events from Discord').item.json.description }}
Id (under additionalFields → id): {{ $('List scheduled events from Discord').item.json.id }} This sets the Google Calendar event ID to the Discord event ID.
That final mapping is the secret sauce that makes future updates easy, instead of forcing you to do fuzzy matching on titles or times.
7. Update existing Google Calendar events
On the “update” branch, add another Google Calendar node, this time using the Update operation.
Use the same eventId logic so you are updating the correct event.
Map the same fields as in the create node:
Start
End
Summary
Location
Description
Now, whenever you change the time, title, or details of a Discord scheduled event, the Google Calendar event will follow along on the next run. No more “Wait, which version is the right one?” confusion.
Key implementation details to get right
Using Discord event IDs as Google event IDs
By setting the Google Calendar event ID to the Discord scheduled event ID during creation, you give the workflow a stable way to find the same event later.
Google Calendar accepts custom event IDs as long as they are unique within that calendar and use only base32hex characters (lowercase letters a to v and digits 0 to 9); Discord’s numeric snowflake IDs satisfy both rules, which makes them a perfect match. It also keeps your logic clean and avoids messy lookups.
Handling timezones like a pro
Discord returns scheduled times as ISO 8601 timestamps, typically in UTC. That is good news, but you still need to make sure Google Calendar gets properly formatted timestamps with timezone information.
If the Discord times already match the timezone you want, you can map them directly.
If you need to normalize to a specific timezone, use a Function/Code node or n8n’s Date & Time node to convert the times before sending them to Google Calendar, as in the sketch below.
Getting this right avoids the classic “Why is this event at 3 a.m.?” problem.
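Here is a minimal sketch of that conversion in a Code node. It assumes Discord’s scheduled_start_time and scheduled_end_time fields and uses Luxon’s DateTime, which n8n exposes in Code nodes; the target timezone is an assumption you should replace with your own.

// Hypothetical Code node: normalize Discord's ISO timestamps to a target timezone.
const TARGET_TZ = 'Europe/Berlin'; // assumption: your calendar's timezone

for (const item of $input.all()) {
  item.json.start = DateTime.fromISO(item.json.scheduled_start_time).setZone(TARGET_TZ).toISO();
  // scheduled_end_time can be null for channel-based Discord events, so guard it.
  if (item.json.scheduled_end_time) {
    item.json.end = DateTime.fromISO(item.json.scheduled_end_time).setZone(TARGET_TZ).toISO();
  }
}
return $input.all();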
Respecting rate limits and choosing a polling interval
Discord has rate limits, and it is not shy about enforcing them. To stay on its good side:
Start with a conservative polling interval, such as every 10 to 30 minutes.
Monitor your n8n execution logs for any rate limit responses.
Only increase frequency if your server really needs it.
Your workflow will be happier, and so will Discord.
Error handling so you do not miss failures
Set the Google Calendar Get events node to Continue On Fail if you expect 404s when events do not exist yet.
Log or notify errors with an Email or Slack node if a particular request keeps failing.
Consider adding a retry or backoff pattern for transient errors so your workflow can recover gracefully; a minimal sketch follows below.
That way, you find out about real problems instead of quietly losing events.
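For the retry-and-backoff idea, a small helper is often enough. This is only a sketch, assuming you make the flaky call yourself inside a Code node rather than through a built-in node:

// Hypothetical helper for an n8n Code node: retry an async call with exponential backoff.
async function withBackoff(fn, retries = 3) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === retries) throw err; // out of retries, surface the error
      // Wait 1s, 2s, 4s, ... before the next attempt.
      await new Promise(resolve => setTimeout(resolve, 1000 * 2 ** attempt));
    }
  }
}

Wrap whatever request tends to fail in withBackoff(() => yourCall()), and let genuine failures bubble up to your error notifications.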
Testing your Discord to Google Calendar sync
Before trusting automation with your entire event schedule, give it a quick test run:
Deploy your Discord bot and confirm it:
Is a member of the target server.
Has permission to view scheduled events.
In n8n, configure:
The Header Auth credential with your Discord bot token.
Your Google Calendar OAuth2 credentials with the right scopes.
Create a test scheduled event in Discord, then run the workflow manually.
Check that a new event appears in your chosen Google Calendar.
Confirm that the event ID in Google Calendar matches the Discord event ID.
Edit the Discord event (change time, title, or description) and run the workflow again.
Confirm the Google Calendar event updates with the new details.
Once that works, you are ready to let the schedule trigger take over.
Advanced ideas for power users
When the basic sync is running smoothly, you can start getting fancy.
Two-way sync: Want changes in Google Calendar to flow back into Discord events? You can build a second workflow that:
Watches Google Calendar for changes.
Uses the Discord API to update scheduled events.
You will need a bot with the correct permissions and the relevant Discord endpoints.
Filtering events: Only want certain events, like “community” events or those with a specific keyword?
Add filters before the Create/Update nodes.
Filter by event type, name, or description so only relevant events sync.
Richer event details: Pull extra data such as entity_metadata or other fields from Discord.
Include images or metadata in the event description.
Add more context so your calendar events look less bare and more informative.
Troubleshooting common issues
If your events are not showing up in Google Calendar, do a quick sanity check:
Confirm your Header Auth is correctly set:
Authorization: Bot <token>
Verify the guild_id is correct and that the bot is actually in the server.
Check your Google Calendar OAuth credentials:
Make sure the scopes allow creating and updating events.
Confirm you selected the correct calendar in the node.
Inspect n8n execution logs:
Look at HTTP status codes from Discord and Google.
Check response bodies for helpful error messages.
Most problems come down to permissions, credentials, or a tiny typo in an ID.
Wrapping it up
This n8n workflow gives you a dependable way to sync Discord scheduled events into Google Calendar without manual effort. It is a great fit for communities, event organizers, and anyone who lives in Discord but plans their life in Google Calendar.
Start with the setup above, then:
Tune the polling interval based on your server size and activity.
Adjust field mappings to match how you want events to appear.
Add error handling and logging for production-grade reliability.
Ready to build? Import the template into your n8n instance, plug in your Discord bot Header Auth credential and Google Calendar OAuth credentials, and run a test. If you want to go further with filters, two-way sync, or timezone fine-tuning, you can extend this workflow using the n8n docs and the Discord API documentation.
Happy automating, and enjoy never having to copy event details by hand again.
On a rainy Tuesday afternoon, Mia stared at yet another Slack message blinking at the bottom of her screen.
“Hey, do you know where the latest onboarding checklist is?”
She sighed, opened Notion, and started typing into the search bar for what felt like the hundredth time that week. As the operations lead at a fast-growing startup, Mia had spent months organizing everything in Notion – product specs, onboarding docs, internal how-tos, HR policies, and meeting notes. The information was there, but finding it quickly had become a daily bottleneck.
New hires could not remember which space held which document. Managers asked the same questions about time off, billing, and product features. Even Mia, the person who built the knowledge base, sometimes struggled to track down the right page.
That afternoon, after answering the same “How do I request time off?” question for the third time, she decided something had to change.
The moment Mia realized search was not enough
Mia did a quick audit of their internal communication channels. The pattern was obvious:
Slack was full of repeated questions about policies and processes
New hires were overwhelmed by the size of the Notion workspace
Team leads were frustrated by how long it took to find answers
Notion was a great place to store knowledge, but it was not acting like an assistant. People did not want to “go search a database.” They wanted to ask a question and get a short, accurate answer with a link if they needed more detail.
That night, while searching for “Notion AI assistant” ideas, Mia discovered something that caught her eye: an n8n workflow template that turned a Notion knowledge base into a chat assistant using GPT. It promised to do exactly what she needed:
Receive chat messages from users
Search a Notion database for relevant records
Optionally pull in page content for deeper context
Use an AI agent and OpenAI to craft short, sourced answers
It sounded like magic. But it was not magic. It was just smart automation.
What Mia decided to build
Mia sketched the idea on a notepad first. She wanted an assistant that could sit behind a chat interface, listen for questions, and then quietly do the heavy lifting:
Receive a user question through a webhook
Understand how her Notion knowledge base was structured
Search by keywords or tags in the Notion database
Read the content of a page if needed
Use an AI agent connected to OpenAI to write a clear answer
Include links to the relevant Notion pages
Remember the last few messages so follow-up questions would make sense
In n8n terms, this translated into a set of specific building blocks:
Chat trigger using a webhook to receive user queries
Get database details to read the Notion DB schema and tag options
Search Notion database to find matching pages by keyword or tag
Search page content to pull matching blocks from a page
Window buffer memory to hold recent chat turns
AI agent + OpenAI Chat Model to summarize and produce the final response
If she could wire all of this together, her team could simply ask:
“How do I request time off?”
and get a concise answer plus links to the exact Notion pages, instead of a vague “search for it in Notion.”
Rising action: turning a Notion database into a real assistant
Mia starts with the Notion knowledge base
Before touching n8n, Mia opened Notion and looked at her existing knowledge base. To make it AI-friendly, she created a dedicated database structured for question-and-answer pairs.
She made sure each entry had:
A “question” field (rich_text) that captured the main query, such as “How do I request time off?”
An “answer” field with a clear explanation
“tags” like product, billing, onboarding, HR, and policies
An “updated_at” field so she could track freshness
To make the future search more precise, she standardized the tags. No more “hr” in one place and “human resources” in another. She settled on a clear set of tags and stuck to them.
Then she created a Notion integration at developers.notion.com and shared the database with that integration. She knew that just creating an integration was not enough. It had to be explicitly granted read access to the specific database she wanted to search.
Bringing n8n into the story
With Notion ready, Mia logged into her n8n instance. She could have run it self-hosted or in the cloud, but her company already had an n8n cloud workspace, so she opened a new workflow and started dropping in nodes, following the architecture she had in mind.
Her workflow slowly took shape:
When chat message received – a webhook node that accepted user text from the chat interface
Get database details – a Notion node to fetch database information, including property names and tag options
Format schema – a Set node that transformed the raw Notion schema into compact JSON the AI agent could understand
AI Agent – a langchain-style agent node that would decide when to call search tools
Search notion database – an HTTP Request node calling /v1/databases/{id}/query with filters (see the example requests after this list)
Search inside database record – another HTTP Request node hitting /v1/blocks/{page_id}/children to fetch page blocks if the record needed deeper inspection
OpenAI Chat Model – the LLM node, wired to her OpenAI API key
Window Buffer Memory – a memory node to keep a short history of the conversation
It was starting to look like a real assistant. But a few critical steps still stood between Mia and a working system.
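For reference, the two Notion API calls behind those HTTP Request nodes look roughly like this; the database and page IDs are placeholders, and the Notion-Version header is required on every request:

POST https://api.notion.com/v1/databases/<database_id>/query
Authorization: Bearer <notion integration token>
Notion-Version: 2022-06-28
Body: a JSON filter object (an example appears in the search strategy below)

GET https://api.notion.com/v1/blocks/<page_id>/children
Authorization: Bearer <notion integration token>
Notion-Version: 2022-06-28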
The turning point: making the AI agent actually useful
Credential setup, or why nothing worked at first
The first time Mia ran the workflow, it failed almost instantly.
The Notion node complained that it could not find the resource she was requesting. The OpenAI node refused to respond.
She realized she had skipped the boring but essential part: credentials.
For Notion, she went back into n8n, opened the credentials section, and added her Notion integration token. She double-checked that the integration had been shared with the knowledge base database in Notion; without that, Notion would keep responding with “The resource you are requesting could not be found”.
For OpenAI, she created an API key in her OpenAI account and stored it in n8n credentials, then linked it to the Chat Model node.
She made a mental note never to paste API keys into workflow JSON or share them in public repos. n8n credentials and environment variables were the right place for secrets.
Once credentials were in place, the workflow started to move. The webhook received a test message, the Notion node fetched database details, and the AI agent came to life.
Teaching the assistant how to search
Next, Mia had to decide how the assistant should search the Notion database. She knew that a naive search could either miss relevant answers or flood the AI model with too much information.
So she defined a clear search strategy inside the tools the agent could call:
First, try an exact keyword search against the question rich_text field
If no results, fall back to a tag-based search using the standardized tags
If still no results, broaden the keyword search by including variations like singular/plural forms or common synonyms
To keep costs and hallucinations down, she configured the search to return only the top 3 results to the LLM. There was no need to send the entire knowledge base to the model for every question.
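As a sketch, the first two steps of that strategy translate into Notion query bodies like these; the property names question and tags match Mia’s database and are assumptions for your own setup:

{
  "filter": { "property": "question", "rich_text": { "contains": "time off" } },
  "page_size": 3
}

{
  "filter": { "property": "tags", "multi_select": { "contains": "HR" } },
  "page_size": 3
}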
Now the agent could:
Look up the database schema using the Get database details node
Use the Format schema node to map those fields into a structure it could reason about
Call the Search notion database tool when needed
Optionally call Search inside database record to fetch page blocks for deeper context
Giving the AI a clear role
Even with search working, Mia knew that the AI needed guardrails. She did not want it to invent answers or send people to the wrong policy page.
So she crafted a concise system prompt for the AI Agent node. Its job was very specific:
Use only facts from the provided Notion records
Never hallucinate or guess information that was not in the data
Always include the Notion page URL when a record contained the answer
Ask the user for clarification if the question was ambiguous
Her final system message looked something like this:
Role: You are an assistant that answers questions using only the provided Notion records.
If a record contains the answer, include the page URL.
If no records match, explain that no direct match was found and offer to broaden the search.
She attached this prompt to the AI Agent node so every conversation started from the same set of instructions.
A real example: “How do I request time off?”
To test her new Notion knowledge-base assistant, Mia used the most common question her team asked.
She opened the chat interface connected to the webhook and typed:
“How do I request time off?”
Behind the scenes, the workflow sprang into action:
The webhook node captured the message and passed the question to the AI Agent.
The AI Agent decided to call the Search notion database tool with a keyword search on the question field for “time off.”
The Notion database returned two relevant pages:
“Time Off Policy”
“Requesting Leave”
The agent then called the Search inside database record tool for each page, pulling short paragraphs from the page blocks.
Those snippets, plus the page URLs, were sent to the OpenAI Chat Model node.
The LLM synthesized a concise answer, combining the key steps for requesting time off, and included links to both original Notion pages.
The response that came back to the chat interface was exactly what Mia had always wanted:
A short explanation of how to request time off
Two direct links to the relevant Notion pages for more detail
No one had to search through Notion manually. No one had to ping Mia in Slack. The assistant handled it.
When things go wrong: Mia hits a few bumps
Notion resource not found
At one point, another team created a second knowledge base database and asked Mia to plug it into the assistant. Suddenly, she started seeing this error again:
“The resource you are requesting could not be found”
She quickly remembered the cause. The Notion integration had to be explicitly shared with the new database page inside Notion. Creating the integration alone was not enough. Once she granted access in the Notion UI, the error disappeared.
Slow responses
As the assistant grew more popular, some users noticed occasional delays. Mia traced them back to a few specific behaviors:
Fetching database details on every single request added 250 to 800 ms of latency
Pulling multiple page contents for each query added even more time
To speed things up, she:
Cached tag options and schema details instead of refreshing them on every call
Limited the number of pages fetched and passed only the most relevant snippets to the LLM
Incorrect or empty answers
On a few occasions, the assistant returned empty or unhelpful answers for questions Mia knew were covered in the knowledge base.
She tracked the issue to two common mistakes:
The /query request had filters that referenced a property name that did not exist in the database
The Format schema node was not mapping Notion properties correctly to the agent inputs
Once she corrected the property names and ensured that the schema mapping matched her Notion fields, the answers became reliable again.
Keeping things safe, fast, and sustainable
As more teams started using the assistant, Mia stepped back to think about security, costs, and best practices.
Permissions: She configured the Notion integration with least-privilege access, granting only read permissions unless a specific workflow required writes.
Secrets: All API keys stayed inside n8n credentials or encrypted environment variables. None were stored in workflow JSON or repos.
LLM costs: She reduced token usage by sending only the most relevant blocks to OpenAI and instructing the model to keep answers concise.
Rate limits: She respected Notion API rate limits and implemented exponential backoff for HTTP 429 responses to avoid hitting hard limits.
The assistant was no longer just a prototype. It was a production tool used daily by the entire company.
How Mia extended the assistant beyond the basics
Once the core Notion knowledge-base assistant worked smoothly, Mia started to think bigger.
She added a feedback loop so users could mark answers as helpful or not. Those ratings were stored back in Notion for future improvements.
She configured separate knowledge contexts so sensitive documents were only searchable by authorized users, while public FAQs stayed accessible to everyone.
She integrated the assistant with Slack, so team members could ask questions directly in their existing channels instead of opening another app.
What started as a way to stop answering the same questions turned into a central knowledge layer for the company, powered by Notion, n8n, and OpenAI.
From chaos to clarity: Mia’s outcome
A few weeks after launch, Mia checked Slack. The flood of repetitive questions had slowed to a trickle. New hires were getting up to speed faster. Managers knew they could rely on the assistant for up-to-date answers with direct links to the underlying Notion pages.
The company’s knowledge had not changed. It was still stored in the same Notion database. What changed was how accessible it had become.
Automate Zendesk to Jira with n8n: Turn Support Handoffs Into a Seamless Flow
Every time you copy a Zendesk ticket into Jira by hand, you lose a little bit of focus. You jump tools, retype details, paste links, and hope nothing gets missed. Over a day or a week, that context switching adds up. It slows your team down and pulls attention away from the work that really matters: helping customers and shipping improvements.
Automation with n8n can turn that friction into flow. By connecting Zendesk and Jira with a simple, reliable workflow, you create a system that quietly does the busywork for you. New tickets are picked up, checked, synced, and updated without you lifting a finger. Your support and engineering teams stay aligned, your incidents move faster, and you reclaim time and mental space.
This guide walks you through an n8n workflow template that does exactly that. You will see how to listen for new Zendesk tickets, detect whether a related Jira issue already exists, then either add a comment or create a new Jira issue and store its key back in Zendesk. Along the way, you will also see how this template can be a stepping stone toward a more automated, focused way of working.
The problem: Manual Zendesk to Jira handoffs drain your time
When Zendesk and Jira are not connected, every cross-team ticket becomes a mini-project:
Someone in support has to create a Jira issue by hand.
Ticket context needs to be copied, cleaned up, and pasted.
Links between tools have to be maintained manually.
Updates in one system may never make it into the other.
Over time, this creates hidden costs:
Duplicate work for support and engineering teams.
Lost or inconsistent context between Zendesk and Jira.
Slow and error-prone triage, especially during busy periods.
Difficulty meeting SLAs because information is scattered.
It does not have to stay this way. A small amount of automation can unlock a big shift in how your teams collaborate.
The mindset shift: Let automation carry the routine work
Automation is not just about saving a few clicks. It is about creating space for deeper work. When you let n8n handle the predictable steps, you:
Reduce context switching so your team can stay in flow.
Build reliable, repeatable processes instead of ad-hoc fixes.
Strengthen collaboration between support and engineering.
Free yourself to focus on strategy, not data shuffling.
Think of this Zendesk to Jira workflow as your first building block. Once you have it in place, you can extend it with richer data mapping, smarter routing, and even two-way syncing. Each improvement compounds the time you gain back.
The solution: An n8n template that keeps Zendesk and Jira in sync
The n8n workflow you are about to set up listens for new Zendesk tickets, checks for an existing Jira issue key in a custom field, and then:
Adds a comment to an existing Jira issue if it finds a key, or
Creates a new Jira issue and stores the new key back in Zendesk.
Technically, the workflow uses:
A Webhook trigger to receive Zendesk events.
The Zendesk node to fetch full ticket details.
A small Function node to inspect a custom field for a Jira Issue Key.
An IF node to branch based on whether that key exists.
Jira nodes to create issues or comments.
A Zendesk Update node to write the Jira key back.
The result is a lightweight, maintainable automation that quietly keeps both tools aligned. Once it is running, every new ticket becomes an opportunity to save a few more minutes and a bit more mental energy.
What you need before you start
To follow along and use the n8n Zendesk to Jira automation template, make sure you have:
An n8n instance (self-hosted or n8n cloud) with permission to create webhooks.
A Zendesk account with API access and a custom field to store the Jira issue key.
A Jira Software Cloud account with API credentials and a project where issues will be created.
Basic familiarity with n8n nodes and light JSON editing.
If you are new to n8n, do not worry. This workflow is a friendly starting point. You will see how each node works, and you can build on it as your confidence grows.
How the n8n workflow is structured
Here is the high-level sequence the template uses to automate Zendesk to Jira:
Webhook: Listens for new or updated Zendesk tickets.
Zendesk Get: Fetches the full ticket with all custom fields and comments.
Function (Determine): Checks a specific custom field for a Jira issue key.
IF: Branches depending on whether that key is present.
Jira issueComment: Adds a comment to an existing Jira issue if the key exists.
Jira create issue: Creates a new Jira issue if no key is found.
Zendesk Update: Writes the new Jira issue key back into the Zendesk custom field.
Next, you will configure each node. As you do, notice how simple the logic really is. This is the kind of automation you can understand at a glance, yet it can save hours over time.
Step 1: Webhook – listen for new Zendesk tickets
Start with the trigger that will kick off your automation.
Configure the Webhook node
In n8n, add a Webhook node and set a path, for example:
/zendesk-new-ticket-123
This path becomes part of the URL that Zendesk will call.
In your Zendesk account:
Create or update a trigger or webhook that sends events to the n8n Webhook URL.
Use the POST method so the ticket data is sent in the request body.
Configure it to fire when a ticket is created or updated, depending on your needs.
Once this is connected, every new ticket event will reach your n8n workflow automatically.
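If your Zendesk trigger sends a JSON body, a minimal payload matching the expressions used later in this guide could look like this (Zendesk placeholder syntax; adjust the field names to your setup):

{
  "id": "{{ticket.id}}",
  "comment": "{{ticket.latest_comment}}"
}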
Step 2: Get the full Zendesk ticket
The webhook payload is useful, but you usually want the complete ticket record. That is where the Zendesk node comes in.
Configure the Zendesk “Get ticket” node
Add a Zendesk node and select the get operation. For the ID field, reference the ticket ID from the webhook payload, for example:
{{$node["On new Zendesk ticket"].json["body"]["id"]}}
(Adjust the node name if your Webhook node uses a different label.)
This retrieves the full ticket object, including:
custom_fields such as your Jira issue key field.
Ticket comments and metadata.
Requester, tags, and other useful context.
At this point, your workflow has everything it needs to decide whether to create a new Jira issue or update an existing one.
Step 3: Decide if a Jira issue already exists
Next, you will inspect a specific Zendesk custom field that stores the Jira issue key. This decision is the heart of the workflow: it determines whether you add a comment or create a new issue.
Configure the “Determine” Function node
Add a Function node and connect it after the Zendesk Get node. Use the following example code, and update the field ID to match your Zendesk configuration:
/* Zendesk field ID which represents the "Jira Issue Key" field. */
const ISSUE_KEY_FIELD_ID = 6689934837021;

const new_items = [];
for (const item of $items("Get ticket")) {
  const custom_fields = item.json["custom_fields"];
  let jira_issue_key = "";
  // Look for the custom field whose ID matches the Jira issue key field.
  for (let i = 0; i < custom_fields.length; i++) {
    if (custom_fields[i].id == ISSUE_KEY_FIELD_ID) {
      jira_issue_key = custom_fields[i].value;
      break;
    }
  }
  new_items.push({ "Jira issue key": jira_issue_key });
}
return new_items;
Important details:
Replace ISSUE_KEY_FIELD_ID with the ID of your Zendesk custom field. You can find this in Zendesk admin > Ticket Fields.
The function outputs an item with a single property: Jira issue key.
This output makes the next step easy: you simply check whether that value is empty or not.
Step 4: Branch the workflow with an IF node
Now you will tell n8n how to behave in each scenario.
Configure the IF node
Add an IF node and connect it after the Determine node. Set the condition to check whether the Jira issue key is present, for example:
True branch: A Jira issue already exists, so the workflow will add a comment to that issue.
False branch: No Jira issue exists yet, so the workflow will create a new one.
From here, your Zendesk to Jira sync becomes fully automatic. Each ticket is handled appropriately without anyone having to think about it.
Step 5: Add a comment to an existing Jira issue
When the IF node finds an existing Jira key, you can keep Jira updated with the latest Zendesk activity.
Configure the Jira issueComment node
On the true branch of the IF node, add a Jira node and choose the issueComment operation. Configure it to:
Use the issueKey from the Determine node output.
Set the comment body from the Zendesk ticket event.
An example expression for the comment body could be:
{{$node["On new Zendesk ticket"].json["body"]["comment"]}}
You can adjust this to include more context, such as ticket ID, requester, or a direct link to Zendesk. Every new Zendesk update can now appear in Jira automatically, keeping engineers in the loop without manual effort.
Step 6: Create a new Jira issue when needed
When no Jira issue exists yet, the workflow will create one for you on the false branch of the IF node.
Configure the Jira “Create issue” node
Add another Jira node and select the create issue operation. Typical settings include:
Summary: Use the Zendesk ticket subject.
Description: Include a link back to the Zendesk ticket and any key details.
For example, you might use this description template:
See Zendesk issue at: https://your-domain.zendesk.com/agent/tickets/{{$node["Get ticket"].json["id"]}}
Make sure you also:
Set the correct project and issueType IDs (or names) in the Jira node.
Optionally map additional fields like priority, tags, or requester.
From now on, new Zendesk tickets that require engineering attention will automatically become Jira issues, complete with a reference back to the original ticket.
Step 7: Write the Jira key back into Zendesk
To close the loop, you want Zendesk to know which Jira issue is associated with each ticket. That way, future updates can reuse the same Jira issue instead of creating duplicates.
Configure the Zendesk “Update ticket” node
After the Jira Create issue node, add a Zendesk Update node. Configure it to:
Target the original Zendesk ticket ID.
Update the Jira issue key custom field with the newly created Jira key.
You can capture the Jira key from the create issue node, for example:
{{$node["Create issue"].json["key"]}}
Once this is stored in the custom field, the next time the ticket changes, the workflow will recognize the key and route to the “existing issue” path. Your systems stay in sync without extra effort.
Testing your Zendesk to Jira automation
Before you rely on this workflow in production, take a moment to test and validate it. This is where you turn a good idea into a dependable part of your process.
Enable the n8n webhook and send a test POST from Zendesk (or use curl/Postman) with a sample ticket payload.
Check the Zendesk Get node to confirm you see the full ticket data, including custom_fields.
Verify the Determine node correctly reads the Jira issue key field.
Test both branches:
A ticket that already has a Jira key set.
A ticket without any Jira key.
Look in Jira to confirm that:
Comments are added to existing issues.
New issues are created with the Zendesk link in the description.
Check Zendesk after issue creation to confirm the Jira key is populated in your custom field.
Once these checks pass, you have a working bridge between Zendesk and Jira that runs on its own.
Make your workflow more resilient and polished
With the core automation in place, you can start refining it. This is where you turn a working flow into a robust system that your team can trust at scale.
Error handling and reliability
Add a Set node or extra Function node to clean up comment text before sending it to Jira, for example stripping HTML or limiting length (see the sketch after this list).
Use Try/Catch patterns or an Error Trigger workflow in n8n to handle API failures gracefully and notify admins when something goes wrong.
Log responses from Jira and Zendesk, for example by storing them in a database or sending them to a monitoring channel for traceability.
Watch out for rate limits. If you handle a high volume of tickets, consider throttling requests or batching updates.
Secure your webhook with a secret token and validate incoming payloads before processing them.
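As a starting point for that cleanup step, here is a hedged Code node sketch; the comment field name follows the mapping used earlier in this guide and may differ in your payload:

// Hypothetical Code node: sanitize Zendesk comment text before it is sent to Jira.
const MAX_LEN = 2000; // assumption: cap comment length to keep Jira tidy

return $input.all().map(item => {
  const raw = String(item.json.comment || '');
  item.json.comment = raw
    .replace(/<[^>]*>/g, '') // strip HTML tags
    .replace(/\s+/g, ' ')    // collapse runs of whitespace
    .trim()
    .slice(0, MAX_LEN);      // enforce the length cap
  return item;
});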
Advanced enhancements for deeper integration
Map additional Zendesk fields such as priority, tags, and requester into Jira custom fields to preserve richer context.
Automate Stock Reports with n8n, Baserow & SendGrid
Imagine opening your inbox every morning to a clean, up-to-date summary of your investments, without copying numbers from websites or spreadsheets ever again. That is exactly what this n8n workflow template gives you.
In this guide, we will walk through how the workflow works, when it is useful, and how each n8n node fits together. You will see how it pulls stock data from Tradegate, enriches it with your Baserow holdings, calculates values and changes, then wraps everything into a polished HTML email sent via SendGrid.
What this n8n workflow actually does
At a high level, this automation:
Reads your portfolio from a Baserow table (name, ISIN, quantity, purchase price)
Fetches the latest data from Tradegate for each ISIN
Scrapes bid, ask, and other key details from the Tradegate HTML
Calculates current values, gains, and percentage changes
Builds a simple HTML report with a table and summary
Sends the report as an email using SendGrid
The result is a daily (or on-demand) stock report that looks professional, runs automatically, and is easy to adjust as your portfolio or tooling evolves.
Why bother automating stock reports?
If you are still checking quotes manually, you already know the pain. Opening multiple tabs, copying values into a spreadsheet, doing the same calculations again and again… it adds up quickly and mistakes are almost guaranteed.
With n8n handling the work for you, you can:
Schedule checks at consistent times, for example every weekday morning
Standardize how you pull and calculate data
Keep your portfolio view aligned with real market values
Get clear HTML summaries in your inbox that you can read on any device
In short, you trade repetitive manual work for a single, maintainable workflow.
How the workflow is structured
The template follows a clean, linear pipeline. Each node has a narrow job, which makes the whole thing easier to understand and tweak later.
Cron / Manual Trigger – starts the workflow on a schedule or on demand
Baserow (getAll:row) – pulls your list of holdings
HTTP Request – grabs the Tradegate order book page for each ISIN
HTML Extract – scrapes bid, ask, and other details from the HTML
Set nodes – formats numbers and calculates values and changes
Function (Build HTML) – assembles the final email-ready HTML report
SendGrid – sends the report to your chosen recipients
Let us go through these pieces in a slightly different order so you can see the story behind the automation.
Step 1: Decide when the report should run
Cron and Manual triggers in n8n
Start by adding two trigger nodes:
Cron – configure this to run at your preferred time, for example weekdays at 07:15. This is what gives you an automatic daily stock report.
Manual Trigger – keep this in the workflow for quick tests, debugging, or one-off runs whenever you want an ad-hoc report.
Both triggers can feed into the same chain of nodes, so you do not need to duplicate any logic.
Step 2: Store and fetch your portfolio from Baserow
Baserow node – your holdings as structured data
Next, the workflow needs to know what you actually own. That is where Baserow comes in.
Set up a Baserow table with at least these columns:
Name – the stock or instrument name
ISIN – used to query Tradegate
Count – how many units you hold
Purchase Price – your buy price per unit
Use the Baserow (getAll:row) node in n8n to read all rows from this table. Each row becomes an item that flows through the workflow, and each item carries the data needed to look up the corresponding Tradegate page and to calculate your current position.
Step 3: Pull Tradegate data for every ISIN
HTTP Request node – grabbing the order book page
For each row from Baserow, the workflow calls Tradegate. You do this with an HTTP Request node configured to send a GET request to the Tradegate order book URL.
Pass the ISIN from Baserow as a query parameter so the right page is requested for each holding. In practice, you will use an expression like:
isin = ={{$json["ISIN"]}}
Set the response format to string. That way, the raw HTML comes through untouched, which is exactly what you want for the next step where you parse it.
Step 4: Scrape the key values from Tradegate
HTML Extract node – parsing the HTML
Once the HTML from Tradegate is available, the HTML Extract node takes over. This node lets you define CSS selectors to pick out exactly the pieces of data you need, such as:
WKN – the WKN cell
ISIN – the ISIN cell
Currency – the currency field
Name – typically in a heading element, such as <h2>
and similar patterns for other table cells. These may need updating if Tradegate changes its HTML structure, so it is worth checking them from time to time.
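Purely for illustration, an HTML Extract configuration could look like the following; every CSS selector here is hypothetical and must be checked against the actual Tradegate page source:
Key: Bid, CSS Selector: #bid (hypothetical)
Key: Ask, CSS Selector: #ask (hypothetical)
Key: Name, CSS Selector: h2 (the heading mentioned above)
Key: WKN, CSS Selector: the table cell that holds the WKN value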
Step 5: Clean up the data and calculate portfolio metrics
First Set node – computing current value
Now that you have both Baserow data and scraped Tradegate values in each item, you can start calculating.
Use a Set node to normalize and compute a Current Value for each holding. One example expression looks like this:
Current Value = {{ (parseFloat($json["Bid"].replace(',', '.')) * parseFloat($node["Baserow"].json["Count"])).toFixed(2) }}
A couple of important details here:
parseFloat is used to turn the text values into numbers
Commas in prices are replaced with dots, which is crucial for correct parsing
toFixed(2) keeps the output neat with two decimal places
Second Set node – calculating change and percentage change
Next, add another Set node to derive:
Change – difference between current value and purchase price
Change (%) – percentage gain or loss relative to purchase price
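Hedged examples of those two expressions, assuming Baserow columns named Count and Purchase Price that use the same comma decimal style as the Tradegate prices:

Change = {{ ($json["Current Value"] - parseFloat($node["Baserow"].json["Purchase Price"].replace(',', '.')) * parseFloat($node["Baserow"].json["Count"])).toFixed(2) }}

Change (%) = {{ (($json["Current Value"] / (parseFloat($node["Baserow"].json["Purchase Price"].replace(',', '.')) * parseFloat($node["Baserow"].json["Count"])) - 1) * 100).toFixed(2) }}

Here the purchase price per unit is multiplied by Count, so the change is measured against the full cost basis of the position.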
Step 6: Build the HTML report
Function node – assembling the email
Next, a Function node (the Build HTML step from the pipeline overview) takes the calculated items and assembles the report. All the HTML is typically stored in a single variable, for example email_html, and returned as part of the item JSON. Keep the layout simple: a basic table, some light inline styles, and a short summary paragraph so it works well in most email clients.
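A minimal sketch of that Function node, assuming the Set nodes above pass Name through along with the calculated fields:

// Hypothetical Build HTML node: render all holdings into one email-ready report.
let rows = '';
for (const item of items) {
  const j = item.json;
  rows += `<tr><td>${j["Name"]}</td><td>${j["Current Value"]}</td>` +
          `<td>${j["Change"]}</td><td>${j["Change (%)"]}%</td></tr>`;
}
const email_html =
  '<h2>Daily stock report</h2>' +
  '<table border="1" cellpadding="4">' +
  '<tr><th>Name</th><th>Current Value</th><th>Change</th><th>Change %</th></tr>' +
  rows +
  '</table>';
// One output item carrying the finished HTML for the SendGrid node.
return [{ json: { html: email_html } }];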
Step 7: Email the report with SendGrid
SendGrid node – delivering the final result
The last step is to send that HTML to your inbox.
Use the SendGrid node (or a similar transactional email provider) and configure it to:
Set the contentType to text/html
Map the HTML output from the Function node, for example $json["html"], into the email body
Specify the sender, recipients, and subject line
Once this is in place, every scheduled run of the workflow will produce and send a fresh report automatically.
Practical tips to keep your workflow stable
Respect rate limits and site policies
Tradegate, like many exchanges, may have rate limits or specific rules about scraping. To stay on the safe side:
Use n8n’s Wait node to add small delays between HTTP requests
Limit concurrency so you do not hammer the remote server
Review the site’s terms of use and adapt accordingly
Data hygiene and robustness
Clean data means more reliable reports. A few best practices:
Normalize numbers, such as replacing commas with dots before using parseFloat
Handle missing or malformed values gracefully, for example by skipping problematic rows
Add basic validations so you can spot when parsing fails or selectors no longer match
Error handling and retries
Things will break occasionally, especially if Tradegate changes its HTML. To make this less painful:
Use n8n’s error workflows to catch and process failures
Connect HTTP and parsing steps to a retry or alerting subflow
Send a short error email or log a note back into Baserow when a row cannot be parsed
Security best practices
Because this workflow touches credentials and possibly sensitive data, keep security in mind:
Store API keys and credentials in n8n’s credentials store, not in plain text fields
Restrict access to production workflows and logs
Avoid writing secrets into output fields or debug messages
Ideas to enhance the workflow
Once the basic version is running smoothly, you can extend it in a few directions:
Add currency conversion if your holdings are in multiple currencies
Store daily snapshots in Baserow or a database and build simple charts or historical comparisons
Expose a webhook or Slack integration so you can trigger ad-hoc reports on demand
Improve email styling with branding, better typography, or conditional highlighting of winners and losers
Testing checklist before you rely on it
Before you fully trust the automation, run through this quick checklist:
Use the Manual Trigger to test a few portfolio rows
Inspect the raw HTML from the HTTP Request node to confirm Tradegate’s layout matches your expectations
Verify that the CSS selectors in the HTML Extract node still match the correct elements
Double check number parsing and calculations, especially if your locale uses commas in numbers
Wrapping up
This n8n workflow template gives you a repeatable, low-maintenance way to generate daily stock reports. It combines:
Baserow for storing your holdings
Tradegate as the source of live market prices
SendGrid to deliver clean HTML summaries straight to your inbox
Because the workflow is modular, you can easily switch data sources, extend the calculations, or plug in a different email provider without rewriting everything.
Ready to stop doing portfolio updates by hand? Import or recreate the nodes described here, adjust the CSS selectors for your specific Tradegate pages, and put the Cron node on a schedule that suits you.
If you want the example workflow JSON or help tailoring it to your setup, feel free to reach out.
Call to action: Try this workflow in your n8n instance today, or message me for a customized version, troubleshooting support, or a live walkthrough. Subscribe if you would like more automation recipes like this in your inbox.
From Document Chaos to Smart Answers: How One Marketer Built a RAG Chatbot with Google Drive & Qdrant
On a rainy Tuesday afternoon, Maya stared at yet another Slack message from sales:
“Hey, do we have the latest onboarding process for enterprise customers? The PDF in Drive looks outdated.”
She sighed. Somewhere in their sprawling Google Drive were dozens of PDFs, slide decks, and Google Docs that all seemed to describe slightly different versions of the same process. As head of marketing operations, Maya was supposed to be the person who knew where everything lived. Instead, she was spending her days hunting through folders and answering the same questions over and over.
That was the moment she decided something had to change.
The problem: Knowledge everywhere, answers nowhere
The company had grown fast. Teams were diligent about documenting things, but that only made the problem worse. There were:
Customer onboarding guides in PDFs
Support playbooks in Google Docs
Pricing explanations in scattered slide decks
Internal FAQs buried in shared folders
People were not short on documentation. They were short on answers.
Maya wanted a way for anyone in the company to simply ask a question in plain language and get a reliable, context-aware response, grounded in their existing docs. Not a generic chatbot, but one that actually understood their internal knowledge base.
That search led her to the concept of Retrieval-Augmented Generation (RAG), and eventually to an n8n workflow template that promised exactly what she needed: a production-ready RAG chatbot that could index documents from Google Drive, store embeddings in Qdrant, and serve conversational answers using Google Gemini.
Discovering RAG: Why this chatbot is different
As Maya dug deeper, she realized why a RAG chatbot was different from the generic AI bots she had tried before.
Instead of relying only on a language model’s training data, RAG combines:
A vector store for fast semantic search
A large language model for natural, context-aware responses
In practical terms, that meant:
Documents from Google Drive could be indexed and searched semantically
Qdrant would store embeddings and metadata for fast retrieval
Google Gemini would generate answers grounded in those documents
n8n would orchestrate the entire workflow, from ingestion to chat
For a team like hers, this was ideal. Their internal docs, knowledge bases, and customer files could finally become a living, searchable knowledge layer behind a simple conversational interface.
The architecture that changed everything
Maya decided to try the n8n template. Before touching anything, she sketched the architecture on a whiteboard so the rest of the team could understand what she was about to build.
At a high level, the system looked like this:
Document source: A specific Google Drive folder that held all key docs
Orchestration: An n8n workflow to discover files, download them, and extract text
Text processing: A token-based splitter and metadata extractor to prepare content
Embeddings: OpenAI text-embedding-3-large (or equivalent) to turn chunks into vectors
Vector store: A Qdrant collection, one per project or tenant
Chat model: Google Gemini for conversational answer generation
Human-in-the-loop: Telegram for approvals on destructive operations
History: Google Docs to store chat transcripts for later review
It sounded complex, but the n8n template broke it into manageable pieces. Each part of the story was actually an n8n node, wired together into a repeatable workflow.
Rising action: Turning messy Drive folders into structured knowledge
To get from chaos to chatbot, Maya had to wire up a few critical components inside n8n. The template already had them in place, but understanding each one helped her customize and trust the system.
Finding and downloading the right files
The first challenge was obvious: how do you reliably pull all relevant files from Google Drive without melting APIs or memory?
The workflow started with the Google Drive node, configured to:
List files in a specific folder ID
Loop through file IDs in batches
Download each file safely without hitting rate limits
n8n’s splitInBatches node helped here. Instead of trying to download hundreds of files at once, the workflow processed them in small, controlled chunks, which protected both Google APIs and her n8n instance from spikes.
Extracting text and rich metadata
Once files were downloaded, the next step was to turn them into something the AI could actually work with.
The workflow included a text extraction step that pulled the raw content from PDFs, DOCX files, and other formats. Then came a crucial part: an information-extractor stage that generated structured metadata, such as:
title
author
overarching_theme
recurring_topics
pain_points
keywords
Maya quickly realized this metadata would become her secret weapon. By attaching it to each vector, she could later:
Filter search results by specific files or themes
Perform safe, targeted deletes
Slice the knowledge base by project or customer type
Splitting long documents into smart chunks
Some of their onboarding guides ran to dozens of pages. Sending them as a single block to an embedding model was not an option.
The template used a token-based splitter to break long documents into smaller chunks, typically:
2,000 to 3,000 tokens per chunk
This struck the right balance: chunks were large enough to preserve context, but small enough to avoid truncation and respect embedding model limits. Maya learned that going too small could hurt answer quality, since the model would lose important surrounding context.
Generating embeddings and upserting into Qdrant
With chunks ready, the workflow called the embedding model, using:
OpenAI text-embedding-3-large (or a compatible provider)
Each chunk became a vector, enriched with metadata like:
file_id
title
keywords
Extracted themes and topics
These vectors were then upserted into a Qdrant collection. Maya followed a consistent naming scheme, such as:
project-<project_name> for per-project isolation
tenant-<tenant_id> for multi-tenant setups
That design would later make it easy to enforce data boundaries and control quotas.
The turning point: When the chatbot finally spoke
After a week of tinkering, Maya was ready to move from ingestion to interaction. This was the part her colleagues actually cared about: could they ask a question and get a useful answer?
Wiring up chat and retrieval with Google Gemini
The template exposed a chat trigger inside n8n. When someone sent a query, the workflow did three things in quick succession:
Sent the query to Qdrant as a semantic retrieval tool
Retrieved the top K most relevant chunks
Passed those chunks as context to Google Gemini
Gemini then generated a response that was not just plausible, but grounded in their actual documents. By default, Maya started with a topK value between 5 and 10, then adjusted based on answer quality.
On the first real test, a sales rep asked:
“What are the key steps in onboarding a new enterprise customer using SSO?”
The chatbot responded with a clear, step-by-step explanation, pulled from their latest onboarding guide and support documentation, complete with references to API keys and setup steps. For the first time, Maya saw their scattered docs behave like a single, coherent source of truth.
Adding memory and chat history
To make conversations feel natural, the template also included a short-term memory system. It kept a rolling window of about 40 messages, so the chatbot could maintain context across multiple turns.
At the same time, the workflow persisted chat history to Google Docs. This served several purposes:
Auditing what information was being surfaced
Reviewing tricky conversations for future improvements
Demonstrating compliance and oversight to leadership
The chatbot was no longer a black box. It was a transparent system that the team could inspect and refine.
Keeping control: Safe deletes and human approvals
With power came a new concern. What happened if they needed to remove outdated or sensitive content from the vector store?
The template had anticipated this with a human-in-the-loop flow for destructive operations.
When Maya wanted to remove content related to a specific file, the workflow would:
Assemble a list of file_id values targeted for deletion
Send a notification via Telegram
Require a double-approval before proceeding
Run a deletion script that filtered Qdrant points by metadata.file_id (an example request is shown below)
This approach made accidental data loss far less likely. No one could wipe out large portions of the knowledge base with a single misclick.
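For reference, the deletion request such a script sends to Qdrant looks roughly like this; the collection name and file IDs are placeholders, and Qdrant’s points/delete endpoint accepts a metadata filter of this shape:

POST https://<qdrant-host>/collections/project-acme/points/delete
{
  "filter": {
    "must": [
      { "key": "metadata.file_id", "match": { "any": ["<file_id_1>", "<file_id_2>"] } }
    ]
  }
}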
How Maya set everything up in practice
Looking back, the setup itself followed a clear sequence. Here is how she put the n8n RAG chatbot into production.
1. Provisioning the core services
First, she ensured all underlying services were ready:
Qdrant deployed, either hosted or self-hosted
Google Cloud APIs enabled for Drive and Gemini (PaLM)
OpenAI (or another embedding provider) configured
2. Importing and configuring the n8n template
Next, she imported the provided workflow template into n8n and added credentials for:
Google Drive
Google Docs
Google Gemini
Qdrant API
OpenAI embeddings
In a couple of Set nodes, she defined the key variables:
The Google Drive folder ID that would serve as the document source
The Qdrant collection name for this project
3. Running a small test ingest
Before going all in, Maya pointed the workflow at a small folder of representative documents and ran a test ingest. She verified that:
Text was extracted correctly
Metadata fields were populated as expected
Vectors were successfully upserted into the Qdrant collection
4. Testing chat and tuning retrieval
Finally, she tested the chat trigger with real questions from sales and support. When answers were too shallow or missed context, she experimented with:
Adjusting chunk size within the 1,000 to 3,000 token range
Tuning topK between 5 and 10 for better relevance
Within a few iterations, the chatbot felt reliable enough to introduce to the rest of the company.
Best practices Maya learned along the way
As the system moved from experiment to daily tool, several best practices emerged.
Designing chunks and metadata
Chunk size: Keep chunks in the 1,000 to 3,000 token range, depending on the embedding model. Avoid tiny chunks that strip away context.
Metadata: Always attach fields like file_id, title, keywords, and extracted themes. This makes filtered search and safe deletes possible.
Collection and retrieval strategy
Collection design: Use per-project or per-environment collections to isolate data and manage quotas.
Top-K tuning: Start with topK=5-10 and adjust based on how relevant the answers feel in practice.
Scaling without breaking APIs
Rate limits: Batch downloads and embedding calls. Use n8n’s splitInBatches and add retry or backoff logic to handle throttling gracefully.
Access control: Restrict credentials for Drive and Qdrant, audit who can access what, and enforce TLS for data in transit.
Security, compliance, and peace of mind
As more teams started relying on the chatbot, security moved from an afterthought to a central requirement. Maya worked with IT to ensure the system aligned with their data governance rules.
They implemented policies to:
Encrypt data both at rest and in transit
Anonymize PII where required
Maintain an audit trail for data access and deletions
Use tenant separation and strict RBAC for Qdrant and n8n in multi-tenant scenarios
The combination of Telegram approvals, metadata-based deletes, and detailed chat logs gave leadership confidence that the system was not just powerful, but also controlled.
When things go wrong: Troubleshooting in the real world
Not everything worked perfectly on day one. Along the way, Maya hit a few common pitfalls and learned how to fix them.
Empty or weak responses: She increased topK, reduced chunk size slightly, and double-checked that embeddings had been upserted with the correct metadata.
Rate limit errors: She added retry and backoff logic, and split downloads into smaller batches.
Truncated text: She confirmed that the extractor handled PDFs and DOCX files properly, and used a better OCR solution for scanned PDFs.
Deletion mistakes avoided: She kept the Telegram double-approval flow mandatory before any script could delete vectors based on metadata.file_id.
Cost and performance: Keeping the chatbot sustainable
As usage grew, so did costs. Maya tracked where the money was going and adjusted accordingly.
She found that the main cost drivers were:
Embedding generation for large document sets
Large language model calls for chat responses
To keep things efficient, she:
Used shorter retrieved contexts when possible
Cached embeddings for documents that had not changed
Window Memory – Maintains recent message history for the RAG agent.
RAG Agent (LangChain) – Orchestrates retrieval and generation for grading.
OpenAI Chat Model – Produces the actual grade, feedback, and confidence values.
Google Sheets Append – Logs grading outcomes and metadata.
Slack Alert – Sends error notifications to a designated channel.
2.2 Typical Payload & Metadata
The webhook typically receives a JSON payload with fields such as:
submission_id
student_id
question_id
question_text (optional but recommended)
student_answer
These identifiers are reused as metadata in Pinecone and as columns in Google Sheets, making it easier to trace individual grading decisions.
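A representative payload might look like this (all values are illustrative):

{
  "submission_id": "sub_00123",
  "student_id": "stu_00456",
  "question_id": "q_007",
  "question_text": "Explain the difference between TCP and UDP.",
  "student_answer": "TCP is connection-oriented and guarantees delivery, while UDP is connectionless and faster but unreliable."
}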
3. Node-by-Node Breakdown
3.1 Webhook Trigger
The workflow starts with an n8n Webhook node configured to accept POST requests over HTTPS. This node:
Receives the quiz submission payload
Validates the presence of required fields like student ID and answer text
Forwards the parsed JSON to downstream nodes
Ensure the webhook URL is secured via HTTPS and optionally protected behind an authentication mechanism if required by your environment.
3.2 Text Splitter
The Text Splitter node processes the student_answer (or combined question + answer text) and splits it into smaller segments. In the template, the typical configuration is:
chunkSize = 400 characters
chunkOverlap = 40 characters
This chunking:
Improves embedding quality by keeping each segment focused
Enhances retrieval recall in Pinecone when querying similar content
For very short answers, the splitter may produce a single chunk. For longer answers, overlapping segments preserve context across chunk boundaries.
3.3 Cohere Embeddings Node
Each chunk from the Text Splitter is passed into a Cohere Embeddings node. This node:
Uses your Cohere API credentials configured in n8n
Converts each text chunk into an embedding vector
Outputs vectors that can be stored in or queried from Pinecone
These embeddings are later used for similarity search against:
Canonical rubrics
Example answers
Instructor notes
Previously graded responses
3.4 Pinecone Insert
The Pinecone Insert node writes embeddings into a Pinecone index, such as:
index name: quiz_auto_grader
For each embedding, the workflow typically stores metadata including:
student_id
question_id
submission_id
timestamp
Optionally, flags indicating whether it is a rubric item, example answer, or a live submission
This index acts as a searchable knowledge base of grading-relevant content, allowing the RAG agent to ground its decisions in prior examples and rubrics.
3.5 Pinecone Query
When grading a specific answer, the workflow uses a Pinecone Query node to retrieve context. The node:
Accepts the embedding of the current answer as the query vector
Searches the quiz_auto_grader index for top-k similar vectors
Returns the most relevant rubric entries, example answers, or past submissions
The retrieved documents are then attached as context to the RAG agent, typically passed via a Vector Tool configured in the LangChain integration.
3.6 Window Memory
A Window Memory node is used to maintain short-term conversational context for the RAG agent. This is particularly useful if:
You grade multiple questions for the same student within a single workflow run
You want the agent to maintain consistent grading style across a small sequence of interactions
Window Memory stores the most recent messages up to a configurable limit, preventing the context window from growing indefinitely.
3.7 RAG Agent (LangChain Agent in n8n)
The core grading logic is implemented using a RAG Agent based on LangChain within n8n. The agent:
Uses the OpenAI Chat Model as the language model
Has access to a Vector Tool that queries Pinecone
Receives system and user messages that describe the grading task
Combines retrieved context with the student answer to produce a grade
The system message is typically configured along the lines of:
“You are an assistant for Quiz Auto Grader”
The agent outputs:
A numeric grade, often on a 0-100 scale
Short, human-readable feedback for the student
An optional confidence score between 0 and 1
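Using the JSON output format described in section 4.5, a typical agent response looks like this (values are illustrative):

{
  "grade": 85,
  "feedback": "Good coverage of reliability; mention congestion control for full marks.",
  "confidence": 0.9
}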
3.8 OpenAI Chat Model
The OpenAI Chat node provides the underlying language model for the RAG agent. It:
Consumes the combined prompt that includes question, rubric, retrieved context, and student answer
Returns a structured response that the agent interprets as grading output
Model selection, temperature, and other generation parameters can be tuned based on how deterministic you want the grading to be.
3.9 Google Sheets Append
After the grade is produced, a Google Sheets Append node logs the result. Typical columns include:
submission_id
student_id
question_id
grade
feedback
confidence (if available)
grader_notes or raw model output
timestamp
This sheet functions as an audit log and makes it easy to review or export grading results for further analysis.
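As a small sketch of shaping one grading result into a row with these columns (toSheetRow is a hypothetical helper; in n8n you would typically do this mapping in a Code node or in the Sheets node's field mapping):

// Shape one grading result into the row the Append node writes.
function toSheetRow(r: {
  submission_id: string;
  student_id: string;
  question_id: string;
  grade: number;
  feedback: string;
  confidence?: number;
  raw?: string;
}) {
  return {
    submission_id: r.submission_id,
    student_id: r.student_id,
    question_id: r.question_id,
    grade: r.grade,
    feedback: r.feedback,
    confidence: r.confidence ?? "", // blank when the agent omits it
    grader_notes: r.raw ?? "",      // raw model output for auditing
    timestamp: new Date().toISOString(),
  };
}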
3.10 Slack Error Alert
For reliability, the workflow includes a Slack node that sends alerts when errors occur. If any of the following steps fails:
Embedding generation
Pinecone operations
RAG agent execution
Google Sheets logging
the workflow sends a message to an #alerts channel with:
The error message or stack trace (as available)
The submission_id or related identifiers to locate the failed item
This allows quick triage and manual intervention when needed.
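With a Slack incoming webhook, the alert boils down to a single POST; SLACK_WEBHOOK_URL and the message wording here are illustrative assumptions:

// Hedged sketch of the alert path using a Slack incoming webhook.
async function sendAlert(submissionId: string, err: Error): Promise<void> {
  await fetch(process.env.SLACK_WEBHOOK_URL ?? "", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      text: `Quiz Auto Grader failed for submission ${submissionId}: ${err.message}`,
    }),
  });
}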
4. Configuration & Integration Notes
4.1 Credentials & Environment
Configure the following credentials in n8n:
OpenAI API key for the chat model
Cohere API key for embeddings
Pinecone API key and environment
Google credentials for Sheets access
Slack bot token or webhook for alerts
It is recommended to run n8n in a managed or containerized environment and to supply these keys via environment variables rather than hard-coding them in the workflow.
4.2 Data Privacy & Security
When working with student data:
Use HTTPS for the Webhook endpoint
Restrict access to Pinecone indexes to only the services that require it
Encrypt stored credentials in n8n and your hosting environment
Limit Google Sheets sharing to authorized staff only
Avoid logging unnecessary personally identifiable information (PII) in external systems
4.3 Rubric & Example Storage Strategy
To improve grading consistency, store:
Canonical rubrics for each question
High-quality example answers
Instructor notes or grading guidelines
as vectors in the same Pinecone index. Label them clearly in metadata (for example, with a flag distinguishing rubric items from live submissions) so the RAG agent can retrieve them alongside student answers. This keeps the grading process transparent and aligned with instructor expectations.
4.4 Chunk Size & Overlap Tuning
The default template uses:
chunkSize = 400
chunkOverlap = 40
Adjust these values based on:
Typical length of student responses
Embedding model characteristics
Desired tradeoff between context richness and noise
Very small chunks may lose context, while very large chunks may reduce retrieval precision.
4.5 Prompt Engineering for the RAG Agent
Clear and constrained prompts are essential. Define:
A system message that describes the grader’s role and responsibilities
A user message structure that includes question, rubric, and student answer
An expected output format, such as JSON, to simplify downstream parsing
A typical system and user prompt combination looks like:
<system>
You are an assistant for Quiz Auto Grader. Given a question, rubric, and a student's answer, return a JSON object with keys: grade (0-100), feedback (short text), confidence (0-1).
</system>
<user>
Question: ...
Rubric: ...
Student answer: ...
</user>
By enforcing a structured JSON output, the Google Sheets node can reliably map fields to columns without additional parsing complexity.
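Even with a strict prompt, models occasionally wrap JSON in prose or code fences, so it is worth parsing defensively before logging. A sketch (parseGradingJson is a hypothetical helper, not part of the template):

// Extract and validate the JSON object from the model's reply.
function parseGradingJson(raw: string): { grade: number; feedback: string; confidence?: number } {
  const match = raw.match(/\{[\s\S]*\}/); // grab the first-to-last brace span
  if (!match) throw new Error("No JSON object found in model output");
  const parsed = JSON.parse(match[0]);
  if (typeof parsed.grade !== "number" || typeof parsed.feedback !== "string") {
    throw new Error("Model output missing required grade/feedback fields");
  }
  return parsed;
}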
5. Error Handling
5.1 Common Failure Modes
Typical failures the workflow needs to handle include:
Invalid or missing payload fields in the webhook request
Authentication failures due to expired or misconfigured API keys
The template is designed to detect such failures at each step and trigger the Slack alert path.
5.2 Slack Alerts & Manual Review
When an error is detected, the Slack node:
Sends a message to an #alerts channel
Includes the relevant identifiers (for example, submission_id)
You can then:
Investigate the underlying issue
Re-run the workflow for the affected submission
Perform manual grading if necessary
For additional robustness, consider enabling retries on transient network errors and defining a dead-letter queue or dedicated sheet for failed items that require manual attention.
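A generic retry wrapper along these lines is one way to implement that; the attempt count and backoff delays are arbitrary illustrations, not template defaults:

// Retry a transiently failing async operation with exponential backoff.
async function withRetries<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // exponential backoff: 1s, 2s, 4s, ...
        await new Promise((r) => setTimeout(r, 1000 * 2 ** i));
      }
    }
  }
  throw lastError;
}
// Usage: const matches = await withRetries(() => queryContext(vector));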