Sync Discord Events to Google Calendar with n8n


Ever said “yes” to a Discord event, only to forget about it because it never made it onto your Google Calendar? If you live in your calendar but your community lives in Discord, that gets old fast.

In this guide, we’ll walk through a ready-to-use n8n workflow template that quietly keeps your Discord scheduled events and Google Calendar in sync. No more copy-pasting, no more “What time was that again?” Just one source of truth in your calendar.

We’ll cover what the template does, when you’d want to use it, and exactly how to set it up step by step. Grab a coffee and let’s get your automation running.

What this n8n workflow template actually does

At a high level, this workflow acts like a bridge between your Discord server and a Google Calendar. On a schedule that you choose, it:

  • Calls the Discord API to list all scheduled events in a specific server
  • Checks Google Calendar to see if each Discord event already exists there (using the Discord event id)
  • Creates new Google Calendar events when needed
  • Updates existing Google Calendar events if any details have changed

The end result: every scheduled event on your Discord server appears in your Google Calendar with matching details, and stays updated over time.

When to use this Discord-to-Google Calendar sync

This template is perfect if you:

  • Run a community, guild, or server that schedules events in Discord
  • Rely on Google Calendar to plan your day or share availability
  • Want your team or community to see Discord events in a shared calendar
  • Are tired of manually recreating every Discord event in Google Calendar

In other words, if Discord is where you organize events but Google Calendar is where you actually look, this workflow saves you from juggling both.

Why this approach works so reliably

The magic here is in how we match Discord events to Google Calendar events. Discord gives each scheduled event a stable id. Instead of trying to match on names or times (which can change), we simply reuse that id as the Google Calendar eventId.

That means:

  • Each Discord event maps to exactly one Google Calendar event
  • The workflow can easily tell if an event already exists in Calendar
  • We avoid duplicates and weird mismatches when details are edited

The logic stays simple: if an event with this ID exists, update it. If not, create it.

What you’ll build in n8n

Here is the core node flow you’ll end up with:

  • On schedule – Triggers the sync every X minutes
  • Set / Configure – Stores your Discord guild_id (server ID)
  • HTTP Request – Lists scheduled events from Discord
  • Google Calendar (Get events) – Tries to fetch a matching event using the Discord event id
  • If – Decides whether to create or update in Google Calendar
  • Google Calendar (Create event) – Creates a new calendar event when needed
  • Google Calendar (Update event) – Updates an existing calendar event if it already exists

Let’s go through how to set this up from start to finish.

Before you start: prerequisites

You’ll need a few things ready to go:

  • An n8n instance, either cloud or self-hosted
  • A Discord bot token with permission to read guild scheduled events
  • A Google account with OAuth credentials that allow access to Google Calendar
  • The Google Calendar you want to sync events into, plus its Calendar ID (you’ll select it inside n8n)

Once you have these, you’re ready to wire everything together.

Step 1 – Create and configure your Discord bot

First, you need a Discord bot that can see scheduled events in your server.

  1. Create a bot in the Discord Developer Portal.
  2. Invite the bot to your server with permissions to view scheduled events.
  3. Copy the bot token. You’ll use this in n8n to authenticate API calls.

In n8n, open the HTTP Request node that lists scheduled Discord events and set up authentication:

  • Use Header Auth
  • Add a header: Authorization with value Bot YOUR_BOT_TOKEN

In production, you’ll want to store that token in n8n credentials, not directly in the node. We’ll touch on security best practices later.

Step 2 – Configure the schedule trigger

Next up is deciding how often you want the sync to run.

In the On schedule node, choose a cadence that fits your use case, for example:

  • Every 5 minutes if you want near real-time sync
  • Every 15 or 30 minutes if you prefer lighter API usage

More frequent runs give faster updates, but also mean more calls to both Discord and Google APIs. Pick a balance that feels right for your server size and activity.

Step 3 – Add your Discord server ID (guild_id)

The workflow needs to know which Discord server to pull events from. That’s where the guild_id comes in.

In n8n:

  1. Add a Set node (you might name it “Configure”).
  2. Create a field called guild_id.
  3. Paste in your Discord server ID.

Not sure where to find the server ID? In Discord:

  1. Go to User Settings > Advanced.
  2. Enable Developer Mode.
  3. Right-click your server name and select Copy ID.

Step 4 – List scheduled events from Discord

Now we’ll fetch the actual events from your server using the Discord API.

In your HTTP Request node, configure it to call:

GET https://discord.com/api/guilds/{{guild_id}}/scheduled-events?with_user_count=true

Key details:

  • Method: GET
  • URL: use the URL above, with {{guild_id}} coming from your Set node
  • Headers:
    • Authorization: Bot <your_token>
    • Content-Type: application/json (optional for a GET with no body; n8n adds it automatically when a body is sent)
  • Query parameter: set with_user_count=true if you want attendee counts in the response

This node will output an array of Discord scheduled event objects, each with fields like id, name, scheduled_start_time, scheduled_end_time, and so on.
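
If you want to sanity-check this request outside n8n first, here is a minimal Node.js (18+) sketch of the same call. The GUILD_ID and DISCORD_BOT_TOKEN environment variables are placeholders, not part of the template.

async function listDiscordEvents() {
  // Same endpoint the HTTP Request node calls, authenticated with the bot token
  const res = await fetch(
    `https://discord.com/api/guilds/${process.env.GUILD_ID}/scheduled-events?with_user_count=true`,
    { headers: { Authorization: `Bot ${process.env.DISCORD_BOT_TOKEN}` } }
  );
  if (!res.ok) throw new Error(`Discord API returned ${res.status}`);
  const events = await res.json();
  // Each event has id, name, scheduled_start_time, scheduled_end_time, description, ...
  for (const event of events) {
    console.log(`${event.id} | ${event.name} | ${event.scheduled_start_time}`);
  }
}

listDiscordEvents();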

Step 5 – Check Google Calendar for each Discord event

With the Discord events in hand, the next step is to see if each one already exists in your Google Calendar.

In n8n, add a Google Calendar node and use the Get operation. For each incoming Discord event, map:

  • eventId to {{ $json.id }} (the Discord event id)

If the Get operation finds a matching event, you’ll get the event data back. If it fails or returns nothing, that means this Discord event has not been synced to Google Calendar yet.

Step 6 – Decide: create or update the event?

Now we need to branch the workflow based on whether the Google Calendar event already exists.

Add an If node and point it at the output of the Google Calendar Get node. A common condition is to check whether the result has an id value, for example:

{{ $json.id }} isNotEmpty

Interpretation:

  • If that condition is true, the event exists in Google Calendar, so you’ll update it.
  • If it’s false, no event was found, so you’ll create a new one.

This simple check keeps the logic clean and avoids messy duplicate handling.

Step 7 – Create new Google Calendar events

On the “create” branch of the If node, add a Google Calendar node with the Create operation. This is where you map Discord fields to Google Calendar fields.

Typical mappings:

  • Start: {{ $json.scheduled_start_time }}
  • End: {{ $json.scheduled_end_time }} (or calculate a duration if end time is missing)
  • Summary (title): {{ $json.name }}
  • Location: {{ $json.entity_metadata.location }}
  • Description: {{ $json.description }}
  • ID / eventId: explicitly set to {{ $json.id }}

That last step is crucial. By assigning the Google Calendar event ID to the Discord event id, future runs of the workflow will always be able to find and update the same event.
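
To make the mapping concrete, here is a small JavaScript sketch of the same field mapping written as a plain function, roughly what the Create node's expressions do. The one-hour fallback for a missing end time is an assumption, not part of the template.

function toCalendarEvent(discordEvent) {
  const start = discordEvent.scheduled_start_time;
  // Discord may omit the end time; fall back to start + 1 hour (assumption)
  const end =
    discordEvent.scheduled_end_time ||
    new Date(new Date(start).getTime() + 60 * 60 * 1000).toISOString();

  return {
    id: discordEvent.id, // reuse the Discord snowflake as the Google Calendar event ID
    summary: discordEvent.name,
    description: discordEvent.description || '',
    location: discordEvent.entity_metadata?.location || '',
    start: { dateTime: start },
    end: { dateTime: end },
  };
}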

Step 8 – Update existing Google Calendar events

On the “update” branch of the If node, add another Google Calendar node, this time using the Update operation.

You’ll again map the Discord event fields to the Google Calendar ones, similar to the Create operation:

  • eventId: use the existing ID from the Get node (which matches the Discord id)
  • Start: {{ $json.scheduled_start_time }}
  • End: {{ $json.scheduled_end_time }}
  • Summary: {{ $json.name }}
  • Location: {{ $json.entity_metadata.location }}
  • Description: {{ $json.description }}

This way, if you edit the time, title, description, or location in Discord, the corresponding Google Calendar event will be updated on the next run.

Tips, gotchas, and troubleshooting

Once the core sync is working, a few details are worth paying attention to.

Time zones

  • Discord scheduled times are ISO 8601 strings.
  • Google Calendar also expects proper date-time formats with time zone info.
  • If you see events at the wrong time, normalize them in n8n with a Date & Time node or a small Function node to adjust time zones, as in the sketch below.
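
As a rough example of that normalization, the snippet below converts a Discord timestamp to RFC 3339 and attaches an explicit time zone for Google Calendar. The Europe/Berlin zone is just an assumption for illustration.

const discordTime = '2024-07-01T18:00:00+00:00'; // example value from scheduled_start_time
const start = {
  dateTime: new Date(discordTime).toISOString(), // normalized to UTC, e.g. 2024-07-01T18:00:00.000Z
  timeZone: 'Europe/Berlin',                     // assumption: the calendar's IANA time zone
};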

Event IDs

  • Reusing the Discord id as the Google Calendar eventId keeps matching simple.
  • Google Calendar only accepts custom event IDs made of base32hex characters (lowercase letters a to v and digits 0-9) and between 5 and 1024 characters long; Discord’s numeric snowflake IDs meet both rules, but it is still worth testing with your own calendar.

Permissions

  • Make sure your Discord bot has permission to view guild scheduled events and is actually in the server.
  • For Google, your OAuth credentials must include the proper Calendar scopes.

Rate limits

  • Discord and Google both enforce rate limits.
  • If you have a large number of events or a very frequent schedule, consider backing off a bit.
  • You can add retry or backoff logic in n8n if you start hitting rate limit errors; a minimal sketch follows below.
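
For reference, a minimal retry-with-backoff sketch (usable in an n8n Code node or an external script) might look like this. It honors a Retry-After header when present and otherwise backs off exponentially; adapt the header handling to the API you are calling.

async function fetchWithBackoff(url, options, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const res = await fetch(url, options);
    if (res.status !== 429) return res; // not rate limited, hand the response back
    // Wait for Retry-After seconds if provided, otherwise 1s, 2s, 4s, ...
    const waitSeconds = Number(res.headers.get('retry-after')) || 2 ** attempt;
    await new Promise((resolve) => setTimeout(resolve, waitSeconds * 1000));
  }
  throw new Error(`Still rate limited after ${maxRetries} attempts: ${url}`);
}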

Edge cases

  • Deleted or canceled Discord events are not automatically removed from Google Calendar in the basic flow.
  • If you want strict one-way or two-way sync, add extra logic to handle deletions or cancellations, such as:
    • Periodically checking for events that no longer exist in Discord
    • Marking or deleting the matching Google Calendar event

Security best practices

You’re dealing with tokens and credentials here, so a few precautions help keep things safe.

  • Store the Discord bot token and Google OAuth credentials in n8n credentials, not as plain text in nodes.
  • Give your Discord bot only the permissions it actually needs.
  • Use n8n’s OAuth2 credential type for Google Calendar so refresh tokens are handled automatically and your credentials stay valid over time.

How to test your workflow

Before you let the schedule run on its own, it’s worth testing the setup end to end.

  1. In n8n, run the workflow manually or trigger the On schedule node once.
  2. Create a scheduled event in Discord and wait for the workflow to run.
  3. Check your Google Calendar and confirm:
    • The event appears
    • The details match (title, time, description, location)
    • The event ID is the same as the Discord event id
  4. Edit the event in Discord (for example, change the time or name) and run the workflow again.
  5. Verify that the Google Calendar event updates accordingly.
  6. If something fails, inspect the n8n execution logs and API responses to fix any field mappings or permission issues.

Ideas for advanced enhancements

Once the basic sync is humming along, you can add extra logic to make it even smarter.

  • Change detection: Compare “last modified” timestamps to avoid unnecessary updates when nothing has changed.
  • Notifications: Send a Slack message or email whenever a new event is created or an existing one is updated.
  • Cancellation handling: Detect deleted or canceled Discord events and either remove or mark the corresponding Google Calendar events.

Wrapping up

Using n8n to sync Discord scheduled events to Google Calendar is a simple way to keep your community events visible where you actually plan your life. The core pattern is straightforward: list Discord events, look them up in Google Calendar by ID, then create or update as needed.

You can start with the template as-is, then tweak field mappings, time handling, or notification logic to match how your server runs events.

If you’d like to go further, for example handling cancellations or adding reminders, you can extend the same workflow with a few extra nodes.

Call to action: Clone the template into your n8n instance, create a test scheduled event in Discord, and watch it appear in your Google Calendar automatically. Once you see it working, you’ll never want to do this manually again.


Sync Discord scheduled events to Google Calendar with n8n

If you live in Discord all day but still rely on Google Calendar to keep your life organized, you’ve probably felt that annoying gap between the two. Events get scheduled in Discord, but your calendar stays clueless. This n8n workflow template fixes exactly that problem by automatically syncing Discord scheduled events to Google Calendar, so everything ends up in one clean, central place.

In this guide, we’ll walk through what the workflow does, when you’d want to use it, how to set it up, and what to watch out for. Think of it as having a friend show you around n8n while you sip your coffee, rather than a dry technical manual.

What this n8n workflow actually does

At a high level, this workflow acts like a bridge between your Discord server and your Google Calendar. On a schedule you choose, it:

  • Calls the Discord API to fetch all scheduled events in a specific server (guild).
  • Looks in Google Calendar to see if each Discord event already has a matching calendar event.
  • Creates a new Google Calendar event if it doesn’t exist yet.
  • Updates the existing Google Calendar event if details have changed in Discord.

The clever trick behind all this is that the workflow uses the Discord event ID as the Google Calendar event ID. That way, n8n can instantly tell whether an event is new or already synced.

Why bother syncing Discord events to Google Calendar?

If your community, team, or audience hangs out in Discord, you’re probably using Discord scheduled events to promote:

  • Community calls or town halls
  • Streams, live sessions, or watch parties
  • Workshops, office hours, or recurring meetups

The problem is, many people still rely on Google Calendar for their day-to-day planning. By syncing Discord events to Google Calendar, you:

  • Centralize your schedule in one place.
  • Make it easy for teammates or community members to see events on their phones, tablets, and desktop calendar apps.
  • Reduce “I forgot” moments because events show up alongside everything else in their calendar.

So if you’ve ever had to manually copy event details from Discord into Google Calendar, this workflow is about to save you a lot of repetitive clicking.

What you’ll need before you start

Before you import the template or build the workflow, make sure you have:

  • An n8n instance – you can use n8n Cloud or a self-hosted setup.
  • A Discord bot token – create a bot in the Discord Developer Portal and keep the token handy.
  • A Google account with Calendar access – you’ll need OAuth2 credentials set up in n8n so it can read and write events.
  • The n8n workflow template – you can import the template from the example or from the link at the end of this article.

How the workflow is structured

Let’s quickly outline the main pieces of the workflow so the setup steps make more sense:

  • On schedule – runs the workflow at a fixed interval, like every 5 minutes or once an hour.
  • Configure (Set) – stores your guild_id so the workflow knows which Discord server to query.
  • List scheduled events from Discord (HTTP Request) – calls the Discord API to fetch all scheduled events from that server.
  • Get events (Google Calendar – get) – tries to find a Google Calendar event whose ID matches the Discord event ID.
  • Create or update? (If) – checks if the Google event exists and decides whether to create or update.
  • Create event (Google Calendar – create) – creates a brand new event in Google Calendar using data from Discord.
  • Update event details (Google Calendar – update) – updates an existing Google Calendar event when something changes in Discord.

With that mental map in place, let’s walk through the setup step by step.

Step-by-step setup in n8n

1. Create and configure your Discord bot

Head over to the Discord Developer Portal and create a new application, then add a bot to it. Once the bot exists:

  • Copy the bot token and store it somewhere safe. You’ll need it for the n8n HTTP Request node.
  • Invite the bot to your Discord server with permissions that allow it to read scheduled events. The bot must actually be in the guild you want to sync.
  • Grab your server (guild) ID: Enable Developer Mode in Discord, right-click your server name, and choose Copy ID. This is the guild_id you’ll use in the workflow.

2. Set up header authentication for Discord in n8n

In the HTTP Request node that calls the Discord API, you’ll configure header-based authentication so Discord knows your bot is allowed to make the request.

Add this header:

Authorization: Bot <your_token>

Replace <your_token> with your actual bot token. This value is sent in the Authorization header every time the workflow fetches scheduled events from Discord.

3. Connect Google Calendar with OAuth2

Next, in n8n, create or select your Google Calendar OAuth2 credentials:

  • Use the standard OAuth2 flow in n8n to connect your Google account.
  • Make sure the account has access to the calendar where you want the events to appear.
  • Confirm that the scopes allow both reading and writing events.

Once this is set, the Google Calendar nodes in the workflow can create and update events without you needing to touch anything.

4. Import or build the workflow

You can either import the ready-made template or recreate it manually. Either way, here’s how the key nodes fit together:

  • On schedule – configure this node to run on a sensible interval, like every 5, 15, or 60 minutes. Running it too frequently can hit rate limits, so start conservatively unless you really need near real-time updates.
  • Configure (Set) – use a Set node to store your guild_id. This keeps things flexible if you want to switch servers later without touching the HTTP Request URL.
  • List scheduled events from Discord (HTTP Request) – point this node to:
    GET https://discord.com/api/guilds/{guild_id}/scheduled-events?with_user_count=true

    Replace {guild_id} with the value from your Set node. The with_user_count=true parameter includes user counts if you want that data.

  • Get events (Google Calendar – get) – for each Discord event, this node tries to fetch a Google Calendar event by ID. The ID used here is the Discord event ID, which is what makes the whole create-or-update logic work so smoothly.
  • Create or update? (If) – this If node checks whether the Google Calendar get operation found a matching event. If it did, the workflow follows the update path. If not, it goes down the create path.
  • Create event (Google Calendar – create) – when no event exists yet, this node creates a fresh calendar entry. It sets key fields like start, end, summary, location, and description, and most importantly, it explicitly sets the Google event ID to the Discord event ID.
  • Update event details (Google Calendar – update) – if the Google event already exists, this node updates it based on any changes in Discord, such as a new time, title, or description.

How to map Discord fields to Google Calendar

The magic of the sync comes from good field mapping. Here’s the typical mapping used in the example workflow:

  • summary <= Discord name
  • start <= Discord scheduled_start_time (ISO 8601)
  • end <= Discord scheduled_end_time (ISO 8601)
  • location <= Discord entity_metadata.location
  • description <= Discord description
  • id <= Discord id

That last one is especially important. Using the Discord event ID as the Google Calendar event ID keeps everything aligned and makes updates painless.

Why use the Discord event ID as the Google event ID?

You could try to match events by name or time, but that gets messy fast. Titles change, times shift, and you can easily end up with duplicates.

By reusing the Discord event ID as the Google Calendar event ID:

  • The workflow can reliably check if an event already exists in Google Calendar.
  • The get operation becomes a simple yes-or-no check based on ID.
  • Updates become straightforward, since each event has a unique, stable identifier.

In practice, this means:

  • If the Google Calendar get node finds an event with that ID, the workflow knows it should update it.
  • If no event is found, the workflow knows it needs to create a new one (the sketch below shows the same check in plain code).
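
Outside n8n, that create-or-update decision boils down to a few REST calls against the Google Calendar API. This is only an illustrative sketch, assuming accessToken is a valid OAuth token, calendarId is your target calendar, and calendarEvent already has its id set to the Discord event ID; inside n8n, the Google Calendar nodes handle all of this for you.

async function upsertCalendarEvent(accessToken, calendarId, calendarEvent) {
  const base = `https://www.googleapis.com/calendar/v3/calendars/${encodeURIComponent(calendarId)}/events`;
  const headers = {
    Authorization: `Bearer ${accessToken}`,
    'Content-Type': 'application/json',
  };

  // Look for an event with the Discord ID; a 404 means it has not been synced yet
  const existing = await fetch(`${base}/${calendarEvent.id}`, { headers });

  if (existing.status === 404) {
    // Create path
    return fetch(base, { method: 'POST', headers, body: JSON.stringify(calendarEvent) });
  }
  // Update path
  return fetch(`${base}/${calendarEvent.id}`, {
    method: 'PUT',
    headers,
    body: JSON.stringify(calendarEvent),
  });
}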

Handling time zones and date formats

Time zones can be sneaky. Discord sends scheduled event times in ISO 8601 format, which is great, because Google Calendar also accepts ISO 8601.

Still, you should:

  • Check that the times show up correctly in your Google Calendar client.
  • Verify the calendar’s default time zone matches what you expect.
  • Optionally transform the timestamps in n8n if you need to convert between time zones.

It is worth creating a couple of test events in Discord and confirming that they appear at the correct time in Google Calendar before you rely on this workflow for important events.

Staying within rate limits and keeping things reliable

Both Discord and Google Calendar APIs have rate limits, so it is a good idea to design your workflow with that in mind.

  • Choose a reasonable schedule – running the workflow every few seconds is usually unnecessary and can hit limits quickly. Every 5 to 60 minutes works well for most communities.
  • Use n8n error handling – configure retries or error workflows for transient API issues, so a temporary blip does not break your sync.
  • Respect Discord rate limit headers – if you manage multiple guilds or a large number of events, consider adding throttling or delays to avoid hitting Discord’s limits.

Troubleshooting common issues

1. Discord returns an empty list of events

If the HTTP Request node comes back with no events when you know there should be some:

  • Confirm that your bot is actually in the correct Discord server.
  • Make sure the bot has permission to read scheduled events.
  • Double-check the guild_id you set in the workflow.
  • Verify the Authorization header is exactly Bot <token> with your real token.

2. Google Calendar “get” does not find an event

In this workflow, a missing event is not necessarily a problem. It is what triggers the “create” path. However, if you expected an existing event to be found:

  • Check that your Google Calendar credentials are correct and authorized.
  • Ensure the OAuth token has the right scopes for reading and writing events.
  • Confirm that the Google Calendar event ID is actually set to the Discord event ID.

3. Times look wrong or show up in the wrong time zone

If events appear at unexpected times in Google Calendar:

  • Check the time zone settings for both your Discord server and your Google Calendar.
  • Verify that the scheduled_start_time and scheduled_end_time from Discord are correctly passed through.
  • If needed, add a transformation step in n8n to adjust the timestamps into the desired time zone before sending them to Google.

Security and privacy best practices

Since this workflow deals with API tokens and credentials, it is worth taking a moment to lock things down:

  • Never hard-code your Discord bot token or Google secrets directly in your workflow JSON.
  • Use n8n’s built-in credential store to keep tokens and OAuth details secure.
  • Do not commit tokens or credentials to source control or share them in screenshots.
  • Restrict access to your n8n instance to trusted users only.

Ideas for next steps and improvements

Once the basic sync is working, you can start tailoring it to your specific use case. For example, you might:

  • Filter events by type or name so only certain Discord events are synced.
  • Add logging or alerts (via Slack, email, or another channel) whenever a sync fails.
  • Experiment with a two-way sync, where changes in Google Calendar can update Discord events. This usually requires webhooks or more frequent polling and some careful conflict handling.

Wrapping up

This n8n workflow template gives you a simple, reliable way to sync Discord scheduled events into Google Calendar. It:

  • Runs on a schedule you control.
  • Fetches events from your Discord server.
  • Uses the Discord event ID as the Google Calendar event ID.
  • Creates new events or updates existing ones automatically.

Ready to try it out? Import the template into your n8n instance, plug in your Discord bot token and Google OAuth credentials, set your guild_id, and run a quick test with a sample event.

If this workflow helps simplify your event management, feel free to share it with your community or teammates, and keep an eye out for more n8n automation ideas to streamline the rest of your stack.

Call to action: Grab the template below, connect your accounts, and let n8n handle the busywork so you can focus on running your community.

Sync Discord Scheduled Events to Google Calendar with n8n


Ever told your friends, “Yeah, I’ll be there!” to a Discord event, then completely forgot because it never made it into your calendar? If your life is run by Google Calendar but your community lives on Discord, manually copying events back and forth gets old fast.

Good news: n8n can do that boring copy-paste work for you, quietly in the background, without complaining or getting distracted by memes.

What this n8n workflow actually does (in plain English)

This workflow connects your Discord server and Google Calendar so scheduled events in Discord automatically appear (and stay updated) in your calendar.

Here is the basic idea:

  • On a schedule, n8n asks Discord, “Hey, got any scheduled events for this server?”
  • For each event it finds, it checks Google Calendar to see if that event already exists.
  • If the event is new, n8n creates it in Google Calendar.
  • If the event already exists, n8n updates it so time, title, or description changes stay in sync.
  • The Discord event ID is used as the Google Calendar event ID so matching them later is simple and reliable.

Result: no more double entry, no more “Wait, what time is that event again?” and a lot fewer calendar-related facepalms.

Why bother syncing Discord events to Google Calendar?

Discord’s Scheduled Events feature is great for visibility inside a server, but most people still live inside their calendar when it comes to planning their day.

Automating the sync:

  • Prevents you from manually retyping event details like a spreadsheet goblin
  • Makes sure people see Discord events alongside work, personal, and other commitments
  • Keeps updates consistent so nobody shows up at the wrong time because only Discord got updated
  • Eliminates duplicate data entry, which everybody hates but pretends is “fine for now”

What you will build with this template

You will create an n8n workflow that:

  • Periodically fetches all scheduled events from a specific Discord server (guild).
  • For each event, calls Google Calendar and tries to get an event with the same ID.
  • Updates the event in Google Calendar if it already exists.
  • Creates a new event in Google Calendar if it does not exist.
  • Stores the Discord event ID as the Google Calendar event ID, so future updates are easy and consistent.

It is a lightweight, reliable sync that you can schedule to run as often as you like, as long as you respect Discord’s rate limits.

What you need before you start

Before diving into n8n, make sure you have:

  • An n8n instance (cloud or self-hosted).
  • A Discord bot token with permission to list scheduled events in the target server.
  • A Google account with a calendar and Google Calendar OAuth2 credentials configured in n8n.
  • The Discord server ID (guild_id) for the server whose events you want to sync.

Once those are ready, the rest is mostly clicking, mapping, and feeling smug about your new automated life.

Workflow overview: the n8n nodes involved

The template workflow is built from a simple left-to-right chain of nodes:

  • On schedule – triggers the workflow at a set interval.
  • Set (Configure) – stores the target Discord guild_id.
  • HTTP Request – lists scheduled events from Discord.
  • Google Calendar (Get events) – checks if an event with that ID exists in your calendar.
  • If – decides whether to create or update the event.
  • Google Calendar (Create event) – creates a new event when needed.
  • Google Calendar (Update event details) – updates existing events when they change on Discord.

Let us walk through how to configure each part without losing our sanity.

Step-by-step setup in n8n

1. Set up the schedule trigger

Start with the On schedule node.

  • Choose how often you want n8n to poll Discord, for example every 15 minutes.
  • Pick a frequency that balances freshness with Discord rate limits. For many servers, 10 to 30 minutes is a good starting point.

2. Store the Discord server ID

Add a Set node, usually named something like Configure.

  • Create a field called guild_id.
  • Set its value to the Discord server (guild) ID you want to sync.

This way you only have to change the server ID in one place instead of hunting through multiple nodes later.

3. Pull scheduled events from Discord

Next, add an HTTP Request node to fetch events from Discord’s API. This is where your bot actually does some work.

Use this URL expression:

=https://discord.com/api/guilds/{{ $('Configure').first().json.guild_id }}/scheduled-events

If you want attendee counts included, add this query parameter:

  • with_user_count=true

For authentication, create a generic Header Auth credential in n8n:

  • Header name: Authorization
  • Header value: Bot <your token>
    Example: Bot MTEzMTgw...uQdg

This tells Discord, “Hi, I am a bot, please let me see the scheduled events.”

4. Check for an existing Google Calendar event

Now add a Google Calendar node configured with the Get operation.

  • Set the Calendar field to the calendar where you want events to appear.
  • For eventId, use the Discord event ID:
    eventId: ={{ $json.id }}

This step tries to find a Google Calendar event whose ID matches the Discord event ID. If it exists, great. If not, we will create one.

5. Decide whether to create or update

Add an If node to determine which path to follow.

You want to check whether the Google Calendar Get operation succeeded or not. A simple approach is to:

  • Use Continue On Fail on the Google Calendar Get node if you expect 404s when events are missing.
  • In the If node, test whether {{ $json.id }} is present or not, and use that to route to either Create or Update branches.

This is the logic that prevents duplicate events and keeps everything neatly aligned.
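
If you prefer to make that check explicit, a small Code node between the Get node and the If node can flag whether a match was found. This is only a sketch; the exists flag and the error field produced by Continue On Fail are assumptions to verify against your own executions.

// n8n Code node (Run Once for All Items), placed after the Google Calendar Get node
return $input.all().map((item) => ({
  json: {
    ...item.json,
    // Treat the item as existing only if it has an id and no error from Continue On Fail
    exists: Boolean(item.json.id && !item.json.error),
  },
}));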

6. Create a new Google Calendar event

On the “create” branch, add a Google Calendar node with the Create operation.

Map the Discord fields to Google Calendar fields using n8n expressions. For example:

  • Start: {{ $('List scheduled events from Discord').item.json.scheduled_start_time }}
  • End: {{ $('List scheduled events from Discord').item.json.scheduled_end_time }}
  • Summary (event title): {{ $('List scheduled events from Discord').item.json.name }}
  • Location: {{ $('List scheduled events from Discord').item.json.entity_metadata.location }}
  • Description: {{ $('List scheduled events from Discord').item.json.description }}
  • Id (under additionalFields → id): {{ $('List scheduled events from Discord').item.json.id }} – this sets the Google Calendar event ID to the Discord event ID.

That final mapping is the secret sauce that makes future updates easy, instead of forcing you to do fuzzy matching on titles or times.

7. Update existing Google Calendar events

On the “update” branch, add another Google Calendar node, this time using the Update operation.

  • Use the same eventId logic so you are updating the correct event.
  • Map the same fields as in the create node:
    • Start
    • End
    • Summary
    • Location
    • Description

Now, whenever you change the time, title, or details of a Discord scheduled event, the Google Calendar event will follow along on the next run. No more “Wait, which version is the right one?” confusion.

Key implementation details to get right

Using Discord event IDs as Google event IDs

By setting the Google Calendar event ID to the Discord scheduled event ID during creation, you give the workflow a stable way to find the same event later.

Google Calendar accepts custom event IDs as long as they are unique within that calendar and use only base32hex characters (lowercase letters a to v and digits 0-9), which makes Discord’s numeric snowflake IDs a perfect match. It also keeps your logic clean and avoids messy lookups.

Handling timezones like a pro

Discord stores scheduled times in ISO 8601 format. That is good news, but you still need to make sure Google Calendar gets properly formatted timestamps with time zone information.

  • If Discord times are already in the correct timezone, you can map them directly.
  • If you need to normalize to a specific timezone, use a Function node or a Date/Format node in n8n to convert the times before sending them to Google Calendar.

Getting this right avoids the classic “Why is this event at 3 a.m.?” problem.

Respecting rate limits and choosing a polling interval

Discord has rate limits, and it is not shy about enforcing them. To stay on its good side:

  • Start with a conservative polling interval, such as every 10 to 30 minutes.
  • Monitor your n8n execution logs for any rate limit responses.
  • Only increase frequency if your server really needs it.

Your workflow will be happier, and so will Discord.

Error handling so you do not miss failures

  • Set the Google Calendar Get events node to Continue On Fail if you expect 404s when events do not exist yet.
  • Log or notify errors with an Email or Slack node if a particular request keeps failing.
  • Consider adding a retry or backoff pattern for transient errors so your workflow can recover gracefully.

That way, you find out about real problems instead of quietly losing events.

Testing your Discord to Google Calendar sync

Before trusting automation with your entire event schedule, give it a quick test run:

  1. Deploy your Discord bot and confirm it:
    • Is a member of the target server.
    • Has permission to view scheduled events.
  2. In n8n, configure:
    • The Header Auth credential with your Discord bot token.
    • Your Google Calendar OAuth2 credentials with the right scopes.
  3. Create a test scheduled event in Discord, then run the workflow manually.
    • Check that a new event appears in your chosen Google Calendar.
    • Confirm that the event ID in Google Calendar matches the Discord event ID.
  4. Edit the Discord event (change time, title, or description) and run the workflow again.
    • Confirm the Google Calendar event updates with the new details.

Once that works, you are ready to let the schedule trigger take over.

Advanced ideas for power users

When the basic sync is running smoothly, you can start getting fancy.

  • Two-way sync: Want changes in Google Calendar to flow back into Discord events? You can build a second workflow that:
    • Watches Google Calendar for changes.
    • Uses the Discord API to update scheduled events.

    You will need a bot with the correct permissions and the relevant Discord endpoints.

  • Filtering events: Only want certain events, like “community” events or those with a specific keyword?
    • Add filters before the Create/Update nodes.
    • Filter by event type, name, or description so only relevant events sync.
  • Richer event details: Pull extra data such as entity_metadata or other fields from Discord.
    • Include images or metadata in the event description.
    • Add more context so your calendar events look less bare and more informative.

Troubleshooting common issues

If your events are not showing up in Google Calendar, do a quick sanity check:

  • Confirm your Header Auth is correctly set:
    • Authorization: Bot <token>
  • Verify the guild_id is correct and that the bot is actually in the server.
  • Check your Google Calendar OAuth credentials:
    • Make sure the scopes allow creating and updating events.
    • Confirm you selected the correct calendar in the node.
  • Inspect n8n execution logs:
    • Look at HTTP status codes from Discord and Google.
    • Check response bodies for helpful error messages.

Most problems come down to permissions, credentials, or a tiny typo in an ID.

Wrapping it up

This n8n workflow gives you a dependable way to sync Discord scheduled events into Google Calendar without manual effort. It is a great fit for communities, event organizers, and anyone who lives in Discord but plans their life in Google Calendar.

Start with the setup above, then:

  • Tune the polling interval based on your server size and activity.
  • Adjust field mappings to match how you want events to appear.
  • Add error handling and logging for production-grade reliability.

Ready to build? Import the template into your n8n instance, plug in your Discord bot header auth and Google Calendar OAuth credentials, and run a test. If you want to go further with filters, two-way sync, or timezone fine tuning, you can extend this workflow using the n8n docs and Discord API documentation.

Happy automating, and enjoy never having to copy event details by hand again.



Build a Notion Knowledge-Base Assistant with n8n


On a rainy Tuesday afternoon, Mia stared at yet another Slack message blinking at the bottom of her screen.

“Hey, do you know where the latest onboarding checklist is?”

She sighed, opened Notion, and started typing into the search bar for what felt like the hundredth time that week. As the operations lead at a fast-growing startup, Mia had spent months organizing everything in Notion – product specs, onboarding docs, internal how-tos, HR policies, and meeting notes. The information was there, but finding it quickly had become a daily bottleneck.

New hires could not remember which space held which document. Managers asked the same questions about time off, billing, and product features. Even Mia, the person who built the knowledge base, sometimes struggled to track down the right page.

That afternoon, after answering the same “How do I request time off?” question for the third time, she decided something had to change.

The moment Mia realized search was not enough

Mia did a quick audit of their internal communication channels. The pattern was obvious:

  • Slack was full of repeated questions about policies and processes
  • New hires were overwhelmed by the size of the Notion workspace
  • Team leads were frustrated by how long it took to find answers

Notion was a great place to store knowledge, but it was not acting like an assistant. People did not want to “go search a database.” They wanted to ask a question and get a short, accurate answer with a link if they needed more detail.

That night, while searching for “Notion AI assistant” ideas, Mia discovered something that caught her eye: an n8n workflow template that turned a Notion knowledge base into a chat assistant using GPT. It promised to do exactly what she needed:

  • Receive chat messages from users
  • Search a Notion database for relevant records
  • Optionally pull in page content for deeper context
  • Use an AI agent and OpenAI to craft short, sourced answers

It sounded like magic. But it was not magic. It was just smart automation.

What Mia decided to build

Mia sketched the idea on a notepad first. She wanted an assistant that could sit behind a chat interface, listen for questions, and then quietly do the heavy lifting:

  1. Receive a user question through a webhook
  2. Understand how her Notion knowledge base was structured
  3. Search by keywords or tags in the Notion database
  4. Read the content of a page if needed
  5. Use an AI agent connected to OpenAI to write a clear answer
  6. Include links to the relevant Notion pages
  7. Remember the last few messages so follow-up questions would make sense

In n8n terms, this translated into a set of specific building blocks:

  • Chat trigger using a webhook to receive user queries
  • Get database details to read the Notion DB schema and tag options
  • Search Notion database to find matching pages by keyword or tag
  • Search page content to pull matching blocks from a page
  • Window buffer memory to hold recent chat turns
  • AI agent + OpenAI Chat Model to summarize and produce the final response

If she could wire all of this together, her team could simply ask:

“How do I request time off?”

and get a concise answer plus links to the exact Notion pages, instead of a vague “search for it in Notion.”

Rising action: turning a Notion database into a real assistant

Mia starts with the Notion knowledge base

Before touching n8n, Mia opened Notion and looked at her existing knowledge base. To make it AI friendly, she created a dedicated database structured for question and answer pairs.

She made sure each entry had:

  • A “question” field (rich_text) that captured the main query, such as “How do I request time off?”
  • An “answer” field with a clear explanation
  • “tags” like product, billing, onboarding, HR, and policies
  • An “updated_at” field so she could track freshness

To make the future search more precise, she standardized the tags. No more “hr” in one place and “human resources” in another. She settled on a clear set of tags and stuck to them.

Then she created a Notion integration at developers.notion.com and shared the database with that integration. She knew that just creating an integration was not enough. It had to be explicitly granted read access to the specific database she wanted to search.

Bringing n8n into the story

With Notion ready, Mia logged into her n8n instance. She could have run it self-hosted or in the cloud, but her company already had an n8n cloud workspace, so she opened a new workflow and started dropping in nodes, following the architecture she had in mind.

Her workflow slowly took shape:

  • When chat message received – a webhook node that accepted user text from the chat interface
  • Get database details – a Notion node to fetch database information, including property names and tag options
  • Format schema – a Set node that transformed the raw Notion schema into compact JSON the AI agent could understand
  • AI Agent – a langchain-style agent node that would decide when to call search tools
  • Search notion database – an HTTP Request node calling /v1/databases/{id}/query with filters
  • Search inside database record – another HTTP Request node hitting /v1/blocks/{page_id}/children to fetch page blocks if the record needed deeper inspection
  • OpenAI Chat Model – the LLM node, wired to her OpenAI API key
  • Window Buffer Memory – a memory node to keep a short history of the conversation

It was starting to look like a real assistant. But a few critical steps still stood between Mia and a working system.

The turning point: making the AI agent actually useful

Credential setup, or why nothing worked at first

The first time Mia ran the workflow, it failed almost instantly.

The Notion node complained that it could not find the resource she was requesting. The OpenAI node refused to respond.

She realized she had skipped the boring but essential part: credentials.

  • For Notion, she went back into n8n, opened the credentials section, and added her Notion integration token. She double checked that the integration had been shared with the knowledge base database in Notion. Without that, Notion would keep responding with “The resource you are requesting could not be found”.
  • For OpenAI, she created an API key in her OpenAI account and stored it in n8n credentials, then linked it to the Chat Model node.

She made a mental note not to ever paste API keys into workflow JSON or share them in public repos. n8n credentials and environment variables were the right place for secrets.

Once credentials were in place, the workflow started to move. The webhook received a test message, the Notion node fetched database details, and the AI agent came to life.

Teaching the assistant how to search

Next, Mia had to decide how the assistant should search the Notion database. She knew that a naive search could either miss relevant answers or flood the AI model with too much information.

So she defined a clear search strategy inside the tools the agent could call:

  • First, try an exact keyword search against the question rich_text field
  • If no results, fall back to a tag-based search using the standardized tags
  • If still no results, broaden the keyword search by including variations like singular/plural forms or common synonyms

To keep costs and hallucinations down, she configured the search to return only the top 3 results to the LLM. There was no need to send the entire knowledge base to the model for every question.
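
Under the hood, that keyword search is a single call to Notion’s database query endpoint. Here is a minimal sketch of what the search tool does, assuming NOTION_TOKEN and DATABASE_ID environment variables and the “question” rich_text property described above.

async function searchKnowledgeBase(keyword) {
  const res = await fetch(`https://api.notion.com/v1/databases/${process.env.DATABASE_ID}/query`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.NOTION_TOKEN}`,
      'Notion-Version': '2022-06-28',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      filter: { property: 'question', rich_text: { contains: keyword } },
      page_size: 3, // only the top 3 results are passed on to the LLM
    }),
  });
  if (!res.ok) throw new Error(`Notion query failed with status ${res.status}`);
  return (await res.json()).results;
}

The tag fallback could swap the filter for something like { property: 'tags', multi_select: { contains: 'HR' } }.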

Now the agent could:

  1. Look up the database schema using the Get database details node
  2. Use the Format schema node to map those fields into a structure it could reason about
  3. Call the Search notion database tool when needed
  4. Optionally call Search inside database record to fetch page blocks for deeper context

Giving the AI a clear role

Even with search working, Mia knew that the AI needed guardrails. She did not want it to invent answers or send people to the wrong policy page.

So she crafted a concise system prompt for the AI Agent node. Its job was very specific:

  • Use only facts from the provided Notion records
  • Never hallucinate or guess information that was not in the data
  • Always include the Notion page URL when a record contained the answer
  • Ask the user for clarification if the question was ambiguous

Her final system message looked something like this:

Role: You are an assistant that answers questions using only the provided Notion records. 
If a record contains the answer, include the page URL. 
If no records match, explain that no direct match was found and offer to broaden the search.

She attached this prompt to the AI Agent node so every conversation started from the same set of instructions.

A real example: “How do I request time off?”

To test her new Notion knowledge-base assistant, Mia used the most common question her team asked.

She opened the chat interface connected to the webhook and typed:

“How do I request time off?”

Behind the scenes, the workflow sprang into action:

  1. The webhook node captured the message and passed the question to the AI Agent.
  2. The AI Agent decided to call the Search notion database tool with a keyword search on the question field for “time off.”
  3. The Notion database returned two relevant pages:
    • “Time Off Policy”
    • “Requesting Leave”
  4. The agent then called the Search inside database record tool for each page, pulling short paragraphs from the page blocks.
  5. Those snippets, plus the page URLs, were sent to the OpenAI Chat Model node.
  6. The LLM synthesized a concise answer, combining the key steps for requesting time off, and included links to both original Notion pages.

The response that came back to the chat interface was exactly what Mia had always wanted:

  • A short explanation of how to request time off
  • Two direct links to the relevant Notion pages for more detail

No one had to search through Notion manually. No one had to ping Mia in Slack. The assistant handled it.

When things go wrong: Mia hits a few bumps

Notion resource not found

At one point, another team created a second knowledge base database and asked Mia to plug it into the assistant. Suddenly, she started seeing this error again:

“The resource you are requesting could not be found”

She quickly remembered the cause. The Notion integration had to be explicitly shared with the new database page inside Notion. Creating the integration alone was not enough. Once she granted access in the Notion UI, the error disappeared.

Slow responses

As the assistant grew more popular, some users noticed occasional delays. Mia traced them back to a few specific behaviors:

  • Fetching database details on every single request added 250 to 800 ms of latency
  • Pulling multiple page contents for each query added even more time

To speed things up, she:

  • Cached tag options and schema details instead of refreshing them on every call
  • Limited the number of pages fetched and passed only the most relevant snippets to the LLM

Incorrect or empty answers

On a few occasions, the assistant returned empty or unhelpful answers for questions Mia knew were covered in the knowledge base.

She tracked the issue to two common mistakes:

  • The /query request had filters that referenced a property name that did not exist in the database
  • The Format schema node was not mapping Notion properties correctly to the agent inputs

Once she corrected the property names and ensured that the schema mapping matched her Notion fields, the answers became reliable again.

Keeping things safe, fast, and sustainable

As more teams started using the assistant, Mia stepped back to think about security, costs, and best practices.

  • Permissions: She configured the Notion integration with least-privilege access, granting only read permissions unless a specific workflow required writes.
  • Secrets: All API keys stayed inside n8n credentials or encrypted environment variables. None were stored in workflow JSON or repos.
  • LLM costs: She reduced token usage by sending only the most relevant blocks to OpenAI and instructing the model to keep answers concise.
  • Rate limits: She respected Notion API rate limits and implemented exponential backoff for HTTP 429 responses to avoid hitting hard limits.

The assistant was no longer just a prototype. It was a production tool used daily by the entire company.

How Mia extended the assistant beyond the basics

Once the core Notion knowledge-base assistant worked smoothly, Mia started to think bigger.

  • She added a feedback loop so users could mark answers as helpful or not. Those ratings were stored back in Notion for future improvements.
  • She configured separate knowledge contexts so sensitive documents were only searchable by authorized users, while public FAQs stayed accessible to everyone.
  • She integrated the assistant with Slack, so team members could ask questions directly in their existing channels instead of opening another app.

What started as a way to stop answering the same questions turned into a central knowledge layer for the company, powered by Notion, n8n, and OpenAI.

From chaos to clarity: Mia’s outcome

A few weeks after launch, Mia checked Slack. The flood of repetitive questions had slowed to a trickle. New hires were getting up to speed faster. Managers knew they could rely on the assistant for up-to-date answers with direct links to the underlying Notion pages.

The company’s knowledge had not changed. It was still stored in the same Notion database. What changed was how accessible it had become.

By combining:

  • A well structured Notion knowledge base
  • An n8n workflow with:
    • Webhook chat trigger
    • Not

Automate Zendesk → Jira with n8n: Step-by-Step

Automate Zendesk to Jira with n8n: Turn Support Handoffs Into a Seamless Flow

Every time you copy a Zendesk ticket into Jira by hand, you lose a little bit of focus. You jump tools, retype details, paste links, and hope nothing gets missed. Over a day or a week, that context switching adds up. It slows your team down and pulls attention away from the work that really matters: helping customers and shipping improvements.

Automation with n8n can turn that friction into flow. By connecting Zendesk and Jira with a simple, reliable workflow, you create a system that quietly does the busywork for you. New tickets are picked up, checked, synced, and updated without you lifting a finger. Your support and engineering teams stay aligned, your incidents move faster, and you reclaim time and mental space.

This guide walks you through an n8n workflow template that does exactly that. You will see how to listen for new Zendesk tickets, detect whether a related Jira issue already exists, then either add a comment or create a new Jira issue and store its key back in Zendesk. Along the way, you will also see how this template can be a stepping stone toward a more automated, focused way of working.

The problem: Manual Zendesk to Jira handoffs drain your time

When Zendesk and Jira are not connected, every cross-team ticket becomes a mini-project:

  • Someone in support has to create a Jira issue by hand.
  • Ticket context needs to be copied, cleaned up, and pasted.
  • Links between tools have to be maintained manually.
  • Updates in one system may never make it into the other.

Over time, this creates hidden costs:

  • Duplicate work for support and engineering teams.
  • Lost or inconsistent context between Zendesk and Jira.
  • Slow and error-prone triage, especially during busy periods.
  • Difficulty meeting SLAs because information is scattered.

It does not have to stay this way. A small amount of automation can unlock a big shift in how your teams collaborate.

The mindset shift: Let automation carry the routine work

Automation is not just about saving a few clicks. It is about creating space for deeper work. When you let n8n handle the predictable steps, you:

  • Reduce context switching so your team can stay in flow.
  • Build reliable, repeatable processes instead of ad-hoc fixes.
  • Strengthen collaboration between support and engineering.
  • Free yourself to focus on strategy, not data shuffling.

Think of this Zendesk to Jira workflow as your first building block. Once you have it in place, you can extend it with richer data mapping, smarter routing, and even two-way syncing. Each improvement compounds the time you gain back.

The solution: An n8n template that keeps Zendesk and Jira in sync

The n8n workflow you are about to set up listens for new Zendesk tickets, checks for an existing Jira issue key in a custom field, and then:

  • Adds a comment to an existing Jira issue if it finds a key, or
  • Creates a new Jira issue and stores the new key back in Zendesk.

Technically, the workflow uses:

  • A Webhook trigger to receive Zendesk events.
  • The Zendesk node to fetch full ticket details.
  • A small Function node to inspect a custom field for a Jira Issue Key.
  • An IF node to branch based on whether that key exists.
  • Jira nodes to create issues or comments.
  • A Zendesk Update node to write the Jira key back.

The result is a lightweight, maintainable automation that quietly keeps both tools aligned. Once it is running, every new ticket becomes an opportunity to save a few more minutes and a bit more mental energy.

What you need before you start

To follow along and use the n8n Zendesk to Jira automation template, make sure you have:

  • An n8n instance (self-hosted or n8n cloud) with permission to create webhooks.
  • A Zendesk account with API access and a custom field to store the Jira issue key.
  • A Jira Software Cloud account with API credentials and a project where issues will be created.
  • Basic familiarity with n8n nodes and light JSON editing.

If you are new to n8n, do not worry. This workflow is a friendly starting point. You will see how each node works, and you can build on it as your confidence grows.

How the n8n workflow is structured

Here is the high-level sequence the template uses to automate Zendesk to Jira:

  1. Webhook: Listens for new or updated Zendesk tickets.
  2. Zendesk Get: Fetches the full ticket with all custom fields and comments.
  3. Function (Determine): Checks a specific custom field for a Jira issue key.
  4. IF: Branches depending on whether that key is present.
  5. Jira issueComment: Adds a comment to an existing Jira issue if the key exists.
  6. Jira create issue: Creates a new Jira issue if no key is found.
  7. Zendesk Update: Writes the new Jira issue key back into the Zendesk custom field.

Next, you will configure each node. As you do, notice how simple the logic really is. This is the kind of automation you can understand at a glance, yet it can save hours over time.

Step 1: Webhook – listen for new Zendesk tickets

Start with the trigger that will kick off your automation.

Configure the Webhook node

In n8n, add a Webhook node and set a path, for example:

/zendesk-new-ticket-123

This path becomes part of the URL that Zendesk will call.

In your Zendesk account:

  • Create or update a trigger or webhook that sends events to the n8n Webhook URL.
  • Use the POST method so the ticket data is sent in the request body.
  • Configure it to fire when a ticket is created or updated, depending on your needs.

Once this is connected, every new ticket event will reach your n8n workflow automatically.
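The exact payload shape depends on how you build the Zendesk trigger or webhook. As an illustration only (the field names are assumptions, not Zendesk's canonical format), the expressions used later in this guide expect the ticket ID and the latest comment nested under a body key:

{
  "body": {
    "id": 12345,
    "comment": "Customer reports login failures after the latest release."
  }
}

If your Zendesk trigger sends a different structure, adjust the expressions in the following steps to match.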

Step 2: Get the full Zendesk ticket

The webhook payload is useful, but you usually want the complete ticket record. That is where the Zendesk node comes in.

Configure the Zendesk “Get ticket” node

Add a Zendesk node and select the get operation. For the ID field, reference the ticket ID from the webhook payload, for example:

{{$node["On new Zendesk ticket"].json["body"]["id"]}}

(Adjust the node name if your Webhook node uses a different label.)

This retrieves the full ticket object, including:

  • custom_fields such as your Jira issue key field.
  • Ticket comments and metadata.
  • Requester, tags, and other useful context.

At this point, your workflow has everything it needs to decide whether to create a new Jira issue or update an existing one.

Step 3: Decide if a Jira issue already exists

Next, you will inspect a specific Zendesk custom field that stores the Jira issue key. This decision is the heart of the workflow: it determines whether you add a comment or create a new issue.

Configure the “Determine” Function node

Add a Function node and connect it after the Zendesk Get node. Use the following example code, and update the field ID to match your Zendesk configuration:

/* Zendesk field ID which represents the "Jira Issue Key" field. */
const ISSUE_KEY_FIELD_ID = 6689934837021;

const new_items = [];
for (const item of $items("Get ticket")) {
  const custom_fields = item.json["custom_fields"];
  let jira_issue_key = "";
  for (let i = 0; i < custom_fields.length; i++) {
    if (custom_fields[i].id == ISSUE_KEY_FIELD_ID) {
      jira_issue_key = custom_fields[i].value;
      break;
    }
  }
  new_items.push({
    "Jira issue key": jira_issue_key
  });
}
return new_items;

Important details:

  • Replace ISSUE_KEY_FIELD_ID with the ID of your Zendesk custom field. You can find this in Zendesk admin > Ticket Fields.
  • The function outputs an item with a single property: Jira issue key.

This output makes the next step easy: you simply check whether that value is empty or not.

Step 4: Branch the workflow with an IF node

Now you will tell n8n how to behave in each scenario.

Configure the IF node

Add an IF node and connect it after the Determine node. Set the condition to check whether the Jira issue key is present, for example:

{{$node["Determine"].json["Jira issue key"]}} isNotEmpty

This creates two clear paths:

  • True branch: A Jira issue already exists, so the workflow will add a comment to that issue.
  • False branch: No Jira issue exists yet, so the workflow will create a new one.

From here, your Zendesk to Jira sync becomes fully automatic. Each ticket is handled appropriately without anyone having to think about it.

Step 5: Add a comment to an existing Jira issue

When the IF node finds an existing Jira key, you can keep Jira updated with the latest Zendesk activity.

Configure the Jira issueComment node

On the true branch of the IF node, add a Jira node and choose the issueComment operation. Configure it to:

  • Use the issueKey from the Determine node output.
  • Set the comment body from the Zendesk ticket event.

An example expression for the comment body could be:

{{$node["On new Zendesk ticket"].json["body"]["comment"]}}

You can adjust this to include more context, such as ticket ID, requester, or a direct link to Zendesk. Every new Zendesk update can now appear in Jira automatically, keeping engineers in the loop without manual effort.
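For example, a richer comment body that reuses the node names from this guide (your Zendesk subdomain is a placeholder) could look like:

New update on Zendesk ticket #{{$node["Get ticket"].json["id"]}}:
{{$node["On new Zendesk ticket"].json["body"]["comment"]}}
https://your-domain.zendesk.com/agent/tickets/{{$node["Get ticket"].json["id"]}}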

Step 6: Create a new Jira issue when needed

When no Jira issue exists yet, the workflow will create one for you on the false branch of the IF node.

Configure the Jira “Create issue” node

Add another Jira node and select the create issue operation. Typical settings include:

  • Summary: Use the Zendesk ticket subject.
  • Description: Include a link back to the Zendesk ticket and any key details.

For example, you might use this description template:

See Zendesk issue at: https://your-domain.zendesk.com/agent/tickets/{{$node["Get ticket"].json["id"]}}

Make sure you also:

  • Set the correct project and issueType IDs (or names) in the Jira node.
  • Optionally map additional fields like priority, tags, or requester.

From now on, new Zendesk tickets that require engineering attention will automatically become Jira issues, complete with a reference back to the original ticket.

Step 7: Write the Jira key back into Zendesk

To close the loop, you want Zendesk to know which Jira issue is associated with each ticket. That way, future updates can reuse the same Jira issue instead of creating duplicates.

Configure the Zendesk “Update ticket” node

After the Jira Create issue node, add a Zendesk Update node. Configure it to:

  • Target the original Zendesk ticket ID.
  • Update the Jira issue key custom field with the newly created Jira key.

You can capture the Jira key from the create issue node, for example:

{{$node["Create issue"].json["key"]}}

Once this is stored in the custom field, the next time the ticket changes, the workflow will recognize the key and route to the “existing issue” path. Your systems stay in sync without extra effort.

Testing your Zendesk to Jira automation

Before you rely on this workflow in production, take a moment to test and validate it. This is where you turn a good idea into a dependable part of your process.

  1. Enable the n8n webhook and send a test POST from Zendesk (or use curl/Postman) with a sample ticket payload.
  2. Check the Zendesk Get node to confirm you see the full ticket data, including custom_fields.
  3. Verify the Determine node correctly reads the Jira issue key field.
  4. Test both branches:
    • A ticket that already has a Jira key set.
    • A ticket without any Jira key.
  5. Look in Jira to confirm that:
    • Comments are added to existing issues.
    • New issues are created with the Zendesk link in the description.
  6. Check Zendesk after issue creation to confirm the Jira key is populated in your custom field.

Once these checks pass, you have a working bridge between Zendesk and Jira that runs on its own.

Make your workflow more resilient and polished

With the core automation in place, you can start refining it. This is where you turn a working flow into a robust system that your team can trust at scale.

Error handling and reliability

  • Add a Set node or extra Function node to clean up comment text before sending it to Jira, for example stripping HTML or limiting length.
  • Use Try/Catch patterns or an Error Trigger workflow in n8n to handle API failures gracefully and notify admins when something goes wrong.
  • Log responses from Jira and Zendesk, for example by storing them in a database or sending them to a monitoring channel for traceability.
  • Watch out for rate limits. If you handle a high volume of tickets, consider throttling requests or batching updates.
  • Secure your webhook with a secret token and validate incoming payloads before processing them.

Advanced enhancements for deeper integration

  • Map additional Zendesk fields such as priority, tags, and requester into Jira custom fields to preserve richer context.
  • Create label rules in Jira

Automate Stock Reports with n8n, Baserow & SendGrid

Automate Stock Reports with n8n, Baserow & SendGrid

Imagine opening your inbox every morning to a clean, up-to-date summary of your investments, without copying numbers from websites or spreadsheets ever again. That is exactly what this n8n workflow template gives you.

In this guide, we will walk through how the workflow works, when it is useful, and how each n8n node fits together. You will see how it pulls stock data from Tradegate, enriches it with your Baserow holdings, calculates values and changes, then wraps everything into a polished HTML email sent via SendGrid.

What this n8n workflow actually does

At a high level, this automation:

  • Reads your portfolio from a Baserow table (name, ISIN, quantity, purchase price)
  • Fetches the latest data from Tradegate for each ISIN
  • Scrapes bid, ask, and other key details from the Tradegate HTML
  • Calculates current values, gains, and percentage changes
  • Builds a simple HTML report with a table and summary
  • Sends the report as an email using SendGrid

The result is a daily (or on-demand) stock report that looks professional, runs automatically, and is easy to adjust as your portfolio or tooling evolves.

Why bother automating stock reports?

If you are still checking quotes manually, you already know the pain. Opening multiple tabs, copying values into a spreadsheet, doing the same calculations again and again… it adds up quickly and mistakes are almost guaranteed.

With n8n handling the work for you, you can:

  • Schedule checks at consistent times, for example every weekday morning
  • Standardize how you pull and calculate data
  • Keep your portfolio view aligned with real market values
  • Get clear HTML summaries in your inbox that you can read on any device

In short, you trade repetitive manual work for a single, maintainable workflow.

How the workflow is structured

The template follows a clean, linear pipeline. Each node has a narrow job, which makes the whole thing easier to understand and tweak later.

  • Cron / Manual Trigger – starts the workflow on a schedule or on demand
  • Baserow (getAll:row) – pulls your list of holdings
  • HTTP Request – grabs the Tradegate order book page for each ISIN
  • HTML Extract – scrapes bid, ask, and other details from the HTML
  • Set nodes – formats numbers and calculates values and changes
  • Function (Build HTML) – assembles the final email-ready HTML report
  • SendGrid – sends the report to your chosen recipients

Let us go through these pieces in a slightly different order so you can see the story behind the automation.

Step 1: Decide when the report should run

Cron and Manual triggers in n8n

Start by adding two trigger nodes:

  • Cron – configure this to run at your preferred time, for example weekdays at 07:15. This is what gives you an automatic daily stock report.
  • Manual Trigger – keep this in the workflow for quick tests, debugging, or one-off runs whenever you want an ad-hoc report.

Both triggers can feed into the same chain of nodes, so you do not need to duplicate any logic.

Step 2: Store and fetch your portfolio from Baserow

Baserow node – your holdings as structured data

Next, the workflow needs to know what you actually own. That is where Baserow comes in.

Set up a Baserow table with at least these columns:

  • Name – the stock or instrument name
  • ISIN – used to query Tradegate
  • Count – how many units you hold
  • Purchase Price – your buy price per unit

Use the Baserow (getAll:row) node in n8n to read all rows from this table. Each row becomes an item that flows through the workflow, and each item carries the data needed to look up the corresponding Tradegate page and to calculate your current position.
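Once that node runs, each item in n8n looks roughly like this (the values are made up for illustration):

{
  "Name": "Example Stock AG",
  "ISIN": "DE0001234567",
  "Count": 10,
  "Purchase Price": "95.50"
}

These field names are exactly what the expressions in the next steps reference, so keep them aligned with your Baserow column names.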

Step 3: Pull Tradegate data for every ISIN

HTTP Request node – grabbing the order book page

For each row from Baserow, the workflow calls Tradegate. You do this with an HTTP Request node configured to send a GET request to the Tradegate order book URL.

Pass the ISIN from Baserow as a query parameter so the right page is requested for each holding. In practice, you will use an expression like:

isin = ={{$json["ISIN"]}}

Set the response format to string. That way, the raw HTML comes through untouched, which is exactly what you want for the next step where you parse it.

Step 4: Scrape the key values from Tradegate

HTML Extract node – parsing the HTML

Once the HTML from Tradegate is available, the HTML Extract node takes over. This node lets you define CSS selectors to pick out exactly the pieces of data you need, such as:

  • WKN – the WKN cell
  • ISIN – the ISIN cell
  • Currency – the currency field
  • Name – typically in a heading element, such as <h2>
  • Bid / Ask – the relevant price fields

In the example template, selectors look like:

#col1_content > table > tbody > tr:nth-child(2) > td:nth-child(1)

and similar patterns for other table cells. These may need updating if Tradegate changes its HTML structure, so it is worth checking them from time to time.

Step 5: Clean up the data and calculate portfolio metrics

First Set node – computing current value

Now that you have both Baserow data and scraped Tradegate values in each item, you can start calculating.

Use a Set node to normalize and compute a Current Value for each holding. One example expression looks like this:

Current Value = {{ (parseFloat($json["Bid"].replace(',', '.')) * parseFloat($node["Baserow"].json["Count"])).toFixed(2) }}

A couple of important details here:

  • parseFloat is used to turn the text values into numbers
  • Commas in prices are replaced with dots, which is crucial for correct parsing
  • toFixed(2) keeps the output neat with two decimal places

Second Set node – calculating change and percentage change

Next, add another Set node to derive:

  • Change – difference between current value and purchase price
  • Change (%) – percentage gain or loss relative to purchase price

The percentage change can be computed like this:

Change (%) = {{(((parseFloat($json["Current Value"]) - parseFloat($json["Purchase Price"])) / parseFloat($json["Purchase Price"])) * 100).toFixed(2)}}

By the time items leave this step, each one carries all the fields you need for a clear portfolio snapshot.

Step 6: Turn the data into a readable HTML report

Function node – building the email HTML

Now for the fun part: turning these rows into an HTML report that you actually want to read.

Add a Function node that:

  • Receives the final list of items (one per holding)
  • Loops through them to build table rows
  • Adds a header row and a footer with portfolio totals
  • Wraps everything in a simple HTML structure with inline CSS

The example uses n8n’s $now helper to include a timestamp in the report, with timezone and locale formatting. For example:

${ $now.setZone("Europe/Dublin").setLocale('ie').toFormat('fff') }

All the HTML is typically stored in a single variable, for example email_html, and returned as part of the item JSON. Keep the layout simple: a basic table, some light inline styles, and a short summary paragraph so it works well in most email clients.
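As a minimal sketch (not the exact template code), the Function node could look like the following. It assumes the Set nodes keep Name on each item alongside Current Value, Change, and Change (%), and it returns the finished markup under an html key for the SendGrid node:

const rows = [];
let total = 0;

for (const item of items) {
  const d = item.json;
  total += parseFloat(d["Current Value"]);
  rows.push(
    '<tr>' +
    `<td>${d["Name"]}</td>` +
    `<td>${d["Current Value"]}</td>` +
    `<td>${d["Change"]}</td>` +
    `<td>${d["Change (%)"]} %</td>` +
    '</tr>'
  );
}

const timestamp = $now.setZone("Europe/Dublin").toFormat('fff');

const email_html = `
  <h2>Stock report - ${timestamp}</h2>
  <table border="1" cellpadding="4" cellspacing="0">
    <tr><th>Name</th><th>Current Value</th><th>Change</th><th>Change (%)</th></tr>
    ${rows.join('')}
  </table>
  <p>Total portfolio value: ${total.toFixed(2)}</p>
`;

return [{ json: { html: email_html } }];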

Step 7: Email the report with SendGrid

SendGrid node – delivering the final result

The last step is to send that HTML to your inbox.

Use the SendGrid node (or a similar transactional email provider) and configure it to:

  • Set the contentType to text/html
  • Map the HTML output from the Function node, for example $json["html"], into the email body
  • Specify the sender, recipients, and subject line

Once this is in place, every scheduled run of the workflow will produce and send a fresh report automatically.

Practical tips to keep your workflow stable

Respect rate limits and site policies

Tradegate, like many exchanges, may have rate limits or specific rules about scraping. To stay on the safe side:

  • Use n8n’s Wait node to add small delays between HTTP requests
  • Limit concurrency so you do not hammer the remote server
  • Review the site’s terms of use and adapt accordingly

Data hygiene and robustness

Clean data means more reliable reports. A few best practices:

  • Normalize numbers, such as replacing commas with dots before using parseFloat
  • Handle missing or malformed values gracefully, for example by skipping problematic rows
  • Add basic validations so you can spot when parsing fails or selectors no longer match

Error handling and retries

Things will break occasionally, especially if Tradegate changes its HTML. To make this less painful:

  • Use n8n’s error workflows to catch and process failures
  • Connect HTTP and parsing steps to a retry or alerting subflow
  • Send a short error email or log a note back into Baserow when a row cannot be parsed

Security best practices

Because this workflow touches credentials and possibly sensitive data, keep security in mind:

  • Store API keys and credentials in n8n’s credentials store, not in plain text fields
  • Restrict access to production workflows and logs
  • Avoid writing secrets into output fields or debug messages

Ideas to enhance the workflow

Once the basic version is running smoothly, you can extend it in a few directions:

  • Add currency conversion if your holdings are in multiple currencies
  • Store daily snapshots in Baserow or a database and build simple charts or historical comparisons
  • Expose a webhook or Slack integration so you can trigger ad-hoc reports on demand
  • Improve email styling with branding, better typography, or conditional highlighting of winners and losers

Testing checklist before you rely on it

Before you fully trust the automation, run through this quick checklist:

  • Use the Manual Trigger to test a few portfolio rows
  • Inspect the raw HTML from the HTTP Request node to confirm Tradegate’s layout matches your expectations
  • Verify that the CSS selectors in the HTML Extract node still match the correct elements
  • Double check number parsing and calculations, especially if your locale uses commas in numbers

Wrapping up

This n8n workflow template gives you a repeatable, low-maintenance way to generate daily stock reports. It combines:

  • Baserow for storing your holdings
  • Tradegate as the source of live market prices
  • SendGrid to deliver clean HTML summaries straight to your inbox

Because the workflow is modular, you can easily switch data sources, extend the calculations, or plug in a different email provider without rewriting everything.

Ready to stop doing portfolio updates by hand? Import or recreate the nodes described here, adjust the CSS selectors for your specific Tradegate pages, and put the Cron node on a schedule that suits you.

If you want the example workflow JSON or help tailoring it to your setup, feel free to reach out.

Call to action: Try this workflow in your n8n instance today, or message me for a customized version, troubleshooting support, or a live walkthrough. Subscribe if you would like more automation recipes like this in your inbox.

Build a RAG Chatbot with Google Drive & Qdrant

From Document Chaos to Smart Answers: How One Marketer Built a RAG Chatbot with Google Drive & Qdrant

On a rainy Tuesday afternoon, Maya stared at yet another Slack message from sales:

“Hey, do we have the latest onboarding process for enterprise customers? The PDF in Drive looks outdated.”

She sighed. Somewhere in their sprawling Google Drive were dozens of PDFs, slide decks, and Google Docs that all seemed to describe slightly different versions of the same process. As head of marketing operations, Maya was supposed to be the person who knew where everything lived. Instead, she was spending her days hunting through folders and answering the same questions over and over.

That was the moment she decided something had to change.

The problem: Knowledge everywhere, answers nowhere

The company had grown fast. Teams were diligent about documenting things, but that only made the problem worse. There were:

  • Customer onboarding guides in PDFs
  • Support playbooks in Google Docs
  • Pricing explanations in scattered slide decks
  • Internal FAQs buried in shared folders

People were not short on documentation. They were short on answers.

Maya wanted a way for anyone in the company to simply ask a question in plain language and get a reliable, context-aware response, grounded in their existing docs. Not a generic chatbot, but one that actually understood their internal knowledge base.

That search led her to the concept of Retrieval-Augmented Generation (RAG), and eventually to an n8n workflow template that promised exactly what she needed: a production-ready RAG chatbot that could index documents from Google Drive, store embeddings in Qdrant, and serve conversational answers using Google Gemini.

Discovering RAG: Why this chatbot is different

As Maya dug deeper, she realized why a RAG chatbot was different from the generic AI bots she had tried before.

Instead of relying only on a language model’s training data, RAG combines:

  • A vector store for fast semantic search
  • A large language model for natural, context-aware responses

In practical terms, that meant:

  • Documents from Google Drive could be indexed and searched semantically
  • Qdrant would store embeddings and metadata for fast retrieval
  • Google Gemini would generate answers grounded in those documents
  • n8n would orchestrate the entire workflow, from ingestion to chat

For a team like hers, this was ideal. Their internal docs, knowledge bases, and customer files could finally become a living, searchable knowledge layer behind a simple conversational interface.

The architecture that changed everything

Maya decided to try the n8n template. Before touching anything, she sketched the architecture on a whiteboard so the rest of the team could understand what she was about to build.

At a high level, the system looked like this:

  • Document source: A specific Google Drive folder that held all key docs
  • Orchestration: An n8n workflow to discover files, download them, and extract text
  • Text processing: A token-based splitter and metadata extractor to prepare content
  • Embeddings: OpenAI text-embedding-3-large (or equivalent) to turn chunks into vectors
  • Vector store: A Qdrant collection, one per project or tenant
  • Chat model: Google Gemini for conversational answer generation
  • Human-in-the-loop: Telegram for approvals on destructive operations
  • History: Google Docs to store chat transcripts for later review

It sounded complex, but the n8n template broke it into manageable pieces. Each part of the story was actually an n8n node, wired together into a repeatable workflow.

Rising action: Turning messy Drive folders into structured knowledge

To get from chaos to chatbot, Maya had to wire up a few critical components inside n8n. The template already had them in place, but understanding each one helped her customize and trust the system.

Finding and downloading the right files

The first challenge was obvious: how do you reliably pull all relevant files from Google Drive without melting APIs or memory?

The workflow started with the Google Drive node, configured to:

  • List files in a specific folder ID
  • Loop through file IDs in batches
  • Download each file safely without hitting rate limits

n8n’s splitInBatches node helped here. Instead of trying to download hundreds of files at once, the workflow processed them in small, controlled chunks, which protected both Google APIs and her n8n instance from spikes.

Extracting text and rich metadata

Once files were downloaded, the next step was to turn them into something the AI could actually work with.

The workflow included a text extraction step that pulled the raw content from PDFs, DOCX files, and other formats. Then came a crucial part: an information-extractor stage that generated structured metadata, such as:

  • title
  • author
  • overarching_theme
  • recurring_topics
  • pain_points
  • keywords

Maya quickly realized this metadata would become her secret weapon. By attaching it to each vector, she could later:

  • Filter search results by specific files or themes
  • Perform safe, targeted deletes
  • Slice the knowledge base by project or customer type

Splitting long documents into smart chunks

Some of their onboarding guides ran to dozens of pages. Sending them as a single block to an embedding model was not an option.

The template used a token-based splitter to break long documents into smaller chunks, typically:

  • 2,000 to 3,000 tokens per chunk

This struck the right balance: chunks were large enough to preserve context, but small enough to avoid truncation and respect embedding model limits. Maya learned that going too small could hurt answer quality, since the model would lose important surrounding context.

Generating embeddings and upserting into Qdrant

With chunks ready, the workflow called the embedding model, using:

  • OpenAI text-embedding-3-large (or a compatible provider)

Each chunk became a vector, enriched with metadata like:

  • file_id
  • title
  • keywords
  • Extracted themes and topics

These vectors were then upserted into a Qdrant collection. Maya followed a consistent naming scheme, such as:

  • project-<project_name> for per-project isolation
  • tenant-<tenant_id> for multi-tenant setups

That design would later make it easy to enforce data boundaries and control quotas.

The turning point: When the chatbot finally spoke

After a week of tinkering, Maya was ready to move from ingestion to interaction. This was the part her colleagues actually cared about: could they ask a question and get a useful answer?

Wiring up chat and retrieval with Google Gemini

The template exposed a chat trigger inside n8n. When someone sent a query, the workflow did three things in quick succession:

  1. Sent the query to Qdrant as a semantic retrieval tool
  2. Retrieved the top K most relevant chunks
  3. Passed those chunks as context to Google Gemini

Gemini then generated a response that was not just plausible, but grounded in their actual documents. As a starting point, Maya used a topK value between 5 and 10, then adjusted based on answer quality.

On the first real test, a sales rep asked:

“What are the key steps in onboarding a new enterprise customer using SSO?”

The chatbot responded with a clear, step-by-step explanation, pulled from their latest onboarding guide and support documentation, complete with references to API keys and setup steps. For the first time, Maya saw their scattered docs behave like a single, coherent source of truth.

Adding memory and chat history

To make conversations feel natural, the template also included a short-term memory system. It kept a rolling window of about 40 messages, so the chatbot could maintain context across multiple turns.

At the same time, the workflow persisted chat history to Google Docs. This served several purposes:

  • Auditing what information was being surfaced
  • Reviewing tricky conversations for future improvements
  • Demonstrating compliance and oversight to leadership

The chatbot was no longer a black box. It was a transparent system that the team could inspect and refine.

Keeping control: Safe deletes and human approvals

With power came a new concern. What happened if they needed to remove outdated or sensitive content from the vector store?

The template had anticipated this with a human-in-the-loop flow for destructive operations.

When Maya wanted to remove content related to a specific file, the workflow would:

  1. Assemble a list of file_id values targeted for deletion
  2. Send a notification via Telegram
  3. Require a double-approval before proceeding
  4. Run a deletion script that filtered Qdrant points by metadata.file_id

This approach made accidental data loss far less likely. No one could wipe out large portions of the knowledge base with a single misclick.
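For the technically curious, the filtered delete boils down to a single call against Qdrant's points API. A hedged sketch of the request (collection name and file ID are placeholders) looks like this:

POST {qdrant_url}/collections/project-internal-docs/points/delete
{
  "filter": {
    "must": [
      { "key": "metadata.file_id", "match": { "value": "<google-drive-file-id>" } }
    ]
  }
}

Because the filter only targets points whose metadata carries that file_id, everything else in the collection stays untouched.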

How Maya set everything up in practice

Looking back, the setup itself followed a clear sequence. Here is how she put the n8n RAG chatbot into production.

1. Provisioning the core services

First, she ensured all underlying services were ready:

  • Qdrant deployed, either hosted or self-hosted
  • Google Cloud APIs enabled for Drive and Gemini (PaLM)
  • OpenAI (or another embedding provider) configured

2. Importing and configuring the n8n template

Next, she imported the provided workflow template into n8n and added credentials for:

  • Google Drive
  • Google Docs
  • Google Gemini
  • Qdrant API
  • OpenAI embeddings

In a couple of Set nodes, she defined the key variables:

  • The Google Drive folder ID that would serve as the document source
  • The Qdrant collection name for this project

3. Running a small test ingest

Before going all in, Maya pointed the workflow at a small folder of representative documents and ran a test ingest. She verified that:

  • Text was extracted correctly
  • Metadata fields were populated as expected
  • Vectors were successfully upserted into the Qdrant collection

4. Testing chat and tuning retrieval

Finally, she tested the chat trigger with real questions from sales and support. When answers were too shallow or missed context, she experimented with:

  • Adjusting chunk size within the 1,000 to 3,000 token range
  • Tuning topK between 5 and 10 for better relevance

Within a few iterations, the chatbot felt reliable enough to introduce to the rest of the company.

Best practices Maya learned along the way

As the system moved from experiment to daily tool, several best practices emerged.

Designing chunks and metadata

  • Chunk size: Keep chunks in the 1,000 to 3,000 token range, depending on the embedding model. Avoid tiny chunks that strip away context.
  • Metadata: Always attach fields like file_id, title, keywords, and extracted themes. This makes filtered search and safe deletes possible.

Collection and retrieval strategy

  • Collection design: Use per-project or per-environment collections to isolate data and manage quotas.
  • Top-K tuning: Start with topK=5-10 and adjust based on how relevant the answers feel in practice.

Scaling without breaking APIs

  • Rate limits: Batch downloads and embedding calls. Use n8n’s splitInBatches and add retry or backoff logic to handle throttling gracefully.
  • Access control: Restrict credentials for Drive and Qdrant, audit who can access what, and enforce TLS for data in transit.

Security, compliance, and peace of mind

As more teams started relying on the chatbot, security moved from an afterthought to a central requirement. Maya worked with IT to ensure the system aligned with their data governance rules.

They implemented policies to:

  • Encrypt data both at rest and in transit
  • Anonymize PII where required
  • Maintain an audit trail for data access and deletions
  • Use tenant separation and strict RBAC for Qdrant and n8n in multi-tenant scenarios

The combination of Telegram approvals, metadata-based deletes, and detailed chat logs gave leadership confidence that the system was not just powerful, but also controlled.

When things go wrong: Troubleshooting in the real world

Not everything worked perfectly on day one. Along the way, Maya hit a few common pitfalls and learned how to fix them.

  • Empty or weak responses: She increased topK, reduced chunk size slightly, and double-checked that embeddings had been upserted with the correct metadata.
  • Rate limit errors: She added retry and backoff logic, and split downloads into smaller batches.
  • Truncated text: She confirmed that the extractor handled PDFs and DOCX files properly, and used a better OCR solution for scanned PDFs.
  • Deletion mistakes avoided: She kept the Telegram double-approval flow mandatory before any script could delete vectors based on metadata.file_id.

Cost and performance: Keeping the chatbot sustainable

As usage grew, so did costs. Maya tracked where the money was going and adjusted accordingly.

She found that the main cost drivers were:

  • Embedding generation for large document sets
  • Large language model calls for chat responses

To keep things efficient, she:

  • Used shorter retrieved contexts when possible
  • Cached embeddings for documents that had not changed
  • Monitored Q

Build a Quiz Auto Grader with n8n & RAG

Build a Quiz Auto Grader with n8n & RAG

This guide describes how to implement a production-ready Quiz Auto Grader in n8n using Retrieval-Augmented Generation (RAG). The solution combines:

  • n8n for orchestration and workflow automation
  • Pinecone as a vector database
  • Cohere for text embeddings
  • An OpenAI chat model, wrapped in a LangChain RAG agent
  • Google Sheets for logging and audit trails
  • Slack for automated error notifications

The workflow is designed for educators and technical teams who need scalable, consistent quiz grading for short-answer and open-ended questions.

1. Solution Overview

1.1 Objectives

The automated quiz grading workflow is intended to:

  • Produce consistent, repeatable scores for subjective answers
  • Scale to large volumes of quiz submissions with minimal manual effort
  • Provide structured logs and metadata for auditing and quality review
  • Integrate cleanly with existing tools like Google Sheets and Slack

1.2 High-level Flow

At a high level, the n8n workflow:

  1. Receives quiz submissions via a Webhook trigger
  2. Splits long answers into chunks with the Text Splitter node
  3. Generates embeddings using Cohere
  4. Stores and queries vectors in Pinecone
  5. Maintains conversational context with Window Memory
  6. Uses a RAG Agent (LangChain) with an OpenAI Chat Model to grade answers
  7. Appends grading results to a Google Sheets spreadsheet
  8. Sends Slack alerts on errors

2. Architecture & Data Flow

2.1 Core Components

  • Webhook Trigger – Entry point for quiz submissions via HTTP POST.
  • Text Splitter – Splits long responses into overlapping text chunks.
  • Cohere Embeddings – Converts text chunks into numeric embedding vectors.
  • Pinecone Insert & Query – Persists embeddings and retrieves relevant context.
  • Window Memory – Maintains recent message history for the RAG agent.
  • RAG Agent (LangChain) – Orchestrates retrieval and generation for grading.
  • OpenAI Chat Model – Produces the actual grade, feedback, and confidence values.
  • Google Sheets Append – Logs grading outcomes and metadata.
  • Slack Alert – Sends error notifications to a designated channel.

2.2 Typical Payload & Metadata

The webhook typically receives a JSON payload with fields such as:

  • submission_id
  • student_id
  • question_id
  • question_text (optional but recommended)
  • student_answer

These identifiers are reused as metadata in Pinecone and as columns in Google Sheets, making it easier to trace individual grading decisions.
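For reference, a minimal submission payload built from these fields (values are illustrative) could look like:

{
  "submission_id": "sub-2024-000123",
  "student_id": "stu-001",
  "question_id": "q-17",
  "question_text": "Explain the difference between TCP and UDP.",
  "student_answer": "TCP is connection-oriented and guarantees ordered delivery, while UDP is connectionless and faster but does not guarantee delivery."
}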

3. Node-by-Node Breakdown

3.1 Webhook Trigger

The workflow starts with an n8n Webhook node configured to accept POST requests over HTTPS. This node:

  • Receives the quiz submission payload
  • Validates the presence of required fields like student ID and answer text
  • Forwards the parsed JSON to downstream nodes

Ensure the webhook URL is secured via HTTPS and optionally protected behind an authentication mechanism if required by your environment.

3.2 Text Splitter

The Text Splitter node processes the student_answer (or combined question + answer text) and splits it into smaller segments. In the template, the typical configuration is:

  • chunkSize = 400 characters
  • chunkOverlap = 40 characters

This chunking:

  • Improves embedding quality by keeping each segment focused
  • Enhances retrieval recall in Pinecone when querying similar content

For very short answers, the splitter may produce a single chunk. For longer answers, overlapping segments preserve context across chunk boundaries.
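To make the mechanism concrete, here is an illustrative character-based splitter with overlap. The real Text Splitter node handles this for you; the sketch only shows the idea behind chunkSize and chunkOverlap:

function splitIntoChunks(text, chunkSize = 400, chunkOverlap = 40) {
  // Slide a window of chunkSize characters, stepping back by chunkOverlap
  // so neighbouring chunks share some context.
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    start += chunkSize - chunkOverlap;
  }
  return chunks;
}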

3.3 Cohere Embeddings Node

Each chunk from the Text Splitter is passed into a Cohere Embeddings node. This node:

  • Uses your Cohere API credentials configured in n8n
  • Converts each text chunk into an embedding vector
  • Outputs vectors that can be stored in or queried from Pinecone

These embeddings are later used for similarity search against:

  • Canonical rubrics
  • Example answers
  • Instructor notes
  • Previously graded responses

3.4 Pinecone Insert

The Pinecone Insert node writes embeddings into a Pinecone index, such as:

index name: quiz_auto_grader

For each embedding, the workflow typically stores metadata including:

  • student_id
  • question_id
  • submission_id
  • timestamp
  • Optionally, flags indicating whether it is a rubric item, example answer, or a live submission

This index acts as a searchable knowledge base of grading-relevant content, allowing the RAG agent to ground its decisions in prior examples and rubrics.
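As an illustration, the metadata attached to a single vector might look like this (the record_type flag name is an assumption; use whatever labeling scheme fits your setup):

{
  "student_id": "stu-001",
  "question_id": "q-17",
  "submission_id": "sub-2024-000123",
  "timestamp": "2025-01-15T09:30:00Z",
  "record_type": "live_submission"
}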

3.5 Pinecone Query

When grading a specific answer, the workflow uses a Pinecone Query node to retrieve context. The node:

  • Accepts the embedding of the current answer as the query vector
  • Searches the quiz_auto_grader index for top-k similar vectors
  • Returns the most relevant rubric entries, example answers, or past submissions

The retrieved documents are then attached as context to the RAG agent, typically passed via a Vector Tool configured in the LangChain integration.

3.6 Window Memory

A Window Memory node is used to maintain short-term conversational context for the RAG agent. This is particularly useful if:

  • You grade multiple questions for the same student within a single workflow run
  • You want the agent to maintain consistent grading style across a small sequence of interactions

Window Memory stores the most recent messages up to a configurable limit, preventing the context window from growing indefinitely.

3.7 RAG Agent (LangChain Agent in n8n)

The core grading logic is implemented using a RAG Agent based on LangChain within n8n. The agent:

  • Uses the OpenAI Chat Model as the language model
  • Has access to a Vector Tool that queries Pinecone
  • Receives system and user messages that describe the grading task
  • Combines retrieved context with the student answer to produce a grade

The system message is typically configured along the lines of:

“You are an assistant for Quiz Auto Grader”

The agent outputs:

  • A numeric grade, often on a 0-100 scale
  • Short, human-readable feedback for the student
  • An optional confidence score between 0 and 1

3.8 OpenAI Chat Model

The OpenAI Chat node provides the underlying language model for the RAG agent. It:

  • Consumes the combined prompt that includes question, rubric, retrieved context, and student answer
  • Returns a structured response that the agent interprets as grading output

Model selection, temperature, and other generation parameters can be tuned based on how deterministic you want the grading to be.

3.9 Google Sheets Append

After the grade is produced, a Google Sheets Append node logs the result. Typical columns include:

  • submission_id
  • student_id
  • question_id
  • grade
  • feedback
  • confidence (if available)
  • grader_notes or raw model output
  • timestamp

This sheet functions as an audit log and makes it easy to review or export grading results for further analysis.

3.10 Slack Error Alert

For reliability, the workflow includes a Slack node that sends alerts when errors occur. If any step, such as:

  • Embedding generation
  • Pinecone operations
  • RAG agent execution
  • Google Sheets logging

throws an error, the workflow sends a message to an #alerts channel with:

  • The error message or stack trace (as available)
  • The submission_id or related identifiers to locate the failed item

This allows quick triage and manual intervention when needed.

4. Configuration & Integration Notes

4.1 Credentials & Environment

Configure the following credentials in n8n:

  • OpenAI API key for the chat model
  • Cohere API key for embeddings
  • Pinecone API key and environment
  • Google credentials for Sheets access
  • Slack bot token or webhook for alerts

It is recommended to run n8n in a managed environment or containerized setup and provide these keys via environment variables, not hard-coded in the workflow.

4.2 Data Privacy & Security

When working with student data:

  • Use HTTPS for the Webhook endpoint
  • Restrict access to Pinecone indexes to only the services that require it
  • Encrypt stored credentials in n8n and your hosting environment
  • Limit Google Sheets sharing to authorized staff only
  • Avoid logging unnecessary personally identifiable information (PII) in external systems

4.3 Rubric & Example Storage Strategy

To improve grading consistency, store:

  • Canonical rubrics for each question
  • High-quality example answers
  • Instructor notes or grading guidelines

in the same Pinecone index as vectors. Label them appropriately in metadata so that the RAG agent can retrieve them alongside live student answers. This makes the grading process more transparent and aligned with instructor expectations.

4.4 Chunk Size & Overlap Tuning

The default template uses:

  • chunkSize = 400
  • chunkOverlap = 40

Adjust these values based on:

  • Typical length of student responses
  • Embedding model characteristics
  • Desired tradeoff between context richness and noise

Very small chunks may lose context, while very large chunks may reduce retrieval precision.

4.5 Prompt Engineering for the RAG Agent

Clear and constrained prompts are essential. Define:

  • A system message that describes the grader’s role and responsibilities
  • A user message structure that includes question, rubric, and student answer
  • An expected output format, such as JSON, to simplify downstream parsing

A typical system and user prompt combination looks like:

<system>
You are an assistant for Quiz Auto Grader. Given a question, rubric, and a student's answer, return a JSON object with keys: grade (0-100), feedback (short text), confidence (0-1).
</system>

<user>
Question: ...
Rubric: ...
Student answer: ...
</user>

By enforcing a structured JSON output, the Google Sheets node can reliably map fields to columns without additional parsing complexity.
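With that contract in place, a typical model response (values are illustrative) is straightforward to consume:

{
  "grade": 82,
  "feedback": "Covers the main rubric points but omits error handling.",
  "confidence": 0.85
}

Each key maps directly to a column in the Google Sheets Append node.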

5. Error Handling & Reliability

5.1 Error Scenarios

Common error cases include:

  • Network timeouts when calling external APIs (OpenAI, Cohere, Pinecone, Google, Slack)
  • Invalid or missing payload fields in the webhook request
  • Authentication failures due to expired or misconfigured API keys

The template is designed to detect such failures at each step and trigger the Slack alert path.

5.2 Slack Alerts & Manual Review

When an error is detected, the Slack node:

  • Sends a message to an #alerts channel
  • Includes the relevant identifiers (for example, submission_id)

You can then:

  • Investigate the underlying issue
  • Re-run the workflow for the affected submission
  • Perform manual grading if necessary

For additional robustness, consider enabling retries on transient network errors and defining a dead-letter queue or dedicated sheet for failed items that require manual attention.

6. Monitoring, Evaluation & Iteration

6.1 Tracking Model Confidence

The RAG agent can output a

Automate Confluence Page Creation with n8n

Ever find yourself copying an old Confluence page, tweaking a few fields, and thinking, “There has to be a better way”? If you are doing release notes, onboarding docs, or incident reports over and over, you are not alone.

In this guide, we will walk through an n8n workflow template that automatically creates new Confluence pages from an existing space template. We will chat about what the workflow does, when it is worth using, and how to set it up step by step so you can stop doing the boring stuff by hand.

What this n8n – Confluence workflow actually does

At a high level, this automation takes a Confluence template, fills in placeholders with real data from a webhook, then creates a brand new page for you. No more copying and pasting, no more “did I forget a field this time?” worries.

Here is the basic flow:

  • n8n waits for a webhook call with data, for example a release, user, or ticket.
  • It pulls some configuration values like your Confluence base URL, template ID, space key, and parent page ID.
  • It calls the Confluence REST API to fetch the template content.
  • A Code node replaces placeholder tokens in the template with the actual values from your webhook payload.
  • Finally, it creates a new Confluence page using that filled-in content.

The result: consistent, standardized pages created on demand, directly from your tools or pipelines.

When to use this Confluence automation

This workflow is perfect any time you need repeatable documentation with a similar structure every time. For example:

  • Release notes populated from your CI/CD pipeline
  • Onboarding checklists filled with user or employee details
  • Incident reports that pull in ticket IDs, timelines, and owners

If you are already using Confluence templates and you find yourself creating almost identical pages again and again, this n8n template will probably save you a lot of clicks and reduce human error along the way.

What you need before you start

Before you plug in the workflow, make sure you have:

  • An Atlassian account and Confluence Cloud with permission to create pages in the target space.
  • An Atlassian API token for authentication (this is used as the password in basic auth).
  • An n8n instance where you can use HTTP Request nodes and a Code node.
  • Your Confluence template ID, plus the space key and parent page ID where you want new pages to appear.

Once you have those in place, you are ready to wire everything together.

How the n8n workflow is structured

The workflow is built around five main nodes that work together:

  1. Webhook – receives incoming data and triggers the workflow.
  2. Set parameters – stores static configuration like URLs and IDs.
  3. Confluence: Get template content – fetches the template title and body.
  4. Replace placeholders in template – uses JavaScript to inject data into the template.
  5. Confluence: Create page from template – sends a POST request to create the final page.

Let us go through each of these so you know exactly what to configure.

Step 1 – Webhook: trigger the workflow with data

The Webhook node is how your other tools talk to n8n. You can send it data from CI/CD systems, forms, Zapier, custom apps, or anything else that can make an HTTP request.

Configure the Webhook node to:

  • Use a unique path (so you know which workflow you are hitting).
  • Accept POST requests.

The payload should contain all the values you want to plug into the Confluence template. For example:

{  "user": { "name": "Alice", "email": "alice@example.com" },  "release": { "version": "1.4.2", "notes": "Bug fixes and improvements" },  "ticket": { "id": "PROJ-123", "url": "https://jira.example/browse/PROJ-123" }
}

Later on, the Code node will read this JSON, match it with placeholders in the template, and swap everything in.

Step 2 – Set parameters: keep your config in one place

Next up is a Set node that acts like a small config file inside your workflow. These are values that rarely change, so it is easier to define them once and reuse them everywhere.

In the Set node, define fields such as:

  • confluence_base_url – for example https://your-domain.atlassian.net
  • template_id – the ID of the Confluence template you want to use
  • target_space_key – the space key where the new page should live
  • target_parent_page_id – the ID of the parent page to nest under

Keeping these in one node makes it much easier to update later, especially if you move to a different space or change templates.

Step 3 – Fetch your Confluence template content

Now n8n needs the actual template from Confluence so it can work with the title and body.

Use an HTTP Request node configured as a GET request to the Confluence REST API:

GET https://your-domain.atlassian.net/wiki/rest/api/template/{template_id}

In the response, you will get:

  • The template title, usually in a field like name.
  • The template body in Confluence storage format at body.storage.value.

Those two pieces are what you will transform in the next step.

Step 4 – Replace placeholders in the template

This is where the magic happens. A Code node scans the template title and body for placeholders and replaces them with values from the webhook payload.

The placeholders follow a pattern like $some.field.path$. That path lines up with the JSON structure in your webhook data. For example:

  • $user.name$ looks for {"user": {"name": "..."}}
  • $release.version$ looks for {"release": {"version": "..."}}

Here is a sample JavaScript snippet that does the heavy lifting:

function replacePlaceholders(template, values) {
  const placeholderPattern = /\$(.*?)\$/g;
  return template.replace(placeholderPattern, (match, p1) => {
    const keys = p1.split('.');
    let value = values;
    for (const key of keys) {
      if (value && key in value) {
        value = value[key];
      } else {
        return match; // fallback to original placeholder
      }
    }
    return value;
  });
}

const templateTitle = $('Confluence: Get template content').item.json.name;
const templateBody = $('Confluence: Get template content').item.json.body.storage.value;
const values = $('Webhook').item.json;

const pageTitle = replacePlaceholders(templateTitle, values);
const pageBody = replacePlaceholders(templateBody, values);

return { "page_title": pageTitle, "page_body": pageBody };

A couple of handy details here:

  • If a placeholder does not match any value in the payload, it is left as-is. That makes it easier to spot missing data later.
  • If you ever need more advanced logic, such as loops, lists, or conditional sections, you can generate HTML fragments in this Code node and then insert them into the template body.
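As a hedged sketch of that second case, assuming the webhook payload contains a release.highlights array (a field name invented for this example), you could build an HTML fragment inside the same Code node before running the replacement:

const values = $('Webhook').item.json;

// Build a <ul> fragment from an array in the payload (hypothetical field).
const highlights = (values.release && values.release.highlights) || [];
const highlightsHtml =
  '<ul>' +
  highlights.map(h => `<li>${h}</li>`).join('') +
  '</ul>';

// Expose it to the placeholder logic, e.g. for a $release.highlights_html$ token.
values.release = { ...values.release, highlights_html: highlightsHtml };

The rest of the node then runs replacePlaceholders as before, and the template can reference the generated fragment like any other value.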

Step 5 – Create the new Confluence page

Once the template is filled in, the final step is to send it back to Confluence as a new page.

Use another HTTP Request node configured as a POST to the Confluence content endpoint:

POST https://your-domain.atlassian.net/wiki/rest/api/content/
Content-Type: application/json
Authorization: Basic base64(email:api_token)

{  "type": "page",  "title": "2025-09-12-14-30 - Release notes 1.4.2",  "space": { "key": "TARGET_SPACE" },  "ancestors": [{ "type": "page", "id": "PARENT_PAGE_ID" }],  "body": {  "storage": {  "value": "<p>Your rendered storage HTML here</p>",  "representation": "storage"  }  }
}

A few important notes:

  • Authentication uses basic auth with your Atlassian account email as the username and the API token as the password.
  • type must be page.
  • space.key and ancestors.id control where the page lives and which page it is nested under.
  • The body.storage.value field should contain the HTML you produced in the Code node.

Best practices for smooth Confluence automation

To keep things safe and maintainable, it helps to follow a few simple habits:

  • Start by testing in a sandbox space or under a restricted parent page so you do not clutter production docs.
  • Keep your Confluence templates simple and use placeholders only where you really need dynamic data.
  • Validate webhook payloads and return clear error messages if required fields are missing.
  • Log Confluence responses so you can capture the created page ID and URL for follow-up automations or notifications.
  • Export your n8n workflow and keep it in version control so you can track changes over time.

Troubleshooting common issues

401 Unauthorized

If you get a 401, double-check your authentication details. Make sure you are:

  • Using your Atlassian account email as the username.
  • Using a valid, active API token as the password.
  • Confirming that the token has not been revoked or regenerated without updating n8n.

Template placeholders are not replaced

If some placeholders are still visible in the final page, it usually means the payload structure does not match the placeholder path.

For example, the placeholder $release.version$ expects a payload like:

{  "release": {  "version": "1.4.2"  }
}

If the JSON keys or nesting differ, the Code node will not find the value and will leave the placeholder unchanged.

Malformed storage HTML

Confluence uses a specific storage format that expects valid HTML and properly formatted Atlassian macros. If you are dynamically generating HTML in the Code node:

  • Check for mismatched or unclosed tags.
  • Make sure you are escaping special characters correctly.
  • Verify that any macros you insert follow Confluence syntax.

Security tips for using n8n with Confluence

Because this workflow touches your documentation system, it is worth tightening security a bit:

  • Never commit API tokens to public repos. Use n8n credentials or environment variables for secrets.
  • If you accept webhooks from external sources, validate the sender using IP allowlists, HMAC signatures, or other checks.
  • Grant the least privileges necessary to the Confluence account that n8n uses, so it cannot do more than it needs to.

Why this n8n template makes your life easier

Once this workflow is in place, you are no longer manually creating Confluence pages for every release or incident. Instead, your tools send a webhook, n8n fills in the template, and the new page appears right where it should be, with the right structure and data every time.

That means:

  • Less repetitive work.
  • More consistent documentation.
  • Fewer mistakes and forgotten fields.

Try the template and customize it to your needs

Ready to give it a spin?

  1. Import the workflow into your n8n instance.
  2. Set your Confluence credentials (email + API token) in n8n.
  3. Configure the template ID, space key, and parent page ID in the Set node.
  4. Trigger the webhook with a sample JSON payload and check the new page in Confluence.

Start in a sandbox space, tweak your placeholders and formatting, and iterate until the generated pages look exactly how you want. If you need more advanced replacement logic for lists or conditional sections, you can extend the Code node to handle those patterns too.

Next step: import the template, run a test, and share it with your team once it is dialed in. If this kind of automation is helpful, keep exploring more n8n workflows to connect your tools and clean up other repetitive tasks.

Automate Confluence Page Creation with n8n

Automate Confluence Page Creation with n8n

Imagine never having to copy a Confluence template, tweak fields, and double check formatting again. Instead, every new onboarding doc, release note, or meeting summary appears in the right space, with the right title, already filled with the right details. That is the kind of small but powerful transformation that n8n can unlock for your workday.

In this guide, you will walk through an n8n workflow that automatically creates a new Atlassian Confluence page from a space template. The workflow listens to a webhook, fills in placeholders in the template, then publishes a fully formatted page using the Confluence REST API. Along the way, you will see how this template can be a first step toward a more automated, focused workflow where repetitive tasks run in the background and you stay focused on higher value work.

From repetitive tasks to reliable systems

The problem: manual Confluence pages slow you down

Confluence templates are incredibly useful, but creating pages from them can become a grind. Every time you prepare onboarding docs, release notes, meeting notes, or internal documentation, you repeat the same steps: create a page, select a template, fill in fields, and make sure everything is consistent.

Over time, this manual work leads to:

  • Lost time on low value, repetitive tasks
  • Inconsistent structure, titles, and metadata
  • Human errors in links, versions, or names

It is not that these tasks are hard. They just add up and distract you from the work that really moves your team or business forward.

The mindset shift: treat documentation as a workflow, not a chore

Automation in n8n is not just about saving a few clicks. It is about turning fragile, human dependent steps into reliable systems that run the same way every time. When you automate Confluence page creation, you:

  • Build a repeatable process that anyone on your team can rely on
  • Free your brain from remembering tiny details like page titles and parent IDs
  • Create a foundation you can extend with approvals, notifications, and reporting

Think of this workflow as your starting point. Once you have one automated Confluence process, it becomes much easier to imagine and build the next one.

The n8n Confluence template: your practical starting point

The sample n8n workflow is designed to be both simple and powerful. You can import it, plug in your Confluence details, and start creating pages in minutes. From there, you can customize and grow it to fit your exact use case.

Workflow overview: how the automation flows

The workflow is built from five core n8n nodes that work together to turn incoming data into a published Confluence page:

  • Webhook – receives the incoming payload that will populate your template placeholders
  • Set parameters – stores reusable configuration values like your Confluence base URL and template ID
  • Confluence: Get template content – fetches the template content (title and body) from Confluence
  • Replace placeholders in template body and title – a JavaScript node that swaps placeholders like $user.name$ with real values from the webhook
  • Confluence: Create page from template – sends the final content to the Confluence REST API to create a page

Once this is in place, your role shifts from “doing the work” to “designing the process.” You decide what data comes in through the webhook, how the template is structured, and how the final page should look, then let n8n handle the rest.

Step 1: Connect n8n securely to Atlassian

Authentication with Atlassian

Before you can let your workflow publish pages on your behalf, you need to connect n8n to Atlassian in a secure way.

In n8n, use Atlassian basic authentication with:

  • Your Atlassian account email as the username
  • An API token as the password

You can create an API token at id.atlassian.com. Once generated, store this token in n8n credentials so it is never hard coded in individual nodes or shared workflows.

With authentication in place, your workflow can safely call the Confluence REST API and act with the permissions you have granted.
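
If you ever want to sanity check those credentials outside of n8n, Basic authentication is nothing more than a base64-encoded email:token pair in the Authorization header. Here is a minimal Node.js sketch, assuming the email and token live in environment variables and using the read-only space listing endpoint as a quick test:

// Minimal sketch, not part of the workflow: verify the Atlassian token works.
// ATLASSIAN_EMAIL and ATLASSIAN_API_TOKEN are assumed environment variables.
const auth = Buffer.from(
  `${process.env.ATLASSIAN_EMAIL}:${process.env.ATLASSIAN_API_TOKEN}`
).toString('base64');

const response = await fetch('https://your-domain.atlassian.net/wiki/rest/api/space', {
  headers: { Authorization: `Basic ${auth}`, Accept: 'application/json' },
});

console.log(response.status); // 200 means the credentials and permissions are fine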

Step 2: Centralize your Confluence settings

The Set parameters node as your control panel

Instead of scattering configuration across multiple nodes, this workflow uses a Set parameters node as a central place for values you might want to change over time. This makes your automation much easier to maintain and scale.

Typical keys you will define include:

  • confluence_base_url – for example https://your-domain.atlassian.net
  • template_id – the numeric ID of the Confluence template you want to use
  • target_space_key – the space key or personal space identifier where pages should be created
  • target_parent_page_id – the ID of the parent page if you want new pages nested under a specific parent

By adjusting these values, you can quickly point the same workflow to a different space, template, or parent page without rewriting any logic.
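
As a concrete example, the output of the Set parameters node might look like this once you have filled in your own values (everything here is illustrative):

{
  "confluence_base_url": "https://your-domain.atlassian.net",
  "template_id": "1234567",
  "target_space_key": "DOCS",
  "target_parent_page_id": "7654321"
}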

Step 3: Pull the Confluence template content

Fetching the template via REST API

Next, the workflow needs to know what your Confluence template looks like. The Confluence: Get template content node calls the REST API to fetch it.

The request looks like this:

GET {confluence_base_url}/wiki/rest/api/template/{template_id}

The response includes:

  • The template title
  • The template body in storage format (Confluence storage representation)

These values become the source content that your JavaScript node will later customize with real data from the webhook.
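
Trimmed down to the fields this workflow actually uses, an illustrative response might look like this (real responses contain more metadata):

{
  "templateId": "1234567",
  "name": "Release notes for $project.name$",
  "body": {
    "storage": {
      "value": "<h1>Release $release.version$</h1><p>Prepared by $user.name$</p>",
      "representation": "storage"
    }
  }
}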

Step 4: Turn raw data into a polished page

Placeholder replacement with a JavaScript node

This is where your workflow becomes truly dynamic. Instead of static text, your template can contain placeholders like $user.name$ or $project.name$ that are automatically replaced with values from the webhook payload.

The Replace placeholders node uses a small JavaScript function to do this mapping. It supports dot notation for nested JSON values and expects each placeholder to be wrapped in a delimiter character, in this case $.

function replacePlaceholders(template, values) {
  const placeholderPattern = /\$(.*?)\$/g;
  return template.replace(placeholderPattern, (match, p1) => {
    const keys = p1.split('.');
    let value = values;
    for (const key of keys) {
      if (value && key in value) {
        value = value[key];
      } else {
        return match; // keep original if not found
      }
    }
    return value;
  });
}

const templateTitle = $('Confluence: Get template content').item.json.name;
const templateBody = $('Confluence: Get template content').item.json.body.storage.value;
const values = $('Webhook').item.json;

const pageTitle = replacePlaceholders(templateTitle, values);
const pageBody = replacePlaceholders(templateBody, values);

return { "page_title": pageTitle, "page_body": pageBody };

Here is what is happening in this step:

  • The node reads the template title and body from the previous Confluence node
  • It accesses the JSON payload from the Webhook node
  • It replaces every placeholder wrapped in $ with the matching value from the JSON
  • It outputs a final page_title and page_body that will be sent to Confluence

This is where your automation becomes uniquely yours. By changing the placeholders in your Confluence template and the structure of your webhook payload, you can adapt this approach to almost any documentation pattern.

Step 5: Create the Confluence page automatically

Publishing via the Confluence content API

With a fully prepared title and body, the workflow is ready to create the page. The Confluence: Create page from template step uses the content API:

POST {confluence_base_url}/wiki/rest/api/content/

An example JSON request body looks like this:

{  "type": "page",  "title": "{{timestamp}}-{{page_title}}",  "space": { "key": "TARGET_SPACE_KEY" },  "ancestors": [{ "id": TARGET_PARENT_PAGE_ID }],  "body": {  "storage": {  "value": "<h1>Final HTML content here</h1>",  "representation": "storage"  }  }
}

In the actual workflow, you will:

  • Use n8n expressions to inject {{ $json["page_title"] }} and {{ $json["page_body"] }} from the JavaScript node (see the sketch after this list)
  • Keep the storage representation for the body, since the template is already in Confluence storage format
  • Escape any special characters if you build JSON manually
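
Putting those points together, the JSON body of the create-page request could look roughly like this, with n8n expressions pulling values from the Set parameters and Code nodes. The exact expression syntax depends on how you configure the HTTP Request node, so treat this as a sketch rather than a copy-paste recipe:

{
  "type": "page",
  "title": "{{ $json.page_title }}",
  "space": { "key": "{{ $('Set parameters').item.json.target_space_key }}" },
  "ancestors": [{ "id": {{ $('Set parameters').item.json.target_parent_page_id }} }],
  "body": {
    "storage": {
      "value": {{ JSON.stringify($json.page_body) }},
      "representation": "storage"
    }
  }
}

Using JSON.stringify for the body value takes care of escaping quotes and newlines in the generated content, which covers the manual escaping point above.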

Once this node runs successfully, your new Confluence page appears exactly where you want it, already filled with the right information.

Example: what the incoming data can look like

To make this concrete, here is a sample webhook payload that could trigger the workflow:

{  "user": { "name": "Alice", "email": "alice@example.com" },  "project": { "key": "PROJ", "name": "Example Project" },  "release": { "version": "1.2.3", "date": "2025-09-12" }
}

If your Confluence template includes placeholders like:

  • $user.name$
  • $project.name$
  • $release.version$

then the workflow will automatically fill in those values, giving you a complete, customized page for each new release, project, or onboarding flow.
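
With that payload, the replacement step turns a template fragment like the one below into its filled-in version (the fragment itself is just an illustration):

Template: <p>Prepared by $user.name$ for $project.name$, release $release.version$</p>
Result:   <p>Prepared by Alice for Example Project, release 1.2.3</p>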

Growing your automation: tips, fixes, and safeguards

Troubleshooting common issues

As you experiment and refine the workflow, you may run into a few common issues. Use these checks to keep things running smoothly:

  • 401 or 403 errors – Verify your Atlassian credentials and API token, and confirm that the user has permission to create pages in the target space.
  • Invalid space or ancestors – Double check the space key and parent page ID. You can look up IDs in the Confluence UI or via the API.
  • Pages not rendering correctly – Make sure you are using the storage representation and that the template body has not been HTML escaped twice.
  • Placeholders not replaced – Confirm that the webhook JSON structure matches the dot notation in your placeholders exactly.
  • Rate limits – Atlassian applies rate limits to API calls. For large scale automation, add retries or throttling in n8n.

Security best practices as you scale

As your automation becomes more powerful, security becomes more important. Keep these guidelines in mind:

  • Store Atlassian API tokens in n8n credentials, never in plain text inside nodes or shared workflows
  • Validate incoming webhook payloads for public endpoints, for example using HMAC or a shared secret (see the sketch after this list)
  • Apply the principle of least privilege so the Confluence user only has the access that the automation really needs
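
For the webhook validation point above, here is a minimal sketch of what an HMAC check could look like in a Code node placed right after the Webhook node. The x-signature header name and the WEBHOOK_SECRET environment variable are assumptions, whether the payload sits under json.body depends on how your webhook is configured, and self-hosted instances may need the crypto module allowed via NODE_FUNCTION_ALLOW_BUILTIN:

// Minimal sketch: verify an HMAC-SHA256 signature before the rest of the workflow runs.
// Assumes the sender signs the raw JSON payload and sends the hex digest in an
// "x-signature" header; WEBHOOK_SECRET is a shared secret (both are assumptions).
const crypto = require('crypto');

const payload = JSON.stringify($('Webhook').item.json.body);
const received = $('Webhook').item.json.headers['x-signature'] || '';
const expected = crypto
  .createHmac('sha256', process.env.WEBHOOK_SECRET)
  .update(payload)
  .digest('hex');

const valid =
  received.length === expected.length &&
  crypto.timingSafeEqual(Buffer.from(received), Buffer.from(expected));

if (!valid) {
  throw new Error('Invalid webhook signature, stopping the workflow');
}

return $input.all();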

Take it further: extend and personalize the workflow

Once your first automated Confluence page is working, you have a powerful base to build on. Here are a few ways to grow this into a richer system:

  • Notifications – Send a Slack message or email with the link to the newly created page so your team knows it is ready
  • Dynamic templates – Support multiple template IDs and select one based on fields in the webhook payload, such as project type or environment
  • Tracking and audits – Store created page IDs and metadata in a database for reporting, auditing, or rollback workflows

Each improvement brings you closer to a documentation process that runs itself, while you focus on strategy, communication, and product work.

Bringing it all together

Automating Confluence page creation with n8n is a small project with an outsized impact. You define your template once, wire up a webhook, and let the workflow handle the rest. The result is faster documentation, fewer mistakes, and consistent structure across every space.

The sample workflow you have just explored covers everything you need to get started:

  • Receiving data through a webhook
  • Fetching a Confluence template via the REST API
  • Replacing placeholders in the title and body with live data
  • Publishing a new Confluence page in the right space and under the right parent

From here, you can extend it with approvals, notifications, scheduling, or integrations with other tools in your stack. Each iteration is another step toward a more automated, intentional way of working.

Ready to build your own automated documentation flow? Import the n8n workflow, update the Set parameters node with your Confluence base URL, space, template ID, and credentials, then send a test webhook. Use the result as your foundation, tweak it, and keep improving until it perfectly matches how your team works.

Want more step by step automation tutorials like this? Subscribe to get new n8n and Atlassian automation recipes delivered directly to your inbox, and keep building a workflow that works for you, not the other way around.