Build a LINE Chatbot with n8n & Google Sheets as Memory

Oct 20, 2025

Imagine this: you are answering the same questions on LINE all day, every day. Your fingers hurt, your brain is on autopilot, and you start replying to real humans with “see FAQ sheet row 12”. Not ideal.

Now picture this instead: a LINE chatbot that remembers conversations, replies in natural language, and politely handles your users while you sip coffee and pretend everything is fully under control. That is what this n8n + Google Gemini + Google Sheets setup is for.

In this guide, you will learn how to build a production-ready LINE chatbot using:

  • n8n as the low-code orchestration brain
  • Google Gemini (via LangChain) for natural language replies
  • Google Sheets as a simple, cheap memory store

We will walk through the architecture, the key n8n nodes, how the Google Sheets memory works, and how to send reliable replies back to the LINE Messaging API. By the end, repetitive support messages should feel like a distant, slightly traumatic memory.


What this LINE chatbot workflow actually does

The workflow connects your LINE Official Account to n8n, lets Gemini generate context-aware responses, and stores chat history per user in Google Sheets. So your bot can remember what was said before instead of answering every message like it just woke up with no idea who you are.

Here is the high-level flow in n8n:

  1. Receive a message from LINE via a Webhook node.
  2. Extract the essentials (user message, reply token, user ID) with a Set node.
  3. Fetch the user’s history from Google Sheets using the userId.
  4. Build a prompt that combines archived history, recent messages, and the new user input.
  5. Send the prompt to a LangChain AI Agent configured with a Google Gemini chat model.
  6. Update the memory in Google Sheets with the new user message and AI response, including some smart history splitting.
  7. Reply to the user on LINE using an HTTP Request node and the LINE Messaging API reply endpoint.

The result: a contextual, Thai-friendly chatbot (in this example) that remembers past exchanges and uses a spreadsheet as its very humble yet effective brain.


Why this architecture works so well

If you want automation without spinning up heavy infrastructure or managing a zoo of microservices, this combo is very practical.

  • Fast prototyping with n8n as your low-code orchestration platform. Drag, drop, connect, and ship.
  • Simple long-term memory by using Google Sheets rows and archive columns instead of a full database at the start.
  • Flexible prompt engineering with the LangChain agent node inside n8n, so you can tweak how Gemini thinks and talks.
  • Direct LINE replies through the LINE Messaging API using a simple HTTP Request node.

It is a sweet spot for building a real chatbot without committing to complex infra from day one. When you outgrow Sheets, you can always move to a proper database later.


Step 1 – Receive messages from LINE with a Webhook

Configure the Webhook node (LINE to n8n)

Start by creating a Webhook node in n8n and setting it to handle POST requests. Give it a path such as /guitarpa, or anything else memorable.

Then, in your LINE Official Account settings, register the full webhook URL from n8n so LINE knows where to send incoming messages.

From the webhook body, you will need these fields:

  • body.events[0].message.text – the user’s message
  • body.events[0].replyToken – required to send back a reply
  • body.events[0].source.userId – a unique ID so you can store per-user memory

This is the raw material that flows through the rest of the automation.
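
For orientation, a text-message event from LINE typically arrives with a body shaped roughly like this (values are illustrative and shortened, extra fields omitted):

{
  "destination": "xxxxxxxxxx",
  "events": [
    {
      "type": "message",
      "replyToken": "b60d4328...",
      "source": { "type": "user", "userId": "U4af49806..." },
      "timestamp": 1760950800000,
      "message": { "type": "text", "id": "325708", "text": "Hello" }
    }
  ]
}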


Step 2 – Clean up the incoming data with a Set node

Prepare input fields for easier use

Working directly with body.events[0].something everywhere gets old very fast. Use a Set node to rename things to something more pleasant.

Example mappings in the Set node:

body.events[0].message.text -> userMessage
body.events[0].replyToken -> replyToken
body.events[0].source.userId -> userId

Now all downstream nodes can refer to userMessage, replyToken, and userId without digging through nested JSON every time.
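
In n8n expression form, those mappings look roughly like this (assuming the Set node is connected directly to the Webhook node):

userMessage -> {{ $json.body.events[0].message.text }}
replyToken  -> {{ $json.body.events[0].replyToken }}
userId      -> {{ $json.body.events[0].source.userId }}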


Step 3 – Use Google Sheets as chatbot memory

Design your Google Sheets structure

Each user gets a single row in Google Sheets. That row stores their conversation history and archived chunks. A simple structure might look like this:

  • UserID
  • History
  • LastUpdated
  • History_Archive_1
  • History_Archive_2
  • History_Archive_3
  • History_Archive_4

Use a Google Sheets node in n8n to read the row that matches the incoming userId. This gives you the recent conversational context you will feed into the AI.
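
As a rough illustration, a populated row might look like this (values shortened and purely illustrative):

UserID:            U4af49806...
History:           ผู้ใช้: What time do you open?\nลลิตา: We open at 9:00 ...
LastUpdated:       2025-10-20T09:41:00+07:00
History_Archive_1: (empty until the live History overflows)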


Step 4 – Build a prompt that includes history

Prepare the AI prompt with a Set node

Now it is time to talk to Gemini. Use another Set node to assemble the full prompt. You will concatenate the archive fields and the main History field, then add the latest user message.

Example prompt pattern (the Thai system line says, roughly: “You are Lalita, a polite and friendly Thai chatbot. Answer based on the context of the conversation”; ผู้ใช้ means “user”):

คุณคือลลิตา แชทบอทภาษาไทยที่สุภาพและเป็นมิตร ตอบตามบริบทของการสนทนา:
{History_Archive_1}
{History_Archive_2}
{History_Archive_3}
{History_Archive_4}
{History}
ผู้ใช้: {userMessage}
ลลิตา:

A few tips when crafting the prompt:

  • Keep your system instruction short and clear.
  • Optionally mention timezone or any other context the bot should know.
  • Tell the bot to answer in the user’s language when possible, so it feels natural and local.
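
In a Set node, this can be a single multi-line field that mixes literal text with expressions, roughly like the sketch below (the node name "Extract Fields" is an assumption; use whatever you called the Set node from Step 2):

คุณคือลลิตา แชทบอทภาษาไทยที่สุภาพและเป็นมิตร ตอบตามบริบทของการสนทนา:
{{ $json.History_Archive_1 }}
{{ $json.History_Archive_2 }}
{{ $json.History_Archive_3 }}
{{ $json.History_Archive_4 }}
{{ $json.History }}
ผู้ใช้: {{ $('Extract Fields').item.json.userMessage }}
ลลิตา: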

Step 5 – Call Gemini via the LangChain AI Agent node

Configure LangChain with Google Gemini

Next, drop a LangChain AI Agent node into your n8n workflow and attach a Google Gemini chat model, such as models/gemini-2.0-flash-001.

Make sure you have set up your Google PaLM / Gemini credentials in n8n first. The LangChain agent helps you keep prompts structured and parse the model’s output more reliably, which is helpful when you are building flows that should not break just because the AI decided to be creative.

The node takes your prepared prompt, sends it to Gemini, and returns the AI’s response for the next steps in the workflow.
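
If you are curious what this looks like outside n8n, a minimal standalone LangChain call would be roughly the following. This is a sketch only, not the node's internals; it assumes the @langchain/google-genai package and an API key in GOOGLE_API_KEY:

// Minimal sketch: calling Gemini through LangChain directly
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const model = new ChatGoogleGenerativeAI({
  model: "gemini-2.0-flash-001",       // same model as in the n8n node
  apiKey: process.env.GOOGLE_API_KEY,  // in n8n this comes from stored credentials instead
});

const reply = await model.invoke(prompt); // prompt = the string assembled in Step 4
console.log(reply.content);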


Step 6 – Split and archive long history with a Code node

Why you need history splitting

Google Sheets cells are generous, but not infinite. If you dump every single message into one cell forever, you will eventually hit limits and slow down lookups. Also, your prompt will get bloated and expensive to send.

To keep things efficient, this workflow uses a Code node to:

  • Keep the live History cell under about 17,500 characters.
  • Use an archive threshold of around 35,000 characters per archive cell.
  • When updatedHistory exceeds the threshold, move the oldest content into archive fields one by one.

Here is some pseudocode from the workflow:

// pseudocode from the workflow ("ผู้ใช้" = user, "ลลิตา" = Lalita, the bot)
let updatedHistory = previousHistory + '\n' + 'ผู้ใช้: ' + message + '\nลลิตา: ' + response;
if (updatedHistory.length > 35000) {
  // keep the last 17,500 characters in History and move the older chunk into the archive cells
}

This approach keeps the most recent context in the main History column, while older conversations are still stored in History_Archive_1..4 for reference or analysis.

Example splitting logic snippet

A more explicit summary snippet might look like this:

// Splitting logic: keep recent 17,500 chars, move older to archive cells
const threshold = 35000;
if (updatedHistory.length > threshold) {
  const keepLength = 17500;
  const archiveChunk = updatedHistory.substring(0, updatedHistory.length - keepLength);
  historyToSave = updatedHistory.substring(updatedHistory.length - keepLength);
  // distribute archiveChunk across archive cells
}

You can adapt this logic to your own sheet structure, but the principle stays the same: keep recent chat short, archive the rest.
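
If you want something closer to copy-paste for an n8n Code node, here is one possible way to fill in the "distribute across archive cells" part. It is a sketch based on the column names above; where the user message and AI response come from (and their exact field names) depends on how your earlier nodes are wired:

// Sketch of a Code node that updates History and the archive cells
const row = $input.first().json;                 // row read from Google Sheets
const previousHistory = row.History || '';
const userMessage = row.userMessage || '';       // assumption: merged into this item earlier
const aiResponse = row.aiResponse || '';         // assumption: merged into this item earlier

const threshold = 35000;   // when to start archiving
const keepLength = 17500;  // how much stays in the live History cell

let updatedHistory = previousHistory + '\nผู้ใช้: ' + userMessage + '\nลลิตา: ' + aiResponse;
const archives = [row.History_Archive_1, row.History_Archive_2,
                  row.History_Archive_3, row.History_Archive_4].map(a => a || '');

if (updatedHistory.length > threshold) {
  let archiveChunk = updatedHistory.slice(0, updatedHistory.length - keepLength);
  updatedHistory = updatedHistory.slice(updatedHistory.length - keepLength);

  // Append the old chunk to the first archive cell that still has room
  for (let i = 0; i < archives.length && archiveChunk.length > 0; i++) {
    const room = threshold - archives[i].length;
    if (room > 0) {
      archives[i] += archiveChunk.slice(0, room);
      archiveChunk = archiveChunk.slice(room);
    }
  }
  // If every archive cell is full, whatever is left of archiveChunk is simply dropped here
}

return [{ json: {
  UserID: row.UserID,
  History: updatedHistory,
  History_Archive_1: archives[0],
  History_Archive_2: archives[1],
  History_Archive_3: archives[2],
  History_Archive_4: archives[3],
  LastUpdated: new Date().toISOString(),
} }];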


Step 7 – Save everything back to Google Sheets

Update or append the user row

Once you have the new AI response and updated history, use another Google Sheets node to store it.

Using the UserID column as your key, either:

  • Update the existing row for that user, or
  • Append a new row if it is a new user.

Make sure you write back:

  • The latest History
  • All History_Archive_* columns you manage
  • The LastUpdated timestamp

Now your chatbot has a persistent memory that survives restarts, crashes, and your occasional workflow experiments.
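
In the Google Sheets node this maps to roughly the following settings (exact labels vary between node versions):

Operation:           Append or Update Row
Column to Match On:  UserID
Values to Send:      UserID, History, History_Archive_1..4, LastUpdated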


Step 8 – Reply to the user on LINE

Send the response with an HTTP Request node

Finally, it is time to talk back to the human who started all this. Use an HTTP Request node configured to POST to:

https://api.line.me/v2/bot/message/reply

Set the headers:

  • Authorization: Bearer <YOUR_CHANNEL_ACCESS_TOKEN>
  • Content-Type: application/json

Use a JSON body like this:

{  "replyToken": "{{ replyToken }}",  "messages": [{"type": "text", "text": "{{ aiResponse }}"}]
}

Here, {{ replyToken }} and {{ aiResponse }} come from earlier nodes. If you build this JSON string in a Set node, be careful to escape line breaks and quotes so the payload stays valid.
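
One way to sidestep the escaping problem is to build the payload in a small Code node and let JSON.stringify handle the quoting, along these lines (the node name "Extract Fields" and the AI output field name are assumptions; adjust to your workflow):

// Build the LINE reply body safely: JSON.stringify escapes quotes and newlines for you
const replyToken = $('Extract Fields').first().json.replyToken;   // from the Set node in Step 2
const aiText = $input.first().json.output;                        // AI Agent output field (name may differ)

return [{ json: {
  payload: JSON.stringify({
    replyToken: replyToken,
    messages: [{ type: 'text', text: aiText }],
  }),
} }];

Then point the HTTP Request node's body at the payload field.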


Security, sanity, and best practices

Automation is fun until credentials leak or you accidentally reply to LINE with a stack trace. A few safety tips:

  • Use n8n credentials for API keys and tokens. Do not hardcode secrets directly in your workflow.
  • Validate incoming webhooks from LINE where possible, for example by checking the x-line-signature header (see the sketch after this list).
  • Rate-limit model calls and replies to control cost and avoid surprise bills when your bot suddenly becomes popular.
  • Handle PII carefully. If you store sensitive user data in Google Sheets, clean or anonymize logs where appropriate.
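
For the signature check, LINE sends an x-line-signature header that contains a Base64-encoded HMAC-SHA256 of the raw request body, keyed with your channel secret. A minimal verification sketch looks like this (you need access to the raw body, for example via the Webhook node's raw-body option, and built-in module access in the Code node, which may require NODE_FUNCTION_ALLOW_BUILTIN):

// Verify x-line-signature: HMAC-SHA256 of the raw body, keyed with the channel secret
const crypto = require('crypto');

function isValidLineSignature(rawBody, signatureHeader, channelSecret) {
  const expected = crypto
    .createHmac('sha256', channelSecret)
    .update(rawBody)
    .digest('base64');
  return expected === signatureHeader;
}

// Example: stop the workflow early if the signature does not match
// if (!isValidLineSignature(rawBody, headers['x-line-signature'], channelSecret)) { return []; }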

Testing, debugging, and avoiding silent failures

Local testing with tunneling tools

When developing locally, you can expose your n8n webhook URL to LINE using tools like ngrok. This lets you test the full LINE to n8n round trip without deploying n8n to a public server immediately.

Common issues and how to fix them

  • No reply from the bot
    Check that the replyToken is valid and used only once. LINE reply tokens are single-use and expire shortly after the event, so reusing a token or replying too late means the user never sees an answer.
  • Slow model responses
    If Gemini is taking a while, consider caching answers to common questions or using a lighter, cheaper model for quick responses.
  • Errors writing to Sheets
    Verify that the authenticated Google account has edit rights to the sheet and that you are not hitting Google Sheets API quotas.

Scaling beyond “spreadsheet as a brain”

Google Sheets is great for prototyping and small to medium usage. When your chatbot grows up and starts handling serious traffic, you might want to:

  • Move memory to a real database like Firestore, PostgreSQL, or Redis for faster reads and structured queries.
  • Shard user history by a userId hash to avoid huge single rows or hot spots.
  • Batch writes or use a queue so peak traffic does not overwhelm the model or the Sheets API.

The good news is that the overall logic stays the same. You just swap out the persistence layer when you are ready.


Prompt example and quick JS recap

Sample system prompt

Here is a simple prompt pattern you can adapt:

System: You are "ลลิตา", a helpful Thai chatbot. Today is {date}.
User: {userMessage}
ลลิตา:

Combine this with your archived history to keep the conversation coherent, polite, and on topic.

History splitting snippet (summary)

As a reminder, the splitting logic keeps recent history short and archives older content:

// Splitting logic: keep recent 17,500 chars, move older to archive cells
const threshold = 35000;
if (updatedHistory.length > threshold) {
  const keepLength = 17500;
  const archiveChunk = updatedHistory.substring(0, updatedHistory.length - keepLength);
  historyToSave = updatedHistory.substring(updatedHistory.length - keepLength);
  // distribute archiveChunk across archive cells
}

This keeps your Sheets usable and your prompts efficient, instead of sending the AI a novel every time someone types “hi”.


Wrap-up and what to do next

With this blueprint, you can spin up a LINE chatbot that:

  • Has contextual memory backed by Google Sheets
  • Provides friendly localized responses (Thai in the example)
  • Runs on a low-cost, easy-to-audit storage backend
  • Is orchestrated in n8n with Google Gemini doing the language heavy lifting

Time to retire some of those repetitive manual replies.

Next steps:

  • Deploy the flow in your n8n instance.
  • Connect it to your LINE Official Account webhook.
  • Configure your Google Gemini credentials in n8n.
  • Test with a small user group, then iterate on prompts and memory pruning as real conversations roll in.

Call to action: Want the full exported n8n workflow and a ready-to-use Google Sheet template so you can skip the
