n8n LINE Chatbot with Google Gemini & Sheets Memory
Imagine this: your LINE chatbot is having a full-on heart-to-heart with users, but it forgets everything the moment the conversation scrolls up. Every reply is like meeting for the first time. Again. And again. And again.
If that makes you cringe a little, good news. With n8n, Google Gemini (Gemini-2.0), and a humble Google Sheet, you can give your LINE bot a decent memory, keep everything low-cost, and avoid spinning up a full database or backend. Your bot gets context, you get fewer repetitive tasks, and everyone gets to keep their sanity.
What this n8n workflow actually does
This workflow template ties together LINE Messaging API, Google Sheets, and Google Gemini so your chatbot can:
- Receive messages from LINE via webhook
- Look up the user’s past conversation history in Google Sheets
- Feed that history plus the new message to Google Gemini via an AI Agent node
- Store the updated conversation back in the sheet, with automatic archiving when things get too long
- Reply to the user on LINE, all in one smooth flow
In short, it turns “stateless bot with goldfish memory” into “stateful chatbot that remembers what you said 5 messages ago” without the overhead of a full database.
Why this setup works so well
This architecture is especially handy if you want:
- A cheap and simple memory layer using Google Sheets instead of a database
- Smart, friendly language understanding powered by Google Gemini
- An event-driven flow in n8n that plugs into the LINE Messaging API without writing a full backend service
It is perfect for prototypes, small to medium bots, multilingual experiments, and anyone who would rather not babysit servers.
High-level flow: from LINE message to smart reply
Here is the overall journey of a single user message through the n8n workflow:
- LINE sends a POST webhook to your n8n Webhook node when a user messages the bot.
- The workflow extracts the message text, userId, and replyToken.
- Google Sheets is queried to find any existing conversation history for that userId.
- The workflow builds a prompt that mixes previous history with the new message.
- An AI Agent node, backed by Google Gemini, generates a reply.
- The new Q&A pair is appended to the history, and if things are getting too long, older parts are shifted into archive columns.
- The updated history is saved back to Google Sheets, matched by UserID.
- The reply is sent back to the user via the LINE Messaging API using the replyToken.
All of that happens inside a single n8n workflow template that you can import and customize.
Step-by-step: building the n8n LINE chatbot workflow
1. Webhook node – catching LINE events
Everything starts with a Webhook node in n8n that accepts POST requests from LINE.
- Set the Webhook path to something like guitarpa (or any path you like).
- LINE will send events to this URL, including:
  - body.events[0].message.text – the user’s message
  - body.events[0].source.userId – the unique user ID
  - body.events[0].replyToken – the token required to send a reply
Think of this node as the front door where every LINE message knocks.
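For reference, here is a trimmed-down example of the kind of payload LINE posts to that webhook (the IDs and tokens below are made up):

```json
{
  "destination": "Uxxxxxxxxxxxxxxxxxxxx",
  "events": [
    {
      "type": "message",
      "replyToken": "0f3779fba3b349968c5d07db31eab56f",
      "source": { "type": "user", "userId": "U4af4980629abcdef1234567890abcdef" },
      "message": { "type": "text", "id": "444573844083572737", "text": "Hello!" }
    }
  ]
}
```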
2. Set node – cleaning up the incoming data
Next, a Set node (often named something like “Edit Fields”) normalizes the incoming body so the rest of the workflow does not need to dig through nested JSON every time.
In this node you map fields like:
body.events[0].message.text, body.events[0].source.userId, and body.events[0].replyToken
into simpler properties that later nodes can use directly. This is the “tidy up the kitchen so you can cook” step.
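In practice this is just a few n8n expressions. The target field names below (userMessage, userId, replyToken) are only suggestions; use whatever names your later nodes expect:

```
userMessage  =  {{ $json.body.events[0].message.text }}
userId       =  {{ $json.body.events[0].source.userId }}
replyToken   =  {{ $json.body.events[0].replyToken }}
```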
3. Google Sheets node – fetching conversation history
Now it is time to give your bot some memory. A Google Sheets node (for example, “Get History”) looks up the row that matches the current userId.
- Use a lookup on the UserID column.
- Enable returnFirstMatch = true so you always get the latest row for that user.
- Keep columns such as:
  - History
  - History_Archive_1 up to History_Archive_4
  - LastUpdated
These fields are the raw material for building a contextual prompt for Gemini.
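To make that concrete, one user’s row in the sheet might look roughly like this (values are made up, the four archive columns are collapsed into one here for brevity, and column names follow the template):

| UserID | History | History_Archive_1..4 | LastUpdated |
| --- | --- | --- | --- |
| U4af4980629... | ผู้ใช้: สวัสดี\nลลิตา: สวัสดีค่ะ มีอะไรให้ช่วยไหมคะ | older chunks, filled in as History overflows | 2024-05-01T09:30:00Z |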
4. Set node – preparing the AI prompt
With history in hand, another Set node (for example, “Prepare Prompt”) constructs the message that will be sent to the AI Agent node.
The prompt is built by concatenating:
- Archived history columns: History_Archive_1..4
- The current History column
- The latest user message
All of this is formatted as a clear conversation transcript. A sample system message used in the workflow looks like this:
คุณคือลลิตา แชทบอทภาษาไทยที่สุภาพและเป็นมิตร ตอบตามบริบทของการสนทนา:
[...history chunks...]
ผู้ใช้: [user message]
ลลิตา:
This example uses a Thai persona; the system text translates roughly to “You are Lalita, a polite and friendly Thai-language chatbot. Reply according to the context of the conversation.” You can adapt the persona and language to match your own bot.
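In the template this assembly is done with Set-node expressions. If you prefer to read it as code, a roughly equivalent Code-node version might look like this (field names such as userMessage, History, and History_Archive_1..4 are assumed to match the earlier nodes):

```javascript
// Sketch only: assemble the Gemini prompt from archives + active history + new message.
const j = $input.first().json;

const archives = [j.History_Archive_1, j.History_Archive_2, j.History_Archive_3, j.History_Archive_4]
  .filter(Boolean)
  .join('\n');

const prompt = [
  'คุณคือลลิตา แชทบอทภาษาไทยที่สุภาพและเป็นมิตร ตอบตามบริบทของการสนทนา:',
  archives,
  j.History || '',
  `ผู้ใช้: ${j.userMessage}`,
  'ลลิตา:',
].filter(Boolean).join('\n\n');

return [{ json: { ...j, prompt } }];
```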
5. AI Agent + Google Gemini – generating the reply
Next up is the AI Agent node, wired to a Google Gemini chat model, for example:
models/gemini-2.0-flash-001
The Agent node receives the prepared prompt and returns the reply text from Gemini.
In the AI Agent node configuration you can:
- Set a system message to define the bot’s personality and behavior
- Configure timezone and other parameters
- Control how the model responds to user messages
This is where the “smart” part of the chatbot lives. Everything before this was about context. Everything after is about saving and sending the answer.
6. Code node – splitting and archiving long history
Conversation history grows quickly. Google Sheets cells have limits, and your prompts should not be unreasonably large. To handle this, the workflow includes a Code node (for example, “Split History”) that:
- Builds a new exchange string in the form ผู้ใช้: [message]\nลลิตา: [response]
- Appends that string to the existing History value.
- Checks the total length, and if it exceeds a safe threshold, moves older content into archive columns like History_Archive_1..4.
- Keeps only the most recent chunk, for example the last 17,500 characters, in the main History column.
The sample logic in the template uses around 70% of a 50,000 character limit (about 35,000 characters) as a threshold, then keeps the freshest half of that in the active History column.
This strategy keeps your prompts relevant, avoids hitting Google Sheets cell limits, and lets you preserve older context in the archive columns if you ever need it.
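A simplified version of that Code node could look like the following. The exact thresholds, field names, and archive-shifting rules in the real template may differ, so treat this as a sketch of the idea rather than a drop-in replacement:

```javascript
// Sketch: append the new exchange, then shift older text into archive columns
// when the active History column gets too long. Field names are assumptions.
const MAX_CELL = 50000;                        // Google Sheets per-cell character limit
const THRESHOLD = Math.floor(MAX_CELL * 0.7);  // ~35,000 chars before archiving kicks in
const KEEP = 17500;                            // freshest chunk kept in History

const item = $input.first().json;
const exchange = `ผู้ใช้: ${item.userMessage}\nลลิตา: ${item.aiResponse}`;  // assumed upstream fields

let history = item.History ? `${item.History}\n${exchange}` : exchange;
const archives = [1, 2, 3, 4].map(i => item[`History_Archive_${i}`] || '');

if (history.length > THRESHOLD) {
  const older = history.slice(0, history.length - KEEP);  // everything except the newest 17,500 chars
  history = history.slice(-KEEP);

  // Drop the older text into the first archive column that still has room.
  for (let i = 0; i < archives.length; i++) {
    if (archives[i].length + older.length + 1 <= MAX_CELL) {
      archives[i] = archives[i] ? `${archives[i]}\n${older}` : older;
      break;
    }
  }
}

return [{
  json: {
    ...item,
    History: history,
    History_Archive_1: archives[0],
    History_Archive_2: archives[1],
    History_Archive_3: archives[2],
    History_Archive_4: archives[3],
    LastUpdated: new Date().toISOString(),
  },
}];
```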
7. Google Sheets node – saving updated history
Once the new history is calculated, another Google Sheets node (for example, “Save History”) writes it back.
- Use the node in appendOrUpdate mode.
- Match rows using the UserID column so each user has a single row.
- Update:
  - History
  - History_Archive_1..4
  - LastUpdated
At this point, your Google Sheet is the bot’s brain-in-a-table, neatly organized per user.
8. HTTP Request node – replying to LINE
Finally, the workflow sends the generated reply back to the user on LINE using an HTTP Request node.
- Endpoint: https://api.line.me/v2/bot/message/reply
- Include:
  - The replyToken from the webhook
  - A messages array with the AI response
The template uses string replacement inside the HTTP Request node to escape newlines, tabs, and quotes in the AI response so your JSON payload does not break. That means fewer mysterious 400 errors and more happy users.
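The request body the node sends follows the standard LINE reply format, roughly like this (with your channel access token in the Authorization header as a Bearer token; the reply token below is made up):

```json
{
  "replyToken": "0f3779fba3b349968c5d07db31eab56f",
  "messages": [
    { "type": "text", "text": "Escaped AI response goes here" }
  ]
}
```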
Key implementation tips so your bot does not break at 2 a.m.
1. Escaping AI responses in JSON
AI models love newlines, quotes, and other characters that JSON does not always appreciate. When you inject the model output into the HTTP Request body, make sure to escape:
- Quotes
- Newlines
- Tabs
The example template uses chained replace functions on the response string before it is placed into the JSON body. This avoids malformed payloads and lets LINE accept your replies without complaint.
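One way to do this, inside the JSON body expression of the HTTP Request node, is a chain of replace calls on the response text. The field name output below is an assumption; use whatever your AI Agent node actually returns:

```
{{ $json.output.replace(/\\/g, '\\\\').replace(/"/g, '\\"').replace(/\n/g, '\\n').replace(/\t/g, '\\t') }}
```

Escaping backslashes first keeps the later replacements from being double-processed.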
2. Handling LINE reply token lifetime
LINE reply tokens are not patient. They expire quickly, so you should:
- Send the reply in the same n8n flow, without long blocking operations.
- If you need heavy processing, consider:
  - Replying quickly with an acknowledgement (via push messages or a separate flow).
  - Doing the heavy work asynchronously, then sending a follow-up message using the appropriate LINE API (see the push example below).
In other words, do not go on a coffee break between receiving the message and using the replyToken.
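For the asynchronous case, the follow-up goes to https://api.line.me/v2/bot/message/push instead of the reply endpoint, addressed by the stored userId rather than a replyToken. The body looks like this (the userId is made up):

```json
{
  "to": "U4af4980629abcdef1234567890abcdef",
  "messages": [
    { "type": "text", "text": "Here is the result of that longer task." }
  ]
}
```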
3. Security best practices
- Store your LINE channel access token and Google credentials in n8n’s credential store, not hard coded in nodes.
- Restrict who can call your webhook by:
  - Verifying LINE signatures (a sketch follows below), or
  - Validating that incoming events match the expected structure
This keeps your workflow from becoming an open playground for random requests.
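If you go the signature route, LINE signs each request with HMAC-SHA256 over the raw request body using your channel secret and sends the result in the x-line-signature header. A minimal check in a Code node might look like this; note that verification should really be done on the raw, unparsed body, and depending on your n8n configuration you may need to allow the built-in crypto module and environment-variable access:

```javascript
// Sketch: verify the x-line-signature header against the channel secret.
const crypto = require('crypto');

const channelSecret = $env.LINE_CHANNEL_SECRET;   // assumed env var; keep the secret out of the node
const item = $input.first().json;
const rawBody = typeof item.body === 'string' ? item.body : JSON.stringify(item.body);
const received = item.headers['x-line-signature'];

const expected = crypto
  .createHmac('sha256', channelSecret)
  .update(rawBody)
  .digest('base64');

if (received !== expected) {
  throw new Error('Invalid LINE signature, dropping request');
}

return $input.all();
```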
4. Managing conversation history size
The split and archive approach in the template is designed to:
- Keep the most recent, relevant context in the main History column
- Move older content into History_Archive_1..4
You can adjust thresholds based on:
- Typical conversation length
- Google Sheets limits
- How much context your Gemini prompts really need
Think of it as a rolling chat log with a short-term memory section and an attic full of archives.
Scalability: when Google Sheets starts to feel a bit small
Google Sheets is great for:
- Prototypes
- Low volume bots
- Early experiments
If your LINE bot grows to thousands of users or very frequent messages, you might want to graduate to a proper database. Consider:
- Firestore
- PostgreSQL
- Another lightweight DB that plays nicely with n8n
This gives you better performance, more robust querying, and fewer concurrency headaches when multiple users are chatting at the same time.
Ideas to improve the workflow further
Once you have the basic template running, you can enhance it in several ways:
- Compress or summarize old exchanges before archiving to save space and keep context lean.
- Store metadata like topics, intent, or sentiment to help the model pick the most relevant parts of the history.
- Rate-limit user requests so you do not blow through your Google Gemini or LINE quotas during a busy day.
These tweaks help you keep costs under control while still giving users a smooth, contextual chat experience.
Putting it all together
This n8n workflow shows a practical way to:
- Use Google Gemini as the “brain” of your LINE chatbot
- Use Google Sheets as a lightweight memory layer
- Handle multilingual, contextual conversations without running a full backend
The provided example is Thai-first, but the same pattern works for any language Gemini supports. It is ideal for rapid prototyping, experimenting with personas, and iterating on prompts without touching server code.
Next steps: try the template yourself
If you are ready to stop copy-pasting chat logs and let automation handle the boring parts, here is what to do:
- Import the n8n template.
- Connect your LINE Messaging API credentials and Google credentials.
- Test the flow with a single user and watch the conversation history appear in Google Sheets.
- Tweak:
  - Prompt wording and persona
  - History length thresholds
  - Archive behavior
If you want a version that automatically summarizes older history, supports more complex multi-user scenarios, or migrates from Sheets to a database, this flow is a solid starting point that can be adapted to your goals.
Call to action: Import the flow in n8n, connect your credentials, and send a message to your LINE bot. If you want a more detailed step-by-step setup or an extended version that auto-summarizes old history, reply and we can shape this workflow to your exact use case.
