Import Google Sheets to MySQL with n8n (Automated)
This guide describes how to implement a robust scheduled integration that reads rows from Google Sheets and writes them into a MySQL database using n8n. The workflow follows a lightweight ETL pattern, suitable for synchronizing operational spreadsheets into relational tables, feeding reporting databases, or maintaining a central data store sourced from collaborative Google Sheets.
Business case and benefits of automating Google Sheets to MySQL
Many teams rely on Google Sheets as an operational data entry surface, while analytics, applications, and reporting tools consume data from MySQL or other relational databases. Automating the bridge between the two avoids manual work and reduces risk.
- Eliminate manual export, copy, and paste between Sheets and MySQL.
- Maintain a single, up-to-date database for BI tools, dashboards, and applications.
- Standardize on a repeatable, auditable process with built-in scheduling and retry behavior.
Using n8n, this pattern can be implemented with minimal scripting, while still following sound automation and data engineering practices.
Workflow architecture and core components
The integration is intentionally minimal and uses three primary nodes, which can later be extended with additional logic for validation, transformation, and monitoring.
- Cron – defines the execution schedule for the sync.
- Google Sheets – read – fetches rows from the specified spreadsheet using OAuth2.
- MySQL – insert – writes the incoming rows into a target MySQL table.
The following JSON represents the base workflow that you can import directly into n8n and adapt to your environment:
{ "nodes":[ { "name":"Cron", "type":"n8n-nodes-base.cron", "position":[100,160], "parameters":{ "triggerTimes":{ "item":[ { "hour":5, "mode":"everyWeek" } ] } } }, { "name":"MySQL - insert", "type":"n8n-nodes-base.mySql", "position":[500,160], "parameters":{ "table":"books", "columns":"title, price", "options":{ "ignore":true, "priority":"LOW_PRIORITY" } }, "credentials":{ "mySql":{ "id":"82", "name":"MySQL account" } } }, { "name":"Google Sheets - read", "type":"n8n-nodes-base.googleSheets", "position":[300,160], "parameters":{ "options":{}, "sheetId":"qwertz", "authentication":"oAuth2" }, "credentials":{ "googleSheetsOAuth2Api":{ "id":"2", "name":"google_sheets_oauth" } } } ], "connections":{ "Cron":{ "main":[ [ { "node":"Google Sheets - read", "type":"main", "index":0 } ] ] }, "Google Sheets - read":{ "main":[ [ { "node":"MySQL - insert", "type":"main", "index":0 } ] ] } }
}
In this baseline configuration, the Cron node triggers the workflow, the Google Sheets node retrieves the data, and the MySQL node performs the insert operations. The pattern is straightforward but can easily be extended with additional nodes for mapping, filtering, and error handling.
Configuring the workflow in n8n
1. Scheduling the sync with the Cron node
The Cron node defines when the import runs. In the example JSON, it is configured to execute once per week at 05:00:
```
mode: everyWeek
hour: 5
```
Typical alternative schedules include:
- Daily syncs at a specific time using mode: everyDay and setting the desired hour.
- Multiple runs per day by adding several trigger time entries.
- Near real-time behavior by triggering every few minutes or every hour, depending on load and latency requirements.
When designing the schedule, balance data freshness with system load, API quotas, and MySQL capacity.
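For example, a twice-daily import can be sketched with the same triggerTimes structure used in the workflow JSON above; the hours are illustrative and should be adapted to your own freshness requirements:

```json
{
  "parameters": {
    "triggerTimes": {
      "item": [
        { "mode": "everyDay", "hour": 2 },
        { "mode": "everyDay", "hour": 14 }
      ]
    }
  }
}
```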
2. Reading data from Google Sheets
The Google Sheets – read node is responsible for pulling data from your spreadsheet. Configure it with OAuth2 authentication to access the desired sheet.
- sheetId – the unique identifier of the Google Sheet, found in the URL of the document.
- range (optional) – a standard A1 notation range such as Sheet1!A1:C100 to restrict the rows and columns processed.
- Authentication – select the OAuth2 credentials that grant access to the target sheet.
If the sheet contains a header row, you have two main options:
- Enable the node option to include the header row and handle it downstream, or
- Exclude the header from the range and use a Set or Function node to assign explicit field names to each column.
For production workflows, explicit field mapping is recommended to avoid ambiguity when columns are added or reordered in the sheet.
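As a minimal sketch of this recommendation, a Function node placed right after the Google Sheets node can fail fast when expected columns disappear or are renamed. It assumes the node returns one item per row keyed by the header values; the column names shown are illustrative:

```javascript
// Function node: guard against renamed, removed, or reordered sheet columns.
// Assumes each item is keyed by the values of the header row.
const expectedColumns = ['Book Title', 'Price (USD)']; // illustrative names

for (const item of items) {
  for (const column of expectedColumns) {
    if (!(column in item.json)) {
      // Fail fast with a clear message instead of letting the MySQL insert
      // fail later with a harder-to-diagnose error.
      throw new Error(`Missing expected column "${column}" in sheet data`);
    }
  }
}

return items;
```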
3. Writing rows into MySQL
The MySQL – insert node inserts the incoming records into a target table. In the sample workflow, the node is configured as follows:
- table: books
- columns: title, price (a comma-separated list of target columns)
- options.ignore: true, which turns the INSERT into INSERT IGNORE so that rows violating unique constraints are skipped silently.
- options.priority: LOW_PRIORITY, which adds the LOW_PRIORITY modifier to the insert; this mainly matters for table-locking storage engines such as MyISAM.
Before enabling the workflow, confirm that:
- The field names produced by the Google Sheets node (or any mapping node in between) align with the MySQL column names.
- Data types are compatible; for example, numeric columns in MySQL should receive numeric values from the sheet.
- The MySQL user configured in the credentials has privileges to perform INSERT operations on the specified table.
When necessary, introduce a Set node between the Sheets and MySQL nodes to rename properties, cast values, or drop unwanted fields.
Improving robustness with mapping and validation
Field mapping with a Set node
For maintainable workflows, it is best practice to explicitly control the shape of the data before writing to the database. A Set node can be added between the Google Sheets and MySQL nodes to produce records that match the schema of the target table.
Example mapping:
- Sheet column "Book Title" mapped to MySQL column title.
- Sheet column "Price (USD)" mapped to MySQL column price, with a cast to a numeric type.
This approach reduces the risk of SQL errors due to unexpected field names and simplifies debugging when the sheet structure changes.
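A minimal Function node sketch of this mapping is shown below (a Set node configured to keep only the listed fields achieves the same result without code). The sheet column names are taken from the example above and should be adjusted to your sheet:

```javascript
// Function node: reshape sheet rows to match the books table schema.
return items.map(item => {
  const row = item.json;
  return {
    json: {
      // "Book Title" -> title
      title: row['Book Title'],
      // "Price (USD)" -> price, cast to a number
      // (parseFloat yields NaN for non-numeric input; see validation below).
      price: parseFloat(row['Price (USD)']),
    },
  };
});
```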
Data validation and normalization
Use a Set or Function node to implement basic validation logic before inserting into MySQL. Typical operations include:
- Ensuring numeric fields, such as prices or quantities, are valid numbers.
- Normalizing empty or missing values to NULL where appropriate.
- Trimming whitespace and standardizing date formats.
By enforcing these rules upstream, you avoid data quality issues and reduce the likelihood of failed inserts.
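The Function node below sketches these rules for the example title and price fields; the exact normalization rules are assumptions to adapt to your own data:

```javascript
// Function node: validate and normalize rows before the MySQL insert.
return items.map(item => {
  const { title, price } = item.json;

  // Trim whitespace around the title and normalize empty values to NULL.
  const trimmedTitle = typeof title === 'string' ? title.trim() : title;
  const cleanTitle = trimmedTitle ? trimmedTitle : null;

  // Accept only finite numbers for price; anything else becomes NULL
  // rather than causing a failed or silently coerced insert.
  const numericPrice = Number(price);
  const cleanPrice = Number.isFinite(numericPrice) ? numericPrice : null;

  return { json: { title: cleanTitle, price: cleanPrice } };
});
```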
Handling duplicates and ensuring idempotency
Repeated imports from Google Sheets can introduce duplicate data if not handled correctly. Several strategies are available:
- Database constraints with INSERT IGNORE – define unique keys in MySQL and use the ignore: true option, as in the example, to skip rows that conflict with existing records.
- Pre-insert checks – perform a SELECT in MySQL prior to insertion to check for existing records. This is explicit but can be slower at scale.
- Timestamp or status flags – maintain a "last synced" timestamp or a processed flag in the sheet or in an auxiliary tracking table to avoid re-importing the same rows.
For higher-volume or business-critical workflows, combining database constraints with explicit tracking is often the most reliable pattern.
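As an illustration of the pre-insert check strategy, the Function node below assumes a preceding MySQL node named "MySQL - existing" that runs a query such as SELECT title FROM books, and drops incoming rows whose title is already present. The node name and key column are assumptions; the newer Code node exposes the same data via $('MySQL - existing').all():

```javascript
// Function node: filter out rows that already exist in the books table.
// "MySQL - existing" is a hypothetical SELECT node earlier in the workflow.
const existingTitles = new Set(
  $items('MySQL - existing').map(item => item.json.title)
);

// Keep only rows whose title has not been imported yet.
return items.filter(item => !existingTitles.has(item.json.title));
```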
Error handling, logging, and observability
Production-grade automations require clear visibility into failures and the ability to recover gracefully. n8n provides several mechanisms to implement this.
Using error workflows and the Error Trigger node
Configure a dedicated error workflow, built around n8n's Error Trigger node and assigned in the workflow settings, so that any failure in the Google Sheets or MySQL nodes is captured and processed. Typical actions include:
- Writing error details to a log file, database table, or monitoring system.
- Sending notifications to Slack, email, or another alert channel when inserts fail.
- Implementing retry strategies, such as exponential backoff for transient network or API issues.
These practices ensure that issues do not go unnoticed and that operators have enough context to resolve them quickly.
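As a sketch, a Function node inside the error workflow (fed by the Error Trigger node) can turn the raw error payload into a readable alert before a Slack or email node sends it. The property names below reflect the typical Error Trigger payload but vary between n8n versions, so treat them as assumptions:

```javascript
// Function node: build a human-readable alert from the Error Trigger payload.
return items.map(item => {
  const data = item.json;
  const execution = data.execution || {};
  const error = execution.error || {};
  const workflow = data.workflow || {};

  const message = [
    'Workflow failed: ' + (workflow.name || 'unknown workflow'),
    'Execution ID: ' + (execution.id || 'n/a'),
    'Last node executed: ' + (execution.lastNodeExecuted || 'n/a'),
    'Error message: ' + (error.message || 'no message available'),
  ].join('\n');

  // Downstream Slack/email nodes can reference {{$json.message}}.
  return { json: { message } };
});
```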
Performance and scalability considerations
Large spreadsheets and high-frequency schedules can put pressure on both the Google Sheets API and the MySQL database. To keep the workflow performant and predictable, consider the following approaches.
- Chunked reads – use the range parameter to process the sheet in smaller segments instead of fetching the entire dataset in a single run.
- Batch inserts – where supported by the MySQL node, use bulk insert operations to reduce round trips to the database.
- Higher-frequency, smaller batches – run the workflow more often with fewer rows per execution to reduce peak load and memory usage.
Design the batch strategy based on sheet size, acceptable latency, and database capacity.
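The sketch below illustrates the chunked-read idea: a Function node placed before the Google Sheets node computes the next A1 range and persists its position in workflow static data, so each scheduled run processes only one slice. The sheet name, column span, and batch size are assumptions, and static data is only persisted for active (non-manual) executions:

```javascript
// Function node: compute the next range to read and remember the offset.
const staticData = getWorkflowStaticData('global');

const batchSize = 200;                     // rows per execution (illustrative)
const startRow = staticData.nextRow || 2;  // row 1 is assumed to be the header
const endRow = startRow + batchSize - 1;

// Persist where the next execution should continue.
staticData.nextRow = endRow + 1;

return [
  { json: { range: `Sheet1!A${startRow}:C${endRow}` } },
];
```

The Google Sheets node can then pick up this value through an expression such as {{$json.range}} in its range parameter.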
Troubleshooting common issues
Diagnosing MySQL-related errors
- Verify that column data types in MySQL are compatible with the values provided by the workflow, for example, numeric vs string fields.
- Confirm that the MySQL user configured in n8n has the required privileges, particularly INSERT and, if needed, SELECT.
- For duplicate key errors, either adjust the constraints or enable the IGNORE option to skip conflicting rows.
Resolving Google Sheets access problems
- Ensure that the OAuth2 credentials configured in n8n include the necessary Google Sheets API scopes.
- If you require non-interactive access, consider using a service account and sharing the sheet with that account.
- Double-check the sheetId and range syntax to ensure they match the actual document and tab names.
Security and access control
Security should be treated as a first-class concern when connecting external services and databases.
- Store credentials securely within n8n and restrict access to them to authorized users only.
- Grant the MySQL user only the minimum privileges required, typically INSERT and optionally SELECT for validation or deduplication logic.
- Avoid using public Google Sheets for sensitive or regulated data. Prefer private sheets with tightly controlled sharing settings.
When this pattern is a good fit
The Cron → Google Sheets → MySQL pattern is particularly effective in the following scenarios:
- Regular (daily or weekly) synchronization of operational spreadsheets into a central reporting or analytics database.
- Lightweight ETL use cases where a full data pipeline or dedicated ETL platform would be excessive.
- Rapid prototyping of integrations before transitioning to more complex or high-volume architectures.
Because the workflow is simple and extensible, it provides a strong foundation that can evolve with your data requirements.
Scaling and next steps
Once the base workflow is stable, you can extend it to handle more advanced requirements.
- Add dedicated validation and normalization nodes to enforce data quality rules.
- Introduce a queuing or staging pattern for very large imports, for example, writing to an intermediate table before merging into production tables.
- Track processing status using a flag in the sheet or a separate tracking table to avoid reprocessing already imported rows.
These enhancements improve reliability and make the workflow easier to operate at scale.
Conclusion and template access
By combining n8n’s Cron, Google Sheets, and MySQL nodes, you can implement a reliable, scheduled synchronization from Google Sheets into MySQL with minimal custom code. Start with the basic three-node workflow, then incrementally add field mapping, validation, deduplication, and error handling to align with your production standards.
Next action: Import the example workflow into your n8n instance, configure your Google Sheets OAuth2 and MySQL credentials, and run a test with a small subset of data. Iterate on the mapping and validation logic until the integration behaves as required for your environment.
Recommendation: Clone the template, validate it using a small test sheet, then subscribe or follow for additional n8n automation patterns and best practices.
