Automate LinkedIn Contributions with n8n & AI
Use n8n to systematically discover LinkedIn Advice articles, extract their content, and generate AI-assisted contributions that your team can review and post. This reference-style guide documents a reusable n8n workflow that:
- Searches Google for LinkedIn Advice posts on a defined topic
- Extracts article URLs and parses article content, topics, and existing contributions
- Generates new contributions via an AI model (for example, GPT-4o-mini)
- Stores the results in NocoDB and sends them to Slack for review
1. Use case & benefits
1.1 Why automate LinkedIn contributions?
Maintaining consistent, high-quality engagement on LinkedIn helps build visibility and trust, but doing the following manually:
- Searching for relevant LinkedIn Advice threads
- Reading each article and existing contributions
- Drafting original, useful replies
is time-consuming and difficult to scale.
This n8n workflow automates the discovery and drafting steps so that you can:
- Maintain a regular presence without daily manual effort
- Find relevant LinkedIn Advice articles using targeted Google queries
- Generate unique, conversation-starting contributions per topic using AI
- Store all drafts in a database and share them with your team via Slack
Human review is still recommended before posting, but most of the repetitive work is handled by automation.
2. Workflow architecture
2.1 High-level flow
- A trigger node starts the workflow on a schedule or on demand.
- A Set node defines the topic that will be used in the Google search.
- An HTTP Request node runs a Google search scoped to LinkedIn Advice pages.
- A Code node extracts all LinkedIn Advice URLs from the search results HTML.
- A Split Out node converts the URL array into individual items.
- A Merge node optionally deduplicates against previously processed items.
- An HTTP Request node fetches each LinkedIn article’s HTML.
- An HTML node extracts the article title, topics, and existing contributions.
- An AI node generates new contributions per topic based on the extracted data.
- Slack and NocoDB nodes send the results to a channel and store them in a table.
2.2 Core components
- Triggers – Schedule Trigger or manual trigger to control execution cadence.
- Data acquisition – HTTP Request nodes to query Google and fetch LinkedIn HTML.
- Parsing & transformation – Code node (regex) and HTML node (CSS selectors) to extract links and article content.
- AI generation – An OpenAI-compatible node to generate contributions.
- Output & storage – Slack node for team visibility and NocoDB node for persistent storage.
3. Node-by-node breakdown
3.1 Trigger configuration
3.1.1 Schedule Trigger
Node type: Schedule Trigger
Purpose: Start the workflow on a recurring schedule.
Typical configuration:
- Mode: `Every Week`
- Day of week: `Monday`
- Time: `08:00` (your local time)
You can adjust the schedule to match your desired cadence. Weekly is a good baseline for sustainable engagement. Alternatively, use a Manual Trigger node when testing or when you want full manual control.
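If you prefer a cron-style definition, the Schedule Trigger also supports a custom cron interval; a weekly Monday 08:00 run would look roughly like this (a minimal sketch; the exact option name and time zone handling depend on your n8n version and instance settings):

```
0 8 * * 1
```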
3.2 Topic definition for Google search
3.2.1 Set Topic node
Node type: Set
Purpose: Define the search topic that will be interpolated into the Google search query.
Example configuration:
- Field name: `topic`
- Value: `Paid Advertising`, `Marketing Automation`, or any niche you target
This value is referenced later in the HTTP Request node that calls Google. Keeping it in a Set node makes it easy to change or parameterize via environment variables or input data if needed.
3.3 Retrieve LinkedIn Advice articles via Google
3.3.1 HTTP Request – Google search
Node type: HTTP Request
Purpose: Perform a Google search restricted to LinkedIn Advice pages and the configured topic.
Key parameters:
- Method: `GET`
- URL: typically something like `https://www.google.com/search?q=site:linkedin.com/advice+{{$json["topic"]}}`
The query uses site:linkedin.com/advice to limit results to LinkedIn Advice content, then appends the topic from the Set node. The node returns the raw HTML of the Google search results, which is then parsed.
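As a sketch, the URL field can also be built with an n8n expression so the topic is URL-encoded automatically (the User-Agent value below is purely illustrative):

```
URL:    https://www.google.com/search?q={{ encodeURIComponent('site:linkedin.com/advice ' + $json["topic"]) }}
Header: User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36
```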
Edge cases:
- Google may present captchas or blocking behavior for frequent or automated requests. Apply rate limiting and use realistic headers (for example, a user-agent string) to reduce the risk of blocks.
- If you switch to a dedicated search API, keep the downstream parsing logic aligned with the new response structure.
3.4 Extract LinkedIn Advice URLs
3.4.1 Code node – extract article links
Node type: Code
Purpose: Run a regular expression on the Google search HTML to capture LinkedIn Advice URLs.
Logic:
- Input: HTML returned by the Google HTTP Request node.
- Regex pattern: targets URLs matching `https://www.linkedin.com/advice/...` or similar.
- Output: An array of unique URLs that point to LinkedIn Advice articles.
This node filters out non-advice URLs and focuses only on pages under the LinkedIn Advice path.
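A minimal Code node sketch of this logic, assuming the Google HTML arrives in a field named `data` (adjust the field name to whatever your HTTP Request node actually outputs):

```javascript
// Extract unique LinkedIn Advice URLs from the Google results HTML
const html = $input.first().json.data || '';

// Match absolute LinkedIn Advice URLs; stop at quotes, whitespace, or HTML delimiters
const matches = html.match(/https:\/\/www\.linkedin\.com\/advice\/[^"'\s&<>]+/g) || [];

// Deduplicate and return a single item holding the URL array;
// the Split Out node downstream turns this into one item per URL
return [{ json: { urls: [...new Set(matches)] } }];
```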
Potential issues:
- If Google changes the HTML structure of its search results, the regex may need adjustment to continue capturing URLs reliably.
- Ensure you handle duplicates in this node or in a later deduplication step.
3.5 Split results into individual items
3.5.1 Split Out node
Node type: Split Out (Item Lists or similar)
Purpose: Convert the array of URLs from the Code node into individual n8n items so each article can be processed independently.
Each resulting item contains a single LinkedIn Advice URL. This allows n8n to handle each article in its own execution path, either sequentially or in parallel, depending on your configuration and environment.
3.6 Merge and deduplicate items
3.6.1 Merge node – dedupe
Node type: Merge
Mode: Keep Non-Matches
Purpose: Combine the newly extracted URLs with a previous set of processed items and avoid reprocessing duplicates.
Typical usage:
- Input 1: Newly discovered URLs from the current run.
- Input 2: Previously stored URLs (for example, from a database or previous workflow iteration).
- Comparison: Based on the URL field to identify duplicates.
This step is optional but recommended if you are running the workflow regularly and want to avoid generating contributions for the same article multiple times.
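If you would rather keep this logic in code, a Code node can do the same filtering; the sketch below assumes previously processed URLs were fetched earlier in the workflow by a node called `Get Processed URLs` (a hypothetical name) and that each item carries a `url` field:

```javascript
// Build a set of URLs that were already handled in earlier runs
const processed = new Set(
  $('Get Processed URLs').all().map(item => item.json.url)
);

// Keep only URLs that have not been seen before
return $input.all().filter(item => !processed.has(item.json.url));
```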
3.7 Fetch LinkedIn article HTML
3.7.1 HTTP Request – article fetch
Node type: HTTP Request
Purpose: Retrieve the raw HTML for each LinkedIn Advice article.
Key parameters:
- Method: `GET`
- URL: the LinkedIn Advice URL from the current item.
Considerations:
- LinkedIn may enforce rate limits or anti-scraping measures. Respectful intervals between requests and realistic headers can reduce the risk of being blocked.
- Monitor HTTP status codes. For example, handle `4xx` or `5xx` responses gracefully, either via n8n error workflows or conditional logic (see the sketch below), so a single failed request does not break the entire run.
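One hedged pattern is to let the HTTP Request node continue on error and then drop failed items in a small Code node before parsing; field names such as `error` and `statusCode` depend on your n8n version and response settings:

```javascript
// Filter out items whose article fetch failed so one bad URL does not stop the run
return $input.all().filter(item => {
  const status = item.json.statusCode; // present when the full response is returned
  return !item.json.error && (status === undefined || status < 400);
});
```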
3.8 Parse article title, topics, and contributions
3.8.1 HTML node – extract content
Node type: HTML
Purpose: Use CSS selectors to extract structured data from the LinkedIn Advice HTML.
Fields typically extracted:
- ArticleTitle
  - Selector: `.pulse-title` (or the specific LinkedIn title selector used in your workflow).
  - Result: The visible title of the LinkedIn Advice article.
- ArticleTopics
  - Selector: targets the main content area or a topic list element.
  - Result: The primary topics or sections that the article covers.
- ArticleContributions
  - Selector: the element(s) that contain existing user contributions or replies.
  - Result: A list or concatenated text of visible contributions, used to avoid duplication.
Edge cases:
- If LinkedIn changes the HTML structure or class names, selectors may break. In that case, update the CSS selectors in this node and re-test.
- Some articles may have few or no visible contributions. The AI prompt should handle this case without errors.
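To keep the AI step robust, a small Code node between the HTML node and the AI node can supply safe defaults; a sketch, assuming the HTML node outputs the three fields named above:

```javascript
// Provide fallbacks so the AI prompt never receives empty or missing fields
return $input.all().map(item => ({
  json: {
    ...item.json,
    ArticleTitle: item.json.ArticleTitle || 'Untitled LinkedIn Advice article',
    ArticleTopics: item.json.ArticleTopics || 'No topics extracted',
    ArticleContributions: item.json.ArticleContributions || 'No existing contributions found',
  },
}));
```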
3.9 AI-based contribution generation
3.9.1 AI node – LinkedIn Contribution Writer
Node type: OpenAI (or compatible AI node)
Purpose: Generate unique, topic-specific contributions for each LinkedIn Advice article using the extracted data.
Typical input fields to the prompt:
- `ArticleTitle`
- `ArticleTopics`
- `ArticleContributions` (existing replies to avoid repetition)
Model configuration:
- Model: for example, `gpt-4o-mini` or another OpenAI-compatible model.
- Temperature: adjust to control creativity vs. determinism.
Prompt behavior:
- Instruct the model to provide helpful advice for each topic.
- Explicitly request that it avoid repeating points already present in `ArticleContributions`.
- Optionally specify tone, length, formatting (for example, bullet points), and any brand voice guidelines (an example prompt is sketched below).
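For illustration only (not the exact prompt from the original workflow), a prompt along these lines covers the points above:

```
You are helping a marketing team engage on LinkedIn Advice articles.

Article title: {{ $json.ArticleTitle }}
Topics covered: {{ $json.ArticleTopics }}
Existing contributions: {{ $json.ArticleContributions }}

For each topic, write one short, practical contribution (2-4 sentences).
Do not repeat points already made in the existing contributions.
End each contribution with a question that invites others to share their experience.
```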
Quality considerations:
- If the AI output is too generic, refine the prompt with clearer constraints and examples.
- If responses are too long, explicitly limit character count or number of bullets.
3.10 Post results to Slack and save to NocoDB
3.10.1 Slack node – share contributions
Node type: Slack
Purpose: Send the AI-generated contributions to a Slack channel for review and collaboration.
Typical message content:
- Article title and URL
- Generated contribution text
- Topic or category
Use your Slack OAuth credentials and select the appropriate channel. This step keeps the team in the loop and ensures that contributions can be edited or approved before posting to LinkedIn.
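A hedged example of the message text, using n8n expressions (the `Contribution`, `url`, and `topic` field names are assumptions; map them to whatever your AI and upstream nodes actually output):

```
New LinkedIn Advice draft: {{ $json.ArticleTitle }}
{{ $json.url }}
Topic: {{ $json["topic"] }}

{{ $json.Contribution }}
```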
3.10.2 NocoDB node – store contributions
Node type: NocoDB (Create Row / CreateRows)
Purpose: Persist each generated contribution in a structured database for tracking and analytics.
Typical fields:
- `Post Title`
- `URL`
- `Contribution` (AI-generated text)
- `Topic`
- `Person` (owner, reviewer, or intended poster)
You can later extend the schema to include engagement metrics or posting status.
If you prefer a different storage backend, such as Airtable or Google Sheets, replace the NocoDB node with the corresponding integration node while preserving field mappings.
4. Prerequisites & configuration notes
4.1 Required services
- n8n instance
- Cloud or self-hosted deployment with access to HTTP Request, Code, HTML, Slack, and AI nodes.
- OpenAI (or compatible) API credentials
- Used by the AI node to generate contributions.
- Slack credentials
- Slack OAuth token or app credentials with permission to post to the selected channel.
- NocoDB project & API token
- Configured table to store contribution records.
- Basic knowledge of CSS selectors
- Required to maintain and adjust HTML extraction in case LinkedIn changes its DOM structure.
4.2 Google search query configuration
In the Google HTTP Request node, customize the query string to include your topic. A typical search pattern is:
site:linkedin.com/advice "Paid Advertising"
Adjust the quoted phrase to your target niche. You can also add additional keywords or filters to refine or broaden results.
5. Customization & advanced usage
5.1 Tuning the search query
- Narrow results by using quoted phrases, additional keywords, or negative keywords.
- Broaden results by removing quotes or adding related terms.
- Date filtering can be handled manually in the query or by applying additional logic downstream based on article metadata, if available.
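Illustrative query variations (adjust the phrases to your niche):

```
site:linkedin.com/advice "marketing automation" -beginner     (narrower: quoted phrase plus an excluded term)
site:linkedin.com/advice paid advertising OR PPC              (broader: unquoted terms with an OR alternative)
```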
5.2 Refining the AI prompt
To align AI-generated contributions with your brand and goals:
- Specify tone (for example, practical, friendly, analytical).
- Request short, actionable tips or more in-depth commentary depending on your strategy.
- Ask for bullet points if you prefer concise LinkedIn comments.
- Include instructions to end with a question to encourage conversation, such as asking for others’ experiences.
5.3 Changing destination storage
If you prefer a different data store:
- Airtable
- Replace the NocoDB CreateRows node with an Airtable Create or Update node.
- Google Sheets
- Use the Google Sheets node to append rows with the same field mapping (Post Title, URL, Contribution, Topic, Person).
