Oct 12, 2025

Build an AI Newsletter Agent with n8n

If you run a tech newsletter or want to automate a regular briefing about AI, you can combine n8n, cloud storage, and modern LLMs to create a reliable pipeline that selects top stories, writes concise segments, and prepares a ready-to-send newsletter. Below I break down the architecture, the key nodes and steps in the workflow, and best practices for production-ready automation.

Why automate a newsletter?

Automating your newsletter saves time, enforces consistency, and makes it easier to scale coverage. With the right setup an automated pipeline can:

  • Aggregate content from markdown files, tweets, and external sources
  • Rank and select the most relevant stories for your audience
  • Use an LLM to write short, Axios-style segments and subject lines
  • Export the final newsletter as a markdown file and share it to Slack for approvals

High-level architecture

The workflow implements a clear, modular architecture. Here are the main logical layers and how they map to n8n nodes:

1. Input ingestion

Sources: markdown content files and tweet dumps stored in an object store (S3/R2). The workflow begins with a form trigger that sets the newsletter date and optionally accepts the previous newsletter content to avoid duplicates.

  • Search and download: S3 search nodes scan a date-prefixed bucket and download markdown/tweet objects.
  • Metadata & filtering: HTTP requests fetch metadata for each object so the workflow can exclude newsletter drafts and non-markdown files.
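
As a rough illustration, the filtering step might look like this inside an n8n Code node. This is a sketch in TypeScript; the object shape (`Key`, `ContentType`) mirrors typical S3 listing output, and the draft-naming convention is an assumption, not something fixed by n8n.

```typescript
// Hypothetical shape of one S3 listing entry; adjust to your bucket's output.
interface S3Object {
  Key: string;          // e.g. "2025-10-12/story-openai-update.md"
  ContentType?: string; // e.g. "text/markdown"
}

// Keep only markdown files under the target date prefix, excluding drafts.
function selectIngestibleObjects(objects: S3Object[], datePrefix: string): S3Object[] {
  return objects.filter(
    (obj) =>
      obj.Key.startsWith(datePrefix) &&
      obj.Key.endsWith(".md") &&
      !obj.Key.includes("newsletter-draft") // assumed draft naming convention
  );
}

// Example: selectIngestibleObjects(listing, "2025-10-12/")
```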

2. Content aggregation

Downloaded content is normalized into a structured string or object: identifier, friendly type, authors, external source URLs, and the full body text. The workflow aggregates this into a single bundle that will be used by the selection and writing stages.
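
A minimal sketch of that normalization and bundling step, assuming each object has already been extracted to text. The `ContentItem` fields follow the description above; the exact names are illustrative, not an n8n schema.

```typescript
interface ContentItem {
  identifier: string;                // object key or slug
  friendlyType: "article" | "tweet";
  authors: string[];
  externalSourceUrls: string[];
  body: string;                      // full extracted text
}

// Bundle all normalized items into one block for the selection/writing stages.
function aggregateContent(items: ContentItem[]): string {
  return items
    .map(
      (item) =>
        `### ${item.identifier} (${item.friendlyType})\n` +
        `Authors: ${item.authors.join(", ") || "unknown"}\n` +
        `Sources: ${item.externalSourceUrls.join(" ") || "none"}\n\n` +
        item.body
    )
    .join("\n\n---\n\n");
}
```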

3. Selection: pick top stories

This part uses an LLM node (LangChain-style, or a Gemini/Claude integration) with a carefully crafted prompt to act as an editor. The model reads the aggregated content and must:

  • Choose a lead story and three additional stories (total four)
  • Provide short reasons for selection and include identifiers
  • Produce a subject line and pre-header text optimized for open rates

Important: keep the selection logic strict about avoiding duplicates, not republishing previously covered stories, and preferring substantive sources.
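
It helps to pin the selection output to a machine-checkable schema and reject anything off-spec before it reaches the writing stage. A sketch, assuming the prompt asks the model to reply with JSON in this shape (the field names are my own, not fixed by any LLM provider):

```typescript
interface StoryPick {
  identifier: string; // must match an identifier from the aggregated bundle
  reason: string;     // one-line editorial justification
}

interface SelectionResult {
  lead: StoryPick;
  additional: StoryPick[]; // exactly three
  subjectLine: string;
  preHeader: string;
}

// Validate the model's reply before the writing stage consumes it.
function parseSelection(raw: string, knownIds: Set<string>): SelectionResult {
  const parsed = JSON.parse(raw) as SelectionResult;
  if (parsed.additional.length !== 3) {
    throw new Error("Expected exactly three additional stories");
  }
  for (const pick of [parsed.lead, ...parsed.additional]) {
    if (!knownIds.has(pick.identifier)) {
      throw new Error(`Unknown identifier: ${pick.identifier}`);
    }
  }
  return parsed;
}
```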

4. Writing each segment

Once the top stories are selected, the workflow iterates over each story and:

  • Gathers all identifiers and source texts for a single topic
  • Fetches external source URLs when present and scrapes them if additional context is needed
  • Invokes a dedicated LLM prompt to write an “Axios-like” segment with a “The Recap:” opener, an unpacked bullet list, and a two-sentence “Bottom line”

The LLM prompt enforces formatting rules, short bullets, and strict link usage so the output is consistent and easily compiled into the newsletter body.
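
Because the format is fixed, each generated segment can be checked mechanically before it is compiled into the body. A rough structural check, assuming the markers described above (“The Recap:”, “Bottom line”) appear bolded in the output:

```typescript
// Lightweight structural check for a generated segment. The markers are
// taken from the format described above; adjust if your prompt differs.
function isValidSegment(segment: string): boolean {
  const hasRecap = /\*\*The Recap:?\*\*/i.test(segment);
  const hasBullets = /^\s*[-•*]\s+/m.test(segment);
  const hasBottomLine = /\*\*Bottom line:?\*\*/i.test(segment);
  return hasRecap && hasBullets && hasBottomLine;
}
```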

5. Intro and “The Shortlist” sections

Another LLM node composes the newsletter intro. The intro follows a fixed structure: a dynamic greeting, two short paragraphs, the exact transition phrase “In today’s AI recap:”, and a short bullet list of the main items.

In parallel, a separate prompt compiles a shortlist of other notable AI stories using a strict URL-handling policy — only verbatim URLs from the provided sources are allowed.
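
One way to enforce that policy after generation is to strip any markdown link whose URL does not appear character-for-character in the source material. A sketch:

```typescript
// Drop any markdown link whose URL is not present verbatim in the sources,
// keeping the link text so the sentence still reads cleanly.
function enforceVerbatimUrls(shortlist: string, sourceText: string): string {
  return shortlist.replace(
    /\[([^\]]+)\]\((https?:\/\/[^)]+)\)/g,
    (match: string, label: string, url: string) =>
      sourceText.includes(url) ? match : label
  );
}
```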

6. Approvals, export, and delivery

  • Slack integration: The pipeline posts the selected stories, subject line options, and editorial reasoning to a Slack channel for human review and approval.
  • File export: The final newsletter is assembled as markdown and converted to a file for upload to Slack or storage.
  • Optional steps: image extraction nodes pull direct image URLs (jpg/png/webp) for use in the email builder.
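
For the last two steps, a sketch of how the pieces might be assembled and the image URLs pulled out, assuming the newsletter parts arrive as plain markdown strings:

```typescript
// Assemble the full newsletter body from its parts before export.
function assembleNewsletter(intro: string, segments: string[], shortlist: string): string {
  return [intro, ...segments, "## The Shortlist", shortlist].join("\n\n");
}

// Pull direct image URLs (jpg/png/webp) out of the assembled markdown,
// mirroring what the optional image-extraction step does.
function extractImageUrls(markdown: string): string[] {
  const matches = markdown.match(/https?:\/\/[^\s)"']+\.(?:jpe?g|png|webp)/gi);
  return matches ? Array.from(new Set(matches)) : [];
}
```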

Key n8n nodes and tips

Core nodes used

  • Form Trigger – capture the target date and previous content
  • S3 (search & download) – retrieve markdown and tweet objects by prefix
  • HTTP Request – get object metadata and external source URLs
  • ExtractFromFile – convert downloaded objects into text
  • LangChain / LLM nodes – run selection, segment writing, intro, subject-line generation
  • SplitInBatches & SplitOut – iterate stories and identifiers safely
  • Aggregate & Set nodes – combine multiple pieces of content into a single newsletter body
  • Slack – send for review and upload final files

Prompt engineering tips

  • Make the selection prompt prescriptive: require identifiers, reasons for including/excluding, and a fixed-length list (four stories).
  • For story-writing prompts, give explicit format requirements (bolded headings, bullet styles, exact transition phrases) so downstream parsing is predictable; an illustrative prompt follows this list.
  • Limit hallucination by instructing the model to only use facts present in the provided inputs and to avoid inventing links.
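
For concreteness, here is a prompt fragment in that prescriptive style, written as a TypeScript template. The wording is illustrative, not the exact prompt from the workflow:

```typescript
// Illustrative segment-writing prompt; adapt the rules to your style guide.
const segmentPrompt = (storyBundle: string): string => `
You are writing one newsletter segment in an Axios-like style.

Format requirements (follow exactly):
1. Open with "**The Recap:**" followed by one sentence.
2. Then 3-5 short bullets, each under 25 words.
3. Close with "**Bottom line:**" followed by exactly two sentences.

Rules:
- Use only facts present in the source material below.
- Only include URLs that appear verbatim in the sources. Never invent links.

Source material:
${storyBundle}
`.trim();
```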

Testing, monitoring, and safety

Before using the pipeline live, create several test inputs that reflect the wide variety of content you ingest (long articles, short tweets, external blog posts). For robust operation:

  • Validate all external URLs and only include those verbatim from the source materials.
  • Log LLM outputs and compare them to human-written baselines to catch style drift or hallucinations (a minimal logging sketch follows this list).
  • Use Slack approvals as a gating step to prevent accidental publishing of bad content.
  • Monitor costs for LLM calls and batch work to reduce consumption when possible.
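
A minimal shape for that audit logging, written as JSON lines so runs can be diffed against baselines later; the character counts double as a rough proxy for token cost. The file name and fields are assumptions:

```typescript
import { appendFileSync } from "node:fs";

interface LlmAuditEntry {
  stage: "selection" | "segment" | "intro" | "shortlist";
  timestamp: string;
  inputChars: number;  // rough proxy for token cost
  outputChars: number;
  output: string;
}

// Append one audit record per LLM call as a JSON line.
function logLlmCall(stage: LlmAuditEntry["stage"], input: string, output: string): void {
  const entry: LlmAuditEntry = {
    stage,
    timestamp: new Date().toISOString(),
    inputChars: input.length,
    outputChars: output.length,
    output,
  };
  appendFileSync("llm-audit.jsonl", JSON.stringify(entry) + "\n");
}
```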

Best practices and editorial guardrails

  • Always keep a human-in-the-loop for the subject line and lead story verification.
  • Enforce a strict blacklist of phrases and an editorial style guide in the prompts.
  • Keep the LLM prompts modular so you can update tone or rules independently for subject lines, intros, and segments.
  • Keep identifiers and external_source_links unchanged once selected so downstream systems can reference the same sources reliably.

Conclusion

This workflow is a robust blueprint for a modern AI-powered newsletter pipeline. It combines structured ingestion, LLM-based editorial decisions, repeatable content generation, and Slack-based approvals to deliver a fast, reliable editorial process. With careful prompt engineering, testing, and guardrails, this design can save hours of manual work while maintaining high editorial quality.

Ready to build it? If you’d like, I can:

  • Review your actual n8n workflow JSON and highlight nodes to change for resilience.
  • Draft the exact prompts used for selection, segment writing, and subject-line generation.
  • Create a test plan and monitoring checklist for production usage.

Contact me with the workflow JSON and a sample markdown input and I’ll produce the exact prompts and a step-by-step deployment guide.
