Stack & Playbook

AI Automation Stack

How I build it, what I've learned, and where it's going. A practitioner's guide to building reliable AI automation systems.

This is a living document. I update it as the stack evolves, tools change, and I learn what works and what doesn't. The core principles stay, the specifics shift.

Why I'm Writing This

Most automation content online falls into the same pattern: a YouTube tutorial that works in the demo, fails in practice, and covers maybe 20 to 30% of what you actually need. The rest you rebuild yourself. Templates are too basic. Edge cases are ignored. And nobody talks about what it really takes to get a workflow running reliably enough to forget it exists.

This is my attempt at the version I wish existed when I started.

Before You Build: Does It Actually Need Automation?

Not everything should be automated. Before touching a tool, I ask three questions:

  1. Is this high volume or highly repeatable? If you need to do something once or twice, ask an AI directly. Claude, ChatGPT, whatever. Copy the output. Done. Automation only makes sense when the same process runs tens or hundreds of times.
  2. Are the edge cases manageable? If a process has too many variations, automation makes it worse. It fails silently on the cases it wasn't built for. It is better to handle those manually until the pattern is clear enough to encode.
  3. Does it actually need AI, or does it just need logic? This one took me time to learn. Early on I pushed AI into every step. The results were less predictable, more expensive (tokens add up), and harder to debug. JavaScript or Python is often the better call: deterministic, no hallucinations, faster, cheaper. I have replaced AI nodes with code nodes many times and got better results.

The framework: use AI where judgment, language, or pattern recognition is required. Use code where the logic is clear and the output needs to be exact.

The Tool Switching Tax

I started in Make.com. Then moved to n8n. Now I run workflows in both, depending on when they were built and what they do.

Switching tools mid-stack is expensive. You carry the old automations while rebuilding the new ones. Context switches between two different systems slow everything down. The lesson: pick one tool, learn it deeply, and only expand when you genuinely hit its limits. The best tool is not the most powerful one. It is the one you know well enough to build and debug fast.

That said, Make and n8n serve different purposes and can coexist:

  • Make for lighter, SaaS-to-SaaS flows where simplicity matters
  • n8n for complex, multi-step, AI-heavy workflows where you need full control

If starting fresh: begin with whichever has native integrations for the tools you use. HTTP requests and custom API setups can come later. Start with what connects without configuration overhead, get something working, then rebuild it properly once you understand the shape of the problem.

The Foundation Stack

This is what I build on. It changes at the edges depending on the use case: AI calling adds VAPI, complex data pipelines might add a vector database, LinkedIn outreach changes the scraping approach. The core stays constant.

Orchestration

Orchestration tools and their roles:

  • n8n: Primary. Complex, multi-step workflows with branching logic, API chains, and AI calls. Self-hosted. Full control.
  • Make.com: Secondary. Lighter SaaS-to-SaaS connections where n8n is overkill.
  • GoHighLevel (GHL): CRM and comms layer. Inbound triggers, pipeline moves, lead nurture, email, SMS, WhatsApp. Fewer external dependencies means fewer breakpoints.

AI & Reasoning

AI and reasoning tools and their roles:

  • Claude API: Long-form generation, reasoning, classification.
  • OpenAI API: Generation, secondary model depending on task.
  • Google APIs: Vision and specific generation use cases.
  • Perplexity: Live web research and data enrichment. Pulls current information at the point the workflow runs, not from a static training set.
  • Claude Code: Building and editing code, workflows, and apps from the terminal. Keeps the build loop fast when working alongside automations.

All AI tools connect via HTTP nodes inside n8n or Make, or directly from the terminal via Claude Code. Model choice depends on the task, not preference.

CRM & Data

CRM and data tools:

  • GoHighLevel CRM: Contact and deal state, pipeline, communication history, lead enrichment.

Any CRM with API access works here. GHL covers the most ground for the price: CRM, pipeline, email, SMS, WhatsApp, automations, and a website builder in one. I cover CRM-driven outreach in detail in the ABM & Outbound System playbook. Lead enrichment runs automatically on new contacts: n8n fires on new contact creation, pulls company, role, and LinkedIn data, writes it back to custom fields.
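The write-back step in that enrichment flow can be sketched as a small transform in an n8n Code node. The field names and the shape of the enrichment response here are assumptions for illustration, not GHL's actual schema:

```javascript
// Sketch: map an enrichment API's response onto CRM custom fields.
// Field names (company_name, job_role, linkedin_url) are hypothetical.
function mapEnrichmentToCustomFields(enriched) {
  return {
    company_name: enriched.company || "",
    job_role: enriched.role || "",
    linkedin_url: enriched.linkedin || "",
    enriched_at: new Date().toISOString(), // when the workflow ran
  };
}

// In an n8n Code node this would run once per incoming item, e.g.:
// return items.map(i => ({ json: mapEnrichmentToCustomFields(i.json) }));
```

The downstream node then PUTs this object to the CRM's contact endpoint, so a human opening the record sees the enriched fields already populated.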

Sites & Apps

Sites and apps tools:

  • WordPress: Content sites and blogs. Large ecosystem, battle-tested.
  • Replit: Where new things get built. Faster to ship, easier to iterate.

WordPress works for content. I use n8n to automate social media distribution from WordPress as one example. For building anything new it is slow: plugins, themes, manual configuration at every step. Replit moves faster. More importantly, n8n can send webhooks directly into Replit apps, which pick them up and execute pre-programmed logic. The automation layer and the app layer become one connected system. Build something with AI in Replit, wire n8n into it, and content or data flows in automatically on a trigger or schedule. For anything new, Replit is the default.

Scraping & Research

Scraping and research tools:

  • Browse.ai: Monitoring specific web pages on a schedule. Competitor pages, pricing changes, content updates.
  • Apify: API-driven scraping. Runs from inside n8n flows. LinkedIn and broader web scraping.

I wrote about one practical application of this approach in scraping qualified leads from Slack communities; the same enrichment pipeline applies to any lead source.

How to Actually Build This

Break it into the smallest possible steps

The most common mistake is trying to automate too much at once. I built an SEO content workflow that covers keyword input, research, outlining, writing, internal link checking, internal link insertion, external link insertion, slug generation, meta title generation, image brief generation, image generation, image compression and conversion, HTML formatting, and publish. That is not one workflow. It is fourteen decisions, each of which can fail independently.

Every step that can fail needs to be isolated, tested, and confirmed before connecting to the next one. When something breaks in a 50-step workflow, finding the problem is significantly harder than finding it in a 5-step one.

The approach that works: build the smallest unit that produces a useful output. Confirm it works. Then connect the next unit. Build linearly, test constantly, and never add a new step until the previous one is solid.

Start simple, then rebuild

If a native integration exists, use it first. HTTP requests and custom API setups add configuration overhead and more failure points. Get the logic working with the simplest possible connection, understand the data shape, then replace it with the more advanced version once you know what you actually need.

What This Stack Can Do

A few examples of what runs without manual input:

  • SEO content engine: Keyword input triggers research, outline, writing, internal links, external links, meta fields, image generation, and publishing. What used to take two to three days runs in about ten minutes of waiting.
  • Inbound lead handling: A form fill, booking, or email reply fires a GHL workflow. Pipeline updates, follow-up sequences, and notifications run automatically.
  • Lead enrichment: New contacts get enriched with company, role, and live context data before any human sees them.
  • Scheduled web monitoring: Competitor pages, pricing, and content tracked on a schedule.
  • LinkedIn scraping: Apify actors run via n8n on demand, feeding data into downstream steps.
  • App automation: n8n sends webhooks into Replit apps. The automation and the product are directly connected.

The Part Nobody Talks About

Building the automation is roughly 30 to 40% of the work.

The rest is fine-tuning: getting AI models to produce consistently accurate, predictable outputs across every edge case. A workflow that works 80% of the time is not a working workflow. It is a source of errors that now runs automatically, at volume, without you noticing.

The real investment is in prompt engineering, output validation, edge case identification, and iteration. This is where most automation projects stall or get abandoned. It takes longer than expected. It requires patience with things that almost work but do not yet work reliably enough to trust unsupervised.

And this is specifically where AI and automation part ways with simple scripting: the variance. Code does exactly what you write. AI produces a distribution of outputs, and your job is to narrow that distribution until it is tight enough to call reliable. That process is not glamorous, but it is the difference between a proof of concept and a system you can actually forget is running.
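Narrowing the distribution in practice means checking every model output against a contract before it moves downstream, and branching to a retry or review path when it fails. A minimal sketch; the expected fields and label set are hypothetical:

```javascript
// Validate a model's raw text output against an expected JSON contract.
// Returns { ok, value } or { ok, error } so the workflow can branch.
function validateModelOutput(raw) {
  let parsed;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return { ok: false, error: "not valid JSON" };
  }
  // Hypothetical contract: a classification with a bounded label set.
  const allowed = ["lead", "support", "spam"];
  if (!allowed.includes(parsed.label)) {
    return { ok: false, error: "unexpected label: " + parsed.label };
  }
  if (typeof parsed.confidence !== "number" || parsed.confidence < 0 || parsed.confidence > 1) {
    return { ok: false, error: "confidence out of range" };
  }
  return { ok: true, value: parsed };
}
```

Every AI step that feeds another step gets a gate like this; the failures it catches are exactly the variance you are trying to squeeze out.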

AI vs Code: A Practical Rule

Use AI where judgment or language is required. Use code where the output needs to be exact.

Questions to ask before adding an AI node:

  • Could a simple if/else or regex handle this?
  • Does the output need to be in a specific format every time?
  • Will a wrong output here break the next step?

If yes to any of those: write code. JavaScript or Python inside an n8n function node is deterministic, has no hallucinations, runs faster, and costs nothing in tokens. I replaced AI nodes with code nodes many times and got more reliable results. The temptation early on is to use AI everywhere because it feels more powerful. It is not. It is more flexible, which is only useful when you need flexibility.
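To make the first question concrete: routing an inbound message by email domain or a keyword match needs no model at all. A sketch of the kind of logic I mean; the routing rules themselves are made up for illustration:

```javascript
// Deterministic routing an AI node was once tempted to do.
// A regex plus if/else: same input, same output, every time, zero tokens.
function routeInbound(message) {
  const domainMatch = message.email.match(/@([^@\s]+)$/);
  const domain = domainMatch ? domainMatch[1].toLowerCase() : null;

  if (!domain) return "review"; // malformed email -> human review
  if (["gmail.com", "outlook.com"].includes(domain)) return "b2c";
  if (/unsubscribe|stop emailing/i.test(message.body)) return "suppress";
  return "b2b";
}
```

Dropped into an n8n function node, this replaces an AI classification call with something that never hallucinates a fourth category.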

Where This Is Heading

A year ago, a lot of what this stack does was not practical. The models were not capable enough, the tooling was immature, and getting reliable outputs at any scale required significant workarounds.

That has changed fast.

The convergence of capable AI models, flexible orchestration tools, and a growing ecosystem of APIs means that what used to require an engineering team can now be built by one person who understands the domain and knows the tools. For someone who works in growth marketing and can also build: the gap between having an idea and having it running is now measured in hours, not sprints.

The people who will be uncomfortable with this are the ones who have not started experimenting. The systems are only getting more connected and more capable. The learning curve does not get easier by waiting.

Not everything should be automated, and not every automation should use AI. But the ability to identify which processes are worth automating, build them to a standard where they run without supervision, and iterate on them as the tools improve: that is a compounding advantage that is very hard to replicate without having actually done the work.

Tools Worth Considering

These are not in the current stack but worth knowing. Each fills a specific gap depending on where the stack is pushed.

Additional tools worth considering:

  • Supabase (persistent data layer): GHL stores contact state. It does not store workflow run history, enrichment data, or cross-system state. Supabase fills that gap: run logs, enrichment records, state shared across tools and projects. Free tier is enough to start.
  • Proxycurl (LinkedIn data enrichment): API-first LinkedIn data pull. No browser emulation. Clean per-call pricing. Best option when you need reliable LinkedIn data on demand inside a workflow.
  • Phantombuster (LinkedIn automation at scale): Purpose-built for LinkedIn: profile scraping, connection exports, message sequences. No-code setup, scheduled runs. Useful when LinkedIn is a primary outreach channel.
  • VAPI (AI voice calling): Adds an AI calling layer to the stack. Handles inbound and outbound calls with AI agents. Relevant when phone-based outreach or support is part of the use case.
  • n8n Queue Mode with Redis (high-volume flow reliability): Default n8n runs jobs in-process. Queue mode moves execution to Redis-backed workers: proper retry logic, rate limiting, no parallel runs stepping on each other. Worth adding when any flow hits meaningful volume.
  • Clearbit / Apollo (lead enrichment alternatives): Apollo has a larger database for B2B enrichment. Clearbit is cleaner for tech companies. Either connects to n8n via HTTP and writes back to GHL custom fields on new contact creation.
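Queue mode is switched on through environment variables shared by the main instance and its workers. A minimal sketch based on n8n's documented settings; the Redis host and concurrency value are placeholders:

```shell
# Shared by the n8n main instance and every worker (values are placeholders).
export EXECUTIONS_MODE=queue
export QUEUE_BULL_REDIS_HOST=redis.internal
export QUEUE_BULL_REDIS_PORT=6379

# Then start one or more workers alongside the main instance:
# n8n worker --concurrency=10
```

The main instance keeps handling triggers and the UI; execution moves to the workers, which is what makes retries and rate limiting behave under volume.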

Where I'd Take It Next

The stack works. These are the additions that make it more robust.

1. Error alerting

Silent failures go undetected until something is visibly missing. An error trigger node on every critical n8n workflow, piped to Slack with workflow name, error, timestamp, and input data. Two to three hours of setup. Without this, you are monitoring by noticing absence.
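The Slack message itself is just a formatted payload assembled downstream of the error trigger. A sketch of what that node could build; the field names on the error object are assumptions about the upstream data, not n8n's exact schema:

```javascript
// Build a Slack alert body from an error-trigger payload.
// err's field names (workflowName, message, timestamp, input) are assumptions.
function buildErrorAlert(err) {
  return {
    text: [
      "Workflow failed: " + err.workflowName,
      "Error: " + err.message,
      "At: " + err.timestamp,
      // Truncate large inputs so the alert stays readable in Slack.
      "Input: " + JSON.stringify(err.input).slice(0, 500),
    ].join("\n"),
  };
}

// Posting it is one HTTP request to a Slack incoming webhook URL:
// await fetch(SLACK_WEBHOOK_URL, { method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildErrorAlert(err)) });
```

One error workflow like this, attached to every critical flow, turns silent failures into a ping within seconds of the break.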

2. AI quality gate in content workflows

A second AI call at the end of any generation step, checking output against a checklist before it publishes. Fails route to a review queue. Catches issues before they go live rather than after.
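Branching on that second call means parsing its verdict defensively, since the reviewer is itself a model. A sketch; the checklist items and the verdict shape are hypothetical, and anything unparseable routes to human review rather than publishing:

```javascript
// Gate a generated draft on a reviewer model's verdict before publish.
// The reviewer is prompted to return strict JSON: { pass, failures: [...] }.
function parseGateVerdict(raw, checklist) {
  let verdict;
  try {
    verdict = JSON.parse(raw);
  } catch {
    // The quality gate is itself AI output, so it gets the same skepticism.
    return { pass: false, failures: ["reviewer returned non-JSON output"] };
  }
  // Only accept failures that map to known checklist items; anything else
  // is treated as a reviewer error and sent to the review queue.
  const known = (verdict.failures || []).filter((f) => checklist.includes(f));
  const pass = verdict.pass === true && known.length === 0;
  return { pass, failures: pass ? [] : known.length ? known : ["unreviewable"] };
}
```

Passes publish; anything else lands in the review queue with the failed checklist items attached.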

3. Scheduled monitoring flows

  • Daily: workflow execution volume vs baseline. A drop signals something stopped.
  • Weekly: contacts, revenue movement, key site metrics in one Slack message.
  • Keyword rank monitor: alert when a target position drops past a threshold.
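The daily volume check reduces to comparing today's count against a trailing baseline. A sketch, assuming the counts per day already come from wherever execution data lives (n8n's execution list, a log table, or similar); the 50% threshold is an arbitrary starting point:

```javascript
// Flag a drop in daily execution volume against a trailing 7-day baseline.
// `history` is executions per day, oldest first, today last.
function volumeDropAlert(history, threshold = 0.5) {
  if (history.length < 8) return null; // need a week of baseline plus today
  const today = history[history.length - 1];
  const window = history.slice(-8, -1); // the 7 days before today
  const baseline = window.reduce((a, b) => a + b, 0) / window.length;
  if (baseline > 0 && today < baseline * threshold) {
    return "Execution volume dropped: " + today + " today vs ~" +
      Math.round(baseline) + "/day baseline";
  }
  return null; // within normal range, stay quiet
}
```

A null result means no message; anything else goes straight into the daily Slack digest.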

4. Persistent data layer

GHL handles contact state well. It does not store workflow history or cross-system state. Supabase as a lightweight data store fills that gap: run logs, enrichment records, state shared across tools and projects.

Architecture Overview

The stack, top to bottom:

  • Triggers
  • Orchestration
  • AI Reasoning (called via HTTP from n8n/Make, or from the terminal via Claude Code) and Code Nodes (where determinism matters more than flexibility)
  • Data
  • Sites & Apps
  • Actions
  • Scraping
  • Observability

Data flows top → bottom. Triggers fire orchestration, which calls AI or code, reads/writes data, and executes actions.

Next Steps

If you're thinking about building something like this, start small. Pick one repeatable process, automate it end to end, and get it running reliably before moving on. The stack grows organically from there.

  1. Identify one workflow you run manually more than 5 times a week.
  2. Map the steps. Write down every input, decision, and output.
  3. Build the smallest version in n8n or Make. Get it working before optimizing.
  4. Add AI only where judgment or language is required. Use code everywhere else.
  5. Monitor it for a week. Fix the edge cases. Then forget it exists.

This stack powers the outreach and enrichment behind the ABM & Outbound System

See real-world results from applying these tools: Case Studies

Want help building an automation stack like this? Get in touch.