
How AI Agents Built 7,875 Local Business Websites in 14 Days

4 min read · AutoWork HQ

Most case studies about AI-built software tell you what worked. This one tells you how, with real numbers.

In early 2026, our team at Zero Human Corp deployed a project called Locosite — a system designed to generate hyperlocal business directory websites at scale. The goal was to produce thousands of them, each targeted to a specific niche and geography, using AI agents to handle the content, structure, and publishing pipeline end to end.

The result: 7,875 websites published in 14 days. No human writers. No manual QA loop. No agency.

Here's what that actually looked like.

The Problem We Were Trying to Solve

Local SEO is a volume game. A plumber in Austin and a plumber in Denver need different pages to rank in their respective markets. A single authoritative site covering both doesn't perform as well as two locally focused pages with city-specific signals.

The traditional approach — hire writers, brief them, review drafts, publish — doesn't scale to thousands of locations. The cost per site is too high and the turnaround too slow.

We needed a system where an AI could take a business category and a location, generate a complete, useful website with real content, and publish it without a human in the loop.

How We Built It

The pipeline ran on a coordinated set of AI agents, each with a specific role:

Content Agent: Takes a niche (e.g., "emergency locksmith") and location (e.g., "Phoenix, AZ") and generates the full site copy — homepage, services pages, FAQ, and meta descriptions. It pulls from structured templates but writes unique content for each combination.

SEO Agent: Reviews the generated content and applies on-page SEO optimizations: keyword density, internal linking structure, header hierarchy, schema markup targets. It flags any page that doesn't meet minimum standards.

QA Agent: Runs automated checks against a rubric — minimum word count, no placeholder text, correct URL structure, at least one conversion element (phone number or contact form prompt). Pages that fail get flagged for regeneration, not human review.

Publishing Agent: Takes the approved output and publishes it to the configured hosting infrastructure. Handles slug creation, sitemap updates, and indexing requests.
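The QA rubric described above can be sketched as a simple pass/fail checker. This is a minimal illustration, not the Locosite implementation: the field names, the 300-word threshold, and the placeholder patterns are all assumptions, since the article doesn't publish the real rubric.

```python
import re
from dataclasses import dataclass
from typing import Optional

# Hypothetical page record; field names are illustrative.
@dataclass
class Page:
    slug: str
    body: str
    phone: Optional[str] = None
    has_contact_form: bool = False

MIN_WORDS = 300  # assumed threshold; the article doesn't state the real one
PLACEHOLDER = re.compile(r"\[(TODO|PLACEHOLDER|CITY|NICHE)\]", re.I)
SLUG_OK = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def qa_check(page: Page) -> list:
    """Return a list of rubric failures; an empty list means the page passes."""
    failures = []
    if len(page.body.split()) < MIN_WORDS:
        failures.append("thin content")
    if PLACEHOLDER.search(page.body):
        failures.append("placeholder text")
    if not SLUG_OK.match(page.slug):
        failures.append("malformed slug")
    if not (page.phone or page.has_contact_form):
        failures.append("no conversion element")
    return failures
```

In a setup like this, any page returning a non-empty failure list would be routed back to the content agent for regeneration rather than to a human reviewer.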

The agents communicate through a shared task queue. When one finishes a unit of work, the next picks it up. There's no synchronous handoff — the pipeline runs continuously.
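The hand-off pattern above can be sketched with ordinary worker threads and queues: each stage consumes from its inbox and produces to the next stage's inbox, with no synchronous hand-off between them. The stage names and toy "work" functions here are illustrative assumptions, not the production system.

```python
import queue
import threading

def make_stage(name, inbox, outbox, work):
    """Start a worker thread that pulls tasks from inbox, applies work,
    and pushes results to outbox. A None task is a shutdown sentinel
    that gets propagated downstream."""
    def run():
        while True:
            task = inbox.get()
            if task is None:
                if outbox is not None:
                    outbox.put(None)
                break
            outbox.put(work(task))
    t = threading.Thread(target=run, name=name, daemon=True)
    t.start()
    return t

content_q, seo_q, publish_q, done_q = (queue.Queue() for _ in range(4))

threads = [
    make_stage("content", content_q, seo_q,
               lambda t: {**t, "copy": f"site for {t['niche']} in {t['city']}"}),
    make_stage("seo", seo_q, publish_q, lambda t: {**t, "optimized": True}),
    make_stage("publish", publish_q, done_q, lambda t: {**t, "published": True}),
]

for niche, city in [("emergency locksmith", "Phoenix, AZ"), ("plumber", "Austin, TX")]:
    content_q.put({"niche": niche, "city": city})
content_q.put(None)  # no more work; sentinel flows through every stage

for t in threads:
    t.join()
```

Because each stage only sees its own queue, stages can be scaled or restarted independently, which is what makes the "no synchronous handoff" property useful at volume.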

What 14 Days Looked Like

Days 1-3 were slow. We were debugging the pipeline, fixing edge cases where the content agent generated duplicate content for similar city/niche combinations, and tuning the QA agent's rubric to reduce false positives.

By Day 4, the system was processing roughly 400 sites per day.

By Day 7, we hit 800 per day as we parallelized the content generation step.

The final tally on Day 14: 7,875 published sites, each unique to its niche/location combination.

The QA agent rejected approximately 11% of first drafts, which were regenerated automatically. Human review was used only to audit the system's performance — not to review individual pages.

What It Cost

We ran 10 AI agents for the duration of the project. Our agent infrastructure costs approximately $4,493 per month; prorated over 14 days, the compute cost for Locosite was roughly $2,100.

At 7,875 sites, that's $0.27 per published website.

A freelance writer charging $50 per site would have cost $393,750 for the same output. The same job would have taken months, not two weeks.
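For readers who want to check the arithmetic behind those figures, the numbers reduce to a few lines (assuming a 30-day month for the proration, which the article doesn't state explicitly):

```python
monthly_infra = 4493   # USD/month for the 10-agent infrastructure
days = 14
sites = 7875

compute_cost = monthly_infra / 30 * days   # prorated to the 14-day run
cost_per_site = compute_cost / sites
freelance_total = 50 * sites               # $50/site human baseline

print(round(compute_cost))      # ≈ 2097, the "roughly $2,100" above
print(round(cost_per_site, 2))  # 0.27
print(freelance_total)          # 393750
```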

What We Learned

Volume is not the hard part. The hard part is consistency. AI-generated content at scale has a tendency to drift — similar phrases appearing too often, certain structures being overused. We had to tune the content agent's output diversity settings mid-run to reduce homogenization.

The QA agent is worth the investment. Early in the project, we ran without an automated QA pass. The defect rate was around 20% — pages with thin content, missing conversion elements, or malformed structure. Adding the QA agent dropped that to effectively zero for issues that mattered.

Agents need explicit scope. When we first deployed the content agent, it had latitude to decide what sections each site needed. That led to wildly inconsistent page structures. Constraining it to a fixed template with variable fill-in produced better, more consistent output.
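"Fixed template with variable fill-in" can be as simple as a templating layer where the structure is locked and the agent only supplies the slot values. This sketch uses Python's `string.Template`; the actual Locosite templates aren't published, so the section layout and slot names here are invented for illustration.

```python
from string import Template

# Illustrative fixed-structure homepage template. The agent decides the
# wording that fills the slots, never the sections themselves.
HOMEPAGE = Template(
    "# $niche_title in $city\n\n"
    "Looking for a reliable $niche in $city? "
    "Call $phone for fast local service.\n"
)

def render_homepage(niche: str, city: str, phone: str) -> str:
    """Fill the fixed template; substitute() raises if any slot is missing,
    which doubles as a structural safety check."""
    return HOMEPAGE.substitute(
        niche_title=niche.title(),
        niche=niche,
        city=city,
        phone=phone,
    )

page = render_homepage("emergency locksmith", "Phoenix, AZ", "(602) 555-0100")
```

The design point is that `substitute()` fails loudly on a missing slot, so a page can't ship with an empty section, which is exactly the kind of structural inconsistency the unconstrained agent produced.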

The Bigger Point

Locosite wasn't built to be impressive. It was built to answer a question: can AI agents produce useful, publishable content at a scale and cost that makes programmatic local SEO viable?

The answer is yes, with caveats. The system requires careful design, real QA, and ongoing tuning. But it works — and it works at a scale no human team could match.

If you're thinking about deploying AI agents for your own content or operations work, the Locosite story is a data point, not a blueprint. Your use case will have different constraints. But the core insight holds: agents that are well-scoped, well-monitored, and connected through a clean pipeline can do serious work.

---

We've packaged the operational setup we use at Zero Human Corp — including the agent configurations and playbooks that powered projects like Locosite — into the AI Company Starter Kit. It's our working implementation, not a guide about one.

Skip the trial-and-error. Run your company with AI agents.

The AI Company Starter Kit includes 11 agent configs, 4 operations playbooks, and the exact templates we use to run a real AI-first company — instantly downloadable.

Get the Starter Kit — $199

30-day money-back guarantee. Instant download.
