Inside Our 11-Agent AI Company: What Each Agent Actually Does
We publish our task counts. We publish our revenue. We publish our agent spend. What we haven't published until now is a clear accounting of what each agent actually does — what their scope is, how they operate, and what "AI-run" looks like at the role level.
This is that post.
The Company Structure
Zero Human Corp runs 11 AI agents. They report to a human founder who handles governance and direction — not operations. Each agent has a defined role, a set of tools, and a scope that prevents them from doing work outside their domain.
The agents coordinate through a shared task management system called Paperclip. When one agent creates work for another, it goes through that system. There are no Slack messages, no meetings, no human intermediaries in the day-to-day flow.
Here's each role.
---
Jessica Zhang — CEO
The CEO agent manages company strategy and cross-agent coordination. In practice, this means reviewing work outputs from other agents, making prioritization decisions when resources are constrained, and escalating to the human founder when something requires external input or approval.
The CEO doesn't do operational work — she doesn't write copy, write code, or run research. Her job is to ensure the other ten agents are working on the right things in the right order.
She has completed more tasks than any other agent in the company. Most of them are coordination, review, and escalation tasks.
---
Flora — Head of Product
Flora owns the product roadmap. She creates tasks for the engineering agents, coordinates with the designer on UX decisions, and manages the backlog of features and fixes across all our products — the SEO audit service, the Slack analyzer, the readiness quiz, the store.
She doesn't build. She defines what gets built, in what order, and why.
---
Todd — Founding Engineer
Todd handles complex engineering work: payment integrations, infrastructure setup, API integrations, performance work. He's the agent who got our Stripe checkout live and who manages our environment configuration across development and production.
His scope is senior-level engineering tasks. Routine bug fixes and content updates don't go to Todd — they go to the Engineer agent.
---
Nate — Engineer (Convex)
Nate specializes in our Convex backend — the database layer that powers our more complex product features. He handles data modeling, real-time query optimization, and backend functions across the products that use Convex.
Having a specialist here rather than a generalist engineer has been worth it. Convex has its own patterns and constraints, and an agent tuned to them makes fewer mistakes.
---
Sarah Chen — SEO Specialist
Sarah owns search engine optimization across all our web properties. She researches keywords, builds content briefs, audits page performance, and coordinates with the content agent to ensure blog posts and product pages are optimized for search.
She also manages our GEO (Generative Engine Optimization) work — making sure our content is structured to appear in AI-generated answers, not just traditional search results.
---
Alex Rivera — Content Writer
Alex writes. Blog posts, product page copy, email sequences, guide content. He takes briefs from Flora, Sarah, or the CEO and produces the actual text.
The Locosite project — 7,875 websites in 14 days — ran through a specialized content pipeline, but Alex's role covers the higher-level content that requires more craft: the posts that need to hold up over time, the copy that sells.
---
Maya Patel — Growth Marketer
Maya owns distribution and growth experiments. She identifies channels, coordinates campaign timing, and works with Alex on content that's designed for growth rather than SEO.
She also manages our community distribution work — the posts to Indie Hackers, Hacker News, and product directories that don't fit neatly into an SEO strategy but matter for early traction.
---
Jordan Lee — Market Researcher
Jordan runs research assignments: competitive analysis, pricing benchmarks, distribution channel research, market sizing. When the CEO or PM needs a fact-based foundation for a decision, Jordan produces the document.
Research agents are easy to undervalue. Jordan's outputs consistently form the foundation of the decisions that turn out right.
---
Kai Nakamura — Designer
Kai handles visual design across our web properties. Product pages, OG images, UI components, brand assets. He works in a design tool and outputs HTML/CSS that the engineering agents can implement.
His scope is visual — he doesn't touch code beyond styling, and he doesn't write copy.
---
Sam — Social Media Manager
Sam owns our Twitter/X account. He drafts posts, executes content calendars, and posts via the API. He's the only agent with direct publish access to a social media account — everything else goes through the human founder or a product page.
His constraint is the approval rule: any external communication that could affect the company's public reputation requires sign-off before going out.
---
QA Engineer
The QA agent reviews customer-facing content before it ships. Product page copy, pricing copy, blog posts with public claims, sales collateral — anything that touches revenue or reputation goes through QA before it goes live.
QA doesn't create content. It verifies accuracy, flags inconsistencies, and returns work to the creating agent with specific corrections. When claims conflict — between a blog post and a published report, for example — QA escalates rather than guesses.
---
How They Coordinate
Every task that moves between agents goes through Paperclip. An agent creates an issue, assigns it to another agent, and the receiving agent picks it up on their next heartbeat — a scheduled execution window where they review their assignments, check context, and do work.
There are no real-time handoffs. An agent finishes a task, leaves a comment with context, and the next agent reads that comment before starting. The comment thread is the institutional memory for each task.
Mistakes happen when the comment thread is insufficient — when an agent makes an assumption the next agent can't verify. We've gotten better at writing complete handoff comments. It's the most important operational discipline in the system.
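To make the flow concrete, here is a minimal sketch of the heartbeat-and-handoff pattern described above. This is an illustrative model, not the actual Paperclip implementation: the `Issue` class, the in-memory `BOARD`, and the `do_work` placeholder are all hypothetical names invented for this example.

```python
from dataclasses import dataclass, field

@dataclass
class Issue:
    title: str
    assignee: str
    comments: list[str] = field(default_factory=list)
    done: bool = False

# Hypothetical in-memory stand-in for the Paperclip task board.
BOARD: list[Issue] = []

def do_work(agent: str, title: str, context: str) -> str:
    # Placeholder for the agent's actual model call.
    return f"completed '{title}'"

def heartbeat(agent: str) -> None:
    """One scheduled execution window: read assignments, do the work,
    and leave a handoff comment for whoever picks the task up next."""
    for issue in BOARD:
        if issue.assignee == agent and not issue.done:
            # Read prior comments first -- the comment thread is the
            # institutional memory for the task.
            context = "\n".join(issue.comments)
            result = do_work(agent, issue.title, context)
            issue.comments.append(f"{agent}: {result}")
            issue.done = True
```

The key property is that nothing is real-time: an agent only ever acts on state it can read from the issue, which is why an incomplete handoff comment is the main failure mode.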
1,700+ Tasks and Counting
Across all agents, the system has completed more than 1,700 tasks since launch. Most of them are granular — write a blog post, fix a bug, research a keyword, design an icon. Some are large coordination tasks that spawned dozens of subtasks.
The task count is a proxy for activity, not value. $207 in revenue from 1,700+ tasks means most of those tasks haven't connected to money yet. That's the honest picture.
Why We're Publishing This
Zero Human Corp exists to show what an AI-agent company can do — including the parts that aren't impressive yet. Publishing our agent roster, their actual roles, and our real operational numbers is part of the experiment.
If you're building AI operations of your own, the setup we've developed is available as the AI Company Starter Kit — 11 agent configurations, 4 operational playbooks, and the actual prompts and structure we use. It's $199 and it's our real implementation, not a distillation of what we think might work.
The question we're answering in public is whether this model works. So far: the agents build real things. The revenue is starting. The gap between the two is the story we're telling.
Skip the trial-and-error. Run your company with AI agents.
The AI Company Starter Kit includes 11 agent configs, 4 operations playbooks, and the exact templates we use to run a real AI-first company — instantly downloadable.
Get the Starter Kit — $199. 30-day money-back guarantee. Instant download.