
AEO Bunny -- System Overview

Start here. This document explains what AEO Bunny is, how it works, and how the pieces fit together. Written for anyone joining the team, regardless of technical background.


What AEO Bunny Does

AEO Bunny is an automated content engine for local service businesses. A customer -- say, a plumber in Austin or a dentist in Tampa -- pays between $497 and $997 and receives 50 professionally written, search-optimized web pages tailored to their business, their service area, and the specific questions people are asking AI search tools like ChatGPT and Perplexity. Each page comes complete with written content, structured data markup (the behind-the-scenes code that helps search engines understand the page), relevant images, and internal links connecting the pages into a cohesive content library. The pages are delivered in five batches of ten, reviewed by the customer along the way, and deployed to their website. After deployment, the system monitors how visible the business becomes in AI-powered search results and alerts the team if anything changes.


Customer Journey

Here is the path a customer takes from purchase to ongoing monitoring:

flowchart LR
    A[Purchase via GHL] --> B[Account Created]
    B --> C[Onboarding Form]
    C --> D[Photo Upload\n100+ Photos]
    D --> E[Pipeline Runs\n5 Batches of 10 Articles]
    E --> F[Customer Reviews\nEach Batch]
    F --> G[Pages Deployed\nto Website]
    G --> H[Visibility Monitoring\nOngoing]

    style A fill:#f4f0eb,stroke:#c4a882
    style E fill:#f4f0eb,stroke:#c4a882
    style G fill:#f4f0eb,stroke:#c4a882
    style H fill:#f4f0eb,stroke:#c4a882
  1. Purchase -- The customer buys through GoHighLevel (our CRM and communication platform). A webhook notifies AEO Bunny that a new customer exists.
  2. Account Created -- The system creates a pending account and sends the customer a link to set their password.
  3. Onboarding Form -- The customer fills out an intake form with their business details: name, address, services offered, service areas, website URL, and any preferences about voice or tone.
  4. Photo Upload -- The customer uploads at least 100 business photos through the customer portal. These photos are the primary image source for the generated pages. The pipeline is gated on this step for batch 1 and will not proceed until the upload is complete.
  5. Pipeline Runs -- The automated pipeline kicks off, researching the business and local market, building a content plan, then writing, assembling, and packaging articles in five batches.
  6. Customer Reviews -- After each batch is produced, the customer reviews the articles. They can approve individual articles or click "Request Changes" to open a chat where they describe what they would like changed. The "Finish Review" button becomes active once every article in the batch is approved or edited; in practice all articles must be approved, because the "edited" status is set by a backend finish-editing endpoint that the frontend does not currently call. Clicking Finish Review triggers Haiku dossier extraction and resumes the pipeline. Feedback from each batch is extracted into a Dossier that improves subsequent batches.
  7. Deployment -- Once a batch is approved, the customer downloads the ZIP file, uploads the pages to their website, and confirms the deployment through a checklist. The system then measures how the new content affects their AI search visibility.
  8. Visibility Monitoring -- The system periodically checks how the business appears in AI search tools and tracks changes over time, alerting the team if scores drop.
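The purchase-to-account handoff (steps 1 and 2 above) can be sketched as a plain webhook handler. This is an illustrative sketch only -- the payload fields, portal URL, and function name are assumptions, not the real GHL webhook schema:

```python
import secrets

def handle_purchase_webhook(payload: dict) -> dict:
    """Create a pending account from a GoHighLevel purchase webhook.

    Field names here are illustrative, not the actual GHL payload schema.
    """
    required = {"contact_id", "email", "business_name"}
    missing = required - payload.keys()
    if missing:
        raise ValueError(f"webhook missing fields: {sorted(missing)}")

    token = secrets.token_urlsafe(32)  # one-time password-set token
    return {
        "ghl_contact_id": payload["contact_id"],
        "email": payload["email"],
        "business_name": payload["business_name"],
        "status": "pending",  # becomes active once the customer sets a password
        "password_set_link": f"https://portal.example.com/set-password?token={token}",
    }

account = handle_purchase_webhook({
    "contact_id": "c_123",
    "email": "owner@austinplumbing.com",
    "business_name": "Austin Plumbing Co",
})
```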

Architecture Overview

These are the major services that make AEO Bunny work and how they connect to each other:

flowchart TB
    Portal["Next.js Portal\n(What people see)"]
    API["FastAPI Backend\n(The brain)"]
    DB["Supabase\n(Database + Auth)"]
    R2["Cloudflare R2\n(File Storage)"]
    GHL["GoHighLevel\n(Customer Communication)"]
    Claude["Anthropic Claude\n(Content Generation)"]
    Visibility["OpenAI + Perplexity\n(Visibility Scoring)"]
    Redis["Redis\n(Queue + State)"]

    Portal <-->|API calls| API
    API <-->|Read/write data| DB
    API <-->|Store/retrieve files| R2
    API -->|Send webhooks + notifications| GHL
    API <-->|Generate content, analyze data| Claude
    API <-->|Measure AI search presence| Visibility
    API <-->|Job queue, caching| Redis

    style Portal fill:#e8f4f8,stroke:#5ba3c9
    style API fill:#fef3e2,stroke:#d4a843
    style DB fill:#e8f4e8,stroke:#5ba35b
    style Claude fill:#f4e8f4,stroke:#a35ba3
Next.js Portal -- The web interface that operators (our team) and customers use. Operators manage projects, review content, and monitor the pipeline. Customers review their articles, request changes, and track progress. The customer portal has tabs for Home, Review, Photos, Visibility, Deployment, and Settings. Site Health is a card on the Home dashboard, not a separate tab.
FastAPI Backend -- The central server that runs the pipeline, coordinates the AI agents, handles API requests, enforces quality gates, and manages all business logic. Hosted on Railway.
Supabase -- Stores all project data (businesses, locations, articles, scores) in PostgreSQL and handles user authentication with secure JWT tokens.
Cloudflare R2 -- Stores generated files -- HTML pages, images, ZIP downloads, and sitemaps. Think of it as the file cabinet.
GoHighLevel (GHL) -- Our CRM. Customer-facing communication (status updates, notifications, password resets) routes through GHL via webhooks. We never send emails directly. All pipeline events are also dispatched as in-app notifications (stored in the Notification model, surfaced via the NotificationBell component with cross-tab sync via BroadcastChannel). Both channels fire from the same dispatch_event() call.
Anthropic Claude -- The AI that does the creative and analytical work -- researching markets, planning content strategy, writing articles, generating image descriptions, and creating schema markup. Uses two model sizes: a larger one (Sonnet) for complex tasks and a smaller one (Haiku) for simpler extractions.
OpenAI + Perplexity -- Used exclusively for visibility scoring. These services let us test how visible the business is in AI-powered search results by asking real AI search tools about the customer's industry and location.
Redis -- Manages background job queues and temporary state (planned -- currently using in-process asyncio.create_task()).
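The dual-channel dispatch described for GHL can be sketched as follows -- a single dispatch_event() call fanning out to an outbound webhook and the in-app notification store. Class and field names here are illustrative, not the production models:

```python
from dataclasses import dataclass

@dataclass
class Notification:
    event: str
    message: str
    read: bool = False

class EventDispatcher:
    """Both channels fire from one dispatch_event() call: an outbound webhook
    to GHL and an in-app Notification record. The webhook sender is injected
    so this sketch stays self-contained."""

    def __init__(self, webhook_sender):
        self.webhook_sender = webhook_sender          # e.g. an HTTP POST to a GHL webhook
        self.notifications: list[Notification] = []   # what NotificationBell renders

    def dispatch_event(self, event: str, message: str) -> None:
        self.webhook_sender({"event": event, "message": message})  # GHL channel
        self.notifications.append(Notification(event, message))    # in-app channel

sent: list[dict] = []
dispatcher = EventDispatcher(webhook_sender=sent.append)
dispatcher.dispatch_event("batch_ready", "Batch 2 is ready for review")
```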

Mobile experience: On screens narrower than 1024px the customer portal adapts its layout in several ways:

  • Bottom tab bar -- The standard top navigation is replaced by a fixed bottom tab bar with five destinations: Home, Review, Photos, Visibility, and a "More" overflow menu for Settings and other secondary pages. This follows native mobile conventions and keeps primary actions within thumb reach.
  • Metric card carousel -- The dashboard's summary cards (visibility score, site health, content delivered) are laid out in a horizontal snap-scrolling carousel rather than a grid, so each card has full width and can be browsed one at a time.
  • Vertical timeline -- The pipeline progress indicator, which appears as a horizontal stepper on desktop, becomes a vertical timeline on mobile. Each stage is stacked with a status icon, label, and timestamp.
  • Photo grid -- The photo upload portal displays photos in a two-column grid rather than a four-column grid.
  • Article detail -- The article detail page gains a breadcrumb header, increased padding, and safe-area viewport adjustments for devices with notches or home indicators.

The breakpoints are 375px (phone), 768px (tablet portrait), and 1024px (tablet landscape / desktop).


Pipeline Stages Explained

The pipeline is the heart of AEO Bunny. It runs in three phases with quality gates (human review checkpoints) along the way.

Phase A -- Strategic Foundation (runs once per project)

Step 1: Business Intelligence The BI Agent researches the customer's business, industry, and local market. It identifies competitors, discovers the questions real people are asking about this type of service in this area, and builds a comprehensive research document called a Data Card. The system also takes a baseline visibility measurement at this point -- a "before" snapshot of how visible the business currently is in AI search results.

What the customer sees: A notification that "our team is researching your market."

Quality Gate: BI Review -- An operator reviews the research to make sure it is accurate and thorough before the system moves on.

Step 2: Content Strategy The Strategist Agent takes the research from Step 1 and creates a Content Matrix -- a plan for all 50 articles. Each article gets a topic, target keywords, a content type (service page, how-to guide, case study, etc.), and a cluster assignment. Clusters are groups of related articles (for example, "emergency plumbing services" might be one cluster with a hub page and several supporting spoke pages).

What the customer sees: A notification that "we've mapped out your content strategy -- 50 pages planned."

Quality Gate: Matrix Review -- An operator reviews the 50-article plan to make sure it covers the right topics and is well-organized.

After both strategic gates are approved, the system divides the 50 articles into 5 batches of 10 and enters the batch loop.
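The split into the batch loop is mechanically simple; a minimal sketch (entry field names assumed):

```python
def split_into_batches(matrix_entries: list[dict], batch_size: int = 10) -> list[list[dict]]:
    """Divide the approved Content Matrix into sequential batches.

    50 articles with a batch size of 10 yields the 5 batches of the batch loop.
    """
    if len(matrix_entries) % batch_size:
        raise ValueError("matrix size must be a multiple of the batch size")
    return [matrix_entries[i:i + batch_size]
            for i in range(0, len(matrix_entries), batch_size)]

matrix = [{"topic": f"article-{n}"} for n in range(50)]
batches = split_into_batches(matrix)
```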


Phase B -- Batch Loop (repeats 5 times)

For each batch of 10 articles, the following steps happen in order:

Step 3: Article Writing The Writer Agent writes the 10 articles for this batch. Each article includes a title, full body content, and a meta description. The writer receives the business research (Data Card), the content plan (Matrix), and -- starting from batch 2 -- the customer's accumulated preferences and feedback (the Dossier). This means each batch gets better because the system learns from prior feedback.

What the customer sees: A notification that their batch is ready for review. They can read each article, approve it, or click "Request Changes" to open a chat where they describe what they would like changed.

Quality Gate: Article Review -- For batch 1, an operator reviews first to establish quality, then the customer reviews. For batches 2 through 5, the customer reviews directly.

Steps 4-6: Media and Assembly Once the articles are approved, three things happen in sequence. For batch 1, a photo upload gate pauses the pipeline until the customer has uploaded at least 100 business photos through the customer portal; batches 2-5 skip this gate.

  • Image Processing -- The system uses the customer-uploaded business photos as the primary image source, processes them (compression, geotagging with the business's location data), and generates descriptive alt text.
  • Brand & Schema -- The Designer Agent extracts brand colors and styling from the customer's existing website (via Playwright screenshot + Claude Vision), then the Schema Agent creates structured data for each article (the machine-readable code that helps search engines and AI tools understand the content -- things like business name, address, services, FAQs). The pipeline step constant is BRAND_SCHEMA.
  • HTML Assembly -- The system builds the final web pages using templates, wiring together the article content, images, schema markup, and internal links. It also generates a sitemap and packages everything into a downloadable ZIP file.

What the customer sees: A notification that their pages are ready for final review, with a preview of how the actual HTML pages will look.

Quality Gate: HTML Review -- Same pattern as article review: operator reviews batch 1 first, customer reviews all batches.

Step 7: Deployment After HTML approval, the batch is marked as "ready to ship." The customer downloads the ZIP file, uploads the pages to their website, and confirms the deployment through a checklist (pages uploaded, submitted to Google Search Console). Once confirmed, the system measures visibility again to see the impact of the new content.

What the customer sees: A deployment confirmation page with a checklist and a place to enter their hub page URL.

Quality Gate: Deploy Confirmation -- The customer confirms that they have uploaded the pages and completed the checklist.

After each batch ships, the pipeline advances to the next batch and repeats Steps 3 through 7.
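The per-batch step sequence, including the batch-1-only gates, can be summarized in code. Step names here are paraphrases of the stages above, not the real pipeline step constants:

```python
def run_batch_loop(num_batches: int = 5) -> list[list[str]]:
    """Sketch of the Phase B step sequence for each batch."""
    history = []
    for batch_no in range(1, num_batches + 1):
        steps = ["write_articles"]
        if batch_no == 1:
            steps.append("operator_article_review")  # batch 1 dual gate
        steps.append("customer_article_review")
        if batch_no == 1:
            steps.append("photo_upload_gate")        # 100-photo gate, batch 1 only
        steps += ["image_processing", "brand_schema", "html_assembly"]
        if batch_no == 1:
            steps.append("operator_html_review")     # operator reviews batch 1 HTML first
        steps += ["customer_html_review", "deploy_confirmation"]
        history.append(steps)
    return history

runs = run_batch_loop()
```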


Phase C -- Completion

Once all 5 batches have been shipped and confirmed, the pipeline marks the project as complete.

Step 8: Visibility Monitoring This is not a one-time step but an ongoing process. After deployment, the system periodically checks how visible the business is in AI-powered search results. If visibility drops significantly, the system triggers an alert so the team can follow up with the customer about additional services.

What the customer sees: A visibility score on their dashboard that updates over time, with breakdowns by search engine and content cluster.


Quality Gates

Quality gates are pause points built into the pipeline where a human being reviews the work before it continues. They exist because AI-generated content, while powerful, benefits from human judgment -- catching factual errors, ensuring the tone matches the business, and verifying that the strategy makes sense for the local market.

There are two types of reviewers and one data prerequisite gate:

  • Operators (our team) review the strategic foundation (research and content plan) and the first batch of articles and pages, setting the quality bar.
  • Customers review their own content for batches 2 through 5 and confirm each deployment.
  • Photo Upload Gate -- a customer-facing data prerequisite gate that fires before media assembly on batch 1. The pipeline will not proceed until the customer has uploaded at least 100 business photos. This is not a review step and cannot be skipped or approved by an operator. Batches 2-5 are exempt.

When a gate pauses the pipeline, the reviewer is notified through GoHighLevel. The pipeline stays paused until the reviewer approves (or requests changes, which triggers a revision cycle). See the Operator Handbook for detailed gate procedures.
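A minimal model of a gate as a pause point, assuming a simple pending/approved/changes-requested status field (names illustrative, not the production schema):

```python
from enum import Enum

class GateStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    CHANGES_REQUESTED = "changes_requested"

class QualityGate:
    """The pipeline stops at a gate until the reviewer acts: approval resumes
    the pipeline, a changes request starts a revision cycle."""

    def __init__(self, name: str, reviewer: str):
        self.name = name
        self.reviewer = reviewer          # "operator" or "customer"
        self.status = GateStatus.PENDING

    def approve(self) -> str:
        self.status = GateStatus.APPROVED
        return "resume_pipeline"

    def request_changes(self) -> str:
        self.status = GateStatus.CHANGES_REQUESTED
        return "start_revision_cycle"

gate = QualityGate("bi_review", reviewer="operator")
action = gate.approve()
```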


Website Readiness Score

Before generating any content, the system also evaluates how well-prepared the customer's existing website is to receive and benefit from the new pages. This is the Readiness Score -- a 0-to-100 composite measure of the site's technical health across four areas:

  • Crawlability (weight 35%) -- Can search engines and AI tools access the site? Checks robots.txt, canonical tags, meta-noindex, and sitemap availability.
  • Schema Presence (weight 25%) -- Does the site already have structured data markup? Looks for JSON-LD blocks, LocalBusiness schema, and NAP (name/address/phone) consistency against the customer's onboarding data.
  • Page Speed (weight 20%) -- How fast does the site load? Uses the Google PageSpeed Insights API to get real performance scores for both mobile and desktop.
  • Structured Data Correctness (weight 20%) -- Are the existing schema blocks valid? Extracts JSON-LD, validates required fields, and checks for common errors that would prevent search engines from using the markup.

Three of the four checkers run in parallel. The structured data validator runs afterward because it depends on the JSON-LD blocks discovered during the schema check.
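The orchestration can be sketched with asyncio -- independent checkers gathered concurrently, then the structured data validator run with the schema result. The checkers here are dummies returning fixed scores:

```python
import asyncio

# Dummy checkers standing in for the real ones; each returns a 0-100 score.
async def check_crawlability(url: str) -> int: return 80
async def check_schema_presence(url: str) -> int: return 60
async def check_page_speed(url: str) -> int: return 70

async def validate_structured_data(url: str, schema_score: int) -> int:
    # Depends on the JSON-LD discovered by the schema check, so it runs after.
    return 90 if schema_score > 0 else 0

async def run_readiness_checks(url: str) -> dict[str, int]:
    crawl, schema, speed = await asyncio.gather(
        check_crawlability(url),
        check_schema_presence(url),
        check_page_speed(url),
    )
    structured = await validate_structured_data(url, schema)
    return {"crawlability": crawl, "schema": schema,
            "speed": speed, "structured_data": structured}

scores = asyncio.run(run_readiness_checks("https://example.com"))
```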

If any category cannot be measured (for example, the site does not have a public URL yet), the remaining weights are redistributed so the composite still sums to 100.
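The redistribution can be expressed directly. A minimal sketch using the weights from the table above; the None-for-unmeasured convention and one-decimal rounding are assumptions:

```python
WEIGHTS = {"crawlability": 0.35, "schema": 0.25, "speed": 0.20, "structured_data": 0.20}

def composite_readiness(scores: dict) -> float:
    """Weighted composite over the measurable categories only. A category
    scored None (could not be measured) drops out, and its weight is
    redistributed proportionally so the effective weights still sum to 1."""
    measured = {k: w for k, w in WEIGHTS.items() if scores.get(k) is not None}
    total_weight = sum(measured.values())
    if total_weight == 0:
        return 0.0
    return round(sum(scores[k] * (w / total_weight) for k, w in measured.items()), 1)

# Page speed unmeasurable: its 20% is spread across the other three categories.
score = composite_readiness(
    {"crawlability": 80, "schema": 60, "speed": None, "structured_data": 90}
)
```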

When checks happen:

  • Intake check -- fired automatically after the onboarding form is submitted. Gives the team an early warning about any technical problems before content production begins.
  • Post-deployment check -- fired automatically after each batch deployment is confirmed. Measures whether the new pages improved the site's technical health.
  • Critical alerts -- if the intake check finds blocking issues (e.g., robots.txt disallowing all crawlers), a GHL webhook fires immediately so the operator can address the problem before the pipeline runs.

The Readiness Score is visible on the customer portal's Site Health card on the Home dashboard and in the operator's project detail view.


Revision Workflow

After each batch of articles is produced, customers review the content before the pipeline continues. The revision workflow is built around a per-article edit flow and a per-batch submission step.

Per-Article Review

On the Review page, each article shows "Request Changes" and "Approve Article" buttons. "Request Changes" opens a chat where the customer describes what needs changing (tone, factual corrections, missing information, etc.). "Approve Article" sets the article to approved status. The Phase 9a "Edit/Finish Editing" design exists in the backend (the finish-editing endpoint sets review_status="edited") but was never implemented in the frontend.

An article can be in one of four states: Pending (not yet reviewed), Flagged (marked for attention), Approved (customer clicked Approve Article), or Edited (feedback processed via the backend finish-editing endpoint).

Per-Batch Submission

The "Finish Review" button appears once every article in the batch is either approved or edited -- none may remain in the Pending state. When the customer clicks Finish Review, two things happen:

  1. Haiku Dossier Extraction -- The system uses the smaller Claude model (Haiku) to read all unprocessed chat messages from this review round and extract structured feedback: voice/tone preferences, factual corrections, approved writing patterns, things to avoid, and any special instructions. This is appended to the customer's Dossier, which the Writer Agent will use when generating the next batch.

  2. Pipeline Resume -- The pipeline unpauses and begins producing the next batch, carrying the updated Dossier forward so each subsequent batch reflects the customer's accumulated feedback.
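The two Finish Review effects can be sketched as one function. The real system sends the unprocessed chat messages to Claude Haiku for structured extraction; a trivial keyword stub stands in for that call here:

```python
def finish_review(chat_messages: list[str], dossier: list[dict]) -> str:
    """Sketch of the Finish Review handler: extract structured feedback from
    the round's chat, append it to the Dossier, resume the pipeline. The
    keyword matching below is a stub for the Haiku extraction call."""
    extracted = {
        "corrections": [m for m in chat_messages if "wrong" in m.lower()],
        "preferences": [m for m in chat_messages if "prefer" in m.lower()],
    }
    dossier.append(extracted)        # 1. feedback accumulates in the Dossier
    return "pipeline_resumed"        # 2. unpause and start the next batch

dossier: list[dict] = []
status = finish_review(
    ["The phone number is wrong, it should be 512-555-0100",
     "We prefer a friendlier tone"],
    dossier,
)
```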

Batch 1 Dual Gate

Batch 1 has an additional review layer. Before the customer sees the articles, an operator reviews them first. This sets the quality bar for the entire project. Only after the operator marks batch 1 as reviewed does the pipeline pause again for the customer review gate. Batches 2 through 5 go directly to the customer.


Photo Collection

Business photos are the primary image source for the generated web pages. AEO Bunny uses real photos of the customer's work, team, and location rather than stock imagery -- this significantly improves the trustworthiness and local relevance of the pages.

How It Works

The customer uploads their photos through a dedicated Photos tab in the customer portal. The upload interface supports drag-and-drop, shows real-time upload progress, and allows per-file retry on failure. Photos are stored in Cloudflare R2 and associated with the customer's location.

Each uploaded photo is scored by the Image Intel Agent for visual quality and relevance:

  • Good -- Clear, well-lit photo suitable for the web pages.
  • Fair -- Acceptable quality; may be used but lower-priority.
  • Poor -- Blurry, overexposed, or otherwise unsuitable; excluded from assembly.

The 100-Photo Gate

The pipeline will not begin media assembly for batch 1 until the customer has uploaded at least 100 photos. This is a hard gate -- the pipeline pauses automatically at the media assembly step and sends a notification to both the operator (so they can follow up) and the customer (prompting them to upload). When the 100-photo threshold is reached, the gate opens automatically without any manual intervention.

There is a maximum of 300 photos per location. Batches 2 through 5 skip the photo gate entirely; the photos already on file are used.
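The gate condition itself is simple; a sketch assuming the raw upload count (not the quality-filtered count) is what's measured against the minimum:

```python
MAX_PHOTOS = 300  # per-location ceiling

def photo_gate_open(batch_no: int, photos_uploaded: int, minimum: int = 100) -> bool:
    """Batch 1 pauses at media assembly until the minimum is met; batches 2-5
    skip the gate entirely and reuse the photos already on file."""
    if not 0 <= photos_uploaded <= MAX_PHOTOS:
        raise ValueError("photo count out of range")
    return batch_no != 1 or photos_uploaded >= minimum
```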

Dashboard Integration

The customer dashboard includes a Photo Progress card showing how many photos have been uploaded, a progress bar toward the 100-photo minimum, and a link to the upload portal. The operator's project view includes a quality breakdown showing the count of good, fair, and poor photos.


Visibility Score

The Visibility Score answers one question: "How visible is this business when people ask AI search tools for recommendations?"

The system supports four AI search engines: ChatGPT (via OpenAI), Perplexity, Google AI Overviews (via DataForSEO), and Gemini. All four are dormant by default and activate when their API credentials are configured. The system queries these engines with prompts like "Who are the best plumbers in Austin?" or "I need emergency pipe repair in Tampa, who should I call?" It then analyzes the responses to see whether the customer's business is mentioned, how prominently it appears, and in what context.

What the numbers mean:

  • 0-100 composite score -- higher is better. A weighted average across all active search engines.
  • Per-engine scores -- how the business performs on each individual AI search platform.
  • Per-cluster scores -- how visible the business is for each topic cluster (e.g., the business might score well for "emergency services" but poorly for "maintenance tips").

When scans happen:

  • Baseline -- right after the BI research step, before any content is created. This is the "before" measurement.
  • Post-deployment -- after each batch is confirmed as deployed. This measures the impact of the new content.
  • On-demand -- operators or customers can trigger a scan manually (rate-limited to once per hour).
  • Scheduled -- periodic checks to track trends over time.

If the score drops below a threshold, the system sends an alert through GoHighLevel, which can trigger an upsell conversation about additional content or optimization services.


Batch Delivery Model

Instead of producing all 50 articles at once and delivering them in a single package, AEO Bunny delivers content in 5 batches of 10 articles each. There are three reasons for this:

  1. Faster time-to-value -- The customer gets their first 10 pages deployed quickly rather than waiting for all 50 to be completed. They start seeing results sooner.

  2. Feedback loop -- After each batch, the customer provides feedback through article reviews and chat conversations. The system extracts this feedback into a Dossier -- a running document of the customer's preferences, corrections, approved writing patterns, and things to avoid. Each subsequent batch incorporates this feedback, so the content gets more aligned with the customer's expectations over time.

  3. Progressive interlinking -- Each batch's pages include links to all previously deployed pages, not just the pages in the current batch. This means the internal link structure grows stronger with each delivery.

The batch model also means the operator only needs to deeply review batch 1. Once the quality bar is set, batches 2 through 5 go directly to the customer for review, keeping the process efficient.
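The progressive interlinking rule can be sketched as follows. This ignores the hub/spoke structure and simply links every new page to all previously deployed pages plus its batch-mates -- a simplification of the real link planner:

```python
def links_for_batch(deployed_batches: list[list[str]],
                    current_batch: list[str]) -> dict[str, list[str]]:
    """Map each page in the current batch to its outbound internal links:
    everything already deployed, plus the other pages in the same batch."""
    previously_deployed = [url for batch in deployed_batches for url in batch]
    return {
        page: previously_deployed + [p for p in current_batch if p != page]
        for page in current_batch
    }

batch1 = ["/emergency-plumbing", "/drain-cleaning"]
batch2 = ["/water-heaters", "/leak-detection"]
links = links_for_batch([batch1], batch2)
```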


Glossary

Pipeline Run -- One execution of the full pipeline for a customer. A pipeline run processes all 50 articles across 5 batches for a single location.
Location -- A customer project -- one business at one physical address equals one pipeline run. A plumber with offices in Austin and Dallas would have two locations and two separate pipeline runs.
Business -- The customer's company. A business may have multiple locations, each with its own pipeline run and content.
Data Card -- The AI-generated research document produced in Step 1. Contains market analysis, competitor information, local search trends, and the questions people are asking about this type of business in this area.
Content Matrix -- The 50-article plan created in Step 2. Each entry specifies the article's topic, target keywords, content type, cluster assignment, and batch number.
Cluster -- A group of related articles organized around a theme. For example, a plumber's clusters might include "emergency services," "bathroom remodeling," and "water heater installation." Each cluster has one hub page and several spoke pages.
Hub Page -- The main page for a cluster. It covers the topic broadly and links to all the spoke pages in its cluster.
Spoke Page -- A supporting article within a cluster. It covers a specific subtopic in depth and links back to its hub page.
Article -- One generated web page. Includes a title, body content, meta description, and schema markup. Articles belong to a cluster and a batch.
Batch -- A group of 10 articles that are written, reviewed, assembled, and deployed together. There are 5 batches per project, delivered sequentially.
Quality Gate -- A pause point where a human reviews and approves before the pipeline continues. Gates ensure content quality and give customers control over what gets published.
Dossier -- Accumulated notes about a customer's preferences, built up over the course of a project. Includes voice and tone preferences, corrections, approved patterns, things to avoid, and special instructions. Updated automatically from review conversations.
Visibility Score -- A 0-to-100 measurement of how visible the business is in AI-powered search tools like ChatGPT and Perplexity. Higher is better. Measured at baseline, after each deployment, and periodically thereafter.
Readiness Score -- A 0-to-100 assessment of how well-prepared the customer's existing website is to receive the new content. Checks crawlability, existing schema markup, page speed, and structured data correctness.