Quality

The ApifyForge Testing Suite

Four cloud-powered testing tools for Apify actors: Output Guard, Deploy Guard, Cloud Staging, and Regression Suite. How they work together and when to use each one.

By Ryan Clinton · Last updated: March 27, 2026

The ApifyForge Testing Suite is a set of four cloud-powered actors that test your Apify actors before, during, and after deployment. Each tool targets a specific failure mode: schema violations, functional regressions, production environment issues, and output quality drift. Together they form a complete quality pipeline that catches problems before your users do.

Every tool in the suite runs as an Apify actor on your account. You trigger them through the ApifyForge dashboard or via the Apify API. Each tool charges a flat PPE fee per run — you pay once regardless of how many checks the tool performs internally. Results are cached in your dashboard so you never pay twice to view previous reports.

The four tools at a glance

| Tool | What it checks | When to use it | Cost |
| --- | --- | --- | --- |
| Output Guard | Output fields match declared schema types | Before every push | $0.35/run |
| Deploy Guard | Multiple test cases with assertions | Before every push, in CI/CD | $0.35/suite |
| Cloud Staging | Full production environment validation | Before publishing to Store | $0.50/run |
| Regression Suite | Historical comparison — what changed since last run | After code changes, weekly | $0.35/suite |

Tool 1: Output Guard ($0.35/run)

The Output Guard fetches your actor's declared dataset schema from its latest build, runs the actor with your test input, then compares every output field against the schema definition. It checks:

  • Type mismatches — schema says number, actor outputs "$19.99" as a string
  • Missing required fields — schema declares phoneNumber but no output item has it
  • Undeclared fields — output contains _debug or scrapedAt not in the schema
  • Nullable violations — field has null values but schema doesn't declare nullable: true
  • Type inconsistencies — rating is sometimes a string, sometimes a number

The report includes a 0-100 compliance score weighted by severity. Errors deduct 10 points, warnings 3, undeclared fields 2, type inconsistencies 5. A score of 90+ means minor issues only. Below 70 means serious violations that will trigger maintenance flags.
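The weighting above can be sketched as a small function. This is an illustrative reconstruction of the scoring rule, not the Output Guard's actual implementation; the function name and input shape are assumptions.

```python
def compliance_score(errors: int, warnings: int, undeclared: int, inconsistencies: int) -> int:
    """Illustrative sketch of the Output Guard weighting:
    errors -10, type inconsistencies -5, warnings -3, undeclared fields -2."""
    score = 100 - 10 * errors - 5 * inconsistencies - 3 * warnings - 2 * undeclared
    return max(score, 0)  # clamp to the 0-100 range

# A run with 2 type errors, 1 warning, and 3 undeclared fields:
# 100 - 20 - 3 - 6 = 71, below the 90+ "minor issues only" band
print(compliance_score(errors=2, warnings=1, undeclared=3, inconsistencies=0))
```

Severity-weighted scoring like this makes one hard error cost more than several cosmetic issues, which is why a score below 70 reliably signals real schema violations.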

When to use the Output Guard

  • Before every `apify push` — catch schema drift before it reaches production
  • After changing output structure — verify new fields are declared in the schema
  • When building a new dataset schema — iterate: add fields, validate, fix, repeat
  • When evaluating third-party actors — check if their schema matches actual output

Example: validating a scraper

```json
{
  "targetActorId": "ryanclinton/website-contact-scraper",
  "testInput": {
    "urls": ["https://example.com"],
    "maxPagesPerDomain": 3
  }
}
```

The validator runs the actor, fetches the schema from the latest build, compares every field, and returns a report like:

```
Score: 72/100 — FAIL
Mismatches:
  [error] price: expected number, got string
  [warning] email: null values found, schema says non-null
Undeclared: _debug, scrapedAt, rawHtml
Missing: phoneNumber
```

Tool 2: Deploy Guard ($0.35/suite)

The Deploy Guard executes your actor multiple times with different inputs, each with its own assertion set. This catches functional issues that single-input testing misses: edge cases, boundary conditions, and input-specific bugs.

Assertion types

| Assertion | What it checks | Example |
| --- | --- | --- |
| minResults | Dataset has at least N items | "minResults": 3 |
| maxResults | Dataset has at most N items | "maxResults": 100 |
| requiredFields | Fields exist with non-null values | ["name", "url"] |
| fieldTypes | Field values match declared types | {"rating": "number"} |
| maxDuration | Test completes within N seconds | "maxDuration": 60 |
| noEmptyFields | No null, empty string, or empty array | ["name", "email"] |
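To make the assertion semantics concrete, here is a minimal sketch of how a few of these checks could be evaluated against a list of dataset items. The function name and report shape are assumptions for illustration, not the Deploy Guard's real internals.

```python
def check_assertions(items: list[dict], assertions: dict) -> dict:
    """Evaluate a subset of Deploy Guard-style assertions (illustrative sketch)."""
    failures = []

    if "minResults" in assertions and len(items) < assertions["minResults"]:
        failures.append(f"minResults: expected >= {assertions['minResults']}, got {len(items)}")
    if "maxResults" in assertions and len(items) > assertions["maxResults"]:
        failures.append(f"maxResults: expected <= {assertions['maxResults']}, got {len(items)}")
    for field in assertions.get("requiredFields", []):
        if any(item.get(field) is None for item in items):
            failures.append(f"requiredFields: '{field}' is missing or null in some items")
    for field, expected in assertions.get("fieldTypes", {}).items():
        type_map = {"number": (int, float), "string": str, "boolean": bool}
        if not all(isinstance(item.get(field), type_map[expected]) for item in items):
            failures.append(f"fieldTypes: '{field}' is not always a {expected}")

    return {"passed": not failures, "failures": failures}

items = [{"businessName": "Acme Plumbing", "rating": 4.5},
         {"businessName": "Pipe Pros", "rating": "n/a"}]
report = check_assertions(items, {"minResults": 1,
                                  "requiredFields": ["businessName"],
                                  "fieldTypes": {"rating": "number"}})
print(report["failures"])  # the string rating trips the fieldTypes check
```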

Example: multi-case test suite

```json
{
  "targetActorId": "ryanclinton/google-maps-email-extractor",
  "testCases": [
    {
      "name": "Basic search",
      "input": { "query": "plumbers Chicago", "maxResults": 5 },
      "assertions": {
        "minResults": 3,
        "requiredFields": ["businessName", "address"],
        "maxDuration": 60
      }
    },
    {
      "name": "Single result",
      "input": { "query": "Statue of Liberty", "maxResults": 1 },
      "assertions": {
        "minResults": 1,
        "maxResults": 1,
        "requiredFields": ["businessName", "rating"]
      }
    },
    {
      "name": "Performance check",
      "input": { "query": "restaurants NYC", "maxResults": 20 },
      "assertions": {
        "minResults": 15,
        "maxDuration": 120,
        "noEmptyFields": ["businessName"]
      }
    }
  ]
}
```

Test cases run sequentially to avoid overwhelming the target actor. One PPE charge covers the entire suite regardless of how many test cases you include.

When to use the Deploy Guard

  • Before every deploy — run your standard test suite as a quality gate
  • In CI/CD pipelines — trigger via API, parse the JSON report, block deploys on failure
  • When onboarding a new actor — establish baseline test cases that define "working correctly"
  • For edge case coverage — test empty inputs, special characters, boundary values
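The bullets above come down to one thing in CI: read the report, fail the build. The sketch below assumes the suite's report exposes a per-case results list with name and passed fields; treat those field names as assumptions and verify them against your own report output.

```python
def gate_deploy(report: dict) -> int:
    """Return a CI exit code from a Deploy Guard-style report.
    The 'results'/'name'/'passed' field names are assumptions for illustration."""
    failed = [case["name"] for case in report["results"] if not case["passed"]]
    for name in failed:
        print(f"FAIL: {name}")
    return 1 if failed else 0  # a nonzero exit code blocks the deploy step

# Example report a CI step might have fetched from the suite's dataset:
report = {"results": [{"name": "Basic search", "passed": True},
                      {"name": "Single result", "passed": False}]}
exit_code = gate_deploy(report)
```

In a real pipeline you would pass `exit_code` to `sys.exit()` so the CI runner stops before `apify push`.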

Tool 3: Cloud Staging ($0.50/run)

Cloud Staging runs your actor in Apify's actual production environment — the same Docker container, network, and proxy infrastructure your users will see. It validates:

  • Docker build success — your Dockerfile compiles on Apify's infrastructure
  • Schema compliance — output matches the declared dataset schema in production
  • Structural validation — field consistency, type consistency, empty array detection
  • Custom assertions — minResults, requiredFields, fieldTypes (same as Deploy Guard)
  • Run success — the actor completes without crashing

The local-vs-cloud gap

Your actor works locally but fails in the cloud. This happens because:

  • Missing dependencies — a package in devDependencies is used in production code
  • Docker build issues — Dockerfile installs packages in a different order than local npm
  • Proxy differences — local runs use your IP, cloud runs use Apify's proxy pool
  • Memory limits — your local machine may have 16 GB of RAM, while Apify actor runs are typically allocated 256 MB-4096 MB
  • Network routing — some websites block Apify's IP ranges but not your home IP

Cloud Staging catches all of these by running in the real environment.

When to use Cloud Staging

  • Before publishing to the Store — the highest-stakes moment for your actor
  • After Dockerfile changes — verify the build works on Apify's infrastructure
  • After dependency updates — catch breaking changes from package upgrades
  • When switching proxy types — verify the new proxy works in production

Tool 4: Regression Suite ($0.35/suite)

The Regression Suite extends the Deploy Guard with historical comparison. It runs the same test cases and adds a classification layer: was this test passing before? Is it failing now? Each test gets one of six statuses:

| Previous | Current | Classification | What it means |
| --- | --- | --- | --- |
| pass | pass | pass | Stable — no change |
| pass | fail | regression | Something broke |
| fail | pass | resolved | Something got fixed |
| fail | fail | fail | Known issue — unchanged |
| (new) | pass | new_pass | New test, passes |
| (new) | fail | new_fail | New test, fails |

Automatic previous result injection

When you use the Regression Suite through the ApifyForge dashboard, previous results are automatically loaded from your last cached run. You don't need to manually track or pass previous results — the API route handles it.

On first run, all tests are classified as new_pass or new_fail. On subsequent runs, the system compares against the prior run and highlights regressions and resolutions.

When to use the Regression Suite

  • After every code change — detect regressions before they reach users
  • Weekly scheduled runs — catch upstream changes (website redesigns, API changes)
  • After migrations — switching scraping approach? Run the suite before and after
  • For release notes — "2 regressions fixed, 1 new test added, 0 regressions introduced"

Putting the suite together

The four tools work best as a pipeline, not in isolation. Here is the recommended workflow for a typical actor deployment:

Pre-push (catches 80% of issues)

  1. Output Guard — Run against your actor with test input. Fix any type mismatches or undeclared fields. This takes 1-2 minutes and costs $0.35.
  2. Deploy Guard — Run your standard test suite (3-5 test cases). Fix any assertion failures. This takes 2-5 minutes and costs $0.35.

Pre-publish (catches the remaining 20%)

  1. Cloud Staging — Run in Apify's production environment. Verify Docker build, schema compliance, and output quality in the real environment. This takes 2-5 minutes and costs $0.50.

Post-publish (ongoing quality)

  1. Regression Suite — Run weekly or after every code change. Compare results against previous runs. Investigate any regressions immediately. This costs $0.35 per run.

Total cost per deployment cycle

| Step | Tool | Cost |
| --- | --- | --- |
| Pre-push | Output Guard | $0.35 |
| Pre-push | Deploy Guard | $0.35 |
| Pre-publish | Cloud Staging | $0.50 |
| Post-publish | Regression Suite | $0.35 |
| Total | | $1.55 |

For context, a single maintenance flag on the Apify Store can reduce your actor's visibility for weeks, costing far more in lost PPE revenue than $1.55 spent on pre-deploy testing.

API integration

Every tool in the suite can be triggered via the Apify API, making them ideal for CI/CD pipelines.

Python example: CI/CD quality gate

```python
import sys

from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

# Step 1: Schema validation (Output Guard)
schema_run = client.actor("ryanclinton/actor-schema-validator").call(run_input={
    "targetActorId": "your-username/your-actor",
    "testInput": {"query": "test", "maxResults": 3},
})
schema_report = list(client.dataset(schema_run["defaultDatasetId"]).iterate_items())[0]

if not schema_report["passed"]:
    print(f"Schema validation FAILED (score: {schema_report['score']})")
    for m in schema_report["mismatches"]:
        print(f"  [{m['severity']}] {m['path']}: expected {m['expected']}, got {m['actual']}")
    sys.exit(1)

# Step 2: Test suite (Deploy Guard)
test_run = client.actor("ryanclinton/actor-test-runner").call(run_input={
    "targetActorId": "your-username/your-actor",
    "testCases": [
        {"name": "Basic", "input": {"query": "test"}, "assertions": {"minResults": 1}},
    ],
})
test_report = list(client.dataset(test_run["defaultDatasetId"]).iterate_items())[0]

if test_report["failed"] > 0:
    print(f"Test suite FAILED: {test_report['failed']}/{test_report['totalTests']} failed")
    sys.exit(1)

print("All checks passed — safe to deploy")
```

Dashboard access

All four tools are available in the ApifyForge dashboard under the Tools section in the sidebar:

  • /dashboard/tools/schema-validator — Output Guard
  • /dashboard/tools/test-runner — Deploy Guard
  • /dashboard/tools/cloud-staging — Cloud Staging
  • /dashboard/tools/regression-tests — Regression Suite

Each page follows the same pattern: configure inputs, click Run, view results. Previous results are cached and loaded automatically on page load.

  • Actor Testing Best Practices (/learn/actor-testing) — Local testing strategies, pre-push hooks, and debugging failed runs
  • Store SEO Optimization (/learn/store-seo) — How quality score (which testing improves) affects Store ranking
  • Schema Tools (/learn/schema-tools) — Deep dive into schema validation and the Schema Registry
  • PPE Pricing (/learn/ppe-pricing) — How to price your actors and track revenue

Related guides

Beginner

Getting Started with Apify Actors

To build an Apify actor, install Node.js 18+ and the Apify CLI, scaffold a project with apify create, write your logic inside Actor.main(), define an input_schema.json, and deploy with apify push. This guide walks through every step from zero to a published Apify Store listing.

Essential

Apify PPE Pricing Explained: Pay Per Event Model, Strategy, and Code Examples

Pay Per Event (PPE) is Apify's usage-based monetization model for actors on the Apify Store. Developers set a price per event (typically $0.001 to $0.50), call Actor.addChargeForEvent() in their code, and keep 80% of revenue while Apify takes 20%. This ApifyForge guide covers the 80/20 revenue split, actor.json configuration, charging code patterns, the 14-day price change rule, and pricing strategy by actor type.

Revenue

How to Monetize Your Actors

To monetize Apify actors, start with Pay Per Event pricing at $0.01-$0.25 per result, then layer on tiered pricing for power users, free-tier funnels to drive adoption, and MCP server bundles that combine multiple actors into a single subscription. ApifyForge analytics tracks revenue per actor so you know which strategies work. This guide covers each revenue model with real pricing examples.

Quality

Actor Testing Best Practices

To test an Apify actor, define input/output test cases in a JSON fixture, run them with the ApifyForge test runner before every deploy, and set assertions on output shape, field counts, and error rates. The regression suite catches breaking changes by comparing current output against a saved baseline. This guide covers the full testing workflow from local validation to CI/CD integration.

Growth

Store SEO Optimization

Apify Store search ranks actors by title match, README keyword density, category tags, run volume, and a quality score out of 100. To rank higher, write a README that opens with a plain-language description of what the actor does, include target keywords in the first 100 words, set accurate categories in actor.json, and maintain a success rate above 95%. This guide breaks down every ranking factor and shows how ApifyForge tracks your score.

Scale

Managing Multiple Actors

To manage 10, 50, or 200+ Apify actors, use the ApifyForge fleet dashboard to monitor health, revenue, and quality scores across your entire portfolio in one view. Group actors by category, run bulk updates on pricing and metadata, set up failure alerts, and track maintenance pulse to catch stale actors before users complain. This guide covers fleet management workflows at every scale.

Essential

Cost Planning Tools: Calculator, Plan Advisor & Proxy Analyzer

How to use ApifyForge's cost planning tools to estimate actor run costs, choose the right Apify subscription plan, and pick the most cost-effective proxy type for each scraper.

Essential

AI Agent Tools: Pipeline Preflight, LLM Optimizer & Integration Templates

How to use ApifyForge's AI agent tools to debug MCP server connections, design multi-actor pipelines, optimize actor output for LLM token efficiency, and generate integration templates.

Quality

Schema Tools: Diff, Registry & Input Guard

How to use ApifyForge's schema tools to compare actor output schemas, browse the field registry, and test actor inputs before running — preventing wasted credits and broken pipelines.

Essential

Compliance Scanner, Actor Recommender & Comparisons

How to use ApifyForge's compliance risk scanner to assess legal exposure, the actor recommender to find the best tool for your task, and head-to-head comparisons to evaluate competing actors.

Essential

The Complete ApifyForge Tool Suite

All 15 developer tools in one guide: testing, schema analysis, cost planning, compliance scanning, LLM optimization, pipeline building, and privacy reporting. What each tool does, when to use it, and how they work together.

Beginner

What Is an Apify Actor?

An Apify actor is a serverless cloud program that runs on the Apify platform. It accepts JSON input, executes a task (scraping, data processing, API calls, or AI tool serving), and produces structured output in datasets, key-value stores, or request queues. Actors are packaged as Docker containers and can be run via API, scheduled, or chained together.

Essential

What Are MCP Servers on Apify?

MCP (Model Context Protocol) servers are Apify actors that run in standby mode and expose tools via an HTTP endpoint for AI assistants like Claude Desktop, Cursor, and Windsurf. They connect large language models to real-world data sources — APIs, databases, web scrapers, and intelligence feeds — so AI agents can take actions beyond text generation.

Beginner

How to Choose the Right Apify Actor

With over 3,000 actors on the Apify Store, choosing the right one for your task requires evaluating success rates, run history, pricing, maintenance frequency, and input schema quality. This guide provides a decision framework for selecting actors based on measurable quality metrics, plus tools to automate the comparison process.

Scale

How to Manage a Large Apify Actor Portfolio

Managing 10 Apify actors is straightforward. Managing 50 requires dashboards and cost tracking. Managing 200+ demands automated regression testing, schema validation, revenue analytics, and failure alerting. This guide covers the tools, processes, and hard-won lessons from scaling an Apify actor portfolio.