What is the Release Gate?

The ApifyForge Release Gate is a CI/CD pre-release validation tool that runs 9 automated checks on your Apify actors before every deploy, at a cost of $0.42 per actor. It replicates the exact 4 checks the Apify Store runs daily — effective input validation, run success, non-empty dataset, and duration under 300 seconds — and adds 5 extended data quality checks the Store does not perform: schema conformance, field completeness, golden baseline comparison, log anomaly detection, and performance regression tracking. If your actor would fail the Store's automated tests and be flagged for maintenance, the Release Gate catches the problem before you push.

The 9 checks explained

The checks are split into two groups: 4 Store-equivalent hard gates and 5 extended data quality checks.

Store-equivalent checks (A through D):

  1. Check A — Effective Input Validation: Parses your actor's input schema, builds the effective test input from prefill and default values the same way the Store does, and verifies all required fields are satisfied.
  2. Check B — Run Succeeded: Starts your actor via the Apify API with the effective input and verifies it reaches SUCCEEDED status.
  3. Check C — Non-Empty Dataset: Verifies the default dataset contains at least one item.
  4. Check D — Duration Under 300s: Verifies the run completes within the Store's 5-minute (300-second) window.
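Check A's effective-input logic can be sketched in a few lines. This is an illustrative reconstruction, assuming the standard Apify input schema keys (`prefill`, `default`, `required`); the function names are not part of the Release Gate's API:

```python
def build_effective_input(schema):
    """Build the effective test input from an actor input schema,
    preferring each property's prefill over its default (Check A)."""
    effective = {}
    for name, prop in schema.get("properties", {}).items():
        if "prefill" in prop:
            effective[name] = prop["prefill"]
        elif "default" in prop:
            effective[name] = prop["default"]
    return effective


def missing_required_fields(schema, effective):
    """List required fields the effective input does not satisfy."""
    return [f for f in schema.get("required", []) if f not in effective]
```

For a schema that prefills `startUrls`, defaults `maxItems`, and requires a field with neither prefill nor default, the second function reports that field as unsatisfied, which is exactly the situation that trips the Store's input validation.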

Extended data quality checks (E through I):

  1. Check E — Schema Conformance: Validates output items against expected field types, catching type drift (e.g., a price field changing from number to string).
  2. Check F — Field Completeness: Enforces per-field null and empty rate thresholds — if your email field is suddenly null in 80% of items, this check fails.
  3. Check G — Golden Baseline Comparison: Compares current output against a pinned golden dataset, detecting schema drift (added or removed fields), count drift, and value changes.
  4. Check H — Log Pattern Detection: Scans run logs for 13 known-bad patterns including CAPTCHA, 429 rate limits, TypeError, ReferenceError, blocked, login required, ECONNREFUSED, and socket hang up.
  5. Check I — Performance Regression: Warns if run duration exceeds 2x the baseline, catching performance regressions even when the actor still finishes under 5 minutes.
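The core logic behind Checks F, H, and I is simple to express. The sketch below is illustrative, not the Release Gate's implementation: the pattern list is the subset named above (the real tool scans for 13), the thresholds are examples, and the function names are invented:

```python
import re

# Illustrative subset of the known-bad log patterns Check H scans for.
BAD_LOG_PATTERNS = [
    "CAPTCHA", "429", "TypeError", "ReferenceError",
    "blocked", "login required", "ECONNREFUSED", "socket hang up",
]

def null_rate(items, field):
    """Fraction of items where `field` is missing, None, or empty (Check F)."""
    if not items:
        return 1.0
    bad = sum(1 for item in items if item.get(field) in (None, "", [], {}))
    return bad / len(items)

def scan_log(log_text):
    """Return the known-bad patterns present in a run log (Check H)."""
    return [p for p in BAD_LOG_PATTERNS if re.search(re.escape(p), log_text)]

def duration_regressed(current_seconds, baseline_seconds):
    """True when the run took more than 2x the baseline (Check I)."""
    return current_seconds > 2 * baseline_seconds
```

With these pieces, the failure mode from the Check F description above (an `email` field suddenly null in 80% of items) is a one-line comparison of `null_rate(items, "email")` against a configured threshold.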

Operating modes

The Release Gate supports 4 operating modes:

  1. Gate mode: Runs all checks and fails if any ERROR-severity check fails — this is what you use in CI/CD pipelines.
  2. Dry Run mode: Runs all checks but never fails, producing a report you can review without blocking deploys.
  3. Approve Baseline mode: Runs checks and saves the current output as the new golden baseline for future comparisons.
  4. Bootstrap Baseline mode: Same as Approve but intended for first-time setup when no prior baseline exists.

Three profiles control which checks run. Store Default runs checks A through D only — exactly what the Apify Store tests. Extended runs all 9 checks. Custom lets you configure checks per test case.
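The profile-to-checks mapping can be pictured as a small lookup table. The identifiers below are illustrative, not the tool's actual configuration keys:

```python
# Checks enabled by each profile; identifiers here are illustrative.
PROFILES = {
    "store-default": list("ABCD"),   # exactly what the Apify Store tests
    "extended": list("ABCDEFGHI"),   # all 9 checks
}

def checks_for(profile, custom_checks=None):
    """Resolve which checks to run; the custom profile takes an explicit list."""
    if profile == "custom":
        return custom_checks or []
    return PROFILES[profile]
```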

Best for

  • Developers who publish actors to the Apify Store and need to prevent maintenance flags
  • Teams managing portfolios of 10 to 100+ actors who need fleet-wide pre-release validation
  • CI/CD pipelines using GitHub Actions that need automated go/no-go gating
  • Catching silent data quality regressions that pass the Store's basic checks but degrade output

Not ideal for

  • One-time actor runs that are not published to the Store (maintenance flags only apply to Store-listed actors)
  • Actors that intentionally produce empty datasets (Check C will fail)
  • Free-tier Apify accounts with limited monthly credits — each gate run costs $0.42 per actor plus the target actor's own PPE cost

Portfolio gating

The Release Gate validates 1 to 100+ actors in a single run with configurable concurrency. It processes targets in parallel using a semaphore pattern, respects PPE spending limits, and can stop on the first failure if configured. For a portfolio of 50 actors, a full gate run costs $21.00 and completes in under 10 minutes — compared with 12.5 to 25 hours of manual pre-release testing at 15 to 30 minutes per actor.
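The semaphore pattern mentioned above can be sketched with `asyncio`. This is a minimal illustration of the concurrency technique, not the Release Gate's actual code; `check_one` stands in for whatever runs the checks against a single target:

```python
import asyncio

async def gate_portfolio(targets, check_one, concurrency=5):
    """Run one gate check per target in parallel, capped by a semaphore,
    a sketch of the concurrency pattern described above."""
    sem = asyncio.Semaphore(concurrency)

    async def gated(target):
        async with sem:
            return await check_one(target)

    # gather() preserves input order, so results line up with targets.
    return await asyncio.gather(*(gated(t) for t in targets))
```

The semaphore caps how many target actor runs are in flight at once, which is what keeps a 50-actor portfolio within PPE spending limits while still finishing far faster than sequential checking.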

Golden baseline management

Golden baselines use a named key-value store called release-gate-baselines with a BASELINES.json manifest that maps each actor, test case, and channel to an immutable dataset snapshot. Baselines are never updated automatically in gate mode — you must explicitly approve new baselines, following the pytest-regressions "fail with diff, approve to update" pattern.
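The drift detection Check G performs against a baseline can be sketched as a set comparison over item fields. This is an illustrative reduction (it omits value-drift detection, and the function name is invented):

```python
def diff_against_baseline(baseline_items, current_items):
    """Compare current output against a pinned golden baseline (Check G):
    report added fields, removed fields, and item-count drift."""
    def field_set(items):
        fields = set()
        for item in items:
            fields.update(item.keys())
        return fields

    base_fields = field_set(baseline_items)
    cur_fields = field_set(current_items)
    return {
        "added_fields": sorted(cur_fields - base_fields),
        "removed_fields": sorted(base_fields - cur_fields),
        "count_drift": len(current_items) - len(baseline_items),
    }
```

In the "fail with diff, approve to update" pattern, a non-empty diff fails the gate; approving a baseline simply re-pins the manifest entry to the new snapshot so the next diff is computed against it.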

Output and CI/CD integration

The Release Gate outputs a structured REPORT.json to the run's default key-value store, making it consumable by GitHub Actions workflows. The report includes per-target, per-test-case, per-check results with evidence strings, plus optional artifacts like sample items, log tails, and golden diffs. The same report is also pushed to the dataset for viewing in the Apify Console.

For GitHub Actions integration, trigger the ApifyForge Release Gate after pushing an actor build. The gate starts a test run of the exact build you pushed, validates all checks, and the GitHub Actions step fails if the gate fails. The REPORT.json can be fetched from the run's key-value store and written to GitHub's job summary for PR-level visibility.
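A CI step consuming the report reduces to walking the per-check results and exiting non-zero on any ERROR-severity failure. The field names below are a hypothetical sketch, not the actual REPORT.json schema; consult the real report for its layout:

```python
def gate_passed(report):
    """Pass only if no ERROR-severity check failed anywhere in the report.
    The nested field names here are illustrative, not the real schema."""
    for target in report.get("targets", []):
        for case in target.get("testCases", []):
            for check in case.get("checks", []):
                if check.get("severity") == "ERROR" and not check.get("passed"):
                    return False
    return True
```

A GitHub Actions step would fetch REPORT.json from the run's key-value store, call a function like this on the parsed JSON, write the summary to the job summary, and `exit 1` when the gate failed.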

Pricing

The ApifyForge Release Gate costs $0.42 per target actor checked. This covers the gate logic only — you also pay standard PPE for the target actor's test run. A typical development cycle using the Schema Validator ($0.35), Test Runner ($0.35), and Release Gate ($0.42) costs about $1.12 total. Apify's free tier includes $5/month in credits, which covers approximately 11 full gate runs.
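The arithmetic behind these figures, using the prices stated above (the constant names are just for this sketch):

```python
GATE_COST = 0.42          # Release Gate fee per target actor (USD)
FREE_TIER_CREDITS = 5.00  # Apify free-tier monthly credits (USD)

def gate_fee(n_actors):
    """Gate fee for a run; excludes each target actor's own PPE cost."""
    return round(n_actors * GATE_COST, 2)

full_runs_on_free_tier = int(FREE_TIER_CREDITS // GATE_COST)
```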

Frequently asked questions

How does the Release Gate differ from the Apify Store's automated tests?

The Apify Store runs 4 checks (input validation, run success, non-empty dataset, duration under 300s) on your actors periodically. The ApifyForge Release Gate replicates those same 4 checks and adds 5 more: schema conformance, field completeness, golden baseline comparison, log anomaly detection, and performance regression tracking. The Store checks reactively — after your actor is already published. The Release Gate checks proactively — before you deploy.

Can I use the Release Gate to prevent maintenance flags?

Yes. The Release Gate's Store Default profile runs the exact same 4 checks the Store uses to decide whether to flag an actor for maintenance. If your actor passes the Release Gate in Store Default mode, it will pass the Store's automated tests. Running the gate before every deploy is the most reliable way to prevent maintenance flags, which reduce your actor's Store search ranking and visibility.

How do golden baselines work?

A golden baseline is a snapshot of your actor's "known good" output — a specific dataset captured at a point when the output was correct. On subsequent gate runs, Check G compares the current output against this baseline and flags differences: new fields added, fields removed, item count changes, or value drift. You approve new baselines explicitly using Approve Baseline mode after verifying the changes are intentional.

What happens if a check fails in gate mode?

The gate run completes all checks but reports an overall FAILED status. The structured REPORT.json lists every check result with evidence strings explaining exactly what failed and why. In a GitHub Actions pipeline, the step exits with a non-zero code, blocking the deploy. You fix the issue, push again, and re-run the gate.

How long does a gate run take?

A single-actor gate run typically completes in 1-3 minutes, depending on how long the target actor takes to run. Portfolio runs of 50 actors with concurrency of 5 complete in under 10 minutes. The Release Gate itself adds minimal overhead — most of the time is spent waiting for the target actor's test run to finish.

Can I run the Release Gate locally?

The Release Gate runs as an Apify actor on your account, so it requires an active Apify API token and internet access. It cannot run fully offline. However, for local pre-push checks, the ApifyForge Schema Validator and Test Runner both support local execution at zero cost.

Last updated: March 28, 2026

Related term

Maintenance Flag

The Apify Maintenance Flag (UNDER_MAINTENANCE) is a warning state applied to actors that fail Apify's automated quality checks for 3 consecutive days.
