CI/CD pre-release validation for Apify actors
ApifyForge Release Gate is a pre-deploy quality gate that runs 9 automated checks on your Apify actors before every deploy — 4 Store-equivalent hard gates plus 5 extended data quality checks. Prevents maintenance flags, validates golden baselines, detects log anomalies across 13 known-bad patterns, and catches performance regressions — all for $0.42 per actor with structured JSON output for GitHub Actions integration.
Apify's Store test runs after you publish. If it fails, your actor gets a maintenance flag visible to all users. ApifyForge Release Gate runs the same checks before you publish, plus 5 additional quality checks. Prevention, not damage control.
Parses target actor input schema, builds effective input from prefill and default values, and verifies all required fields are satisfied — exactly how the Apify Store automated test works.
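A minimal sketch of this effective-input check, assuming a simplified input schema shape of `{"properties": {...}, "required": [...]}` where properties may carry `prefill` and `default` values (as Apify input schemas do). Function names here are illustrative, not the actual implementation:

```python
# Sketch of the effective-input check (Check A). Assumes a simplified
# schema shape; helper names are illustrative.

def build_effective_input(schema: dict) -> dict:
    """Merge prefill and default values into one effective input."""
    effective = {}
    for name, prop in schema.get("properties", {}).items():
        if "prefill" in prop:
            effective[name] = prop["prefill"]
        elif "default" in prop:
            effective[name] = prop["default"]
    return effective

def missing_required_fields(schema: dict) -> list[str]:
    """Return required fields not satisfied by any prefill/default value."""
    effective = build_effective_input(schema)
    return [f for f in schema.get("required", []) if f not in effective]
```

If `missing_required_fields` returns anything, the actor would fail the Store's automated test the same way.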
Starts the target actor via API, polls the run until it reaches a terminal state, and enforces a SUCCEEDED status with a non-empty default dataset. Catches crashes, timeouts, and empty outputs before publishing.
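The poll-and-verify loop can be sketched as below. The status and item-count callables are injected so the sketch stays API-agnostic; in practice they would wrap calls to the Apify run and dataset endpoints. The terminal status names match Apify's documented run statuses:

```python
import time

# Sketch of Checks B and C: poll a run until it reaches a terminal status,
# then require SUCCEEDED plus a non-empty default dataset. get_status and
# get_item_count are injected callables (assumption for illustration).

TERMINAL = {"SUCCEEDED", "FAILED", "ABORTED", "TIMED-OUT"}

def wait_and_check(get_status, get_item_count, poll_seconds=5, max_polls=120):
    for _ in range(max_polls):
        status = get_status()
        if status in TERMINAL:
            break
        time.sleep(poll_seconds)
    else:
        return False, "run did not reach a terminal state"
    if status != "SUCCEEDED":
        return False, f"terminal status was {status}"
    if get_item_count() == 0:
        return False, "default dataset is empty"
    return True, "run succeeded with non-empty dataset"
```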
Enforces the Apify Store's 300-second completion window using run timestamps. Configurable per test case for actors with known longer runtimes.
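The duration check reduces to simple timestamp arithmetic on the ISO 8601 `startedAt`/`finishedAt` values the Apify API returns for a run. A sketch:

```python
from datetime import datetime

# Sketch of Check D: compute run duration from ISO 8601 run timestamps
# and enforce the (configurable) completion window.

def duration_within_limit(started_at: str, finished_at: str,
                          max_seconds: int = 300) -> bool:
    start = datetime.fromisoformat(started_at.replace("Z", "+00:00"))
    end = datetime.fromisoformat(finished_at.replace("Z", "+00:00"))
    return (end - start).total_seconds() <= max_seconds
```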
Validates output items against expected field types and enforces per-field null/empty rate thresholds. Catches silent data quality regressions.
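The completeness side of this check can be sketched as a per-field null/empty rate computed over the dataset items; the threshold map here is an assumption for illustration, not the actual configuration format:

```python
# Sketch of Check F: flag fields whose null/empty rate exceeds its
# allowed threshold. Threshold map shape is illustrative.

def completeness_violations(items: list[dict],
                            max_empty_rate: dict[str, float]) -> dict[str, float]:
    """Return {field: observed_empty_rate} for fields over threshold."""
    violations = {}
    total = len(items)
    if total == 0:
        return violations
    for field, threshold in max_empty_rate.items():
        empty = sum(1 for item in items if item.get(field) in (None, "", [], {}))
        rate = empty / total
        if rate > threshold:
            violations[field] = rate
    return violations
```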
Compares current output against a pinned golden dataset. Detects schema drift (added/removed fields), count drift, and sample value changes with configurable tolerances.
Scans run logs for 13 known-bad patterns (CAPTCHA, 429, TypeError, etc.) and warns if duration exceeds 2x the baseline. Supplemental checks that catch issues other checks miss.
There are several ways to validate Apify actors before deploying. Each trades off check depth, timing, and automation.
| Method | Checks | Timing | Cost |
|---|---|---|---|
| ApifyForge Release Gate | 9 checks (4 hard + 5 extended) | Before publish (prevention) | $0.42/actor + compute |
| Apify Store automated test | 4 checks (input, status, dataset, duration) | After publish (reactive) | Free (risks maintenance flag) |
| Manual testing | Ad hoc — depends on developer | Before publish (manual) | Compute cost + 15-30 min time |
| No validation | None | N/A | Free (risks maintenance flag + user impact) |
```json
{
  "type": "gate-report",
  "summary": {
    "totalTargets": 1,
    "passedTargets": 1,
    "failedTargets": 0,
    "errorChecks": 0,
    "warnChecks": 0,
    "overallPassed": true
  },
  "targets": [{
    "actorIdOrName": "ryanclinton/usgs-earthquake-search",
    "passed": true,
    "testCases": [{
      "id": "default-smoke",
      "run": {
        "status": "SUCCEEDED",
        "durationSeconds": 16,
        "datasetItemCount": 100
      },
      "checks": [
        { "id": "A-effective-input", "passed": true },
        { "id": "B-run-succeeded", "passed": true },
        { "id": "C-non-empty-dataset", "passed": true },
        { "id": "D-duration", "passed": true, "evidence": "16s (max: 300s)" }
      ]
    }]
  }]
}
```

Specify target actors (1 to 100+) with optional golden baselines and custom thresholds
ApifyForge Release Gate runs all 9 checks against each actor with configurable concurrency
Get a structured REPORT.json with per-actor results — ready for CI/CD integration
Several approaches exist for validating actors before deploying, from reactive Store tests to manual checking.
Apify's built-in quality check runs 4 checks after publishing. If your actor fails, it gets a maintenance flag visible to all Store users. No golden baselines, no log anomaly detection, no field completeness thresholds.
Best for: understanding Store requirements (but run ApifyForge Release Gate first to prevent failures).
Run the actor in the Apify Console and manually verify output. Ad hoc coverage depends on developer thoroughness. No structured report, no baseline comparison, no log scanning. Takes 15-30 minutes per actor.
Best for: quick sanity checks on a single actor when you know exactly what to look for.
Write a custom script that runs actors, checks output, and gates deployments. Fully customizable but requires 4-8 hours of development and ongoing maintenance. No built-in golden baselines or log anomaly detection.
Best for: teams with unique validation requirements that go beyond standard quality checks.
Tests a single actor in production with schema validation and custom assertions. Simpler than Release Gate (no multi-actor gating, no golden baselines, no log scanning) but effective for single-actor validation. $0.50 per run.
Best for: single-actor pre-publish validation without CI/CD integration needs.
9 automated checks including 4 Store-equivalent hard gates plus golden baseline comparison, field completeness, log anomaly detection, and performance regression. Gates 1 to 100+ actors per run. $0.42 per actor with structured JSON output for GitHub Actions.
Best for: teams deploying multiple actors who need comprehensive, automated pre-release validation.
Every release gate run executes on your own Apify account at $0.42 per actor gated plus each actor's compute cost. The ApifyForge platform itself is free — no subscription, no premium tier. Preventing a single maintenance flag is worth far more than the gate cost.
ApifyForge Release Gate runs 9 checks in two tiers.

Hard gates (Checks A-D):
- A) Effective input validation against the input schema
- B) Run status verification (must be SUCCEEDED)
- C) Non-empty dataset check
- D) Duration under 5 minutes (configurable)

Extended checks (Checks E-I):
- E) Schema conformance against declared types
- F) Field completeness with per-field null/empty rate thresholds
- G) Golden baseline comparison for schema drift and count drift
- H) Log anomaly detection across 13 known-bad patterns (CAPTCHA, 429, TypeError, etc.)
- I) Performance regression warning if duration exceeds 2x baseline
Each ApifyForge Release Gate run costs $0.42 per actor checked, charged as a pay-per-event (PPE) fee on your own Apify account, plus the compute cost of running the target actor. You can gate 1 to 100+ actors per run with configurable concurrency. Apify's free tier includes $5/month in credits.
Apify's built-in Store test runs after you publish and checks 4 things: input schema, run status, dataset output, and duration under 5 minutes. If your actor fails, it gets a maintenance flag visible to all Store users. ApifyForge Release Gate runs the same 4 checks before you publish, plus 5 additional data quality checks (schema conformance, field completeness, golden baselines, log anomalies, performance regression). Prevention instead of damage control.
Check G compares your actor's current output against a pinned 'golden' dataset that represents known-good output. It detects schema drift (added or removed fields), count drift (significantly more or fewer results), and sample value changes. Configurable tolerances let you set how much drift is acceptable. This catches subtle data quality regressions that pass all other checks.
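The schema-drift and count-drift parts of Check G can be sketched as follows; value-level sampling is omitted, and the tolerance semantics here are an assumption for illustration:

```python
# Sketch of Check G: compare current output against a pinned golden
# sample. Detects added/removed fields and count drift beyond tolerance.

def golden_drift(current: list[dict], golden: list[dict],
                 count_tolerance: float = 0.2) -> list[str]:
    findings = []
    cur_fields = set().union(*(i.keys() for i in current)) if current else set()
    gold_fields = set().union(*(i.keys() for i in golden)) if golden else set()
    for f in sorted(gold_fields - cur_fields):
        findings.append(f"field removed: {f}")
    for f in sorted(cur_fields - gold_fields):
        findings.append(f"field added: {f}")
    if golden and abs(len(current) - len(golden)) / len(golden) > count_tolerance:
        findings.append(f"count drift: {len(golden)} -> {len(current)}")
    return findings
```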
Check H scans run logs for 13 known-bad patterns: CAPTCHA challenges, HTTP 429 rate limits, TypeError exceptions, UnhandledPromiseRejection, ECONNREFUSED, ETIMEDOUT, proxy errors, out-of-memory warnings, navigation timeout, ERR_NAME_NOT_RESOLVED, SSL certificate errors, socket hang up, and 'blocked' messages. Each detection is a supplemental warning, not a hard failure.
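A log scan of this kind reduces to pattern matching over the run log text. The regexes below are assumptions that mirror the pattern list above, not the gate's actual expressions:

```python
import re

# Sketch of Check H: scan log text for known-bad patterns.
# Each hit is a supplemental warning, not a hard failure.

KNOWN_BAD = [
    r"captcha", r"\b429\b", r"TypeError", r"UnhandledPromiseRejection",
    r"ECONNREFUSED", r"ETIMEDOUT", r"proxy error", r"out of memory",
    r"navigation timeout", r"ERR_NAME_NOT_RESOLVED", r"SSL certificate",
    r"socket hang up", r"\bblocked\b",
]

def scan_log(log_text: str) -> list[str]:
    """Return the known-bad patterns found in the log."""
    return [p for p in KNOWN_BAD if re.search(p, log_text, re.IGNORECASE)]
```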
Yes. ApifyForge Release Gate outputs a structured REPORT.json with a clear overallPassed boolean. In GitHub Actions, run the release gate actor via the Apify API, download the report, and fail the workflow if overallPassed is false. This prevents broken actors from being deployed through your CI/CD pipeline.
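The gating step itself can be sketched as a small script that parses the downloaded REPORT.json and exits nonzero when `overallPassed` is false; field names follow the report format shown above:

```python
import json
import sys

# Sketch of a CI gating step: fail the workflow when the release gate
# report says overallPassed is false.

def gate(report: dict) -> int:
    if report["summary"]["overallPassed"]:
        print("release gate passed")
        return 0
    failed = [t["actorIdOrName"] for t in report["targets"] if not t["passed"]]
    print(f"release gate FAILED for: {', '.join(failed)}", file=sys.stderr)
    return 1

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        sys.exit(gate(json.load(f)))
```

Invoked as `python gate.py REPORT.json` in a workflow step, a nonzero exit fails the job and blocks the deploy.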
Yes. ApifyForge Release Gate accepts an array of target actors and runs them with configurable concurrency. A single run can gate your entire actor fleet — 1 to 100+ actors. The output report includes per-actor results so you can identify exactly which actors passed and which failed.
Extended checks (E-I) produce warnings, not hard failures. The overallPassed result is determined by hard gates (A-D). This means your pipeline won't break due to a log anomaly warning or a minor golden baseline drift. You can configure your CI/CD pipeline to treat warnings as failures if you want stricter gating.
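Stricter gating is then a one-line policy on the report's summary fields (shown in the sample report above): require zero warnings in addition to the hard-gate result. A sketch:

```python
# Sketch of stricter gating: treat any extended-check warning (E-I)
# as a failure alongside the hard-gate overallPassed result.

def strict_pass(summary: dict) -> bool:
    return summary["overallPassed"] and summary["warnChecks"] == 0
```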