Know what changed since last time
ApifyForge Regression Suite is a build verification tool that runs your Apify actor test cases and compares results against previous cached runs. It classifies every test into 6 states — pass, fail, regression, resolved, new_pass, and new_fail — using 6 assertion types to detect regressions before they reach production, at $0.35 per suite.
Regression testing is a core practice in Apify actor development. Web scrapers break when target sites change HTML structure — ApifyForge Regression Suite detects these breaks automatically by comparing the current run against the last known-good baseline, reducing mean time to detection from days of manual checking to under 60 seconds.
Every test is classified into one of 6 states: pass, fail, regression (was passing, now fails), resolved (was failing, now passes), new_pass (first run, passing), and new_fail (first run, failing). Regressions are the critical signal for blocking deploys.
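The mapping from (previous status, current status) to the six states can be sketched as a small function. This is a hypothetical reimplementation for illustration, not ApifyForge's actual code:

```python
def classify(previous, current):
    """Map a test's previous and current outcomes to one of the six
    report states (illustrative sketch of the logic described above)."""
    if previous is None:  # first run: no cached baseline to compare against
        return "new_pass" if current == "pass" else "new_fail"
    if previous == "pass" and current == "fail":
        return "regression"  # was passing, now fails: the deploy-blocking signal
    if previous == "fail" and current == "pass":
        return "resolved"    # was failing, now passes
    return current           # stable pass or known failure


print(classify("pass", "fail"))  # regression
print(classify("fail", "pass"))  # resolved
print(classify(None, "pass"))    # new_pass
print(classify("pass", "pass"))  # pass
```

Note that only the `regression` state requires action in a CI/CD gate; stable failures are already known and do not indicate a new break.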
Previous test results are automatically loaded from the last cached run in the ApifyForge dashboard. No manual baseline management, no spreadsheets, no test result files to maintain across builds.
Uses the same assertion engine as ApifyForge Test Runner: minResults (minimum output count), maxResults (maximum output count), requiredFields (field existence), fieldTypes (type checking), maxDuration (run time limit), and noEmptyFields (no nulls or empty strings).
Regressions are highlighted at the top of the JSON report for immediate attention. Stable passes and known failures are listed separately — the report is structured so CI/CD pipelines can gate on 'regressions > 0' without parsing individual tests.
Each run is auto-stamped with the current date. Over multiple runs, you can track regression patterns — for example, identifying that an actor breaks every 2 months when the target site deploys new HTML structure.
Trigger ApifyForge Regression Suite via the Apify API, parse the JSON report, and block deploys when regressions are detected. Compatible with GitHub Actions, GitLab CI, Jenkins, CircleCI, and any tool that calls REST APIs.
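A minimal gating step might look like the sketch below: parse the JSON report and return a nonzero exit code when regressions are present. The report shape follows the example elsewhere on this page; how you fetch it (Apify API client, curl) depends on your pipeline, and the inline sample report here is illustrative:

```python
import json
import sys

def gate_on_regressions(report):
    """Return a CI exit code: nonzero when the suite found regressions."""
    regressions = report.get("regressions", 0)
    if regressions > 0:
        names = [t["name"] for t in report.get("details", [])
                 if t["status"] == "regression"]
        print(f"BLOCKED: {regressions} regression(s): {', '.join(names)}")
        return 1
    print(f"OK: {report.get('passed', 0)} passing, no regressions")
    return 0

# In CI you would load the report produced by the suite run, e.g.:
# report = json.load(open("regression-report.json"))
report = {"passed": 2, "regressions": 1,
          "details": [{"name": "Multiple domains", "status": "regression"}]}
exit_code = gate_on_regressions(report)
# sys.exit(exit_code)  # uncomment in a real pipeline step
```

Because the regression count is a top-level field, the gate never needs to inspect individual test entries unless it wants to print their names.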
There are 4 common approaches to detecting regressions in Apify actor output. Each has trade-offs in automation level, detection granularity, and setup cost.
| Method | Detection speed | What it catches | Automation | Cost |
|---|---|---|---|---|
| ApifyForge Regression Suite | Under 60 seconds | 6 states: regression, resolved, pass, fail, new_pass, new_fail across 6 assertion types | Fully automated with CI/CD integration | $0.35/suite |
| Manual Apify Console comparison | 15-45 minutes per actor | Visual differences between run outputs (no formal classification) | Fully manual | Free (time cost only) |
| Apify Monitoring Suite | Minutes to hours (scheduled checks) | Output count thresholds, run failures, data freshness | Automated after configuration | Included with Apify platform usage |
| Custom Jest/Vitest test suite | Seconds per run (hours to build) | Whatever you code — fully customizable assertions | Automated after 4-8 hours setup | Free (development time cost) |
No single testing method catches every type of regression — the most robust setups layer automated regression suites with monitoring alerts and periodic manual review.
```json
{
  "actorName": "ryanclinton/website-contact-scraper",
  "suiteVersion": "2026-03-18",
  "totalTests": 3,
  "passed": 2,
  "failed": 1,
  "regressions": 1,
  "resolved": 0,
  "details": [
    {
      "name": "Basic scan",
      "status": "pass",
      "previousStatus": "pass",
      "currentStatus": "pass"
    },
    {
      "name": "Multiple domains",
      "status": "regression",
      "previousStatus": "pass",
      "currentStatus": "fail",
      "assertions": [
        { "assertion": "minResults >= 2", "passed": false, "actual": 1 }
      ]
    },
    {
      "name": "Empty input handling",
      "status": "new_pass",
      "previousStatus": null,
      "currentStatus": "pass"
    }
  ]
}
```

1. Connect your Apify token and enter the actor ID with test cases.
2. ApifyForge Regression Suite runs all tests, loads previous cached results, and compares each test's status.
3. Get a regression report with 6-state classifications, assertion details, and a top-level regression count for CI/CD gating.
There are several approaches to detecting regressions in Apify actor output, from fully manual to fully automated. The right choice depends on portfolio size, deployment frequency, and team resources.
Open two consecutive runs in the Apify Console and compare output counts and field values visually. Effective for small changes but impractical for actors with 50+ output fields or test suites with 10+ test cases. Takes 15-45 minutes per actor per build.
Best for: one-off checks on a single actor before a critical deploy.
Apify's built-in monitoring checks output counts, run durations, and data freshness on a schedule. Detects quantity-level regressions (e.g., output dropped 80%) but does not perform assertion-level comparison or classify tests into regression/resolved states.
Best for: production monitoring of running actors, not pre-deploy build verification.
Write a custom Jest or Vitest suite that calls the Apify API, runs the actor, and asserts on output fields. Requires 4-8 hours of initial development, ongoing maintenance as actors evolve, and custom baseline management for regression detection.
Best for: teams with dedicated QA engineers who need highly customized assertion logic.
Build a GitHub Action that runs the actor, saves output to a file, and uses git diff to compare against the previous commit's output. Detects structural changes but produces noisy diffs for large datasets and does not classify changes as regressions vs. improvements.
Best for: teams already using GitHub for actor source code who want a lightweight smoke test.
Automated end-to-end regression detection: runs all test cases, loads previous cached results, classifies every test into 6 states, and produces a structured JSON report. No scripting or baseline management required. $0.35 per suite.
Best for: developers who maintain multiple actors and want fast, repeatable regression detection integrated into CI/CD.
Each approach has trade-offs in setup cost, detection granularity, and maintenance burden. The right choice depends on how many actors you maintain and how frequently you ship updates.
Every suite run executes on your own Apify account at the standard pay-per-event rate of $0.35 per suite. The ApifyForge platform itself is free — no subscription, no premium tier. The charge appears in your Apify console like any other actor run. Apify's free plan includes $5/month in credits, enough for approximately 14 regression suite runs per month.
A test runner tells you what is broken right now. ApifyForge Regression Suite tells you what broke since last time by comparing current results against previous cached results. It classifies every test into 6 states: pass, fail, regression (was passing, now fails), resolved (was failing, now passes), new_pass, and new_fail. This distinction is critical for CI/CD pipelines where you need to know whether a failure is pre-existing or newly introduced.
ApifyForge Regression Suite automatically loads your previous test results from the cached run in your ApifyForge dashboard. It runs the same test cases against the current actor build, then compares each test's status. If a test was 'pass' in the previous run and is now 'fail', it is classified as a 'regression'. If a test was 'fail' and is now 'pass', it is classified as 'resolved'. No manual tracking is required.
Each ApifyForge Regression Suite run costs $0.35, charged as a pay-per-event (PPE) fee on your own Apify account. ApifyForge has no platform fee or subscription. Apify's free tier includes $5/month in credits, enough for approximately 14 regression suite runs per month.
ApifyForge Regression Suite uses the same 6 assertion types as ApifyForge Test Runner: minResults (minimum output items), maxResults (maximum output items), requiredFields (fields that must exist), fieldTypes (expected type per field), maxDuration (maximum run time in seconds), and noEmptyFields (no null or empty string values). Each assertion produces a pass or fail verdict per test case.
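The six assertion types can be sketched as checks over a run's output items. The function and the `spec` shape below are an illustrative reimplementation, not ApifyForge's actual engine:

```python
def run_assertions(items, duration_s, spec):
    """Evaluate the six assertion types against a run's output.

    `items` is the list of output records, `duration_s` the run time in
    seconds, and `spec` a dict keyed by assertion type (assumed shape).
    Returns a per-assertion pass/fail verdict.
    """
    results = {}
    if "minResults" in spec:       # minimum output count
        results["minResults"] = len(items) >= spec["minResults"]
    if "maxResults" in spec:       # maximum output count
        results["maxResults"] = len(items) <= spec["maxResults"]
    if "requiredFields" in spec:   # field existence in every item
        results["requiredFields"] = all(
            f in item for item in items for f in spec["requiredFields"])
    if "fieldTypes" in spec:       # expected type per field
        results["fieldTypes"] = all(
            isinstance(item.get(f), t)
            for item in items for f, t in spec["fieldTypes"].items())
    if "maxDuration" in spec:      # run time limit in seconds
        results["maxDuration"] = duration_s <= spec["maxDuration"]
    if "noEmptyFields" in spec:    # no nulls or empty strings anywhere
        results["noEmptyFields"] = all(
            v is not None and v != ""
            for item in items for v in item.values())
    return results


items = [{"domain": "example.com", "emails": ["a@example.com"]}]
spec = {"minResults": 1, "requiredFields": ["domain", "emails"],
        "fieldTypes": {"domain": str}, "maxDuration": 120,
        "noEmptyFields": True}
print(run_assertions(items, duration_s=42, spec=spec))
```

A test case passes only when every assertion in its spec passes, which is what feeds the pass/fail status that the regression comparison then operates on.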
Yes. ApifyForge Regression Suite is designed for CI/CD integration. Trigger it via the Apify API, parse the JSON report, and block deploys when the regressions count is greater than 0. It works with GitHub Actions, GitLab CI, Jenkins, and any automation tool that can call a REST API and parse JSON. The structured output includes a top-level 'regressions' count for easy gating.
Run ApifyForge Regression Suite after every code change to the actor source. For actors that scrape websites, also run weekly even without code changes — the target site's HTML structure may change, causing regressions in your selectors. At $0.35 per run, weekly testing of 10 actors costs $3.50/week (about $14/month); a portfolio of three actors or fewer stays within Apify's free $5/month credits.
On the first run, ApifyForge Regression Suite has no cached history to compare against. Every test is classified as either 'new_pass' or 'new_fail' instead of 'pass' or 'fail'. The results are then cached as the baseline for future comparisons. Subsequent runs will detect regressions and resolved issues relative to this baseline.
No. Regression testing catches structural and quantitative changes in actor output — field presence, result counts, run duration, and type correctness. It does not validate data accuracy, content quality, or business logic. Manual review is still needed for verifying that scraped data is semantically correct and meets downstream requirements.