What are Regression Tests?
Regression tests are automated test suites that run before every publish to verify that new code changes have not broken existing functionality in your Apify actors. They are your safety net against deploying breaking changes that cause failures, empty outputs, or maintenance flags on the Store.

The concept is simple but powerful. For each actor, you define multiple test cases that cover different input combinations, edge cases, and expected behaviors. When you push new code, the regression suite runs every test case and reports any failures. If a test that was passing before now fails, you have a regression: something in your new code broke previously working functionality. The suite catches this before the broken code reaches production and your users.

Here are the types of regressions these tests catch that simple smoke tests miss:

- Broken pagination: your actor used to return 100 results but now returns only 10 because a pagination handler broke.
- Changed field names: a refactor renamed 'productPrice' to 'price', but your schema and downstream integrations expected the old name.
- New anti-bot measures: the target website added Cloudflare protection and your actor no longer gets past the landing page.
- Rate limit changes: the target API reduced its rate limit and your actor now gets throttled, returning partial results.
- Dependency updates: a new version of Cheerio or Playwright changed an API method signature, causing silent failures.

To set up regression tests, define your test cases in the ApifyForge Test Runner with specific inputs and expected output assertions. Then integrate the regression suite with your Git workflow by adding a pre-push hook that runs all test cases before allowing a push to proceed. If any test fails, the push is blocked and you see exactly which test case failed and why. This prevents broken code from ever reaching your Apify builds.
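ApifyForge's actual test-case format isn't documented here, so the following is a hypothetical sketch of the idea: each test case pairs an actor input with an assertion over the resulting dataset items, and the suite exits nonzero on any failure so it can be wired into a git pre-push hook. The `runActor` stub stands in for a real actor run (which in practice would call the Apify API and download the dataset); all names below are illustrative.

```javascript
// Hypothetical sketch: ApifyForge's real test-case format may differ.
// Each case pairs an actor input with an assertion over the dataset items.
const testCases = [
  {
    name: "default input returns a full first page",
    input: { startUrl: "https://example.com/products", maxItems: 100 },
    assert: (items) => {
      if (items.length !== 100) throw new Error(`expected 100 items, got ${items.length}`);
      if (items.some((it) => typeof it.price !== "number")) {
        throw new Error("every item needs a numeric 'price' field");
      }
    },
  },
  {
    name: "empty category yields an empty dataset, not a crash",
    input: { startUrl: "https://example.com/empty-category" },
    assert: (items) => {
      if (items.length !== 0) throw new Error(`expected 0 items, got ${items.length}`);
    },
  },
];

// Stub standing in for a real actor run; in practice this would start the
// actor (e.g. via the Apify API) and fetch its dataset items.
async function runActor(input) {
  const count = input.startUrl.includes("empty") ? 0 : (input.maxItems ?? 0);
  return Array.from({ length: count }, (_, i) => ({ price: i + 0.99 }));
}

// Runs every case, reports PASS/FAIL per case, and returns the failure count.
async function runSuite() {
  let failures = 0;
  for (const tc of testCases) {
    try {
      tc.assert(await runActor(tc.input));
      console.log(`PASS ${tc.name}`);
    } catch (err) {
      failures += 1;
      console.error(`FAIL ${tc.name}: ${err.message}`);
    }
  }
  // A nonzero exit code is what lets a git pre-push hook block the push.
  process.exitCode = failures === 0 ? 0 : 1;
  return failures;
}

runSuite();
```

A pre-push hook would then simply run this script (e.g. `node regression-suite.js`); git aborts the push whenever the hook exits with a nonzero status.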
The ApifyForge Test Runner stores your regression test history, so you can track when specific tests started failing, correlate failures with code changes, and identify patterns over time. This historical data is invaluable for debugging intermittent failures. For related testing tools, see the questions about the Test Runner, Schema Validator, and Cloud Staging. Visit apifyforge.com/tools/regression-tests for setup guides and best practices.
Related questions
What is the Schema Validator? The Schema Validator checks your actor's actual dataset output against the schema you declared in your dataset_schema.js...
What is the Test Runner? The Test Runner is a testing tool that runs your Apify actor with predefined test inputs and automatically validates the...
What is Cloud Staging? Cloud Staging runs your actor in Apify's actual production environment before you make it public on the Store, catching...
How do I validate my actor's output schema? Use the ApifyForge Schema Validator to compare your actor's actual dataset output against the schema declared in your da...