ApifyForge Backend
8 tools across the Apify actor lifecycle
Apify actors don't fail loudly. They fail silently — across input, pipelines, deployments, and output. The ApifyForge backend fleet is 8 composable actors, each covering one named phase of the lifecycle. You do not run all of them together. You use the one that matches the moment.
Shared decision contract across the gate actors — decision ∈ {act_now, monitor, ignore}. Names the signal to act on, not the action to take. Designed for branching in CI pipelines, agent tool calls, and automation — no prose parsing.
The fleet, by lifecycle phase
Each actor maps to a specific moment. Pick by the problem you're solving, not by browsing the catalog.
Pre-run (input)
Input Guard
Stops invalid runs before they start
Before you invoke a target actor, is the payload correct?
Validates your payload against the target actor's declared input_schema.json. Catches unknown-field typos, schema drift, and silent default fallbacks that generic JSON Schema validators miss.
Branch on decision → act_now / monitor / ignore
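A CI step that consumes Input Guard's output might branch like this. A minimal sketch in Python: the `decision` enum comes from the contract above, but the result shape and the action names are assumptions.

```python
def route_input_guard(result: dict) -> str:
    """Branch on Input Guard's decision enum (result shape is illustrative).

    act_now -> abort before invoking the target actor
    monitor -> proceed, but surface the warning
    ignore  -> proceed as normal
    """
    decision = result.get("decision", "ignore")
    if decision == "act_now":
        return "abort"
    if decision == "monitor":
        return "proceed_with_warning"
    return "proceed"
```

Because the output is an enum rather than prose, the branch is a dictionary lookup or an if-chain, with no string parsing.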
Pre-run (pipeline)
Pipeline Preflight
Validates multi-actor pipelines before execution
Before you run a multi-actor chain, does it compose?
Checks that every stage in a pipeline composes correctly — input schemas, dataset schemas, field mappings, reachability. Emits a decisionPosture so orchestrators can branch on rollout stance.
Branch on decisionPosture → ship_pipeline / canary_recommended / monitor_only / no_call
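An orchestrator can map the four decisionPosture values to rollout actions. A sketch; the posture names come from the contract above, while the action names on the right are assumptions you would replace with your own orchestration steps.

```python
def rollout_stance(posture: str) -> str:
    """Map Pipeline Preflight's decisionPosture to an orchestrator action.

    Unknown or missing postures fall back to the most conservative action.
    """
    return {
        "ship_pipeline": "run_full_pipeline",
        "canary_recommended": "run_canary_stage_first",
        "monitor_only": "run_with_extra_logging",
        "no_call": "hold_and_rerun_preflight",
    }.get(posture, "hold_and_rerun_preflight")
```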
Build-time
Deploy Guard
Blocks bad builds before they deploy
Before you deploy a new build, is it safe to ship?
Runs automated test suites against a candidate build, compares to a stored baseline for regression detection, and returns a release decision CI can gate on — without parsing prose.
Branch on decision → act_now / monitor / ignore
Runtime choice
A/B Tester
Choose the right actor with real data
Between two viable actors, which should you ship?
Runs two Apify actors on identical input N times and returns a production decision. Median + p90 + stability scoring across pairwise matchups — not single-run noise.
Branch on decisionPosture → switch_now / canary_recommended / monitor_only / no_call
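The median + p90 aggregation is the point: a single run is noise, but order statistics over N runs are stable. A sketch of that kind of aggregate, using the nearest-rank percentile; this is illustrative, not the actor's exact scoring method.

```python
import math
import statistics

def summarize_runs(durations_ms: list[float]) -> dict:
    """Median and p90 (nearest-rank) across N runs of the same input."""
    ordered = sorted(durations_ms)
    # Nearest-rank p90: the ceil(0.9 * N)-th value, 1-indexed.
    p90_index = max(0, math.ceil(0.9 * len(ordered)) - 1)
    return {
        "median": statistics.median(ordered),
        "p90": ordered[p90_index],
    }
```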
Post-run
Output Guard
Detects silent data failures after a run
After a run succeeds, is the output actually correct?
Validates dataset output after a target actor finishes. Catches null rate drift, schema regression, and structural degradation that run status SUCCEEDED hides.
Branch on decision → act_now / monitor / ignore
Fleet audit
Quality Monitor
Audits every actor in your account in one run
Across every actor you own, what needs fixing?
8-dimension quality scoring plus a ranked fixSequence[] per actor. The qualityGates booleans (storeReady, agentReady, monetizationReady, schemaReady) are branch-ready for CI and agents.
Branch on qualityGates + fixSequence → booleans + ranked repairs
Fleet decision
Fleet Health Report
Tells you the single next action to grow revenue
Across every actor you own, what grows revenue next?
Portfolio-level decision engine. Measures real per-run profit, detects revenue cliffs, benchmarks pricing against your own cohorts, and returns a single nextBestAction plus a ranked actionPlan.
Branch on nextBestAction → ranked action + monthly USD impact
Pre-publish
Actor Risk Triage
Flags hidden risk before you publish
Before you ship an actor publicly, is there hidden risk?
Scans actor metadata for PII, GDPR, CCPA, ToS, auth-wall, and documentation risk before you publish. Returns a deterministic decision plus a remediation pack telling you exactly what to fix.
Branch on decision → act_now / monitor / ignore
Which one do I need?
| Your problem | Use this |
|---|---|
| My actor fails as soon as I start it | Input Guard |
| My multi-actor pipeline breaks mid-run | Pipeline Preflight |
| My new build keeps breaking production | Deploy Guard |
| I need to pick between two actors | A/B Tester |
| My run succeeded but the data is wrong | Output Guard |
| I have 10+ actors and don't know what to fix first | Quality Monitor |
| I want to grow revenue but don't know where to start | Fleet Health Report |
| I'm about to publish an actor — is it safe? | Actor Risk Triage (Compliance Scanner) |
The shared decision contract
Gate actors emit one routable enum — decision — with three values. It names the signal to act on, not the action to take. Same contract across Input Guard, Output Guard, Deploy Guard, and Compliance Scanner. An LLM agent that learns the contract from any of them branches correctly across all four.
`act_now`: a signal strong enough to act on; something's wrong.
`monitor`: directional or warning-level; watch it, don't act yet.
`ignore`: no signal to act on; proceed as normal.
Rollout-stance actors (A/B Tester, Pipeline Preflight) use decisionPosture for partial-rollout decisions. Fleet actors (Quality Monitor, Fleet Health Report) emit prescriptive fields — fixSequence[], nextBestAction — because they prescribe rather than gate.
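Because the four gate actors share one contract, a single router covers all of them. A sketch mapping the decision enum to CI exit codes; the mapping itself (monitor passes with a warning) is a policy choice you'd set per team, not part of the contract.

```python
EXIT_BY_DECISION = {
    "act_now": 1,   # fail the CI step: there's a signal to act on
    "monitor": 0,   # pass, but surface as a warning
    "ignore": 0,    # pass silently
}

def gate_exit_code(decision: str) -> int:
    """One router for Input Guard, Output Guard, Deploy Guard,
    and Compliance Scanner, since they emit the same enum."""
    if decision not in EXIT_BY_DECISION:
        raise ValueError(f"unknown decision: {decision!r}")
    return EXIT_BY_DECISION[decision]
```

This is the practical payoff of the shared contract: an agent or pipeline that learns it from one gate actor branches correctly on the other three with no new code.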
Frequently asked questions
Do I need to run all 8 actors every time I use my target actor?
No. These are decoupled tools for different moments — not a linear pipeline. You run Input Guard before invoking an actor with dynamic input. You run Deploy Guard in CI when pushing a new build. You run Output Guard on production data after a run. You run Quality Monitor weekly, not per request. Each actor maps to a specific phase and triggers independently.
What's the shared decision contract?
Gate actors (Input Guard, Output Guard, Deploy Guard, Compliance Scanner) emit `decision ∈ {act_now, monitor, ignore}`. `act_now` means 'there's a signal you should act on', `monitor` means 'directional — watch it', `ignore` means 'nothing to act on, proceed'. Rollout-stance actors (A/B Tester, Pipeline Preflight) emit `decisionPosture ∈ {switch_now/ship_pipeline, canary_recommended, monitor_only, no_call}` for partial-rollout decisions. Fleet actors (Quality Monitor, Fleet Health Report) emit prescriptive fields (fixSequence, nextBestAction) — they don't gate, they prescribe.
Which actor should I start with?
Depends on your pain. Your actor runs but data looks wrong? Start with Output Guard. Runs fail on valid-looking input? Start with Input Guard. You're shipping builds and customers report regressions? Start with Deploy Guard. You own 10+ actors and don't know what to fix first? Start with Quality Monitor and work the fixSequence[0]. Every actor solves one named problem — pick the one whose problem you recognize.
How are these different from Apify's built-in checks?
Apify checks whether a run exited cleanly (SUCCEEDED vs FAILED). These tools check everything underneath that — whether the input was semantically correct, whether the output matches its declared contract, whether regressions happened, whether quality is drifting. Most silent failures are SUCCEEDED-status runs with bad data underneath. Apify's run status doesn't see those; the backend fleet does.
Do they all cost money?
Yes, via Apify's pay-per-event (PPE) model on your own account. Prices range from $0.15/validation (Input Guard, Quality Monitor per-actor) to $4.00/check (Output Guard). A typical monthly spend for a developer running validation in CI and weekly fleet audits is $10–$50. The ApifyForge platform itself is free; the only cost is the Apify-native PPE on your account.
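The $10–$50 range falls out of simple arithmetic. A back-of-envelope estimator using the $0.15/validation price quoted above; the usage numbers in the example are assumptions you would replace with your own.

```python
def monthly_spend(ci_validations_per_day: int,
                  weekly_audited_actors: int,
                  validation_price: float = 0.15,
                  audit_price_per_actor: float = 0.15) -> float:
    """Rough monthly cost: CI validations daily plus a weekly fleet audit."""
    ci_cost = ci_validations_per_day * 30 * validation_price
    audit_cost = weekly_audited_actors * 4 * audit_price_per_actor
    return round(ci_cost + audit_cost, 2)

# e.g. 5 CI validations a day plus a weekly audit of 10 actors:
# 5 * 30 * $0.15 + 10 * 4 * $0.15 = $28.50/month
```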