ApifyForge Backend

8 tools across the Apify actor lifecycle

Apify actors don't fail loudly. They fail silently — across input, pipelines, deployments, and output. The ApifyForge backend fleet is 8 composable actors that each solve one named phase of the lifecycle. You do not run all of them together. You use the one that matches the moment.

The gate actors share one decision contract: decision ∈ {act_now, monitor, ignore}. It names the signal to act on, not the action to take, and is designed for branching in CI pipelines, agent tool calls, and automation with no prose parsing.

The fleet, by lifecycle phase

Each actor maps to a specific moment. Pick by the problem you're solving, not by browsing the catalog.

Pre-run (input)

Input Guard

$0.15/validation

Stops invalid runs before they start

Before you invoke a target actor, is the payload correct?

Validates your payload against the target actor's declared input_schema.json. Catches unknown-field typos, schema drift, and silent default fallbacks that generic JSON Schema validators miss.

Branch on decision: act_now / monitor / ignore
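A minimal sketch of branching on Input Guard's result before invoking a target actor. The only field assumed here is the shared `decision` enum; the return labels ("abort", "proceed", etc.) are illustrative, not part of the contract.

```python
def route_input_guard(result: dict) -> str:
    """Map Input Guard's gate decision to a pipeline action."""
    decision = result.get("decision")
    if decision == "act_now":
        # Invalid payload detected: do not invoke the target actor.
        return "abort"
    if decision == "monitor":
        # Warning-level signal: invoke, but log for later review.
        return "proceed_with_logging"
    if decision == "ignore":
        # No signal: invoke as normal.
        return "proceed"
    raise ValueError(f"unknown decision: {decision!r}")

# Hypothetical gate output, as an orchestrator would receive it:
action = route_input_guard({"decision": "act_now"})
```

Because the enum is closed, an unknown value is treated as a hard error rather than silently passed through.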

Pre-run (pipeline)

Pipeline Preflight

$0.40/build

Validates multi-actor pipelines before execution

Before you run a multi-actor chain, does it compose?

Checks that every stage in a pipeline composes correctly — input schemas, dataset schemas, field mappings, reachability. Emits a decisionPosture so orchestrators can branch on rollout stance.

Branch on decisionPosture: ship_pipeline / canary_recommended / monitor_only / no_call

Build-time

Deploy Guard

$2.50/suite

Blocks bad builds before they deploy

Before you deploy a new build, is it safe to ship?

Runs automated test suites against a candidate build, compares to a stored baseline for regression detection, and returns a release decision CI can gate on — without parsing prose.

Branch on decision: act_now / monitor / ignore
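A sketch of the CI gate described above, assuming Deploy Guard's JSON output carries the shared `decision` field. The exit-code mapping is one reasonable policy, not a prescribed one:

```python
def ci_gate(decision: str) -> int:
    """Translate Deploy Guard's decision into a CI exit code."""
    if decision == "act_now":
        return 1  # regression detected against the baseline: fail the build
    # monitor and ignore both let the build through; a monitor result
    # could additionally emit a warning annotation in your CI system.
    return 0

# In a CI step you would parse the actor's JSON output and then:
# sys.exit(ci_gate(result["decision"]))
```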

Runtime choice

A/B Tester

$0.30/run

Choose the right actor with real data

Between two viable actors, which should you ship?

Runs two Apify actors on identical input N times and returns a production decision. Median + p90 + stability scoring across pairwise matchups — not single-run noise.

Branch on decisionPosture: switch_now / canary_recommended / monitor_only / no_call
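The "median + p90, not single-run noise" idea can be sketched as follows. This is a toy comparison over latency samples; the A/B Tester's actual stability scoring is not documented here, and the nearest-rank p90 is one simple convention chosen for illustration:

```python
import math
from statistics import median

def p90(samples):
    """Nearest-rank 90th percentile of a list of samples."""
    ordered = sorted(samples)
    return ordered[math.ceil(0.9 * len(ordered)) - 1]

def posture(incumbent, challenger):
    """Toy rollout stance from N latency samples per actor (lower is better)."""
    if median(challenger) < median(incumbent) and p90(challenger) < p90(incumbent):
        return "switch_now"          # better on both center and tail
    if median(challenger) < median(incumbent):
        return "canary_recommended"  # better median, but worse tail: try a slice
    return "no_call"                 # no measured advantage
```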

Post-run

Output Guard

$4.00/check

Detects silent data failures after run

After a run succeeds, is the output actually correct?

Validates dataset output after a target actor finishes. Catches null rate drift, schema regression, and structural degradation that run status SUCCEEDED hides.

Branch on decision: act_now / monitor / ignore
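Null-rate drift, the first failure mode listed above, is easy to illustrate. A minimal sketch, assuming dataset items are dicts and that a stored baseline null rate exists; the 5% tolerance is an illustrative threshold, not Output Guard's actual one:

```python
def null_rate(rows, field):
    """Fraction of rows where `field` is missing or null."""
    if not rows:
        return 0.0
    return sum(1 for row in rows if row.get(field) is None) / len(rows)

def check_output(rows, field, baseline_rate, tolerance=0.05):
    """Gate on drift: a SUCCEEDED run can still fail this check."""
    rate = null_rate(rows, field)
    if rate > baseline_rate + tolerance:
        return "act_now"   # drifted well past baseline: bad data shipped
    if rate > baseline_rate:
        return "monitor"   # above baseline but within tolerance
    return "ignore"
```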

Fleet audit

Quality Monitor

$0.15/actor

Audits every actor in your account in one run

Across every actor you own, what needs fixing?

Eight-dimension quality scoring plus a ranked fixSequence[] per actor. The qualityGates booleans (storeReady, agentReady, monetizationReady, schemaReady) are branch-ready for CI and agents.

Branch on qualityGates + fixSequence: booleans + ranked repairs
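The "work the fixSequence" loop can be sketched like this. Field names follow the description above; the exact report shape (gate booleans plus a ranked list of repairs) is an assumption:

```python
GATES = ("storeReady", "agentReady", "monetizationReady", "schemaReady")

def next_fix(report: dict):
    """Return the top-ranked repair, or None if every gate passes."""
    gates = report.get("qualityGates", {})
    if all(gates.get(gate) for gate in GATES):
        return None  # nothing to fix for this actor
    fixes = report.get("fixSequence", [])
    return fixes[0] if fixes else None
```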

Fleet decision

Fleet Health Report

$0.50/run

Tells you what to do right now to grow revenue

Across every actor you own, what grows revenue next?

Portfolio-level decision engine. Measures real per-run profit, detects revenue cliffs, benchmarks pricing against your own cohorts, and returns a single nextBestAction plus a ranked actionPlan.

Branch on nextBestAction: ranked action + monthly USD impact

Pre-publish

Actor Risk Triage

$0.15/scan

Flags hidden risk before you publish

Before you ship an actor publicly, is there hidden risk?

Scans actor metadata for PII, GDPR, CCPA, ToS, auth-wall, and documentation risk before you publish. Returns a deterministic decision plus a remediation pack telling you exactly what to fix.

Branch on decision: act_now / monitor / ignore

Which one do I need?

Your problem → Use this
My actor fails as soon as I start it → Input Guard
My multi-actor pipeline breaks mid-run → Pipeline Preflight
My new build keeps breaking production → Deploy Guard
I need to pick between two actors → A/B Tester
My run succeeded but the data is wrong → Output Guard
I have 10+ actors and don't know what to fix first → Quality Monitor
I want to grow revenue but don't know where to start → Fleet Health Report
I'm about to publish an actor — is it safe? → Actor Risk Triage (Compliance Scanner)

The shared decision contract

Gate actors emit one routable enum — decision — with three values. It names the signal to act on, not the action to take. Same contract across Input Guard, Output Guard, Deploy Guard, and Compliance Scanner. An LLM agent that learns the contract from any of them branches correctly across all four.

act_now

A signal strong enough to act on — something's wrong.

monitor

Directional or warning-level — watch it, don't act yet.

ignore

No signal to act on — proceed as normal.

Rollout-stance actors (A/B Tester, Pipeline Preflight) use decisionPosture for partial-rollout decisions. Fleet actors (Quality Monitor, Fleet Health Report) emit prescriptive fields — fixSequence[], nextBestAction — because they prescribe rather than gate.
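For the rollout-stance actors, one way an orchestrator might consume decisionPosture is as a traffic fraction. The fractions below are illustrative assumptions, not part of the contract; only the enum values come from the actors themselves:

```python
# Illustrative mapping from decisionPosture to a traffic split.
ROLLOUT_FRACTION = {
    "switch_now": 1.0,           # A/B Tester: full cutover to the challenger
    "ship_pipeline": 1.0,        # Pipeline Preflight: run the whole chain
    "canary_recommended": 0.05,  # route a small slice, watch, then decide
    "monitor_only": 0.0,         # keep current setup, keep measuring
    "no_call": 0.0,              # signal too weak to act on
}

def traffic_split(posture: str) -> float:
    """Fraction of traffic to route to the candidate for a given posture."""
    return ROLLOUT_FRACTION[posture]
```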

Frequently asked questions

Do I need to run all 8 actors every time I use my target actor?

No. These are decoupled tools for different moments — not a linear pipeline. You run Input Guard before invoking an actor with dynamic input. You run Deploy Guard in CI when pushing a new build. You run Output Guard on production data after a run. You run Quality Monitor weekly, not per request. Each actor maps to a specific phase and triggers independently.

What's the shared decision contract?

Gate actors (Input Guard, Output Guard, Deploy Guard, Compliance Scanner) emit `decision ∈ {act_now, monitor, ignore}`. `act_now` means 'there's a signal you should act on', `monitor` means 'directional — watch it', `ignore` means 'nothing to act on, proceed'. Rollout-stance actors (A/B Tester, Pipeline Preflight) emit `decisionPosture ∈ {switch_now/ship_pipeline, canary_recommended, monitor_only, no_call}` for partial-rollout decisions. Fleet actors (Quality Monitor, Fleet Health Report) emit prescriptive fields (fixSequence, nextBestAction) — they don't gate, they prescribe.

Which actor should I start with?

Depends on your pain. Your actor runs but data looks wrong? Start with Output Guard. Runs fail on valid-looking input? Start with Input Guard. You're shipping builds and customers report regressions? Start with Deploy Guard. You own 10+ actors and don't know what to fix first? Start with Quality Monitor and work the fixSequence[0]. Every actor solves one named problem — pick the one whose problem you recognize.

How are these different from Apify's built-in checks?

Apify checks whether a run exited cleanly (SUCCEEDED vs FAILED). These tools check everything underneath that — whether the input was semantically correct, whether the output matches its declared contract, whether regressions happened, whether quality is drifting. Most silent failures are SUCCEEDED-status runs with bad data underneath. Apify's run status doesn't see those; the backend fleet does.

Do they all cost money?

Yes, via Apify's pay-per-event model on your own account. Prices range from $0.15/validation (Input Guard, Quality Monitor per-actor) to $4.00/check (Output Guard). A typical monthly spend for a developer running validation in CI and weekly fleet audits is $10–$50. The ApifyForge platform itself is free — the cost is Apify-native PPE on your own account.