Actor Run
An Apify Actor Run is a single execution of an actor with specific input parameters: one complete job from start to finish. When you start a run (via the Apify Console, the REST API, the CLI command apify call, an Apify client library, or from another actor using Actor.call()), Apify spins up a Docker container from the actor's latest successful build, stores the JSON input in the run's default Key-Value Store under the INPUT key, and runs the actor's entry point code.

Actor runs matter because they are the fundamental unit of work and billing on Apify. Every piece of data extracted, every page scraped, and every automation task completed happens within an actor run. Users pay compute unit costs for every run (based on memory and duration), plus Pay Per Event (PPE) charges if the actor uses PPE pricing, plus proxy data transfer fees if proxies are used. Understanding run lifecycle, monitoring, and optimization therefore directly impacts both cost and reliability.

Each run is fully isolated in its own Docker container and gets its own default storages: a Dataset (for tabular output), a Key-Value Store (for arbitrary data), and a Request Queue (for crawl state). This isolation means runs cannot interfere with each other, and a crashing run does not affect other users' runs of the same actor.

Runs progress through a status lifecycle: READY (queued, waiting for available compute capacity), RUNNING (executing), and then one of four terminal states:

- SUCCEEDED: completed normally
- FAILED: crashed with an error
- TIMED_OUT: exceeded the configured timeout
- ABORTED: manually stopped by the user or by another actor

You can monitor run status, live logs, and output in the Apify Console or via the API at GET /v2/actor-runs/{runId}. To start a run from the CLI:

apify call username/actor-name -i '{"url": "https://example.com"}'

From the API, send POST /v2/acts/{actorId}/runs with the input as the request body.
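The status lifecycle described above can be monitored programmatically by polling until the run reaches a terminal state. The following is a minimal sketch; waitForRun and the injected fetchStatus callback are illustrative names, not Apify APIs, and in real code fetchStatus would wrap GET /v2/actor-runs/{runId}:

```javascript
// Terminal states from the run status lifecycle.
const TERMINAL_STATUSES = new Set(['SUCCEEDED', 'FAILED', 'TIMED_OUT', 'ABORTED']);

const isTerminal = (status) => TERMINAL_STATUSES.has(status);

// Poll until the run reaches a terminal state. `fetchStatus` is an injected
// async function that returns the current status string; in real code it
// would call the Apify API for the run in question.
async function waitForRun(fetchStatus, { intervalMs = 5000, maxAttempts = 720 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await fetchStatus();
    if (isTerminal(status)) return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Run did not reach a terminal state in time');
}
```

Injecting the fetch function keeps the polling logic independent of any particular HTTP client, and makes it easy to test with a fake status sequence.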
From code inside another actor, using the Apify SDK (note that Actor.call() returns a run metadata object, not a live client, so open the default dataset by its ID):

const run = await Actor.call('username/actor-name', { url: 'https://example.com' });
const dataset = await Actor.openDataset(run.defaultDatasetId);
const { items } = await dataset.getData();

From the Apify client library:

const client = new ApifyClient({ token: 'YOUR_TOKEN' });
const run = await client.actor('username/actor-name').call({ url: 'https://example.com' });
const { items } = await client.dataset(run.defaultDatasetId).listItems();

Run duration and memory usage determine compute unit cost via the formula:

CU = (memoryMbytes / 1024) * (durationSecs / 3600)

The default timeout is 3600 seconds (1 hour), but you can configure it per run with the timeout query parameter or in the actor's default run configuration. Long-running scrapers processing millions of pages may need timeouts of 24-72 hours, while simple API actors should use 300-600 seconds to fail fast on errors.

Common mistakes include:

- Not setting appropriate timeouts. An actor stuck in an infinite loop will run until the default timeout, burning compute credits the entire time.
- Not checking run status after Actor.call() in pipeline actors. Actor.call() waits for the run to finish and returns the run object, but the run may have FAILED. Always check: if (run.status !== 'SUCCEEDED') throw new Error('Upstream actor failed');
- Not using Actor.metamorph() for long pipelines. Metamorph replaces the current actor's Docker image with another actor's image without starting a new run, saving the cold start overhead.

For fleet management across dozens or hundreds of actors, monitor aggregate run statistics: success rate (target above 95%), average duration (watch for regressions), failure patterns (the same error message across many runs indicates a systematic issue), and compute cost trends. The Apify Console dashboard provides these metrics, and the API supports programmatic access for custom monitoring.

Related concepts: Actor Build, Dataset, Compute Unit, PPE, Key-Value Store, Webhook.
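As a worked check of the compute unit formula above (computeUnits is an illustrative helper name, not part of any Apify SDK):

```javascript
// Compute units consumed by a run: (memory in GB) * (duration in hours).
// CU = (memoryMbytes / 1024) * (durationSecs / 3600)
function computeUnits(memoryMbytes, durationSecs) {
  return (memoryMbytes / 1024) * (durationSecs / 3600);
}

// A 4096 MB run lasting 30 minutes consumes 4 GB * 0.5 h = 2 compute units.
console.log(computeUnits(4096, 1800)); // 2
```

Note that doubling memory doubles the compute unit rate, so an actor that finishes twice as fast with twice the memory costs the same; memory tuning pays off only when it changes duration disproportionately.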