Beginner

What Is an Apify Actor?

An Apify actor is a serverless cloud program that runs on the Apify platform. It accepts JSON input, executes a task (scraping, data processing, API calls, or AI tool serving), and produces structured output in datasets, key-value stores, or request queues. Actors are packaged as Docker containers and can be run via API, scheduled, or chained together.

By Ryan Clinton · Last updated: March 27, 2026

An Apify actor is a serverless cloud program that runs on the Apify platform. You give it structured JSON input, it executes a task -- scraping a website, calling an API, processing data, or serving tools to an AI agent -- and it produces structured output. Actors run inside Docker containers on Apify's infrastructure, so you never manage servers, scaling, or uptime. You deploy once and call the actor via REST API, schedule it on a cron, trigger it from a webhook, or chain it with other actors. Over 3,000 actors are published on the Apify Store, covering everything from Google Maps scraping to SEC filing analysis to MCP servers for Claude Desktop.

How actors work

Every actor follows the same lifecycle: input, run, output.

Input. You define an input_schema.json file that specifies what parameters the actor accepts. The Apify Console generates a form from this schema, and the API validates incoming JSON against it. Inputs can be strings, numbers, booleans, arrays, or objects.
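As a sketch, a minimal input_schema.json for a hypothetical review scraper might look like this (the field names and defaults are illustrative, not from any published actor):

```json
{
  "title": "Review Scraper Input",
  "type": "object",
  "schemaVersion": 1,
  "properties": {
    "startUrls": {
      "title": "Start URLs",
      "type": "array",
      "description": "Pages to scrape",
      "editor": "requestListSources"
    },
    "maxResults": {
      "title": "Max results",
      "type": "integer",
      "description": "Stop after this many reviews",
      "default": 100
    }
  },
  "required": ["startUrls"]
}
```

The Console renders each property as a form field using the editor hint, and the API rejects runs whose input fails validation against this schema.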

Run. When you start an actor (via API, Console, or schedule), Apify spins up a Docker container with your code, injects the input, and executes your main.ts or main.js entry point. The actor has access to Apify's SDK for managing datasets, key-value stores, request queues, and proxy configuration.
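The lifecycle above can be sketched in TypeScript. A real entry point would import { Actor } from 'apify'; the in-memory stub below stands in for the SDK so the input, run, output flow is visible on its own:

```typescript
// In a real actor: import { Actor } from 'apify';
// This stub mimics the SDK's shape so the sketch is self-contained.
const dataset: Array<Record<string, unknown>> = [];

const Actor = {
  async init() {},                                  // attach to platform storage
  async getInput(): Promise<{ startUrls: string[] }> {
    return { startUrls: ['https://example.com'] };  // the injected JSON input
  },
  async pushData(item: Record<string, unknown>) {   // append a row to the default dataset
    dataset.push(item);
  },
  async exit() {},                                  // flush storage and end the run
};

async function main() {
  await Actor.init();
  const { startUrls } = await Actor.getInput();
  for (const url of startUrls) {
    // ...fetch and parse the page here...
    await Actor.pushData({ url, scrapedAt: new Date().toISOString() });
  }
  await Actor.exit();
}

await main();
```

On the platform, the same four calls bind to real infrastructure: getInput reads the validated JSON, and pushData streams rows into the run's default dataset.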

Output. Results are written to one or more of these storage types:

| Storage Type | Purpose | Access |
|---|---|---|
| Dataset | Tabular data (rows of JSON objects) | API, CSV/JSON export, webhook |
| Key-Value Store | Arbitrary files (JSON, HTML, screenshots) | API, direct URL |
| Request Queue | URLs to crawl (for scraper actors) | Internal to the actor |

After the run completes, you retrieve results from the dataset via API. The Apify Console also shows results in a table view with filtering, sorting, and export options.

Actor types

Not all actors do the same thing. The Apify ecosystem has evolved into four major categories:

| Type | What It Does | Example |
|---|---|---|
| Web Scraper | Crawls websites and extracts structured data | Google Maps Email Extractor, Trustpilot Review Analyzer |
| API Wrapper | Calls external APIs with retry logic, pagination, and output normalization | SEC EDGAR Filing Search, PubMed Research Search |
| Orchestrator | Chains multiple actors together into a pipeline | Agent Pipeline Builder, Cloud Staging Test |
| MCP Server | Exposes tools via Model Context Protocol for AI assistants | Multi-Source Intelligence MCP, Brand Narrative MCP |

Web scrapers and API wrappers are the most common. MCP servers are the fastest-growing category, driven by demand from Claude Desktop, Cursor, and Windsurf users who want AI assistants connected to real-world data.

Pricing: the Pay-Per-Event model

Apify actors use the Pay-Per-Event (PPE) pricing model. The developer sets a price per event -- typically per result row, per search query, or per tool call. Users pay that price multiplied by the number of events consumed during a run. The developer keeps 80% of the revenue; Apify retains 20% as a platform fee.

| Pricing Element | Typical Range | Who Sets It |
|---|---|---|
| Price per event | $0.01 - $0.50 | Actor developer |
| Platform compute | $0.25 - $5.00/GB-hour | Apify (based on memory/CPU) |
| Developer revenue share | 80% of PPE revenue | Fixed by Apify |
| Free tier | $5/month platform credit | Apify |

For users, PPE pricing means you pay for what you get, not for compute time. A scraper that returns 1,000 results at $0.01 per result costs $10 regardless of how long the run takes. For developers, PPE aligns incentives: faster, more efficient actors earn the same revenue per result while costing less to run.
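The arithmetic is simple enough to sketch. The helper functions below are illustrative, not part of any Apify API; the 80/20 split and the $0.01-per-result example come from the text above:

```typescript
// Pay-Per-Event cost for a run: events consumed times the per-event price.
function runCost(events: number, pricePerEvent: number): number {
  return events * pricePerEvent;
}

// The developer's share of PPE revenue under the 80/20 split.
function developerRevenue(cost: number): number {
  return cost * 0.8;
}

const cost = runCost(1000, 0.01);       // 1,000 results at $0.01 each
const payout = developerRevenue(cost);  // developer keeps 80%, Apify retains 20%
console.log(cost, payout);              // 10 8
```

Note that compute time does not appear anywhere in this calculation: an actor that finishes in 30 seconds and one that takes an hour charge the user the same $10 for the same 1,000 results.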

How to find the right actor

With thousands of actors on the Apify Store, finding the right one for your task can be overwhelming. ApifyForge provides two tools to help:

Actor Recommender at /recommend -- describe your task in plain English and get ranked actor suggestions based on category match, quality score, pricing, and maintenance status.

Actor Comparison at /compare -- side-by-side comparison of actors in the same category. See success rates, total runs, pricing, last build date, and input schema complexity.

You can also browse by category on the /actors page, which organizes all published actors into functional groups with filtering and sorting.

Frequently asked questions

How is an actor different from a regular API?

An actor is a managed execution environment, not just an endpoint. It handles scaling, retries, proxy rotation, storage, and scheduling. You call a single API endpoint and the platform manages everything else. A regular API requires you to manage your own infrastructure.

Do I need Docker experience to build an actor?

No. The Apify CLI generates a working Dockerfile for you. Most developers never modify it. The default Dockerfile installs Node.js dependencies and runs your entry point. You only need to customize it if you need system packages like Chromium or Python.
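For reference, a generated Node.js Dockerfile looks roughly like this (the base image tag is illustrative; the CLI pins whatever is current):

```dockerfile
# Apify's Node base image ships with the SDK's system dependencies preinstalled.
FROM apify/actor-node:20

# Install production dependencies first so Docker can cache this layer.
COPY package*.json ./
RUN npm install --omit=dev

# Copy the actor source and run the entry point.
COPY . ./
CMD ["npm", "start"]
```

Swapping the base image for a browser-enabled variant is typically the only change needed to add Chromium support.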

Can I run an actor locally for testing?

Yes. The Apify CLI includes apify run, which executes your actor locally with the same SDK and storage emulation. You can test with a local INPUT.json file before deploying to the platform.

What happens if my actor crashes mid-run?

Apify captures the error, logs it, and marks the run as FAILED. Any data written to the dataset before the crash is preserved. You can configure webhooks to get notified on failure. ApifyForge Monitor at /monitor provides centralized failure alerting across your entire actor fleet.

How many actors can I run at once?

The free tier allows 1 concurrent run. Paid plans scale up to 100+ concurrent runs depending on your subscription. Each run is isolated in its own container, so actors do not interfere with each other.

Can actors call other actors?

Yes. The Apify SDK includes Actor.call() and Actor.callTask() methods that let one actor start another actor and wait for its results. This is how orchestrator actors and pipelines work. The Agent Pipeline Builder tool at /tools/pipeline-builder helps you design these chains visually.
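The chaining pattern can be sketched as follows. A real orchestrator would use Actor.call() from the 'apify' SDK; the hypothetical runActor() below stands in for it so the pipeline shape is self-contained, and the actor IDs are made up:

```typescript
// Stand-in for Actor.call(): start an actor, wait, and return its dataset items.
async function runActor(
  actorId: string,
  input: Record<string, unknown>,
): Promise<Array<Record<string, unknown>>> {
  // For the sketch, pretend each actor returns a single row echoing its input.
  return [{ actorId, ...input }];
}

// A two-stage pipeline: scrape first, then post-process the scraped rows.
async function pipeline(query: string) {
  const scraped = await runActor('example/scraper', { query });
  return runActor('example/processor', { items: scraped });
}

const result = await pipeline('coffee shops in Prague');
```

Because each stage waits for the previous one's results, failures surface at the stage that caused them, which is what makes orchestrator actors practical to debug.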

Related guides

Beginner

Getting Started with Apify Actors

To build an Apify actor, install Node.js 18+ and the Apify CLI, scaffold a project with apify create, write your logic inside Actor.main(), define an input_schema.json, and deploy with apify push. This guide walks through every step from zero to a published Apify Store listing.

Essential

Apify PPE Pricing Explained: Pay Per Event Model, Strategy, and Code Examples

Pay Per Event (PPE) is Apify's usage-based monetization model for actors on the Apify Store. Developers set a price per event (typically $0.001 to $0.50), call Actor.addChargeForEvent() in their code, and keep 80% of revenue while Apify takes 20%. This ApifyForge guide covers the 80/20 revenue split, actor.json configuration, charging code patterns, the 14-day price change rule, and pricing strategy by actor type.

Revenue

How to Monetize Your Actors

To monetize Apify actors, start with Pay Per Event pricing at $0.01-$0.25 per result, then layer on tiered pricing for power users, free-tier funnels to drive adoption, and MCP server bundles that combine multiple actors into a single subscription. ApifyForge analytics tracks revenue per actor so you know which strategies work. This guide covers each revenue model with real pricing examples.

Quality

Actor Testing Best Practices

To test an Apify actor, define input/output test cases in a JSON fixture, run them with the ApifyForge test runner before every deploy, and set assertions on output shape, field counts, and error rates. The regression suite catches breaking changes by comparing current output against a saved baseline. This guide covers the full testing workflow from local validation to CI/CD integration.

Growth

Store SEO Optimization

Apify Store search ranks actors by title match, README keyword density, category tags, run volume, and a quality score out of 100. To rank higher, write a README that opens with a plain-language description of what the actor does, include target keywords in the first 100 words, set accurate categories in actor.json, and maintain a success rate above 95%. This guide breaks down every ranking factor and shows how ApifyForge tracks your score.

Scale

Managing Multiple Actors

To manage 10, 50, or 200+ Apify actors, use the ApifyForge fleet dashboard to monitor health, revenue, and quality scores across your entire portfolio in one view. Group actors by category, run bulk updates on pricing and metadata, set up failure alerts, and track maintenance pulse to catch stale actors before users complain. This guide covers fleet management workflows at every scale.

Essential

Cost Planning Tools: Calculator, Plan Advisor & Proxy Analyzer

How to use ApifyForge's cost planning tools to estimate actor run costs, choose the right Apify subscription plan, and pick the most cost-effective proxy type for each scraper.

Essential

AI Agent Tools: MCP Debugger, Pipeline Builder & LLM Optimizer

How to use ApifyForge's AI agent tools to debug MCP server connections, design multi-actor pipelines, optimize actor output for LLM token efficiency, and generate integration templates.

Quality

Schema Tools: Diff, Registry & Input Tester

How to use ApifyForge's schema tools to compare actor output schemas, browse the field registry, and test actor inputs before running — preventing wasted credits and broken pipelines.

Essential

Compliance Scanner, Actor Recommender & Comparisons

How to use ApifyForge's compliance risk scanner to assess legal exposure, the actor recommender to find the best tool for your task, and head-to-head comparisons to evaluate competing actors.

Quality

The ApifyForge Testing Suite

Five cloud-powered testing tools for Apify actors: Schema Validator, Test Runner, Cloud Staging, Regression Suite, and MCP Debugger. How they work together and when to use each one.

Essential

The Complete ApifyForge Tool Suite

All 14 developer tools in one guide: testing, schema analysis, cost planning, compliance scanning, LLM optimization, and pipeline building. What each tool does, when to use it, and how they work together.

Essential

What Are MCP Servers on Apify?

MCP (Model Context Protocol) servers are Apify actors that run in standby mode and expose tools via an HTTP endpoint for AI assistants like Claude Desktop, Cursor, and Windsurf. They connect large language models to real-world data sources -- APIs, databases, web scrapers, and intelligence feeds -- so AI agents can take actions beyond text generation.

Beginner

How to Choose the Right Apify Actor

With over 3,000 actors on the Apify Store, choosing the right one for your task requires evaluating success rates, run history, pricing, maintenance frequency, and input schema quality. This guide provides a decision framework for selecting actors based on measurable quality metrics, plus tools to automate the comparison process.

Scale

How to Manage a Large Apify Actor Portfolio

Managing 10 Apify actors is straightforward. Managing 50 requires dashboards and cost tracking. Managing 200+ demands automated regression testing, schema validation, revenue analytics, and failure alerting. This guide covers the tools, processes, and hard-won lessons from scaling an actor portfolio from a handful to over 320 actors.