Scale

Managing Multiple Actors

Fleet management strategies for 10, 50, or 200+ actors. Bulk operations, shared configs, maintenance monitoring, and the ApifyForge dashboard workflow.

By Ryan Clinton · Last updated: March 19, 2026

Managing one actor is easy. Managing 10 is a chore. Managing 50 or more without a system is impossible. As your actor portfolio grows, every manual process becomes a bottleneck: checking health, deploying updates, adjusting pricing, fixing maintenance warnings. This guide covers the strategies, tools, and automation patterns that developers with large portfolios use to stay on top of everything. These techniques come from managing a fleet of 250+ actors and 80+ MCP servers in production.

The three phases of portfolio growth

**Phase 1: 1-10 actors.** Manual management works. You know each actor by name, you check the Console dashboard daily, and you deploy with apify push from each actor's directory. Enjoy this simplicity while it lasts.

**Phase 2: 10-50 actors.** Manual management breaks down. You cannot check 50 actors daily. You start missing maintenance warnings, success rate drops, and pricing inconsistencies. This is when you need fleet monitoring scripts and bulk operations.

**Phase 3: 50+ actors.** You need full automation: scheduled health checks, automated deployment pipelines, centralized configuration, and a dashboard that shows fleet-wide metrics at a glance. The ApifyForge dashboard is designed for this phase.

Fleet monitoring: catching problems before users do

The biggest risk with a large portfolio is actors failing silently. A website changes, your scraper returns empty results, and you do not notice until users complain or Apify flags it for maintenance. Fleet monitoring means checking all your actors regularly for success rate drops, output schema violations, and maintenance flags.

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

async function checkFleetHealth() {
    // my: true limits the listing to actors you own, not actors you have merely used
    const actorList = await client.actors().list({ my: true, limit: 500 });
    const actors = actorList.items;
    const issues = [];

    for (const actor of actors) {
        // Check recent run success rate
        const runList = await client.actor(actor.id).runs().list({
            limit: 20, desc: true,
        });
        const runs = runList.items;

        if (runs.length === 0) continue;

        const succeeded = runs.filter(r => r.status === 'SUCCEEDED').length;
        const successRate = Math.round((succeeded / runs.length) * 100);

        if (successRate < 90) {
            issues.push({
                actor: actor.name,
                issue: 'LOW_SUCCESS_RATE',
                detail: successRate + '% (' + succeeded + '/' + runs.length + ' recent runs)',
            });
        }

        // Check for maintenance notices
        const actorDetail = await client.actor(actor.id).get();
        if (actorDetail.notice) {
            issues.push({
                actor: actor.name,
                issue: 'MAINTENANCE_NOTICE',
                detail: actorDetail.notice,
            });
        }

        // Check for stale actors (no runs in 30 days)
        const lastRun = runs[0];
        const msPerDay = 1000 * 60 * 60 * 24;
        const daysSinceLastRun = (Date.now() - new Date(lastRun.startedAt).getTime()) / msPerDay;
        if (daysSinceLastRun > 30) {
            issues.push({
                actor: actor.name,
                issue: 'STALE',
                detail: 'No runs in ' + Math.round(daysSinceLastRun) + ' days',
            });
        }
    }

    return issues;
}
```

**Run this check daily.** Schedule it as an Apify actor itself (an actor that monitors other actors) or as a cron job on your development machine. The ApifyForge dashboard automates this — it checks all your actors every 6 hours and sends notifications for any issues. See the Actor Testing guide (/learn/actor-testing) for details on what to check beyond success rates.
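The notification step can be sketched as a small formatter over the issues array returned by checkFleetHealth — a hypothetical helper, not part of the Apify SDK:

```javascript
// Hypothetical helper: turn the issues array from checkFleetHealth()
// into a plain-text digest, grouped by issue type.
function formatIssueDigest(issues) {
    if (issues.length === 0) return 'Fleet healthy: no issues found.';

    const byType = {};
    for (const issue of issues) {
        (byType[issue.issue] = byType[issue.issue] || []).push(issue);
    }

    const lines = [issues.length + ' issue(s) found:'];
    for (const [type, list] of Object.entries(byType)) {
        lines.push('');
        lines.push(type + ' (' + list.length + '):');
        for (const i of list) {
            lines.push('  - ' + i.actor + ': ' + i.detail);
        }
    }
    return lines.join('\n');
}
```

Pipe the digest into whatever channel you already watch daily — a Slack webhook, an email, or a log file you review each morning.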

Bulk operations: updating 50 actors at once

When you need to update metadata across your entire fleet, doing it one at a time is not practical. Common bulk operations include:

- **Updating categories** when Apify adds new category options
- **Adjusting PPE pricing** across a category of actors (see the PPE Pricing guide at /learn/ppe-pricing for pricing strategy)
- **Setting consistent SEO metadata** across all actors (see the Store SEO guide at /learn/store-seo)
- **Updating Dockerfile base images** when Apify releases new versions

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

async function bulkUpdateCategories(actorIds, newCategories) {
    const results = [];

    for (const actorId of actorIds) {
        try {
            await client.actor(actorId).update({
                categories: newCategories,
            });
            results.push({ actorId, status: 'updated' });
        } catch (error) {
            results.push({ actorId, status: 'failed', error: error.message });
        }
    }

    const succeeded = results.filter(r => r.status === 'updated').length;
    const failed = results.filter(r => r.status === 'failed').length;
    console.log('Bulk update complete: ' + succeeded + ' updated, ' + failed + ' failed');

    return results;
}
```

**Critical warning:** Bulk pricing updates trigger the 14-day notice period for each actor individually. If you increase the price on 50 actors at once, all 50 notify their active users simultaneously. Preview every bulk change before applying it: run a dry-run mode first that prints what would change without making any API calls.

```javascript
// Dry-run pattern: preview changes before applying
async function bulkUpdateDryRun(actorIds, updates) {
    for (const actorId of actorIds) {
        const actor = await client.actor(actorId).get();
        console.log('Actor: ' + actor.name);
        console.log('  Current categories: ' + JSON.stringify(actor.categories));
        console.log('  New categories:     ' + JSON.stringify(updates.categories));
        const currentPrice = actor.pricingModel ? actor.pricingModel.pricePerEvent : 'none';
        console.log('  Current price:      ' + currentPrice);
        if (updates.pricingModel) {
            console.log('  New price:          ' + updates.pricingModel.pricePerEvent);
        }
        console.log('---');
    }
    console.log('Dry run complete. ' + actorIds.length + ' actors would be updated.');
    console.log('Run with --apply flag to make changes.');
}
```

Deployment pipelines for large portfolios

For portfolios of 20+ actors, manual deployment with apify push does not scale. You need a pipeline that tests, builds, and pushes actors automatically. Here is a practical deployment workflow:

**Step 1: Organize actors in a monorepo.** Keep all actors in a single Git repository with a consistent directory structure. Each actor gets its own directory under a category folder.

```
project-root/
  actors/
    amazon-scraper/
    linkedin-scraper/
    price-monitor/
  mcps/
    social-media-intelligence/
    ecommerce-analytics/
  scripts/
    deploy.sh
    check-maintenance.py
    make-remaining-public.sh
```

**Step 2: Build a deploy script.** The script validates the actor, runs tests, pushes to Apify, and verifies the build succeeded. Keep this script in your scripts/ directory.

```javascript
import { readFileSync, existsSync } from 'fs';
import path from 'path';

function validateActor(actorDir) {
    const actorJsonPath = path.join(actorDir, '.actor', 'actor.json');

    // Validate actor.json exists and is valid
    if (!existsSync(actorJsonPath)) {
        throw new Error('No actor.json found in ' + actorDir);
    }
    const actorJson = JSON.parse(readFileSync(actorJsonPath, 'utf-8'));
    console.log('Validating: ' + actorJson.name + ' (v' + actorJson.version + ')');

    // Validate input schema
    const schemaPath = path.join(actorDir, '.actor', 'input_schema.json');
    if (existsSync(schemaPath)) {
        JSON.parse(readFileSync(schemaPath, 'utf-8'));
        console.log('  Input schema: valid');
    }

    // Check README exists and has minimum length
    const readmePath = path.join(actorDir, 'README.md');
    if (existsSync(readmePath)) {
        const readme = readFileSync(readmePath, 'utf-8');
        if (readme.length < 500) {
            console.warn('  WARNING: README is too short (' + readme.length + ' chars)');
        }
    } else {
        console.warn('  WARNING: No README.md found');
    }

    return actorJson;
}
```

**Step 3: Deploy in batches.** Never deploy your entire fleet at once. Deploy in batches of 5-10, verify each batch, then proceed. This limits blast radius if something goes wrong.
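The batching step can be sketched with a couple of hypothetical helpers; the actual deploy call is whatever your pipeline uses, such as shelling out to apify push:

```javascript
// Hypothetical sketch: split a list of actor directories into
// fixed-size batches so a failure halts the rollout early.
function chunkIntoBatches(items, batchSize) {
    const batches = [];
    for (let i = 0; i < items.length; i += batchSize) {
        batches.push(items.slice(i, i + batchSize));
    }
    return batches;
}

async function deployInBatches(actorDirs, deployOne, batchSize = 5) {
    for (const batch of chunkIntoBatches(actorDirs, batchSize)) {
        // Deploy one batch in parallel, then stop if anything failed.
        const results = await Promise.allSettled(batch.map(deployOne));
        const failed = results.filter(r => r.status === 'rejected');
        if (failed.length > 0) {
            throw new Error(failed.length + ' deploy(s) failed; halting rollout');
        }
    }
}
```

Halting on the first failed batch is the point: a bad base image or broken shared template takes down 5 actors, not 50.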

Shared configurations and templates

Actors in the same category often share common patterns: similar Dockerfile base images, similar proxy configurations, similar error handling, and similar output schemas. Extract these into shared templates.

```javascript
// shared/base-scraper.js — Common scraper boilerplate
import { Actor } from 'apify';
import { CheerioCrawler } from 'crawlee';

export async function createBaseScraper(options) {
    const { requestHandler, inputValidator } = options;
    const input = await Actor.getInput();

    // Shared input validation
    if (inputValidator) {
        const errors = inputValidator(input);
        if (errors.length > 0) {
            throw new Error('Invalid input: ' + errors.join(', '));
        }
    }

    // Shared proxy configuration
    const proxyConfiguration = await Actor.createProxyConfiguration(
        input.proxyConfig || { useApifyProxy: true }
    );

    // Shared crawler setup with sensible defaults
    const crawler = new CheerioCrawler({
        proxyConfiguration,
        maxConcurrency: input.maxConcurrency || 10,
        maxRequestRetries: 3,
        requestHandlerTimeoutSecs: 60,
        requestHandler,
    });

    return crawler;
}
```

When Apify updates their base Docker images or changes best practices, you update the template once and propagate the change across all actors that use it. This reduces maintenance overhead from O(n) to O(1) for common changes.
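As a usage sketch, a per-actor validator plugged into createBaseScraper might look like this (the field names are illustrative, not a fixed schema):

```javascript
// Hypothetical per-actor validator for the shared template above.
// Returns an array of error strings; an empty array means the input is valid.
function amazonScraperValidator(input) {
    const errors = [];
    if (!input || !Array.isArray(input.startUrls) || input.startUrls.length === 0) {
        errors.push('startUrls must be a non-empty array');
    }
    if (input && input.maxConcurrency !== undefined
        && (!Number.isInteger(input.maxConcurrency) || input.maxConcurrency < 1)) {
        errors.push('maxConcurrency must be a positive integer');
    }
    return errors;
}
```

Each actor then supplies only its requestHandler and validator — everything else (proxy setup, retries, timeouts) comes from the shared template.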

Maintenance monitoring workflow

Apify flags actors with maintenance issues: low success rates, schema violations, and deprecated features. For a large portfolio, maintenance monitoring needs to be systematic:

1. **Run the maintenance check daily** — Use the fleet monitoring script above or the ApifyForge dashboard.
2. **Triage by severity** — Maintenance warnings (yellow) can wait a few days. Maintenance errors (red) need immediate attention because they suppress your actor in search results.
3. **Fix the root cause, not the symptom** — If a scraper is failing because a website changed its HTML, do not just retry until it passes. Fix the selectors. Otherwise it will fail again next week.
4. **Track maintenance history** — Keep a log of which actors had issues and when. Patterns emerge: "Actor X breaks every 2 months when Website Y deploys updates." This lets you schedule proactive maintenance.
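The triage step can be sketched as a severity sort over the issues array from the monitoring script; the severity ranking here is illustrative, not an Apify API field:

```javascript
// Hypothetical triage order: maintenance errors (which suppress the
// actor in search) come before warnings, success-rate dips, and stale flags.
const SEVERITY_ORDER = {
    MAINTENANCE_ERROR: 0,
    MAINTENANCE_NOTICE: 1,
    LOW_SUCCESS_RATE: 2,
    STALE: 3,
};

function triageIssues(issues) {
    // Unknown issue types sort last rather than crashing the triage.
    return [...issues].sort((a, b) => {
        const sa = SEVERITY_ORDER[a.issue] ?? 99;
        const sb = SEVERITY_ORDER[b.issue] ?? 99;
        return sa - sb;
    });
}
```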

Revenue tracking at fleet scale

When you manage 50+ monetized actors, revenue tracking needs to go beyond the Apify Console's per-actor view. You need:

- **Fleet-level daily revenue** — Is your total revenue growing or shrinking?
- **Per-actor revenue trends** — Which actors are growing? Which are declining?
- **Revenue-per-compute ratio** — Which actors are profitable vs. which cost more to run than they earn?
- **User concentration risk** — Is 80% of your revenue from one actor? If that actor breaks, so does your income.

```javascript
// Revenue analysis across the fleet
async function analyzeFleetRevenue(client, days = 30) {
    const actorList = await client.actors().list({ limit: 500 });
    const revenue = {};

    for (const actor of actorList.items) {
        const runList = await client.actor(actor.id).runs().list({
            limit: 100, desc: true,
        });

        const cutoffMs = days * 24 * 60 * 60 * 1000;
        const cutoffDate = new Date(Date.now() - cutoffMs);

        const recentRuns = runList.items.filter(r => new Date(r.startedAt) > cutoffDate);

        let totalCharged = 0;
        for (const r of recentRuns) {
            totalCharged += (r.chargedEventCount || 0);
        }

        revenue[actor.name] = {
            runs: recentRuns.length,
            events: totalCharged,
            actorId: actor.id,
        };
    }

    // Sort by events (proxy for revenue)
    const sorted = Object.entries(revenue)
        .sort((a, b) => b[1].events - a[1].events);

    console.log('Top 10 actors by event volume:');
    sorted.slice(0, 10).forEach(([name, data], i) => {
        console.log('  ' + (i + 1) + '. ' + name + ': ' + data.events + ' events (' + data.runs + ' runs)');
    });

    return revenue;
}
```
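The user-concentration check from the list above can be sketched against the same revenue map, using event counts as a rough revenue proxy:

```javascript
// Hypothetical sketch: what share of total charged events comes from
// the single biggest actor? High concentration means fragile income.
function concentrationRisk(revenue) {
    const events = Object.values(revenue).map(r => r.events);
    const total = events.reduce((sum, e) => sum + e, 0);
    if (total === 0) return { topShare: 0, concentrated: false };
    const topShare = Math.max(...events) / total;
    // Flag when one actor accounts for more than half of all events.
    return { topShare, concentrated: topShare > 0.5 };
}
```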

The ApifyForge dashboard provides all these views out of the box, with charts, trend lines, and alerting. See the Monetization guide (/learn/monetization) for strategies on maximizing revenue across your fleet.

Real-world tips from managing 250+ actors

**Tip 1: Automate everything that runs more than twice a week.** If you find yourself doing the same Console task repeatedly, script it. Fleet management is about eliminating manual work, not doing manual work faster.

**Tip 2: Keep a maintenance calendar.** Some websites update on predictable schedules (monthly deploys, quarterly redesigns). Track these and schedule proactive maintenance for your scrapers that target those sites.

**Tip 3: Archive dead actors instead of deleting them.** An actor with zero runs for 90 days is probably dead, but deleting it loses the code and metadata. Move it to an archive directory in your repo and unpublish it from the Store. You can always bring it back.

**Tip 4: Use consistent naming conventions.** When you have 200+ actors, you will search for them by name constantly. Consistent prefixes like amazon-, linkedin-, mcp-social- make fleet management dramatically easier. Establish your naming convention early and enforce it.
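Enforcement can be as simple as a prefix check in your deploy script; the prefixes here are illustrative:

```javascript
// Hypothetical sketch: reject actor names that do not use one of
// your portfolio's established prefixes.
const ALLOWED_PREFIXES = ['amazon-', 'linkedin-', 'mcp-social-'];

function violatesNamingConvention(actorName) {
    return !ALLOWED_PREFIXES.some(prefix => actorName.startsWith(prefix));
}
```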

**Tip 5: Monitor your Apify spend as aggressively as your revenue.** Large portfolios can accumulate significant platform compute costs. An actor with a memory leak that runs hourly costs real money. Track compute costs per actor and investigate any sudden spikes immediately.

**Tip 6: Pin dependency versions.** When one actor breaks because a dependency auto-updated, you fix one actor. When 50 actors share that dependency and it auto-updates, you fix 50 actors. Pin exact versions in package.json and update dependencies deliberately, not accidentally.
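A fleet-wide audit for unpinned versions can be sketched as follows (a hypothetical helper operating on an already-parsed package.json object):

```javascript
// Hypothetical sketch: given a parsed package.json, list dependencies
// whose version specifier is not pinned to a single exact version.
function findUnpinnedDeps(pkg) {
    const unpinned = [];
    const sections = [pkg.dependencies || {}, pkg.devDependencies || {}];
    for (const deps of sections) {
        for (const [name, spec] of Object.entries(deps)) {
            // Exact pins look like "3.1.4"; ranges start with ^, ~, >, <, or *.
            if (!/^\d+\.\d+\.\d+$/.test(spec)) {
                unpinned.push(name + '@' + spec);
            }
        }
    }
    return unpinned;
}
```

Run this across every actor directory in the monorepo and fail the deploy script when the list is non-empty.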

**Tip 7: Use the ApifyForge Tools section.** The dashboard provides fleet-wide views of quality scores, revenue, maintenance status, and deployment history. Instead of checking 250 actors one at a time in the Apify Console, check one dashboard. The Glossary defines all the metrics and terms you will encounter while managing your fleet.

Related guides

Beginner

Getting Started with Apify Actors

A complete walkthrough from zero to your first deployed actor. Covers project structure, Actor.main(), input schema, Dockerfile, and your first Apify Store listing.

Essential

Understanding PPE Pricing

How Pay Per Event works, how to set prices that attract users while covering costs, and common pricing mistakes that leave money on the table.

Revenue

How to Monetize Your Actors

Revenue strategies beyond basic PPE. Tiered pricing, free-tier funnels, bundling actors into MCP servers, and tracking revenue with ApifyForge analytics.

Quality

Actor Testing Best Practices

Use the ApifyForge test runner and regression suite to validate actors before every deploy. Define test cases, set assertions, and integrate with CI/CD.

Growth

Store SEO Optimization

How Apify Store search works, what metadata matters, and how to write READMEs that rank. Includes the quality score breakdown and how ApifyForge tracks it.

Essential

Cost Planning Tools: Calculator, Plan Advisor & Proxy Analyzer

How to use ApifyForge's cost planning tools to estimate actor run costs, choose the right Apify subscription plan, and pick the most cost-effective proxy type for each scraper.

Essential

AI Agent Tools: MCP Debugger, Pipeline Builder & LLM Optimizer

How to use ApifyForge's AI agent tools to debug MCP server connections, design multi-actor pipelines, optimize actor output for LLM token efficiency, and generate integration templates.

Quality

Schema Tools: Diff, Registry & Input Tester

How to use ApifyForge's schema tools to compare actor output schemas, browse the field registry, and test actor inputs before running — preventing wasted credits and broken pipelines.

Essential

Compliance Scanner, Actor Recommender & Comparisons

How to use ApifyForge's compliance risk scanner to assess legal exposure, the actor recommender to find the best tool for your task, and head-to-head comparisons to evaluate competing actors.

Quality

The ApifyForge Testing Suite

Five cloud-powered testing tools for Apify actors: Schema Validator, Test Runner, Cloud Staging, Regression Suite, and MCP Debugger. How they work together and when to use each one.

Essential

The Complete ApifyForge Tool Suite

All 14 developer tools in one guide: testing, schema analysis, cost planning, compliance scanning, LLM optimization, and pipeline building. What each tool does, when to use it, and how they work together.