Fleet Analytics (Fleet Health Report)

What should I do right now to grow my Apify revenue?

Fleet Analytics scans every actor in your account, measures real per-run profit (not guessed), detects revenue cliffs and silent quality bleed before normal alerts trip, benchmarks your pricing against your own category cohorts, and returns a single ranked action you can work from. One API call, whether you run 5 actors or 500. $0.50 per run.

Open the run, read nextBestAction, do one thing. That loop is the product. The learning layer tracks which actions actually moved revenue in your fleet and calibrates future recommendations against it.

What Fleet Analytics returns

nextBestAction (the headline)

One ranked action with title, urgency, estimated monthly revenue impact in USD, step-by-step instructions, and a calibrated confidence score. Every run answers one question: what do I work on right now?

Real per-run profit measurement

PPE revenue − platform compute cost per run, per actor. Surfaces actors that succeed on every run but lose money on every run. Thin-margin actors almost always have fixable root causes — pricing misconfiguration, over-provisioned memory, or slow code paths.
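
The formula is simple enough to sanity-check yourself. A minimal TypeScript sketch, with illustrative field names and an assumed per-GB-hour compute price (the actor reads the real figures from your billing data):

interface RunRecord {
  ppeRevenueUsd: number; // pay-per-event earnings for this run
  memoryGb: number;      // memory the run was provisioned with
  runtimeSecs: number;   // wall-clock runtime
}

// Assumed compute price per GB-hour; the actor reads the real rate from billing.
const COMPUTE_UNIT_USD = 0.25;

function perRunProfit(run: RunRecord): number {
  // Platform compute is billed on memory x runtime (GB-hours).
  const gbHours = run.memoryGb * (run.runtimeSecs / 3600);
  return run.ppeRevenueUsd - gbHours * COMPUTE_UNIT_USD;
}

// A run can succeed and still lose money:
// perRunProfit({ ppeRevenueUsd: 0.05, memoryGb: 4, runtimeSecs: 900 }) === -0.20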

Revenue cliff detection

A sudden drop in daily revenue that normal fail-rate alerts miss — because the actor is technically succeeding, it's just not earning. First-detection alerts catch it days earlier than dashboard-staring does.
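
A minimal sketch of how a detector like this can work, comparing each day against a trailing 7-day baseline. The window and threshold here are assumptions, not the actor's actual internals:

// Returns the index of the first day whose revenue falls more than
// dropThreshold below the trailing 7-day average, or null if none does.
function detectCliff(dailyRevenueUsd: number[], dropThreshold = 0.5): number | null {
  for (let day = 7; day < dailyRevenueUsd.length; day++) {
    const window = dailyRevenueUsd.slice(day - 7, day);
    const baseline = window.reduce((sum, v) => sum + v, 0) / window.length;
    if (baseline > 0 && dailyRevenueUsd[day] < baseline * (1 - dropThreshold)) {
      return day; // first detection, even though every run still "succeeds"
    }
  }
  return null;
}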

Ranked actionPlan

Full queue of 5–20 actions (depending on fleet size) ordered by impact × confidence. Each entry: priority, action type, estimatedImpactMonthlyUsd, effort estimate, and calibratedConfidence from your fleet's learning history.
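
The impact × confidence ordering is easy to picture. A sketch using the output fields shown in the example run below; the exact scoring formula is the actor's own:

interface PlannedAction {
  type: string;
  estimatedImpactMonthlyUsd: number;
  calibratedConfidence: number; // 0..1, from the learning layer
}

// Order the queue by expected value: impact weighted by confidence.
function rankActions(actions: PlannedAction[]): PlannedAction[] {
  return [...actions].sort(
    (a, b) =>
      b.estimatedImpactMonthlyUsd * b.calibratedConfidence -
      a.estimatedImpactMonthlyUsd * a.calibratedConfidence
  );
}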

Pricing benchmarking

Compares your PPE pricing against the actors you've shipped in the same category. Flags under-priced actors (leaving revenue on the table) and over-priced actors (suppressing adoption) before the market tells you.
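
A sketch of the cohort comparison, assuming a simple median test; the 0.5x / 1.5x bands are illustrative, not the real thresholds:

// Flags an actor whose PPE price sits far from the median of your own
// actors in the same category.
function benchmarkPrice(
  priceUsd: number,
  cohortPricesUsd: number[]
): "under-priced" | "over-priced" | "in-range" {
  const sorted = [...cohortPricesUsd].sort((a, b) => a - b);
  const median = sorted[Math.floor(sorted.length / 2)];
  if (priceUsd < median * 0.5) return "under-priced"; // revenue left on the table
  if (priceUsd > median * 1.5) return "over-priced";  // suppressing adoption
  return "in-range";
}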

Learning layer

Tracks which past actions actually moved revenue in your fleet. After 3+ samples per action type (pricing change, SEO update, schema fix), confidence scores graduate from heuristic to calibrated. The tool gets more accurate the longer you use it.
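
A sketch of how graduation from heuristic to calibrated confidence can work, assuming a flat prior and a measured success rate; the actual weighting is internal to the actor:

const HEURISTIC_PRIOR = 0.6; // assumed default before enough history exists
const MIN_SAMPLES = 3;       // the graduation threshold described above

// outcomes: one boolean per completed action of this type -- did revenue move?
function calibratedConfidence(outcomes: boolean[]): number {
  if (outcomes.length < MIN_SAMPLES) return HEURISTIC_PRIOR;
  const hits = outcomes.filter(Boolean).length;
  return hits / outcomes.length; // grounded in your fleet's own history
}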

Portfolio decision methods compared

Method | Returns a ranked action | Calibrated to your fleet | Time to first insight
Fleet Analytics | Yes — single nextBestAction | Yes — learning layer | Under 2 minutes
Apify Console dashboard | No — raw metrics | No | You interpret it
Spreadsheet + billing CSV | Whatever you build | Manual | 2–6 hours setup + weekly refresh
Intuition | No | No | Fast, often wrong

Example Fleet Analytics output

{
  "nextBestAction": {
    "urgency": "high",
    "title": "Raise PPE on my-api-wrapper from \$0.05 to \$0.15 per call",
    "estimatedImpactMonthlyUsd": 340,
    "calibratedConfidence": 0.78,
    "reason": "Under-priced vs your category cohort (median \$0.18); 2,267 runs/month sustain pricing power",
    "steps": ["Open actor settings", "Edit pricingPerEvent", "Confirm PPE dialogue"]
  },
  "actionPlan": [
    { "priority": 1, "type": "pricing-change", "estimatedImpactMonthlyUsd": 340 },
    { "priority": 2, "type": "quality-fix", "estimatedImpactMonthlyUsd": 120 },
    { "priority": 3, "type": "seo-update", "estimatedImpactMonthlyUsd": 80 }
  ],
  "revenueCliffs": [
    { "actor": "old-scraper", "cliffDetectedAt": "2026-04-18", "revenueDropPct": 0.62 }
  ],
  "calibration": { "status": "developing", "samples": 12, "byType": { "pricing-change": 4, "quality-fix": 5 } }
}

Every run answers one question: what should I do right now to increase revenue?

How Fleet Analytics works

1. Schedule it daily or weekly. No input is required when run on Apify; the token is auto-injected. (For programmatic runs, see the sketch below.)

2. It reads revenue, runs, quality, and pricing for every actor and compares them to previous snapshots.

3. It returns nextBestAction plus the full actionPlan, and logs outcomes back to the learning layer on re-run.
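
Outside the Console, you can trigger a run and read the report with the official apify-client package. The actor ID and output location below are assumptions; check the actor's README for the real values:

import { ApifyClient } from "apify-client";

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

async function fetchReport() {
  // Hypothetical actor ID; substitute the real one from the Store listing.
  const run = await client.actor("your-username/fleet-analytics").call({});

  // Assuming the report is pushed to the run's default dataset.
  const { items } = await client.dataset(run.defaultDatasetId).listItems();
  console.log(items[0]?.nextBestAction); // the one thing to do right now
}

fetchReport().catch(console.error);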

Limitations

  1. Cold-start accuracy. The learning layer needs 3+ historical samples per action type to graduate from heuristic to calibrated confidence. For the first 4 weeks, recommendations are directionally correct but not yet tuned to your fleet's specific response curves.
  2. Revenue attribution is approximate. PPE revenue is directly measured; the impact estimates for non-pricing actions (SEO, quality, schema) are based on category benchmarks plus your fleet's historical responses. Treat estimatedImpactMonthlyUsd as a ranking signal, not a forecast.
  3. Not a replacement for customer research. Fleet Health Report optimizes what you already have. If your fleet is fundamentally targeting the wrong market, no action from this tool will fix that — you need customer interviews, not portfolio analytics.
  4. Pricing benchmarks use your own category. The tool compares your pricing against the other actors you've shipped in the same category. It does not compare against third-party actors across the Apify Store — that signal is too noisy to calibrate against.
  5. Requires a fleet of 5+ actors for meaningful output. With 1–4 actors there's not enough signal for calibration or cohort benchmarking. For single-actor analysis, use Quality Monitor instead.

What Fleet Analytics costs

$0.50 per run, flat rate regardless of fleet size. Scheduled daily, that's $15/month; weekly, about $2/month. A typical nextBestAction carries an estimated monthly impact of $50–$500 — the ROI math is not subtle. Apify's free plan includes $5/month in credits, enough for 10 runs per month.

Frequently asked questions

How is this different from Quality Monitor?

Quality Monitor scores each actor on quality dimensions and tells you what's broken. Fleet Health Report operates one layer up — it takes every actor's performance, revenue, and quality data and answers a single business question: 'What should I do next to grow revenue?' Quality Monitor emits fixSequence[]; Fleet Health Report emits nextBestAction. Use Quality Monitor when you're fixing things. Use Fleet Health Report when you're deciding what to fix.

What is nextBestAction?

nextBestAction is a single ranked action the tool recommends you take right now, based on your fleet's actual revenue, quality, and pricing data. It includes: the action title, urgency (high / medium / low), estimated monthly revenue impact (USD), step-by-step instructions, and a calibratedConfidence score grounded in your fleet's historical outcomes from similar actions. Open the run, read one field, do one thing. That loop is the product.

How does the learning layer work?

Every time you complete an action from a prior actionPlan, Fleet Health Report records the outcome — did revenue actually move, and by how much? Over time it calibrates its confidence per action type (pricing change, schema improvement, SEO update, etc.) against your specific fleet. After 3+ historical samples per action type, confidence scores become grounded rather than heuristic. The tool gets more accurate the longer you use it.

How does it measure 'real per-run profit'?

Fleet Health Report reads your Apify billing data (PPE earnings) and pairs it with platform compute costs (memory × runtime). Profit = PPE revenue − platform compute. Most actors have thin per-run margins they've never measured — the tool surfaces actors that are losing money on every run so you can raise pricing, cut compute, or deprecate them.

What's a 'revenue cliff'?

A revenue cliff is a sudden drop in daily revenue for an actor that normal fail-rate alerts miss — because the actor is still technically succeeding, it's just not earning. Causes: users dropping off from bad output quality, SEO ranking drops, competitor launches, or pricing misconfigurations. Fleet Health Report flags cliffs at first detection, not three weeks later when you notice on the dashboard.

Does it work with a 5-actor fleet?

Yes. The tool is sized for fleets of 5 to 500 actors. At small fleet sizes the action plan has fewer entries, but the underlying analysis is the same. The learning layer kicks in faster with concentrated action types — 10 pricing changes in a 5-actor fleet calibrate the model as well as 50 in a 50-actor fleet.