DEVELOPER TOOLS

Quality Audit

Score each actor in your portfolio 0-100 across five dimensions: README completeness, pricing configuration, output schema compliance, run reliability, and popularity. Returns specific fix recommendations.


Pricing

Pay Per Event model. You only pay for what you use.

| Event | Description | Price |
|---|---|---|
| quality-audit | Charged per fleet quality audit. | $0.25 |

Example: 100 events = $25.00 · 1,000 events = $250.00

Documentation

Score the quality of every actor in your Apify account across five weighted dimensions: README completeness, pricing configuration, output schema, reliability, and popularity. ApifyForge Quality Monitor produces a 0-100 quality score for each actor, along with specific issues found and actionable recommendations to improve. It also computes a fleet-wide average quality score so you can track improvements over time. Built to power the quality panel of the ApifyForge dashboard, this actor tells you exactly where each actor falls short and what to fix first.

Why use ApifyForge Quality Monitor?

  • Objective quality scoring. Every actor gets a 0-100 score based on five measurable dimensions, removing guesswork from quality assessment.
  • Specific, actionable feedback. It doesn't just give you a number; it tells you exactly what is wrong ("Description too short", "No PPE pricing configured") and what to do about it ("Expand actor description to at least 500 characters").
  • Five quality dimensions. Evaluates README/description (25%), pricing setup (20%), output schema (15%), run reliability (30%), and user popularity (10%).
  • Worst-first sorting. Results are sorted by quality score ascending so the actors that need the most work appear at the top.
  • Fleet-wide tracking. The fleetQualityScore gives you a single number to track over time as you improve your actors.
  • Schema validation. Checks whether your actors have defined output dataset schemas, which is required for Apify Store listing quality and API documentation.
  • Dashboard-ready output. Structured JSON with per-dimension breakdowns designed for visualization in ApifyForge.

Key Features

  • Fetches all actors from your account with full pagination support
  • Evaluates description/README length and checks for usage examples
  • Detects PPE pricing configuration from actor detail endpoint
  • Checks for output dataset schema via the latest tagged build
  • Samples last 100 runs to compute 30-day reliability score
  • Normalizes popularity score against the most popular actor in your fleet
  • Produces per-actor breakdown scores across all five dimensions
  • Generates specific issue descriptions and fix recommendations
  • Sorts results worst-first for efficient quality improvement workflows

How to Use

  1. Go to ApifyForge Quality Monitor on the Apify Store.
  2. Click Try for free.
  3. Enter your Apify API Token (find it at Settings > Integrations).
  4. Click Start.
  5. Wait for the run to complete (typically 30-120 seconds depending on fleet size).
  6. Review per-actor quality scores and recommendations in the Dataset tab.

Input Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| apifyToken | string | Yes | -- | Your Apify API token. Used to authenticate all API calls. Find it at https://console.apify.com/settings/integrations |

Input Examples

Standard quality scan:

{
    "apifyToken": "apify_api_xxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}

Automated daily quality tracking via API:

{
    "apifyToken": "apify_api_xxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}

Output Example

{
    "fleetQualityScore": 68,
    "actors": [
        {
            "name": "quick-prototype-scraper",
            "id": "low001",
            "qualityScore": 22,
            "breakdown": {
                "readme": 10,
                "pricing": 0,
                "schema": 20,
                "reliability": 50,
                "popularity": 0
            },
            "issues": [
                "Description very short or missing",
                "No README content",
                "No PPE pricing configured",
                "No tagged build found",
                "No recent runs to assess reliability"
            ],
            "recommendations": [
                "Add a detailed description explaining what the actor does",
                "Create a README with description, usage examples, and output format",
                "Set up Pay-Per-Event pricing to monetize this actor",
                "Define a dataset schema in .actor/dataset_schema.json"
            ]
        },
        {
            "name": "google-maps-scraper",
            "id": "best01",
            "qualityScore": 95,
            "breakdown": {
                "readme": 100,
                "pricing": 100,
                "schema": 100,
                "reliability": 99,
                "popularity": 100
            },
            "issues": [],
            "recommendations": []
        }
    ],
    "scannedAt": "2026-03-16T14:30:00.000Z"
}

Output Fields

| Field | Type | Description |
|---|---|---|
| fleetQualityScore | number | Average quality score across all actors (0-100) |
| actors | array | Per-actor quality details, sorted by quality score ascending (worst first) |
| actors[].name | string | Actor name |
| actors[].id | string | Actor ID |
| actors[].qualityScore | number | Composite quality score (0-100), weighted sum of five dimensions |
| actors[].breakdown | object | Per-dimension scores (each 0-100) |
| actors[].breakdown.readme | number | README/description quality score. 100 = 500+ char description with examples. |
| actors[].breakdown.pricing | number | Pricing configuration score. 100 = PPE pricing set up, 0 = no PPE pricing. |
| actors[].breakdown.schema | number | Output schema score. 100 = dataset schema defined, 30 = build exists but no schema, 20 = no tagged build. |
| actors[].breakdown.reliability | number | Run reliability score based on 30-day success rate. Requires 5+ runs for full confidence. 50 = neutral (no data). |
| actors[].breakdown.popularity | number | Popularity score normalized against the most popular actor in your fleet (0-100). |
| actors[].issues | array | List of specific quality issues found for this actor |
| actors[].recommendations | array | Actionable fix recommendations for each issue |
| scannedAt | string | ISO 8601 timestamp of when the quality scan was performed |
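For typed downstream processing, the per-actor record can be modeled as a small schema. This is a sketch in Python using TypedDict, based on the field names shown in the output example above; it is not an official client type:

```python
from typing import TypedDict

class Breakdown(TypedDict):
    readme: int
    pricing: int
    schema: int
    reliability: int
    popularity: int

class ActorQuality(TypedDict):
    name: str
    id: str
    qualityScore: int
    breakdown: Breakdown
    issues: list[str]
    recommendations: list[str]

class QualityReport(TypedDict):
    fleetQualityScore: int
    actors: list[ActorQuality]
    scannedAt: str

# Example record matching the output sample above
record: ActorQuality = {
    "name": "quick-prototype-scraper",
    "id": "low001",
    "qualityScore": 22,
    "breakdown": {"readme": 10, "pricing": 0, "schema": 20,
                  "reliability": 50, "popularity": 0},
    "issues": ["No PPE pricing configured"],
    "recommendations": ["Set up Pay-Per-Event pricing to monetize this actor"],
}
print(record["qualityScore"])  # → 22
```

A static type checker such as mypy can then validate any code that consumes the dataset items.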

Programmatic Access

Python

from apify_client import ApifyClient

client = ApifyClient("apify_api_xxxxxxxxxxxxxxxxxxxxxxxxxxxx")

run = client.actor("ryanclinton/apifyforge-quality-monitor").call(
    run_input={"apifyToken": "apify_api_xxxxxxxxxxxxxxxxxxxxxxxxxxxx"}
)

dataset_items = client.dataset(run["defaultDatasetId"]).list_items().items
quality = dataset_items[0]

print(f"Fleet quality score: {quality['fleetQualityScore']}/100")

# Show the 10 worst actors
print("\nLowest quality actors (fix these first):")
for actor in quality["actors"][:10]:
    print(f"  {actor['name']}: {actor['qualityScore']}/100")
    for issue in actor["issues"]:
        print(f"    - {issue}")
    for rec in actor["recommendations"]:
        print(f"    > {rec}")

JavaScript

import { ApifyClient } from "apify-client";

const client = new ApifyClient({
    token: "apify_api_xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
});

const run = await client.actor("ryanclinton/apifyforge-quality-monitor").call({
    apifyToken: "apify_api_xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
const quality = items[0];

console.log(`Fleet quality: ${quality.fleetQualityScore}/100`);

// Find actors missing PPE pricing
const noPricing = quality.actors.filter((a) => a.breakdown.pricing === 0);
console.log(`Actors without PPE pricing: ${noPricing.length}`);

// Find actors with poor READMEs
const poorReadme = quality.actors.filter((a) => a.breakdown.readme < 50);
console.log(`Actors with poor documentation: ${poorReadme.length}`);

cURL

# Start the quality scan
curl -X POST "https://api.apify.com/v2/acts/ryanclinton~apifyforge-quality-monitor/runs?token=YOUR_API_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"apifyToken": "apify_api_xxxxxxxxxxxxxxxxxxxxxxxxxxxx"}'

# Fetch results from the default dataset
curl "https://api.apify.com/v2/datasets/DATASET_ID/items?token=YOUR_API_TOKEN"

How It Works

ApifyForge Quality Monitor evaluates each actor through a five-dimension scoring pipeline:

  1. Actor enumeration. Calls GET /v2/acts?my=true with pagination to retrieve every actor in your account.

  2. Detail fetching. For each actor, calls GET /v2/acts/{actorId} to retrieve description, README, pricing configuration, tagged builds, and user statistics.

  3. README scoring (25% weight). Evaluates the actor description length:

    • 500+ characters = 100 points
    • 200-499 characters = 70 points
    • 50-199 characters = 40 points
    • Under 50 characters = 10 points
    • Bonus: +10 points if the README contains "example" or "usage" keywords
  4. Pricing scoring (20% weight). Checks for PAY_PER_EVENT entries in the pricingInfos array:

    • PPE pricing configured = 100 points
    • No PPE pricing = 0 points
  5. Schema scoring (15% weight). Checks the latest tagged build for a dataset schema definition:

    • Dataset schema defined = 100 points
    • Build exists but no schema = 30 points
    • No tagged build found = 20 points
  6. Reliability scoring (30% weight). Fetches the last 100 runs, filters to the 30-day window, and computes success rate:

    • 5+ runs: score = success rate percentage (capped at 100)
    • 1-4 runs: score = success rate (flagged as low sample size)
    • 0 runs: score = 50 (neutral -- no data to assess)
  7. Popularity scoring (10% weight). Normalizes the actor's 30-day user count against the most popular actor in the fleet:

    • Score = (actor users / max fleet users) * 100
  8. Composite score. Weighted sum: readme * 0.25 + pricing * 0.20 + schema * 0.15 + reliability * 0.30 + popularity * 0.10

  9. Output. Sorts actors by quality score ascending (worst first), computes fleet average, pushes to dataset, and charges one PPE event.
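The scoring pipeline above can be sketched in a few small functions. The thresholds and weights below are taken directly from the steps listed, while the function names and input shapes are illustrative, not the actor's actual implementation:

```python
WEIGHTS = {"readme": 0.25, "pricing": 0.20, "schema": 0.15,
           "reliability": 0.30, "popularity": 0.10}

def score_readme(description: str, readme: str) -> int:
    # Tiered score on description length, +10 bonus for example/usage keywords
    n = len(description)
    base = 100 if n >= 500 else 70 if n >= 200 else 40 if n >= 50 else 10
    bonus = 10 if any(k in readme.lower() for k in ("example", "usage")) else 0
    return min(base + bonus, 100)

def score_pricing(pricing_infos: list[dict]) -> int:
    # Binary: PPE configured or not
    return 100 if any(p.get("pricingModel") == "PAY_PER_EVENT"
                      for p in pricing_infos) else 0

def score_schema(has_tagged_build: bool, has_dataset_schema: bool) -> int:
    if not has_tagged_build:
        return 20
    return 100 if has_dataset_schema else 30

def score_reliability(succeeded: int, total: int) -> int:
    if total == 0:
        return 50  # neutral: no run data to assess
    return min(round(succeeded / total * 100), 100)

def score_popularity(users: int, max_fleet_users: int) -> int:
    return round(users / max_fleet_users * 100) if max_fleet_users else 0

def composite(breakdown: dict) -> int:
    # Weighted sum of the five dimension scores
    return round(sum(breakdown[k] * w for k, w in WEIGHTS.items()))

breakdown = {
    "readme": score_readme("x" * 300, ""),      # 200-499 chars, no keywords -> 70
    "pricing": score_pricing([]),               # no PPE -> 0
    "schema": score_schema(True, False),        # build but no schema -> 30
    "reliability": score_reliability(9, 10),    # 90% success -> 90
    "popularity": score_popularity(5, 10),      # half of fleet max -> 50
}
print(composite(breakdown))  # → 54
```

This makes the weighting concrete: the hypothetical actor loses 20 composite points to missing pricing alone, which is why adding PPE pricing is flagged as a quick win.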

How Much Does It Cost?

ApifyForge Quality Monitor uses Pay-Per-Event pricing at $0.25 per scan, as listed in the Pricing section above.

| Scenario | Events | Cost |
|---|---|---|
| One-time quality audit | 1 | $0.25 |
| Weekly monitoring (4x/month) | 4 | $1.00 |
| Daily monitoring (30x/month) | 30 | $7.50 |

Platform compute costs also apply. A typical quality scan of 200 actors completes in under 2 minutes using 256 MB of memory.

Tips

  • Work from the bottom up. Results are sorted worst-first. Focus on the lowest-scoring actors for the biggest fleet-wide quality improvement.
  • README is the easiest win. Adding a 500+ character description with usage examples can improve an actor's quality score by up to 25 points.
  • Add PPE pricing to everything. Even a small price ($0.01) gives you 20 points and starts generating revenue. The Quality Monitor flags every actor without pricing.
  • Define dataset schemas. Adding .actor/dataset_schema.json raises your schema score from 30 to 100 (roughly 10 points on the composite score, out of this dimension's 15-point maximum) and makes your actor's output more discoverable.
  • Track fleet quality over time. Schedule weekly runs and monitor fleetQualityScore. Aim for 80+ across your fleet.
  • Cross-reference with revenue. Low-quality actors with high traffic (from Revenue Tracker) are your highest-impact improvement targets.

Limitations

  • Description vs. README distinction. The scoring primarily evaluates description length. A very long README with a short description may score lower than expected. The actor checks both fields but weights description length more heavily.
  • Popularity is relative. The popularity score is normalized against the most popular actor in your fleet, not against the entire Apify Store. A score of 100 means "most popular in your fleet," not "most popular globally."
  • Build schema check requires a tagged build. If your actor has no "latest" tagged build, the schema score defaults to 20 regardless of whether a schema file exists in the source code.
  • Binary pricing score. Pricing is scored as 100 or 0 (PPE configured or not). There is no differentiation between well-priced and poorly-priced actors.
  • Run sample cap. Only the last 100 runs are sampled for reliability scoring. High-volume actors may have runs outside this window.

Frequently Asked Questions

Why is reliability weighted the most (30%)? Because actor reliability directly impacts user trust and retention. An actor with a great README but a 50% failure rate will lose users quickly. Reliability is the foundation that all other quality dimensions build on.

How can I improve my fleet quality score the fastest? Focus on three quick wins: (1) Add PPE pricing to all actors without it (+20 points each). (2) Expand descriptions to 500+ characters (+15-25 points each). (3) Fix any actors with high failure rates. These three actions typically move the fleet score by 15-30 points.

What does a quality score of 50 mean for reliability when there are no runs? A score of 50 is a neutral default when there is no run data to evaluate. It means "unknown reliability" rather than "average reliability." Once the actor has 5+ runs, the score will reflect the actual success rate.

Can I set custom weights for the five dimensions? Not currently. The weights (README 25%, Pricing 20%, Schema 15%, Reliability 30%, Popularity 10%) are fixed. If you need custom weighting, download the raw data and apply your own formula to the breakdown scores.
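Applying your own formula client-side is straightforward. This is a hedged sketch, assuming the breakdown field names from the output example; the weights shown are arbitrary choices for illustration:

```python
def reweight(breakdown: dict[str, int], weights: dict[str, float]) -> float:
    """Recompute a composite score with custom weights (must sum to 1.0)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1.0"
    return round(sum(breakdown[dim] * w for dim, w in weights.items()), 1)

# E.g. weight reliability even more heavily and ignore popularity entirely
custom = {"readme": 0.20, "pricing": 0.15, "schema": 0.15,
          "reliability": 0.50, "popularity": 0.0}
breakdown = {"readme": 100, "pricing": 100, "schema": 100,
             "reliability": 99, "popularity": 100}
print(reweight(breakdown, custom))  # → 99.5
```

Run this over each item's breakdown object from the dataset output to produce a re-ranked list under your own priorities.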

Integration with ApifyForge Dashboard

This actor powers the quality panel of the ApifyForge dashboard. When connected, quality data is visualized with radar charts showing per-dimension scores, sortable actor tables, and a fleet quality trend line. The dashboard highlights "quick wins" -- actors where a single improvement (like adding pricing) would yield the biggest score jump. Schedule this actor to run weekly and track your fleet quality trajectory as you improve your actors.
