Quality Monitor — Actor Quality Scorer is an Apify actor on ApifyForge. Scores every actor in your Apify account across 8 dimensions: reliability, documentation, pricing, schema, SEO, trustworthiness, ease of use, and agentic readiness. Best for developers, teams, and agencies running fleet audits, pre-publish checks, and ongoing quality monitoring. Not a code reviewer, runtime tester, or output validator. Maintenance pulse: 90/100. Last verified March 27, 2026. Built by Ryan Clinton (ryanclinton on Apify).
Quality Monitor — Actor Quality Scorer
Quality Monitor — Actor Quality Scorer is an Apify actor available on ApifyForge. Scores every actor in your Apify account across 8 dimensions: reliability, documentation, pricing, schema, SEO, trustworthiness, ease of use, and agentic readiness. Produces a 0-100 quality score per actor with specific issues and quick-win recommendations. $5 per actor.
Best for developers, teams, and agencies running fleet audits, pre-publish checks, and weekly quality monitoring.
Not for code review, runtime testing, output validation, or Store-wide benchmarking.
What to know
- Audits metadata and configuration only — it does not analyze source code or runtime behavior.
- Reliability scoring depends on recent run volume; actors with few runs receive a neutral score.
- Requires an Apify account — free tier available with limited monthly usage.
Maintenance Pulse
90/100

Documentation
An Apify actor quality audit tool that scores and ranks all actors in your account based on reliability, configuration, and Store readiness.
Also known as: Apify actor quality checker, actor analysis tool, actor performance audit
In short: A tool that tells you which actors need fixing and what to fix first.
Quality Monitor audits every Apify actor in your account and returns a 0–100 quality score that shows which actors need attention and what to fix first — plus letter grades, 8-dimension breakdowns, issues, recommendations, and a highest-impact quick win for each actor.
- Best for: fleet audits, pre-publish checks, weekly quality monitoring
- Not for: code review, runtime testing, output validation, Store-wide benchmarking
- Pricing: $5.00 per actor audited
- Run time: 30–120 seconds for 10–200 actors
- Time to first insight: under 2 minutes for most fleets
- Typical usage: weekly quality audits or pre-publish checks
- Output: dataset with per-actor results + KV store summary
Why this exists
Managing multiple Apify actors quickly becomes hard:
- You don't know which actors are low quality until users complain or runs start failing
- Manual audits take hours across a large fleet
- Missing pricing, schemas, or SEO metadata silently reduces visibility and revenue
- There is no single metric to track overall actor quality over time
Quality Monitor solves this by turning your entire fleet into a single, measurable quality score with clear next actions.
What improves when you use this
- Better Apify Store visibility (actors with complete SEO and pricing perform better)
- Higher monetization readiness (PPE pricing and schemas in place)
- Faster iteration cycles (fix the highest-impact issues first)
- Fewer low-quality actors in your fleet over time as issues are systematically identified and resolved
- A single metric (`fleetQualityScore`) to track quality trends
AI-readable summary
What it is: An automated Apify actor audit tool that scores every actor in your account across 8 quality dimensions.
What it checks: Reliability, documentation, pricing, schema and structure, SEO and discoverability, trustworthiness, ease of use, and agentic readiness.
What it returns: Per-actor scores (0–100), grades (A–F), issues, recommendations, and a highest-impact fix.
What it's for: Developers and teams managing multiple actors who need to find weak spots and prioritize improvements.
What it's not: Not a code reviewer, not a runtime tester, not an output validator.
Cost and speed: $5.00 per actor audited, typically 30–120 seconds for any fleet size.
What is an Apify actor quality audit?
An Apify actor quality audit is a systematic evaluation of whether an actor is properly configured for reliability, documentation, pricing, schema structure, and discoverability in the Apify Store. Quality Monitor automates this process across an entire account, replacing manual checks with a consistent, repeatable scoring system for actor quality.
How this differs from other audit approaches
| Approach | What it covers | What it misses |
|---|---|---|
| Manual review | Deep nuance, context | Slow (5–10 min per actor), inconsistent across reviewers |
| Code review | Source code quality, logic bugs | Metadata gaps, SEO, pricing, schema configuration |
| Runtime testing | Execution correctness, output validation | Setup quality, documentation, Store readiness |
| Store benchmarking | Competitive positioning | Your own fleet's internal quality gaps |
| Quality Monitor | Metadata, configuration, Store readiness across all actors at once | Runtime behavior, output correctness, code quality |
Quality Monitor fills the gap between "it runs" and "it performs" — the configuration, discoverability, and monetization layer that determines whether an actor succeeds in the Apify Store or goes unnoticed.
What you input and what you get
Input: Nothing required when running on Apify (auto-detects your token). Optionally set a minimum score threshold for alerts.
Output per actor:
- Quality score (0–100) with letter grade (A–F)
- 8-dimension breakdown with individual scores
- Specific issues found (e.g., "No PPE pricing configured", "README too short")
- Fix recommendations per issue
- Quick win: the single change that adds the most points
Fleet output:
- Fleet average score
- Grade distribution (count of A/B/C/D/F actors)
- Dimension averages across the fleet
- Top 5 quick wins
- KV store summary for dashboard integration
Mental model
Quality Monitor works as a pipeline:
Actor list → fetch metadata → score 8 dimensions → combine into 0–100 → sort worst-first → highlight quick wins
How scoring works
Each actor receives a 0–100 score based on 8 weighted quality dimensions, designed to reflect how well it is configured for reliability, usability, and discoverability. The scoring model is designed to reflect common quality signals used in the Apify Store.
| Dimension | Weight | What it checks |
|---|---|---|
| Reliability | 25% | 30-day run success rate. Builds older than 90 days receive a 15-point penalty. |
| Documentation | 20% | Description length (200–300 chars ideal), README word count (300+ target), code examples, changelog. |
| Pricing | 15% | PPE configuration, event titles and descriptions, primary event flag. |
| Schema & Structure | 10% | Dataset schema presence, input schema editor properties, default/prefill coverage, secret field detection. |
| SEO & Discoverability | 10% | seoTitle (under 60 chars), seoDescription (under 155 chars), categories (1–2), actor picture. |
| Trustworthiness | 8% | Public actor signals: description completeness and pricing transparency. |
| Ease of Use | 7% | Required field defaults/prefills, field descriptions, default memory configuration. |
| Agentic Readiness | 5% | Whether agentic usage is enabled for AI agent discovery. |
Weights prioritize reliability and documentation. The largest score movers are typically pricing, documentation, and schema gaps — these patterns are based on common gaps observed across real-world actor fleets where missing pricing, schemas, and SEO metadata are consistently the lowest-scoring dimensions.
Grades: A (90+), B (75–89), C (60–74), D (40–59), F (below 40).
Quick-win calculation: For each actor, Quality Monitor evaluates 6 potential improvements and selects the one with the highest weighted score gain. Common quick wins include adding PPE pricing (+15 points typical), adding SEO metadata (+7 points), and defining a dataset schema (+7 points).
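The weighted composite and grade bands described above can be sketched in Python. This is an illustration built from the published weights and grade thresholds, not the actor's exact implementation, so rounding and internal adjustments may produce slightly different numbers than a real scan:

```python
# Published dimension weights (they sum to 100%).
WEIGHTS = {
    "reliability": 0.25,
    "documentation": 0.20,
    "pricing": 0.15,
    "schemaAndStructure": 0.10,
    "seoAndDiscoverability": 0.10,
    "trustworthiness": 0.08,
    "easeOfUse": 0.07,
    "agenticReadiness": 0.05,
}

def composite_score(breakdown: dict) -> int:
    """Combine per-dimension scores (each 0-100) into a 0-100 composite."""
    return round(sum(WEIGHTS[dim] * breakdown.get(dim, 0) for dim in WEIGHTS))

def grade(score: int) -> str:
    """Map a composite score to the documented letter grade."""
    if score >= 90:
        return "A"
    if score >= 75:
        return "B"
    if score >= 60:
        return "C"
    if score >= 40:
        return "D"
    return "F"

# Breakdown from the output example below: a clear F-grade actor.
breakdown = {
    "reliability": 50, "documentation": 15, "pricing": 0,
    "schemaAndStructure": 20, "seoAndDiscoverability": 25,
    "trustworthiness": 50, "easeOfUse": 35, "agenticReadiness": 0,
}
print(composite_score(breakdown), grade(composite_score(breakdown)))
```

Note how the 0 in the pricing dimension alone forfeits 15 possible points, which is why "Add PPE pricing" so often surfaces as the quick win.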
Output example
```json
{
  "fleetQualityScore": 62,
  "totalActors": 85,
  "alertCount": 12,
  "actors": [
    {
      "name": "quick-prototype-scraper",
      "title": "Quick Prototype Scraper",
      "id": "abc123def456",
      "qualityScore": 28,
      "grade": "F",
      "breakdown": {
        "reliability": 50,
        "documentation": 15,
        "pricing": 0,
        "schemaAndStructure": 20,
        "seoAndDiscoverability": 25,
        "trustworthiness": 50,
        "easeOfUse": 35,
        "agenticReadiness": 0
      },
      "issues": [
        "No recent runs to assess reliability",
        "Description too short (under 100 chars)",
        "README too short",
        "No PPE pricing configured",
        "No output dataset schema defined",
        "No seoDescription set",
        "No actor picture",
        "Agentic usage not enabled"
      ],
      "recommendations": [
        "Write a description of 200-300 characters",
        "Write a README with usage examples and output format",
        "Set up Pay-Per-Event pricing",
        "Define a dataset schema in .actor/dataset_schema.json",
        "Add seoDescription (under 155 chars)",
        "Add a custom actor image",
        "Enable allowsAgenticUsers"
      ],
      "quickWin": "Add PPE pricing (+15 points)",
      "quickWinPoints": 15,
      "alert": true
    }
  ],
  "scannedAt": "2026-04-04T10:30:00.000Z"
}
```
How to interpret results
- Actors at the top of the dataset are your highest-priority fixes (sorted worst-first)
- Scores below 60 typically indicate missing pricing, schema, or documentation
- Scores above 80 typically indicate well-configured, Store-ready actors
- The `quickWin` field shows the fastest way to improve each actor's score
- The `fleetQualityScore` tracks overall quality across your account over time
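A small post-processing sketch in Python, assuming the output shape shown in the example above (field names are taken from that example; adjust them if a real payload differs), that pulls the actors below a threshold, worst first, with their quick wins:

```python
def triage(result: dict, threshold: int = 60) -> list[dict]:
    """Return actors scoring below the threshold, worst first, with their quick win."""
    flagged = [a for a in result["actors"] if a["qualityScore"] < threshold]
    flagged.sort(key=lambda a: a["qualityScore"])  # worst first
    return [
        {"name": a["name"], "score": a["qualityScore"], "fix": a.get("quickWin")}
        for a in flagged
    ]

# Minimal sample in the documented shape.
result = {
    "fleetQualityScore": 62,
    "actors": [
        {"name": "quick-prototype-scraper", "qualityScore": 28,
         "quickWin": "Add PPE pricing (+15 points)"},
        {"name": "solid-scraper", "qualityScore": 84, "quickWin": None},
        {"name": "mid-scraper", "qualityScore": 55,
         "quickWin": "Add SEO metadata (+7 points)"},
    ],
}
for row in triage(result):
    print(f"{row['name']}: {row['score']} -> {row['fix']}")
```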
Output fields
| Field | Type | Description |
|---|---|---|
| fleetQualityScore | number | Average quality score across all actors (0–100) |
| totalActors | number | Number of actors scanned |
| alertCount | number | Actors below the minQualityScore threshold |
| actors[].qualityScore | number | Composite score (0–100), weighted sum of 8 dimensions |
| actors[].grade | string | Letter grade: A, B, C, D, or F |
| actors[].breakdown | object | Per-dimension scores (each 0–100) |
| actors[].issues | array | Specific quality issues found |
| actors[].recommendations | array | Fix recommendation per issue |
| actors[].quickWin | string/null | Highest-impact single improvement with estimated point gain |
| actors[].alert | boolean | True if score is below minQualityScore |
| scannedAt | string | ISO 8601 timestamp |
How to run a fleet audit
- Open Quality Monitor on the Apify Store.
- Click Try for free.
- Optionally set `minQualityScore` to flag low-quality actors (e.g., 60).
- Click Start. No API token needed on Apify — it is injected automatically.
- Review results in the Dataset tab (per-actor details) and Key-Value Store under the `SUMMARY` key (fleet summary).
Input parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| apifyToken | string | No | Auto-detected | Your Apify API token. Only needed when running locally. |
| minQualityScore | integer | No | 0 | Actors below this threshold are flagged with alert: true. Range: 0–100. |
Input examples
Standard fleet audit (on Apify):

```json
{}
```

With alert threshold:

```json
{
  "minQualityScore": 60
}
```

Local testing:

```json
{
  "apifyToken": "apify_api_xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "minQualityScore": 50
}
```
API examples
Python
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("ryanclinton/apifyforge-quality-monitor").call(run_input={
    "minQualityScore": 60
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"Fleet score: {item['fleetQualityScore']}/100")
    for actor in item["actors"][:10]:
        print(f"  {actor['name']}: {actor['qualityScore']}/100 ({actor['grade']})")
        if actor["quickWin"]:
            print(f"    Quick win: {actor['quickWin']}")
```
JavaScript
```javascript
import { ApifyClient } from "apify-client";

const client = new ApifyClient({ token: "YOUR_API_TOKEN" });

const run = await client.actor("ryanclinton/apifyforge-quality-monitor").call({
    minQualityScore: 60
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
const result = items[0];

console.log(`Fleet score: ${result.fleetQualityScore}/100`);
for (const actor of result.actors.slice(0, 10)) {
    console.log(`  ${actor.name}: ${actor.qualityScore}/100 (${actor.grade})`);
}
```
cURL
```bash
curl -X POST "https://api.apify.com/v2/acts/ryanclinton~apifyforge-quality-monitor/runs?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"minQualityScore": 60}'
```

Once the run finishes, fetch the results from its default dataset (replace DATASET_ID with the run's defaultDatasetId):

```bash
curl "https://api.apify.com/v2/datasets/DATASET_ID/items?token=YOUR_API_TOKEN&format=json"
```
When to use this
- Before publishing a new actor
- When your actors are not getting users or revenue
- When managing 10+ actors and prioritizing improvements
- When tracking quality trends over time
Use cases
Pre-publish quality check
Run Quality Monitor before publishing a new actor to the Store. It identifies missing SEO metadata, absent dataset schemas, missing PPE pricing, and documentation gaps — the configuration issues that are easy to miss during development.
Weekly fleet monitoring
Schedule Quality Monitor weekly and track fleetQualityScore over time. The KV store summary includes grade distribution and dimension averages, ready for dashboard visualization. Set minQualityScore: 60 to get alerts when actors degrade.
Revenue blocker identification
Actors without PPE pricing score 0 on a dimension worth 15% of the total. Actors missing dataset schemas lose up to 10 points. Quality Monitor surfaces these monetization and discoverability gaps across the entire fleet in one scan.
Agency portfolio management
For agencies maintaining actors across projects, Quality Monitor scores every actor worst-first, making it straightforward to prioritize the actors that need the most attention.
Pricing
Quality Monitor uses pay-per-event pricing at $5.00 per actor audited.
| Fleet size | Cost per audit | Example |
|---|---|---|
| 5 actors | $25 | Solo developer with premium actors |
| 15 actors | $75 | Agency portfolio |
| 50 actors | $250 | Large fleet operator |
You can set a spending limit in your Apify account to control costs.
Limitations
- Metadata-only — Reads actor metadata from the Apify API. Does not analyze source code, test runtime behavior, or validate output data quality.
- Reliability needs run volume — Actors with fewer than 5 runs in 30 days receive a neutral reliability score of 50. New or rarely-used actors may appear healthier or weaker than they are.
- Fixed weights — Dimension weights are hardcoded. Custom weighting requires downloading the `breakdown` scores and computing your own formula.
- Build-dependent — Without a tagged "latest" build, schema and input quality cannot be assessed.
- Binary agentic readiness — Scores 0 or 100 (enabled or not). No granularity for how well an actor supports agentic workflows.
- Trustworthiness is partial — The API does not expose all trust signals (e.g., limited permissions). Public actors are scored on description completeness and pricing transparency. Private actors receive a neutral 50.
- Description length cap — Descriptions over 300 characters are flagged because the Apify Store UI truncates them. This may penalize actors with intentionally detailed descriptions.
Troubleshooting
"No API token available" — Token not found. On Apify, it is injected automatically. When running locally, provide apifyToken in the input.
Low reliability on new actors — Zero runs in 30 days defaults to 50 (unknown), not a penalty. Run the actor a few times to establish a score.
Schema score stuck at 20 — No tagged "latest" build exists. Push a new build with apify push, then re-scan.
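If the missing piece is the dataset schema itself rather than the build, a minimal `.actor/dataset_schema.json` can look like the sketch below. This is a hedged example based on the Apify actor specification as generally documented; the property names inside `fields` and the `overview` view are illustrative placeholders, so check the official schema docs before committing:

```json
{
  "actorSpecification": 1,
  "fields": {
    "title": "Scraped item",
    "type": "object",
    "properties": {
      "url": { "type": "string" },
      "name": { "type": "string" }
    }
  },
  "views": {
    "overview": {
      "title": "Overview",
      "transformation": { "fields": ["url", "name"] },
      "display": { "component": "table" }
    }
  }
}
```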
Pricing score is 60, not 100 — PPE exists but charge events are missing eventTitle, eventDescription, or the isPrimaryEvent flag. Adding these brings the score to 100.
Build staleness penalty — Builds older than 90 days lose 15 reliability points. Rebuild and push to remove the penalty.
How to improve your Apify actor quality score
Quality Monitor is designed specifically for this:
- It scans all your actors at once
- Identifies missing pricing, schemas, and documentation
- Shows exactly what to fix for each actor
- Highlights the fastest improvement using the `quickWin` field
Instead of manually checking each actor, you can run a single audit and prioritize fixes immediately.
Why your Apify actors are not getting users
The most common causes are:
- Missing SEO metadata (title, description)
- No pricing configured (reduces monetization and visibility)
- Weak or missing documentation
- No dataset schema (limits usability)
- Low reliability or outdated builds
Quality Monitor identifies these issues across your entire fleet and shows exactly which actors are affected and what to fix.
Tool for auditing Apify actors
Quality Monitor is a purpose-built tool for auditing Apify actors. It evaluates every actor in your account across reliability, documentation, pricing, schema, and discoverability, and returns a prioritized list of fixes.
Instead of building custom scripts or manually reviewing actors, you can run a single audit and get immediate results.
What is a good Apify actor quality score?
- 80–100: Well-configured, Store-ready actors
- 60–79: Average quality, with some missing elements
- Below 60: Significant gaps in pricing, schema, or documentation
Quality Monitor uses these ranges to help prioritize which actors need attention first.
How to audit all your Apify actors at once
Quality Monitor audits every actor in your account in a single run.
- No setup required on Apify
- Works across fleets of any size
- Returns a complete quality report in under 2 minutes
This replaces manual per-actor review or custom scripts.
How to check Apify actor performance
Checking actor performance typically includes:
- Reliability (successful runs)
- Documentation and usability
- Pricing and monetization setup
- SEO and discoverability
Quality Monitor evaluates all of these in one place, giving you a complete view of actor performance beyond just runtime success.
Similar to Lighthouse for Apify actors
Quality Monitor acts like a "Lighthouse for Apify actors" — scoring configuration, documentation, pricing, and discoverability, and highlighting the highest-impact improvements.
How to optimize an Apify actor for the Store
To improve your actor's performance in the Apify Store:
- Add SEO metadata (title and description)
- Configure Pay-Per-Event pricing
- Provide clear documentation and usage examples
- Define dataset schemas
- Ensure consistent reliability and recent builds
Quality Monitor identifies these optimization opportunities automatically and shows which changes will have the biggest impact.
What this does not cover
Quality Monitor does not debug runtime errors or validate output data. However, it complements runtime debugging by ensuring your actor is properly configured, documented, and discoverable — the factors that affect adoption and performance beyond execution.
Without an audit tool
Without a structured audit, issues like missing pricing, schemas, or SEO metadata often go unnoticed until performance drops or users complain. Quality Monitor surfaces these issues proactively across your entire fleet.
Can you use this for a single actor?
Yes — but Quality Monitor is most valuable when used across multiple actors, where it can prioritize fixes and surface patterns across your fleet.
Common questions this answers
- Why are my Apify actors not performing well in the Store?
- How do I improve my actor SEO and discoverability?
- Which of my actors need the most work?
- Why is my actor not generating revenue?
- How do I audit all my Apify actors at once?
- How can I prioritize fixes across a large actor fleet?
- What does a good Apify actor look like?
- How do I improve my Apify actor quality score?
- What is a good quality score for an Apify actor?
- Why are my actors not getting users or runs?
FAQ
Can I audit actors I don't own?
No. Quality Monitor calls GET /v2/acts?my=true, which returns only actors in your account.
How often should I run it?
Weekly for maintenance, daily during quality sprints, or monthly for stable fleets.
Is this a replacement for code review?
No. Quality Monitor checks metadata and configuration. Code review checks logic, security, and implementation. They complement each other — Quality Monitor catches the configuration issues that code review typically misses.
What happens on API rate limits?
Automatic retry with exponential backoff. Rate limits (429) and server errors (5xx) are retried up to 3 times.
Can I customize dimension weights?
Not in the current version. Download the breakdown object from each actor and compute your own weighted sum.
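The recomputation is a few lines of Python, sketched here against the `breakdown` keys shown in the output example (normalize by the weight total so any weight scale works):

```python
def reweighted_score(breakdown: dict, weights: dict) -> float:
    """Recompute a composite from the per-dimension breakdown with custom weights."""
    total = sum(weights.values())
    return sum(weights[d] * breakdown.get(d, 0) for d in weights) / total

# Example: care only about reliability and documentation.
custom = {"reliability": 0.6, "documentation": 0.4}
breakdown = {"reliability": 50, "documentation": 15, "pricing": 0}
print(reweighted_score(breakdown, custom))  # 36.0
```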
Why is the agentic readiness dimension only 5%?
Agentic usage is a newer capability. The weight reflects its current impact on overall actor quality. This may increase as AI agent adoption grows.
Does it support multiple accounts?
One account per run. To audit multiple accounts, run separately with each token.
Integrations
- Zapier — Schedule audits and send Slack alerts when fleet score drops
- Make — Build automated quality workflows with grade-based branching
- Google Sheets — Export scores for trend tracking
- Webhooks — Get notified when audits complete
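For the Google Sheets route, the per-actor results flatten naturally to CSV. A Python sketch assuming the output fields documented above (extra fields in each record are ignored):

```python
import csv
import io

def actors_to_csv(actors: list[dict]) -> str:
    """Flatten per-actor results into CSV text for spreadsheet import."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf,
        fieldnames=["name", "qualityScore", "grade", "quickWin"],
        extrasaction="ignore",  # drop fields we don't chart, e.g. breakdown
    )
    writer.writeheader()
    writer.writerows(actors)
    return buf.getvalue()

actors = [
    {"name": "quick-prototype-scraper", "qualityScore": 28, "grade": "F",
     "quickWin": "Add PPE pricing (+15 points)", "alert": True},
]
print(actors_to_csv(actors))
```

Paste or upload the resulting CSV into a sheet, then chart qualityScore per actor across weekly runs to watch trends.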
Support
Found a bug or have a feature request? Open an issue in the Issues tab on this actor's page.
Related actors
AI Cold Email Writer — $0.01/Email, Zero LLM Markup
Generates personalized cold emails from enriched lead data using your own OpenAI or Anthropic key. Subject line, body, CTA, and optional follow-up sequence — $0.01/email, zero LLM markup.
AI Outreach Personalizer — Emails with Your LLM Key
Generate personalized cold emails using your own OpenAI or Anthropic API key. Subject lines, opening lines, full bodies — tailored to each lead's role, company, and signals. $0.01/lead compute + your LLM costs. Zero AI markup.
Bulk Email Verifier — MX, SMTP & Disposable Detection at Scale
Verify email deliverability in bulk — MX records, SMTP mailbox checks, disposable detection (55K+ domains), role-based flagging, catch-all detection, domain health scoring (SPF/DKIM/DMARC), and confidence scores. $0.005/email, no subscription.
CFPB Complaint Search — By Company, Product & State
Search the CFPB consumer complaint database with 5M+ complaints. Filter by company, product, state, date range, and keyword. Extract complaint details, company responses, and consumer narratives. Free US government data, no API key required.
Ready to try Quality Monitor — Actor Quality Scorer?
Start for free on Apify. No credit card required.
Open on Apify Store