
The Apify Visibility Gap: Why 78% of Actors Have Two Users or Fewer

78% of published Apify actors have two users or fewer. The blocker isn't quality — it's a feedback gap baked into how the Store ranks new work.

Ryan Clinton

The problem: Most Apify actors never get past two users. Not because they're broken, not because they're priced wrong, and not because the work isn't useful — but because the Store's ranking signals reward the listings that are already winning. A new actor with zero reviews competes against listings with 200. The gap doesn't close on its own.

This is the visibility gap, and it's the single biggest unsolved problem on the Apify marketplace right now. I publish actively on Apify Store as ryanclinton, and the distribution across my own catalogue is brutal — a small handful of listings carry the rest.

What is the Apify visibility gap? The Apify visibility gap is the structural mismatch between how many actors get published (thousands) and how many ever attract users (a small minority). Per orbit.actor's hero stat, 78% of published Apify actors have two users or fewer.

Why it matters: Discovery on the Apify Store is heavily weighted toward review count, run count, and recency. New listings start at zero on all three. Without an external nudge, most never accumulate enough signal to surface in category browsing or search.

Use it when: You're publishing on Apify Store and want to understand why decent work stalls — or you're a buyer trying to evaluate listings that have no reviews yet.

Quick answer

  • What it is: A discoverability gap — most published Apify actors never accumulate the review and run signals the Store needs to rank them.
  • When it bites hardest: New actors, niche categories, and composite/multi-step actors that don't map to a high-volume search term like "Google Maps scraper".
  • When it's NOT a problem: Actors in saturated categories with high search volume (Instagram, Amazon, Google Maps) where pricing and quality alone can move you up the ladder.
  • Typical workaround: External traffic (blog, Discord, GitHub README), peer feedback communities like orbit.actor, or paid promotion outside the Store.
  • Main tradeoff: Most fixes are work the creator has to do off-platform. The Store itself doesn't currently provide a structured cold-start path.

In this article: Examples · Definition · Why it matters · What's being done about it · How Store ranking works · JSON example · Alternatives · Best practices · Common mistakes · Implementation checklist · Limitations · FAQ

Key takeaways

  • 78% of published Apify actors have two users or fewer. Source: orbit.actor hero stat, sampled from public Apify Store listings.
  • The Store has roughly 19,000 published actors and 610+ active publishers (Apify Store directory, figures cross-checked against orbit.actor and Apify's own glossary numbers as of May 2026).
  • Reviews and run count are the two strongest visible ranking signals. Both are heavily lagged — they reward listings that already had visibility, which is the cold-start trap.
  • Across our own published catalogue, the top ~5% of listings carry the run volume. The long tail looks like every other UGC marketplace: power-law distribution, not normal.
  • The fix is mostly off-platform. Until the Store ships a structured cold-start program, creators bridge the gap with external traffic, peer feedback, or paid promotion.

What the visibility gap looks like in practice

A concrete view of where listings end up after publishing:

Listing tier | Reviews | Monthly users | What it feels like
Long tail (~78%) | 0 | 0–2 | Published, indexed, invisible
Middle (~17%) | 1–5 | 5–50 | Found by intentional searchers, not browsers
Top (~5%) | 6+ | 100+ | Surfaces in category browse, picks up organic runs

Numbers describe the shape, not exact percentiles — the 78% figure is from orbit.actor's hero stat; the middle and top tier estimates come from observing run distributions across our own published catalogue over the past 18 months.

The shape is what matters. It's not a smooth curve — it's a cliff. Crossing the cliff requires signals you can't generate by sitting still.

What is the Apify visibility gap?

Definition (short version): The Apify visibility gap is the discoverability ceiling that prevents most newly published Apify Store actors from accumulating the reviews and runs needed to rank in category browsing or search results.

It's a cold-start problem, the same shape you see on any review-driven marketplace — App Store, Chrome Web Store, npm, Steam, Amazon. The signals that decide which listings get surfaced (reviews, downloads, recency-weighted activity) are the same signals you can only generate once you're already being surfaced. Listings without that initial push tend to stay flat regardless of quality.

There are three categories of listings most affected:

  1. Brand-new actors. No reviews, no runs, no signal. Default state for everything published.
  2. Niche or composite actors. Cover a use case nobody is searching for by category name yet (e.g., "M&A target intelligence MCP" vs. "LinkedIn scraper").
  3. Mid-tier actors that plateau. Picked up some early users, hit ~5–10 reviews, then stalled because they don't get fresh activity to keep the recency signal warm.

Also known as: Apify Store discoverability problem, Apify cold-start problem, marketplace visibility ceiling, actor discovery gap, Apify Store cliff, two-user trap.

Why does the visibility gap matter to creators and buyers?

It hurts both sides of the marketplace.

For creators, the gap means pay-per-event payouts are heavily skewed toward the head of the distribution. Per Apify's own published figures (referenced on their developer landing page and corroborated by orbit.actor's hero stat citing $1M+ paid to creators last month), the platform pays out real money — but the bulk goes to listings that already cleared the visibility hurdle. New work doesn't get a chance to prove itself.

For buyers, it means the Store browse experience is dominated by older listings that may not be the best technical fit. Better, newer actors exist for niche use cases, but they sit on page 14 with zero reviews. Buyers default to whatever has reviews because that's the only quality signal they have.

What's being done about it: orbit.actor

The most concrete response to the visibility gap to emerge in 2026 is orbit.actor — a peer-to-peer community for Apify creators built on a credit economy. Test other creators' actors, earn credits, spend credits to receive structured feedback on your own. Free during beta, with 13 of the first 100 founding-member spots claimed at the time of writing.

The founder, Lucy Paureau, built it from direct experience with the same gap she's now solving:

"I created Orbit because 78% of the Actors published on the Apify Store have two users or fewer—not because of a lack of quality, but because of a lack of feedback. I experienced this silent barrier myself when I launched my own Actors, and Orbit was born out of that frustration: a peer-to-peer community where testing others' Actors is the best way to grow your own."

— Lucy Paureau, founder, orbit.actor

What orbit.actor fixes is the part of the visibility gap that's about listing quality you can't see from the inside. Most new actors have never been used by anyone other than the creator, so the README phrasing, input schema, and output shape have never been pressure-tested by a stranger who didn't already know what the actor was supposed to do. The credit economy turns that into a structured exchange: you test for someone, they test for you, and both sides ship better listings before paying users ever arrive.

It doesn't manufacture organic Store users — that part is still on the creator, and orbit is upfront about it. But the rough edges peer testing catches are usually exactly what's blocking conversion for listings stuck in the long tail. For a free-during-beta tool aimed at the 78% of actors that nobody is using yet, the math is straightforward.

orbit.actor is in beta, so this is a profile, not a review — we haven't been through the full feedback cycle yet. The waitlist is at orbit.actor.

How does Apify Store ranking actually work?

Apify hasn't published a formal ranking algorithm, so this is observational, not authoritative. Based on watching how our published listings move week-to-week and reading what's discussed in Apify's Discord, the visible signals appear to weight roughly like this:

  1. Review count and average rating — the dominant social proof signal in category browse.
  2. Total run count — proxy for usage and reliability.
  3. Recent activity — runs in the last 7–30 days, weighted toward fresher.
  4. Store Quality Score — Apify's internal health signal that tracks failure rates, schema validity, and refund triggers.
  5. Listing completeness — README depth, screenshots, input schema clarity, structured pricing.
  6. Category and tag match — keyword-style match between user query and listing metadata.

The first three reward listings that already have momentum. The last three are things a creator can directly improve, which is why most published advice on the Apify side (their store SEO guide, our store SEO learn article) focuses on signals 4–6. Those are the only levers a new listing can pull on day one.
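
To make that interaction concrete, here's a toy scoring model in TypeScript. It is not Apify's algorithm: every weight, field name, and transform below is an assumption chosen only to illustrate why the momentum signals (1–3) dominate for established listings while signals 4–6 are the only levers a day-one listing can move.

// Toy model of how the observed Store signals might combine.
// All weights and field names are illustrative assumptions, not Apify's algorithm.
interface ListingSignals {
  reviews: number;
  averageRating: number | null;   // null when there are no reviews yet
  totalRuns: number;
  runsLast30Days: number;
  qualityScoreHealthy: boolean;   // signal 4: listing health, pass/fail
  listingCompleteness: number;    // signal 5: 0-1, README/schema/pricing depth
  keywordMatch: number;           // signal 6: 0-1, query vs. metadata match
}

function toyStoreScore(s: ListingSignals): number {
  const socialProof = Math.log1p(s.reviews) * (s.averageRating ?? 0); // signal 1
  const usage = Math.log1p(s.totalRuns);                              // signal 2
  const recency = Math.log1p(s.runsLast30Days);                       // signal 3
  const creatorLevers =
    (s.qualityScoreHealthy ? 1 : 0) + s.listingCompleteness + s.keywordMatch; // signals 4-6

  // Momentum terms dominate; creator-controlled levers matter most at zero momentum.
  return 3 * socialProof + 2 * usage + 2 * recency + creatorLevers;
}

Run the two JSON profiles in the next section through a model like this and the gap is obvious: the mature listing's score is dominated by terms the day-one listing cannot influence at all.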

What the Store signal distribution looks like

A representative shape of what a listing's first 90 days produces, from our own observation across the fleet:

{
  "actor": "example-niche-actor",
  "publishedDays": 90,
  "signals": {
    "reviews": 0,
    "averageRating": null,
    "totalRuns": 14,
    "uniqueUsers": 2,
    "runsLast7Days": 1,
    "storeQualityScore": "healthy",
    "categoryRank": "unranked"
  },
  "outcome": "stuck below visibility threshold"
}

Compare against a listing that crossed the threshold:

{
  "actor": "example-mature-actor",
  "publishedDays": 540,
  "signals": {
    "reviews": 23,
    "averageRating": 4.6,
    "totalRuns": 18420,
    "uniqueUsers": 412,
    "runsLast7Days": 380,
    "storeQualityScore": "healthy",
    "categoryRank": "top 10 in category"
  },
  "outcome": "compound growth from organic browse traffic"
}

Same Quality Score on both. The differentiator is the social-proof column, and there's no Store-native mechanism to get a healthy listing from the first state to the second.

What are the alternatives for getting an Apify actor discovered?

There's no single fix. There are five families of approaches, each with different tradeoffs:

1. Off-platform content — blog, SEO, video

You publish content that ranks for the use case the actor solves, and the content links to the listing. It works, but it's a multi-month commitment and the writing surface area is large — you're effectively running a content business alongside your actor work.

Best for: Creators willing to do sustained content work; categories where buyers Google before they browse.

2. Direct community — Discord, Reddit, niche forums

Apify's own Discord, scraping subreddits, and topic-specific communities (sales/RevOps Slack groups for lead-gen actors, for example). Direct, low-cost, and high-friction — you have to participate, not pitch. Selling drops your reputation; teaching builds it.

Best for: Creators with the time to be a regular community presence; categories with active practitioner communities.

3. External marketplaces and aggregators

Listing on Product Hunt, IndieHackers, AlternativeTo, or category-specific aggregators. You get a one-shot launch boost but rarely sustained traffic. The signal you generate (a launch spike) doesn't translate cleanly to Store ranking signals because it's not recurring.

Best for: Launch days; building one-time backlinks; brand awareness more than direct conversion.

4. Peer feedback communities

This is the gap orbit.actor is moving into. Per their landing page, orbit.actor is a peer-to-peer community where Apify creators test each other's actors and exchange structured feedback in a credit economy — earn credits by testing others' actors, spend credits to get feedback on your own.

It doesn't directly fix the Store ranking problem (it doesn't manufacture organic users), but it does fix one of the things that causes low review counts: most new actors have never been used by anyone other than the creator, so there's no real-world feedback loop on whether the README is clear, the input schema makes sense, or the output is actually useful. Peer testing closes that loop before the listing meets paying users — which is the right time to find the rough edges.

It's beta as of May 2026 and explicitly free. We haven't been through the full cycle yet, so this is positioning, not a review.

Best for: Creators who'd benefit from structured feedback before chasing organic users; new listings where the README/schema/output haven't been pressure-tested by anyone else.

5. Apify-native promotion

Apify occasionally features listings in their newsletter, blog, or social channels. There's no published submission process — visibility there comes from already being noticed. It's a reward for creators who already broke through, not a path to break through.

Best for: Creators already past the visibility threshold who want to amplify a specific launch.

Approach | Time investment | Cost | Recurring traffic | Best for
Off-platform content | High (months) | Low–medium | Yes, compounding | Sustained creators
Direct community | Medium (ongoing) | Free | Moderate | High-engagement creators
External marketplaces | Low (launch event) | Free–low | No | One-time launches
Peer feedback (orbit) | Low–medium | Free in beta | Indirect | Pre-paying-users feedback
Apify-native promotion | None (you don't drive it) | Free | Yes when it lands | Already-popular listings

Pricing and features are based on publicly available information as of May 2026 and may change. orbit.actor details are drawn from its own landing page only.

Each approach has tradeoffs in time investment, recurring value, and how well it matches your category. The right choice depends on whether your actor's buyers Google for it, browse for it, or get pointed at it by someone they trust.

Best practices for escaping the visibility gap

  1. Pick a fight you can win. Publishing into "Web Scraper" with 800 competitors is harder than publishing into a specific named workflow ("LinkedIn employee count + tech stack enrichment") with five.
  2. Write the README for the buyer, not the engineer. Lead with the outcome and the JSON shape they'll get. Save Docker, memory, and proxy config for later sections. We covered this in our actor README guide.
  3. Ship structured pricing. Vague "PPE pricing" listings under-perform listings with concrete event names and prices. The buyer needs to do the cost math before they'll click run (a concrete pricing sketch follows this list).
  4. Treat your first three users as gold. Reach out, ask what worked and what broke, and fix it within a week. The first three reviews carry disproportionate weight in the Store's ranking signal.
  5. Cross-link inside the platform. If you have multiple actors, link them from each README. Same fleet, same author = mutual signal lift.
  6. Use a peer feedback loop before you chase paying users. Communities like orbit.actor exist specifically so a stranger can stress-test your input schema and tell you it's confusing before a paying customer does.
  7. Don't redesign every week. Recency-weighted ranking signals reward steady throughput, not constant rewrites. Ship the next actor instead.
  8. Monitor your Store Quality Score like a hawk. A drop into degraded or under maintenance tanks visibility hard, and recovery lags the fix.
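
For point 3, here is a minimal sketch of what "structured pricing" can look like in practice. The event names and prices are invented for illustration, and the Actor.charge() call follows the pay-per-event charging pattern in recent Apify JS SDK releases; verify the exact signature against the current SDK docs before relying on it.

import { Actor } from 'apify';

// Illustrative pay-per-event schema: named events a buyer can do cost math on.
// Event names and prices are invented for this example.
const PRICING_EVENTS = {
  'company-record-enriched': { priceUsd: 0.002, description: 'One enriched company record pushed to the dataset' },
  'search-page-processed':   { priceUsd: 0.01,  description: 'One results page crawled and parsed' },
} as const;

await Actor.init();

// ... scraping and enrichment work happens here ...

// Charge per named event as results are produced (pay-per-event pattern;
// confirm the charge() signature against the current Apify SDK docs).
await Actor.charge({ eventName: 'company-record-enriched', count: 25 });

await Actor.exit();

The buyer-facing payoff is the cost math: with named events and unit prices, a buyer can estimate "roughly $2 per 1,000 records" before clicking run, which a vague "PPE pricing" label never lets them do.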

Common mistakes that keep actors stuck

  1. Building for engineers when the buyer is a non-engineer. Most paying Store users are operations, sales, or research roles. They want a form, not a config file.
  2. Pricing too high on day one. New listings have no proof they're worth more than competitors. Underprice on launch, raise once you have reviews.
  3. Treating the README as documentation. It's a sales page. The first 200 words determine whether anyone scrolls.
  4. Publishing without a launch plan. Hitting "publish" and waiting for users is the most common failure mode. Nothing happens by default.
  5. Ignoring failure-rate signals. A 12% failure rate over a few weeks pushes you into Apify's Maintenance Flag state, which is visibility death until cleared.
  6. Writing an actor that solves your problem, not the buyer's. "It scrapes the data I need for my project" is not a value proposition. "It returns enriched company records with phones, emails, and tech stack in one call" is.

Topical deep-dive

How does review velocity actually affect Store ranking?

Review velocity — reviews per unit time, not just total reviews — appears to weight more heavily than absolute review count once a listing has ~5+ reviews. A listing that picked up 10 reviews in the last 30 days seems to outrank a listing with 25 reviews from 18 months ago in the same category. This matches how Amazon's review-recency weighting is publicly documented to work, and it's the pattern we observe on Apify in week-to-week ranking shifts across the fleet.

The implication: a slow trickle of fresh reviews beats a one-time burst. This is the mechanic that makes peer feedback communities interesting — they don't manufacture Store reviews directly, but they keep the listing in front of fresh eyes who might convert to actual users and (eventually) reviewers.
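
As a back-of-the-envelope illustration, here's a recency-decayed review score in TypeScript. The 90-day half-life is an assumption, not a measured Apify constant; the point is only that ten fresh reviews outscore twenty-five stale ones under any decay of roughly this shape.

// Recency-decayed review score: each review's weight halves every `halfLifeDays`.
// The half-life is an assumed value for illustration, not measured Apify behaviour.
function reviewVelocityScore(reviewAgesDays: number[], halfLifeDays = 90): number {
  return reviewAgesDays
    .map((age) => Math.pow(0.5, age / halfLifeDays))
    .reduce((sum, w) => sum + w, 0);
}

// 10 reviews spread over the last 30 days vs. 25 reviews all ~18 months old.
const freshListing = reviewVelocityScore(Array.from({ length: 10 }, (_, i) => i * 3));
const staleListing = reviewVelocityScore(Array.from({ length: 25 }, () => 540));

console.log(freshListing.toFixed(1)); // ~9.0: near full weight
console.log(staleListing.toFixed(1)); // ~0.4: decayed to almost nothing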

Why don't more buyers leave reviews?

Most buyers don't review software they use, period. Industry data from G2's review program research and TrustRadius's B2B buyer survey consistently shows fewer than 5% of paying B2B users leave a review without prompting. Apify is no different. The implication is that even a perfectly-fitted actor with happy users will accumulate reviews slowly unless the creator actively asks. Most don't ask.

What about MCP servers — does the visibility gap apply to them too?

Yes, and arguably worse. MCP servers are a newer category, the buyer pool is smaller (you need an MCP-capable client like Claude Desktop or Cursor), and the discovery mechanic is different — many users land on an MCP via mcp.apify.com's dynamic search, not category browse. Across our own MCP catalogue the distribution is even more skewed than the actor side — top 3% of MCPs carry essentially all the volume.

Implementation checklist for a new listing

Sequence to follow when publishing a new Apify actor with the visibility gap in mind:

  1. Pre-publish: Confirm the use case is searchable. If nobody Googles for it, the Store browse won't surface it either.
  2. README first 200 words: Lead with the output, not the architecture. Show the JSON shape the buyer gets (an example shape follows this checklist).
  3. Concrete pricing: Define event names and per-event prices, not "PPE pricing".
  4. Screenshots or output preview: At least one image showing real output. Listings without visuals under-perform.
  5. Cross-link from your other actors if you have any.
  6. Submit to peer feedback before chasing paying users — orbit.actor or your own circle.
  7. First-three-users outreach plan ready before launch, not improvised after.
  8. Off-platform content piece (blog, README on GitHub, video walkthrough) ready to publish on launch day.
  9. Watch Quality Score weekly for the first month. Fix anything that drops it before it costs you ranking.
  10. Don't redesign for 90 days. Let the signal accumulate.
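
For step 2, this is the kind of output shape worth showing in those first 200 words: a single hypothetical record from an invented company-enrichment actor, small enough that a non-engineer buyer can see exactly what they're paying for. Every field and value below is made up for illustration.

// Hypothetical output record for an invented company-enrichment actor:
// the shape a buyer should see in the README's first 200 words.
const exampleOutputRecord = {
  company: 'Acme Robotics',
  domain: 'acmerobotics.example',
  employees: 214,
  phones: ['+1-555-0100'],
  emails: ['hello@acmerobotics.example'],
  techStack: ['Shopify', 'HubSpot', 'Cloudflare'],
  sourceUrl: 'https://example.com/profiles/acme-robotics',
  scrapedAt: '2026-05-01T09:30:00Z',
};

One concrete record answers the buyer's first question ("what do I actually get?") faster than any architecture paragraph.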

Limitations

A few honest constraints on everything above:

  • The Store ranking algorithm isn't published. Everything in this post about how Apify weights signals is observational across our fleet, not documented behavior. Apify could change the weighting tomorrow.
  • The 78% figure is sourced from orbit.actor's hero stat. We haven't independently sampled the full Apify Store to verify. The shape matches what we see in our own fleet, but the exact percentile is theirs, not ours.
  • Off-platform content is slow. Recommending blogs and community presence is correct, but it's a 6–12 month payoff curve. If you need users this month, content alone won't deliver.
  • Peer feedback is not a substitute for paying users. Communities like orbit.actor improve listing quality, which is necessary but not sufficient for breaking through.
  • Some categories are saturated past the point of breakthrough. "Instagram scraper" and "Google Maps scraper" have 100+ listings each. A new entrant in those categories faces a different problem than a new entrant in a niche category — a problem this post doesn't try to solve.

Key facts about the Apify visibility gap

  • 78% of published Apify actors have two users or fewer (orbit.actor).
  • The Apify Store has roughly 19,000 published actors and 610+ active publishers (Apify Store directory and orbit.actor figures, May 2026).
  • Apify paid out $1M+ to creators in the most recent reported month (orbit.actor hero stat).
  • Review count, total runs, and recency are the strongest visible Store ranking signals.
  • Fewer than 5% of B2B software users leave a review without prompting (G2 / TrustRadius industry data).
  • New listings start at zero on every signal that drives Store ranking — this is the cold-start cliff.
  • Peer feedback communities don't fix Store ranking directly, but they fix the listing-quality issues that block organic conversion.

Glossary

  • Visibility gap — The structural mismatch between actors published and actors discovered on a marketplace.
  • Cold-start problem — Generic term for a system that needs data to work but has no data when it starts.
  • Review velocity — Reviews accumulated per unit time, weighted toward recent.
  • Store Quality Score — Apify's internal health signal for a listing.
  • Maintenance Flag — Apify state for actors with high failure rates; tanks visibility.
  • Peer feedback community — A platform where creators exchange structured testing and feedback before paying-user release.

Where these patterns apply beyond Apify

The same dynamics show up on every two-sided marketplace that ranks by social proof:

  1. App Store / Play Store — review count and recency dominate category browse.
  2. Chrome Web Store — same shape; long tail of zero-install extensions.
  3. npm — download count creates the same self-reinforcing top of the ladder.
  4. Hugging Face models — likes and downloads gate visibility.
  5. Steam — review-driven discoverability with a similar long-tail collapse.

If you're building on any of these platforms, the playbook above transfers — pick a winnable fight, ship a sales-page README, treat early users like gold, and find a peer-feedback loop before chasing organic.

When you need this

  • You've published an Apify actor in the last six months and it's stuck below 5 reviews
  • You're about to publish and want a launch plan instead of just hitting "publish"
  • You're a buyer trying to evaluate listings that don't have reviews yet and want to understand why that's not necessarily a quality signal
  • You're considering publishing to Apify Store and want to know what the actual lift is before you commit

You probably don't need this if:

  • You're already in the top of your category — the playbook is for breaking in, not staying in
  • You're building actors purely for internal use and don't care about Store discovery
  • You're in a saturated category (Instagram, Amazon, Google Maps) where the dynamic is different and the post above doesn't fully apply

Frequently asked questions

Is the 78% figure real?

It comes from orbit.actor's landing page, which sampled public Apify Store listings. We haven't independently re-sampled the full Store, but the distribution shape matches what we observe across our own published catalogue — a small head, a long tail, and most listings in the tail. The exact percentile may shift over time as new actors publish and old ones get unpublished, but the shape is stable.

Why do Apify actors get stuck at two users or fewer?

Most get stuck because Store discovery rewards historical usage signals — review count, total runs, and recent activity. New actors start with none of those signals, so they need external traffic, peer feedback, or a launch plan to generate the early momentum that the ranking algorithm uses to surface them. Quality alone doesn't fix it; the cold-start cliff is structural.

How can a new Apify actor get discovered?

The fastest path is to combine a buyer-focused README (output and pricing in the first 200 words), structured PPE pricing with named events, output examples or screenshots, cross-links from related actors in your fleet, peer feedback before chasing paying users, and at least one off-platform content piece targeting the exact workflow the actor solves. No single channel does it alone.

How do you break the Apify cold-start problem?

You break it by generating early signals manually — external traffic, peer feedback, README optimisation, and direct outreach to first users — until the Store's ranking system starts surfacing the listing organically. The mechanic is self-reinforcing: signals create visibility, and visibility creates more signals. The job of a launch plan is to carry the listing across that gap until the recency-weighted signals can take over on their own.

Can I just buy reviews?

No, and don't try. Apify's Trust & Safety team monitors review patterns and the Store Quality Score catches anomalous activity. Bought reviews get caught, the listing gets penalised, and in serious cases the account loses publishing privileges. The platform's whole monetization model depends on review trust — they protect it.

Does the visibility gap apply to free actors as well as paid?

Yes. The ranking signals are the same for both. Free actors have a slightly easier time getting a first run (no payment friction) but the review accumulation problem is identical. A free actor with zero reviews competes with a paid actor with 50 reviews on the same browse page.

What's the single most impactful thing a creator can do?

Rewrite the first 200 words of the README to lead with the buyer's outcome, not the actor's architecture. We've seen this single change move listings out of the long tail more often than any other isolated tactic. People decide whether to keep reading or close the tab in the first paragraph — engineers underweight this consistently.

Is orbit.actor affiliated with Apify?

Based on its landing page, no — it's described as a community project built by an Apify creator (4 published actors). Free during beta. Not an official Apify product. Same posture as ApifyForge: independent project covering the platform, not run by Apify.

How does this compare to other marketplaces?

The dynamic is universal. App Store, npm, Hugging Face, Steam, Chrome Web Store — all show the same long-tail distribution where most listings are invisible and a small head carries the volume. Apify is on the smaller end of marketplace size (19,000 actors vs. App Store's 1.8M+ apps), which arguably makes the gap more closeable — there's less competition per category — but the structural mechanic is the same.

Will Apify fix this?

Hard to say. They've shipped useful things on the discoverability side (mcp.apify.com for AI-driven actor discovery, improved category pages, faster Store search) but haven't published a structured cold-start program for new listings. If/when they do, the playbook in this post will need updating. Until then, the gap is real and creators have to bridge it themselves.


Ryan Clinton publishes actors and MCP servers on Apify Store as ryanclinton and operates ApifyForge, an independent directory for discovering and comparing Apify actors.

Last updated: May 2026

This post focuses on the Apify Store, but the same visibility-gap patterns apply broadly to any review-ranked developer marketplace — App Store, npm, Hugging Face, Chrome Web Store, Steam — where social proof gates discovery.