Data Intelligence · Original Research · Creator Economy · Developer Marketing · Apify

The 2026 Dev-YouTuber Sponsorship Audit: Only 2 A-Tier Creators in 90

Of 90 dev YouTubers indexed May 2026, 2 hit A-tier: NetworkChuck and TheCherno. Brilliant.org out-sponsors Cursor, Warp, and Codeium combined.

Ryan Clinton

We built youtube-sponsorship-intelligence to enrich a named list of YouTube creators with sponsor-readiness tier, priority score, detected sponsor brands from transcripts, affiliate-link hosts, MX-validated business contacts, and sponsor-maturity classification. To stress-test the 2026 industry chatter about "AI-native dev tools out-bidding traditional SaaS for top dev creators," we ran it across 121 named developer-focused YouTube channels on 14 May 2026. The 2 A-tier creators it surfaced from a 90-channel post-gate cohort, and the brand it crowned as the actual sponsor king of dev YouTube, are the data this post documents.

The problem: Every quarter a creator-marketing newsletter publishes the "AI dev tools are winning YouTube" thinkpiece off three sponsor screenshots and a Cursor announcement. The actual ad-read patterns across a named cohort of 90 developer YouTubers tell a completely different story. The narrative says Cursor, Warp, and Codeium are sweeping the dev-creator market. The transcripts say Brilliant.org from 2012 is doing more YouTube ad reads than all three of those tools combined. SDR teams reading the newsletter are calling the wrong creators with the wrong pitch.

This post is a documentary audit of one snapshot of the developer-YouTube sponsorship market. Every named creator, brand count, and tier classification links back to either the run output or the channel's own public YouTube page. The data is real, the pattern is real, and the framing the marketing press usually slaps on top is wrong.

What is sponsor-readiness tier? A four-band classification (A/B/C/D) the actor assigns to each YouTube channel based on subscriber count, sponsorship frequency, brand-detection density, contactability, and recency of the last sponsored video. A-tier means a creator is actively running sponsor reads, has a validated business contact, and clears a meaningful subscriber threshold. See the actor's docs on Apify for the full scoring formula.
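The real scoring formula lives in the actor's docs; purely as an illustration of the shape of such a classifier, here is a minimal sketch in which every threshold (subscriber floor, frequency cut, recency window) is a hypothetical stand-in, not the actor's actual values:

```python
def readiness_tier(subs, sponsor_freq_pct, mx_valid, days_since_sponsor):
    """Hypothetical four-band (A/B/C/D) classifier.

    All thresholds below are illustrative stand-ins; the actor's real
    formula is documented on Apify and also weighs brand-detection density.
    """
    # A-tier: active sponsor reads, validated contact, meaningful scale, recent read
    if subs >= 500_000 and sponsor_freq_pct >= 50 and mx_valid and days_since_sponsor <= 60:
        return "A"
    # B-tier: some sponsorship activity at moderate scale
    if subs >= 100_000 and sponsor_freq_pct >= 20:
        return "B"
    # C-tier: clears the subscriber gate but no recent sponsor activity
    if subs >= 50_000:
        return "C"
    return "D"
```

With these placeholder thresholds, a NetworkChuck-shaped input (5.25M subs, 89% frequency, valid contact, read 30 days ago) lands in A, and a large-but-inactive channel falls to C — matching the pattern the audit describes, though not the actor's exact cut-offs.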

Why it matters: Creator-marketing agencies and B2B SaaS marketers running outbound to YouTubers waste most of their outreach time on cold lists where 80%+ of names have never run a sponsor read. The Influencer Marketing Hub 2025 benchmark report puts YouTube creator-marketing spend at over $35B globally, with most of it allocated through agency intermediaries who buy lists rather than validate them.

Use it when: building a creator-outreach list, sizing a sponsor's competitive footprint on YouTube, finding the real ad-read incumbents in a vertical, or filtering a long name-list down to the subset that's actually sponsorship-active before any human SDR touches it.

Key findings

  • 2 of 90 channels (2.2%) reached A-tier — NetworkChuck (5.25M subs, 83 priority score) and TheCherno (748k subs, 70 priority). Both have MX-validated business contacts and ran sponsor reads in the last 30 days.
  • Brilliant.org is the cohort's largest sponsor footprint — 10 of 90 channels (11.1%) carry a brilliant.org affiliate link in descriptions. The runners-up are bit.ly (29 channels, generic redirector) and amzn.to (14 channels, Amazon affiliate).
  • AI-native dev tools post zero detected ad reads across the cohort. Cursor: 0 channels. Warp: 0 channels. Codeium: 0 channels. Continue: 0 channels. The 2026 "AI tools dominate dev YouTube" narrative does not show up in the transcripts.
  • 84% of channels (76 of 90) show zero detected sponsor activity in their 10 most recent videos. Sponsorship-active dev YouTubers are a small minority of the cohort, not the norm.
  • Mid-tier (250k-1M subs) is the sponsorship-active sweet spot — 21.6% of mid-tier creators are running active sponsor reads, versus 6.9% of large-tier (1M-5M) creators. The "go bigger" assumption is backwards.
  • The unreachable mega-tier: of 4 channels with 5M+ subscribers, only NetworkChuck shows active sponsor reads. CodeWithHarry (9.64M) and freeCodeCamp (11.6M) emit as "never" maturity with zero detected brands.
  • 14.4% of the cohort returned an MX-validated business email. This is the actor's commercial product — see the redaction note below for why the addresses themselves are not in the public CSV.
  • The actor's quality gate trimmed 11 channels below the 50k-subscriber floor (correctly — these resolved to renamed clones or small accounts) and 8 errored on handle resolution. Post-gate cohort: 90 unique channels by channelId.

In this article: The leaderboard · Story A: A-tier in detail · Story B: Brilliant.org dominance · Story C: mid-tier sweet spot · Story D: the unreachable mega-tier · Story E: 15% precision filtering · What coverage gets wrong · Methodology · Caveats · Press lift-out · FAQ

The 90-channel leaderboard — ranked by priority score

The full leaderboard of 90 channels (post quality-gate, post dedup by channelId) is downloadable as CSV from the "Produced by" banner above. The CSV omits the email-address column for the reasons noted in the redaction policy below; the priority score, tier, maturity, detected-brand list, and MX-valid boolean are all included.

The top 22 channels by priority score:

| Rank | Handle | Subs | Tier | Priority | Maturity | Brands detected | MX-valid email |
|------|--------|------|------|----------|----------|-----------------|----------------|
| 1 | @NetworkChuck | 5.25M | A | 83 | regular | 8 | Yes |
| 2 | @TheCherno | 748k | A | 70 | regular | 7 | Yes |
| 3 | @TechWithTim | 2.01M | B | 64 | regular | 3 | No |
| 4 | @CodingWithLewis | 741k | B | 64 | occasional | 1 | Yes |
| 5 | @dreamsofcode | 206k | B | 61 | regular | 3 | No |
| 6 | @AdrianTwarog | 418k | B | 61 | regular | 3 | No |
| 7 | @geekyshows | 546k | C | 58 | never | 0 | Yes |
| 8 | @davidbombal | 3.05M | B | 57 | occasional | 1 | No |
| 9 | @mehulmpt | 468k | B | 54 | occasional | 2 | No |
| 10 | @aniakubow | 442k | B | 52 | never | 0 | Yes |
| 11 | @DesignCourse | 1.17M | B | 52 | never | 0 | Yes |
| 12 | @programmingwithmosh | 5.04M | C | 52 | never | 0 | Yes |
| 13 | @realpython | 207k | B | 51 | regular | 2 | No |
| 14 | @JomaTech | 2.34M | C | 51 | never | 1 | Yes |
| 15 | @JoshuaFluke1 | 624k | B | 50 | occasional | 2 | No |
| 16 | @jherr | 211k | C | 49 | occasional | 4 | No |
| 17 | @sentdex | 1.44M | C | 49 | never | 0 | Yes |
| 18 | @fknight | 694k | B | 48 | occasional | 4 | No |
| 19 | @Indently | 351k | B | 48 | saturated | 1 | No |
| 20 | @amigoscode | 1.09M | C | 45 | never | 0 | Yes |
| 21 | @Telusko | 2.78M | B | 45 | never | 0 | Yes |
| 22 | @LearnWebCode | 353k | C | 45 | never | 0 | No |

Captured 14 May 2026 via the youtube-sponsorship-intelligence actor against YouTube channel pages and the 10 most recent video transcripts per channel. Tier and priority score are deterministic outputs of the actor's scoring formula.

Readiness tier distribution across the full cohort

| Tier | Channels | % of 90 |
|------|----------|---------|
| A | 2 | 2.2% |
| B | 17 | 18.9% |
| C | 55 | 61.1% |
| D | 16 | 17.8% |

C-tier is the bulk of the dev-YouTube market: subscriber count clears the gate, but the actor found no recent sponsor reads in the transcript sample. D-tier is below either the activity bar or the contactability bar.

Story A — the 2 A-tier creators

@NetworkChuck (5.25M subscribers, A-tier, priority 83.) Cybersecurity and networking tutorials. The actor detected 8 sponsor brands in the 10 most recent video transcripts: 3CX, BambuLabs, Bitdefender, Cisco, Hostinger, NetworkChuck Academy, Perplexity, and Twingate. Sponsorship frequency: 89% of recent videos contain a detected ad read. Last sponsored video: 14 April 2026. MX-valid business contact returned. This is the textbook dev-YouTube sponsor read: enterprise infrastructure (Cisco, Twingate), consumer-prosumer security (Bitdefender), AI search (Perplexity), and the creator's own school (NetworkChuck Academy).

@TheCherno (748k subscribers, A-tier, priority 70.) C++ game engine development. The actor detected 7 sponsor brands: CodeRabbit, GitKraken, Hostinger, Let's Get Rusty (a Rust course channel cross-promo), Miro, Surfshark, and Twingate. Sponsorship frequency: 71% of recent videos. Last sponsored video: 14 April 2026. MX-valid business contact returned. Smaller subscriber base than NetworkChuck but higher detected-brand density per video — TheCherno is running dev-tool sponsorships on a near-every-video cadence.

These two are what an outbound creator-marketing team should actually be calling first. The other 88 channels are progressively worse fits for a "we want a sponsored video next month" pitch.

Story B — Brilliant.org out-sponsors every AI dev tool combined

Across the 90-channel cohort, the brand with the largest detected footprint is Brilliant.org. Ten channels carry a brilliant.org affiliate URL in their video descriptions — the cohort's most-shared non-generic affiliate host. Two additional channels surface "Brilliant" as a transcript-detected sponsor mention separate from the affiliate link.

The 2026 marketing-trade-press narrative argues that AI-native dev tools have "captured" dev YouTube ad spend. The actor's transcript scan finds:

| Brand | Channels with detection (of 90) | Notes |
|-------|---------------------------------|-------|
| Brilliant.org | 10 affiliate + 2 transcript | Educational platform, founded 2012 |
| Hostinger | 4 transcript | Web hosting |
| Twingate | 3 transcript | Zero-trust network access |
| KiwiCo | 2 transcript | STEM kits for kids |
| Cursor | 0 | AI code editor |
| Warp | 0 | AI terminal |
| Codeium | 0 | AI code completion |
| Continue | 0 | Open-source AI coding |

Sponsor counts based on detection across the 10 most recent video transcripts per channel as of 14 May 2026 and may change. Brand absence here means absence in the transcript sample, not absence from the broader market.

The mechanism is straightforward: AI-native dev tools are still mostly buying audience through Twitter/X seeding, direct deals with individual creators, and conference sponsorships. They have not yet meaningfully rotated into YouTube ad-read budgets at the cohort scale visible here. Brilliant.org, Hostinger, Twingate, GitKraken, and Bitdefender are the actual incumbents.

An agency briefing a client on "where to sit in the dev-YouTube ad mix" should be benchmarking against Brilliant.org's allocation, not against the AI-tool roster the trade press fixates on.
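Mechanically, the footprint numbers above reduce to a per-channel distinct count: a brand scores once per channel, however many times it appears in that channel's transcripts. A minimal sketch (the input shape is an assumption; the actor's raw.json nests detections under sponsorshipSignals.sponsorBrandsDetected per channel):

```python
from collections import Counter

def brand_footprint(records):
    """Count, per brand, how many channels had at least one detection.

    `records`: dict mapping channelId -> list of detected brand strings
    (shape assumed for illustration). Lower-casing normalises
    near-duplicates like "twingate" / "Twingate", as the post-processing
    pass described in the methodology does.
    """
    counts = Counter()
    for brands in records.values():
        # set() ensures each brand counts once per channel, not per mention
        for brand in set(b.lower() for b in brands):
            counts[brand] += 1
    return counts
```

Ranking `counts.most_common()` over the 90-channel cohort is what produces the Brilliant.org-first footprint table above.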

Story C — mid-tier creators are 3× more sponsorship-active than large-tier

The cross-aggregation of subscriber tier against sponsorship activity flips the conventional "go big" assumption on its head:

| Subscriber tier | Channels | Sponsorship-active | % active | MX-valid email | Avg priority |
|-----------------|----------|--------------------|----------|----------------|--------------|
| Mega (5M+) | 4 | 1 | 25.0% | 3 (75%) | 55 |
| Large (1M–5M) | 29 | 2 | 6.9% | 5 (17.2%) | 36 |
| Mid (250k–1M) | 37 | 8 | 21.6% | 5 (13.5%) | 38 |
| Small (50k–250k) | 20 | 3 | 15.0% | 0 (0%) | 37 |
| Total | 90 | 14 | 15.6% | 13 (14.4%) | — |

Sponsorship-active here means a maturity classification of saturated, regular, or occasional. Subscriber tiers are derived from enrichment output, not from the input handle list.

Mid-tier (250k–1M) is 3.1× more sponsorship-active than large-tier (1M–5M). The intuition that "more subscribers = more sponsorship activity" is wrong for this cohort. Large-tier dev YouTubers — including some of the most-recognised names in the developer space — frequently emit as "never" maturity because they monetise through other channels: courses, paid newsletters, books, conference speaking, or their own SaaS.

For a B2B SaaS marketer with a fixed creator-marketing budget, this matters concretely. A 600k-subscriber channel running monthly sponsor reads is a better pickup than a 2M-subscriber channel that hasn't read a sponsor in eight months. The actor surfaces this distinction directly via the maturity field.
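The cross-aggregation above is a plain group-by on enrichment output. A sketch with hypothetical field names (`subs`, `maturity` stand in for whatever the enriched record actually calls them):

```python
ACTIVE = {"saturated", "regular", "occasional"}  # the post's "sponsorship-active" definition

def tier_activity(rows):
    """Percent of sponsorship-active channels per subscriber tier.

    `rows`: list of dicts with 'subs' and 'maturity' keys (names assumed).
    """
    def sub_tier(subs):
        if subs >= 5_000_000: return "Mega (5M+)"
        if subs >= 1_000_000: return "Large (1M-5M)"
        if subs >= 250_000:   return "Mid (250k-1M)"
        return "Small (50k-250k)"

    totals, active = {}, {}
    for r in rows:
        t = sub_tier(r["subs"])
        totals[t] = totals.get(t, 0) + 1
        active[t] = active.get(t, 0) + (r["maturity"] in ACTIVE)
    return {t: round(100 * active[t] / totals[t], 1) for t in totals}
```

Run over the 90-row dataset this yields the 21.6%-mid versus 6.9%-large gap the story hinges on.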

Story D — the unreachable mega-tier

Four channels in the cohort clear 5M subscribers. Their sponsor activity is mostly absent:

| Channel | Subs | Tier | Maturity | Brands detected |
|---------|------|------|----------|-----------------|
| @freecodecamp | 11.6M | C | never | 0 |
| @CodeWithHarry | 9.64M | C | never | 0 |
| @NetworkChuck | 5.25M | A | regular | 8 |
| @programmingwithmosh | 5.04M | C | never | 0 |

freeCodeCamp is a non-profit and structurally won't run paid sponsor reads. CodeWithHarry is an India-based Python tutorials channel whose monetisation strategy looks course-led rather than YouTube-sponsor-led. Programming with Mosh — a 5M-subscriber course-empire creator — emits as "never" maturity across the 10-video sample, consistent with a monetisation strategy built around paid courses on codewithmosh.com rather than third-party sponsor reads. Three of the four mega-tier channels in this cohort show zero detected sponsor activity, with NetworkChuck the lone exception.

For an outbound team, this is the most important negative-space finding: the mega-tier dev YouTubers most likely to surface in a "biggest tech YouTubers" Google search are mostly unreachable for traditional sponsor outreach. The data argues against starting there.

Story E — what 15% precision filtering looks like

The actor classified 76 of 90 channels (84.4%) as never maturity. The remaining 14 channels (15.6%) — the sponsorship-active subset — carry named brands, dated last-sponsored videos, sponsorship-frequency percentages, and (for 13 of them) an MX-valid business contact. That subset is the agency value proposition: a noisy list of 121 named handles compressed into a focused outreach list of fewer than 15 actionable contacts, with the evidence trail attached.

A human SDR doing this work manually would need a multi-day pass to watch sample videos, check description boxes, search for "PR inquiries" emails, and validate them. The actor produced the same compression on this cohort in 116 seconds at a compute cost of $0.14 on the Apify platform — the 100×+ saving in human time that the actor is sold on.

The redaction note matters here: the 13 MX-valid email addresses are the commercial product. They are present in the actor's output when run on a customer's own cohort and absent from the public CSV.

What most coverage gets wrong about dev-creator sponsorships

  • "AI-native dev tools are winning YouTube ad spend." Not in this cohort. Cursor, Warp, Codeium, and Continue together register zero detected ad reads across 90 dev YouTubers' 10 most recent videos. The narrative may apply to specific direct deals; it does not apply to cohort-scale YouTube ad-read patterns visible in transcripts as of 14 May 2026.
  • "Bigger channels = more sponsor activity." The data goes the other way. Mid-tier (250k–1M) is 3.1× more sponsorship-active than large-tier (1M–5M). Sponsor density and subscriber count are weakly correlated at best in this cohort.
  • "The biggest tech YouTubers are sponsored every week." Three of the four 5M+ channels in this cohort emit as never maturity. Mega-tier creators frequently monetise through course sales, paid memberships, and their own products rather than YouTube ad reads.
  • "You can find a creator's business email on their About page." 14.4% of this cohort returned an MX-valid business email via the actor's resolution layer. The remaining 85.6% either have no listed email, list a personal Gmail, or surface an address that fails MX validation. Manual scraping of About-page emails dramatically over-counts what's actually contactable.
  • "All dev YouTubers run affiliate links." 29 of 90 channels (32.2%) carry a bit.ly link, and 14 (15.6%) carry an amzn.to link. The remaining channels carry no generic-affiliate redirector at all in their description boxes. Affiliate density is not universal.

Methodology

  • Tool: youtube-sponsorship-intelligence build 1.0.7, run via the Apify platform. The actor enriches a named list of YouTube channels with sponsor-readiness tier, priority score, transcript-detected sponsor brands, affiliate-link hosts, MX-validated business contacts, and sponsor-maturity classification.
  • Cohort: 121 hand-curated developer YouTube channel handles spanning web development, backend and systems, AI and ML, DevOps, security, mobile, and Python and data. The handle the actor resolved each input to is listed in raw.json under each enriched record's channelHandle field. Handles that errored or skipped under the quality gate appear as separate recordType: error or recordType: skipped records.
  • Date captured: 14 May 2026.
  • Quality gate: minSubscribers >= 50,000, ON. This trimmed 11 channels (correctly — those handles resolved to small or renamed clones).
  • Resolution errors: 8 handles errored on YouTube search/URL resolution. These are recorded in raw.json with the error reason but excluded from the leaderboard.
  • Post-gate cohort size: 90 unique channels after dedup by channelId. Some handles (for example @LowLevelLearning and @LowLevelTV) resolve to the same renamed channel and are deduped to a single row.
  • Transcript sample: the 10 most recent videos per channel, with auto-generated captions used where present.
  • Output profile: sales — a 10-field agency-ready record per channel including tier, priority, maturity, detected brands, frequency, contactability, MX-valid boolean, and last-sponsored date.
  • Sponsorship-active definition: maturity ∈ {saturated, regular, occasional}. never channels are excluded from the "active" count.
  • Brand-detection cleanup: the dataset.csv applies a post-processing pass that strips obvious fragment-like detections — standalone "Checkout," "Build Your Own Coding Agent," and generic transcript noise — and normalises near-duplicates ("twingate" / "Twingate"). The raw.json preserves the actor's full sponsorshipSignals.sponsorBrandsDetected array per channel for transparency, including the un-cleaned fragments.
  • Run time: 116 seconds for the full 121-handle cohort, after the parallel-scrape refactor in build 1.0.7 (roughly 6× faster than the sequential 1.0.6).
  • Compute cost: $0.14 USD on the Apify platform for this run.
  • Aggregation rule: dedup by channelId after enrichment, group by subscriber tier for the cross-aggregation, count brand mentions per channel for the footprint table.
  • Known gaps: the 10-video transcript window misses sponsor reads from videos 11 or older. A 30-video window would catch more, at higher compute cost. The actor exposes the sample size as a tunable input.
  • Cross-reference: SponsorRadar's "brands that sponsor tech YouTubers" insight and OutlierKit's "most common YouTube sponsors" guide serve as the public-narrative baseline this post compares against — both describe a dev-creator market dominated by AI-native dev tools. The actor's transcript-level scan contradicts that framing for this cohort.
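The quality-gate, error-exclusion, and dedup steps above can be sketched as a single pass. The record shape (channelId, subs, recordType) is assumed from the raw.json description; field names may differ in the actual output:

```python
def post_gate_cohort(records, min_subs=50_000):
    """Apply the subscriber floor, drop error/skipped records, dedup by channelId.

    Record shape is assumed for illustration: enriched records carry
    'channelId' and 'subs'; failures carry recordType 'error' or 'skipped'.
    """
    seen, cohort = set(), []
    for r in records:
        if r.get("recordType") in {"error", "skipped"}:
            continue                     # handle failed to resolve, or pre-gated
        if r["subs"] < min_subs:
            continue                     # the 50k quality gate
        if r["channelId"] in seen:
            continue                     # renamed-clone handles dedup to one row
        seen.add(r["channelId"])
        cohort.append(r)
    return cohort
```

On this run, 121 input handles minus 8 errors, 11 gated channels, and channelId duplicates left the 90-row post-gate cohort.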

Caveats and what this data does not say

  • The sample is a 10-video transcript window per channel. A creator who ran a sponsor read 11+ videos back may register as never. The actor exposes the transcript window as a tunable input — larger windows catch more historical sponsor reads at higher compute cost.
  • Some transcript-detected "brands" are signal-with-noise. The actor applies a cleanup pass for unambiguous fragments. Where a phrase like "checkout to get X" could plausibly be a real sponsor mention, the cleanup preserves it. The dataset.csv is the cleaned output; the raw.json preserves the raw detections.
  • The 50k-subscriber quality gate filtered out 11 channels. These were correctly excluded — they resolved to small or renamed clones — but a different gate would produce a different cohort.
  • Handle ambiguity is real. Some input handles resolve to renamed channels. Dedup by channelId was applied to handle this, but the input-handle list is not identical to the output-channel list.
  • MX validation is not deliverability. An MX-valid business email confirms the address exists at the named domain. It does not confirm the address is monitored, that a human will read outreach, or that replies will arrive. Real-world reply rates depend on outreach quality.
  • One snapshot, one moment. This audit is a 14 May 2026 capture. Sponsorship activity changes monthly. Agencies running the actor in watchlist mode get weekly deltas; one-shot enrichments are most useful as a baseline.
  • The cohort is hand-curated, not exhaustive. 121 named developer YouTubers is a meaningful slice of the developer-creator market but not the full market. Channels in non-English languages other than the few included are under-represented. A repeat audit against a larger cohort is queued as a follow-up.
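On the MX-validation caveat above: an MX-presence check is a single DNS lookup against the email's domain. A sketch using dnspython (an assumption — the actor's actual resolution layer is not public), with the resolver injectable so the logic is testable without network access:

```python
def has_mx(email, resolve=None):
    """True if the email's domain publishes at least one MX record.

    MX presence confirms the domain accepts mail; it does NOT confirm
    the mailbox exists, is monitored, or will reply (see caveat above).
    """
    if resolve is None:
        import dns.resolver  # dnspython; assumed installed for the default path
        resolve = lambda d: dns.resolver.resolve(d, "MX")
    domain = email.rsplit("@", 1)[-1]
    try:
        return len(list(resolve(domain))) > 0
    except Exception:
        # NXDOMAIN, no-answer, timeout: treat all as "not MX-valid"
        return False
```

Deliverability and reply rates remain a separate, human-dependent problem, exactly as the caveat states.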

Redaction policy

The public dataset.csv and raw.json files in this post's data folder exclude the actual email-address strings. The emailMxValid boolean is preserved so the 14.4%-MX-valid headline number is verifiable from the CSV, but the address column itself is redacted. The actor returns the real addresses when run on a customer's own cohort. Publishing the addresses in this audit would substitute for running the tool — the cohort-level statistic is the story, the address list is the product.
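The redaction itself is mechanical: drop the address column, keep the boolean so the headline statistic stays verifiable. A sketch with illustrative field names (the actual CSV schema may name its columns differently):

```python
import csv
import io

def redact_column(csv_text, column):
    """Return CSV text with one column removed, all other fields intact."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    fields = [f for f in rows[0].keys() if f != column]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fields)
    writer.writeheader()
    for r in rows:
        writer.writerow({k: r[k] for k in fields})
    return out.getvalue()
```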

Press lift-out for journalists

A 2026 ApifyForge analysis of 90 named developer YouTubers on 14 May 2026 found that only 2 channels (2.2%) reached A-tier sponsor-readiness: NetworkChuck (5.25M subscribers) and TheCherno (748k subscribers). The largest sponsor footprint across the cohort belonged to Brilliant.org, with 10 channels carrying its affiliate link — more than the combined footprint of every AI-native developer tool in the dataset (Cursor, Warp, Codeium, and Continue all returned zero detected ad reads). Mid-tier creators (250k–1M subscribers) were 3.1× more sponsorship-active than large-tier (1M–5M), and 84% of the cohort showed no detected sponsor reads in their 10 most recent videos. The full leaderboard and methodology are documented at ApifyForge.

Source: youtube-sponsorship-intelligence actor, build 1.0.7, run 14 May 2026. Cross-reference: any individual channel's detected-brand list can be re-verified by opening that channel on YouTube and reviewing its 10 most recent video descriptions and auto-generated captions.

This post is part of a series of named-cohort ApifyForge audits. See also /data/tech-podcast-cemetery-2026 for a parallel creator-cohort dormancy audit on Apple Podcasts, /data/oss-maintainer-burnout-index-2026 for a named-developer ecosystem audit on GitHub, and /data/saas-hiring-mix-engineering-vs-sales-2026 for the GTM-economics context of the SaaS brands that buy creator ad reads.

Embeddable visuals

Chart 1 — bar chart of detected sponsor footprint

Horizontal bar chart of brand sponsor footprint across the 90-channel cohort. Y-axis: brand name (Brilliant.org, Hostinger, Twingate, KiwiCo, Cursor, Warp, Codeium, Continue). X-axis: channel count where the brand was detected. Brilliant.org bar at 10 (with a small "+2 transcript" annotation), Hostinger at 4, Twingate at 3, KiwiCo at 2, the four AI dev tools at 0. Title: "Detected sponsor footprint across 90 dev YouTubers, 14 May 2026." Source line: "ApifyForge / youtube-sponsorship-intelligence actor, transcript sample = 10 most recent videos per channel, captured 14 May 2026."

Chart 2 — stacked bar of maturity by subscriber tier

Stacked horizontal bars, one per subscriber tier (Mega 5M+, Large 1M–5M, Mid 250k–1M, Small 50k–250k). Each bar split into two segments: sponsorship-active (saturated + regular + occasional) and never. Annotate each bar with the active percentage (25%, 6.9%, 21.6%, 15%). Title: "Sponsorship activity by subscriber tier, n=90 dev YouTubers." Source line: "ApifyForge / youtube-sponsorship-intelligence actor, run 14 May 2026, dedup by channelId."

Chart 3 — readiness-tier pyramid

Inverted pyramid of readiness-tier distribution. Top (widest) layer: C-tier, 55 channels (61.1%). Then B-tier, 17 channels (18.9%). Then D-tier, 16 channels (17.8%). Bottom (narrowest) point: A-tier, 2 channels (2.2%), with NetworkChuck and TheCherno labelled. Title: "Sponsor-readiness pyramid, dev YouTube cohort, May 2026." Source line: "ApifyForge / youtube-sponsorship-intelligence actor build 1.0.7, post-quality-gate cohort of 90 channels."

Frequently asked questions

What is the A-tier sponsor-readiness classification?

A-tier is the actor's top band in a four-classification scheme (A/B/C/D). A channel reaches A-tier when it combines a meaningful subscriber count, an active sponsorship cadence in recent videos, a detected-brand density above the cohort median, recency of last sponsored video within the last 30–60 days, and a returned MX-valid business contact. In this 14 May 2026 audit of 90 developer YouTubers, only NetworkChuck (5.25M subs) and TheCherno (748k subs) cleared all five thresholds.

Why did Cursor, Warp, and Codeium return zero detected sponsor reads?

The transcript scan covered the 10 most recent videos per channel as of 14 May 2026. Across 90 developer YouTubers and roughly 900 video transcripts, none of the four AI-native developer tools tracked (Cursor, Warp, Codeium, and Continue) appeared as a detected sponsor brand. This does not mean those companies are not doing creator marketing — it means they are not buying YouTube ad reads at meaningful density across this cohort in this window. Direct creator deals, Twitter/X seeding, conference sponsorships, and product-launch buzz are not visible in transcript-detection.

How does this compare to the 2025 dev-creator sponsorship picture?

This is a single 14 May 2026 snapshot, so a strict year-over-year comparison would require a backfill run against the same cohort with a 2025-bounded transcript window. That backfill is queued as a follow-up. Anecdotally, the Brilliant.org / Hostinger / Twingate / GitKraken roster reads similar to 2024 patterns documented by other creator-marketing trackers, suggesting the dev-YouTube ad-read incumbents have been relatively stable.

Where can I download the underlying data myself?

The full 90-row dataset is downloadable as dataset.csv from the "Produced by" banner at the top of this post. The raw enrichment per channel is in raw.json in the same folder. Email-address strings have been redacted from both files. To get the un-redacted version, run the youtube-sponsorship-intelligence actor against your own list of channel handles — the actor returns full MX-validated business contacts in the output.

Is this dataset suitable for journalism?

Yes for the cohort-level findings — tier distribution, brand-footprint ranking, mid-versus-large activity gap, and the "84% never maturity" statistic are all verifiable from dataset.csv. For individual-channel claims (for example, "NetworkChuck ran a Cisco ad on date X"), the named brand is the detection signal and the dated last-sponsored video is the cross-reference; opening the channel's recent video list on YouTube and confirming the read is the recommended verification step before naming a specific creator-brand pair in print.

How would an agency use this in watchlist mode?

The actor supports recurring scheduled runs against a saved cohort. Set a weekly schedule against the same 121-handle (or larger) list, persist the outputs, and the actor surfaces deltas: new A-tier promotions, new sponsor brands appearing on a channel, last-sponsored-date refreshes, and contactability changes. The one-shot enrichment in this post is the baseline; the commercial use case is the watchlist deltas on top of that baseline.
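A watchlist delta pass of this kind reduces to comparing two keyed snapshots. A sketch with assumed record shapes (channelId-keyed dicts carrying a tier and a set of detected brands — illustrative, not the actor's exact output schema):

```python
def watchlist_deltas(prev, curr):
    """Week-over-week deltas between two enrichment runs.

    prev/curr: dict of channelId -> {"tier": str, "brands": set}
    (shapes assumed for illustration). Surfaces new A-tier promotions
    and newly detected sponsor brands per channel.
    """
    deltas = {"promoted_to_A": [], "new_brands": {}}
    for cid, now in curr.items():
        before = prev.get(cid)
        if before is None:
            continue  # channel new to the cohort; no baseline to diff against
        if now["tier"] == "A" and before["tier"] != "A":
            deltas["promoted_to_A"].append(cid)
        new = now["brands"] - before["brands"]
        if new:
            deltas["new_brands"][cid] = new
    return deltas
```

The one-shot enrichment in this post is the `prev` baseline; a scheduled weekly run supplies `curr`, and the diff is what an agency acts on.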

Why is mid-tier more sponsorship-active than large-tier?

Mid-tier creators (250k–1M subs) typically rely on YouTube ad-read revenue as a core monetisation channel because they have audience scale but not yet brand-level deals or course-empire economics. Large-tier creators (1M–5M) often diversify into paid courses, paid memberships, their own SaaS products, books, and conference speaking — YouTube ad reads become a smaller share of their income mix. The data in this cohort (21.6% active mid-tier versus 6.9% active large-tier) is consistent with that pattern.

Ryan Clinton publishes Apify actors and MCP servers as ryanclinton and builds developer tools at ApifyForge. The leaderboard above was produced via the youtube-sponsorship-intelligence actor across 121 input handles against YouTube channel pages and the 10 most recent video transcripts per channel; the methodology, analysis, and framing are independent of any product positioning.


Last updated: May 2026