
10 of 11 Top Trustpilot Brands Are British. 1 Is American.

Trustpilot audit of 62 major brands. 10 of 11 scoring 4.0+ are British. 53 (85%) have never replied to a review. Octopus 4.8, Wells Fargo 1.1.

Ryan Clinton

The problem: Trustpilot is the most-cited consumer review aggregator in UK and US press coverage, but its scores aren't comparable across brands or regions. Of 62 major US and UK consumer brands audited in May 2026, the 11 scoring 4.0 stars or higher are 10 British companies plus a single US insurer. Every other US Fortune 500 brand in the sample sits between 1.1 and 2.3 stars. And 53 of the 62 audited brands (85%) have never replied to a single Trustpilot review.

What is the Trustpilot two-tier pattern? A 2026 ApifyForge audit of 62 major US and UK consumer brands found a structural divide: 10 of the 11 brands scoring 4.0+ on Trustpilot are British (Octopus Energy 4.8, EDF 4.8, Revolut 4.7, Vodafone UK 4.7, OVO 4.6, Monzo 4.6, British Gas 4.4, Currys 4.4, Wise 4.3, E.ON UK 4.0). The single American is Liberty Mutual at 4.4. Every other US Fortune 500 brand audited — Amazon, Apple, Microsoft, Google, Walmart, Wells Fargo, Verizon, Netflix — sits at 1.1 to 2.3 stars.

Why it matters: The mechanism is operational, not a measure of service quality. UK brands in regulated sectors actively solicit Trustpilot reviews after every transaction; US brands generally don't. The US brands' Trustpilot pages still exist (any consumer can review any brand), but they're populated only by spontaneous complainers. The 3.4-star gap between top and bottom industry medians measures engagement strategy, not customer satisfaction.

Use it when: reporting on Trustpilot's role in UK vs US consumer-protection coverage, citing brand-level evidence for "Trustpilot ratings are misleading" coverage, briefing comms teams on review-platform engagement, or sourcing a named-brand quantitative anchor for a feature on online review economics.

Key findings

  • 62 major US and UK consumer brands audited across 13 industries: banking, airlines, telecoms, insurance, e-commerce, streaming, food delivery, energy, travel, automotive, tech, retail, fintech.
  • 53 of 62 brands (85%) have a 0% response rate — never replied to a single Trustpilot review captured in the sample.
  • 10 of 11 brands scoring 4.0+ are British. The single American is Liberty Mutual (4.4 / 2,281 reviews / 14% response rate).
  • Octopus Energy has 795,984 Trustpilot reviews — more than Apple, Microsoft, Google, and Amazon combined (71,168); Octopus alone has 11.2× that total.
  • British Gas has a 99% response rate across 338,536 reviews. Allstate has 100% across 899. Every other brand audited is below 40%.
  • Wells Fargo, Citi, and Deliveroo all sit at 1.1 stars — the floor of the meaningful TrustScore range. Wells Fargo's recent 30-day average rating is 1.04. None has ever responded to a customer review.

In this article: Quick answer · The 62-brand leaderboard · Industry medians · The 4.0+ club · The silence index · Octopus phenomenon · Liberty Mutual · The 1.1-floor cohort · Methodology · Caveats · FAQ

Quick answer

  • What it is: A 62-brand audit of Trustpilot TrustScores, response rates, and review volumes for major US and UK consumer-facing companies, captured 8 May 2026.
  • Sample: 62 brands across 13 industries, weighted toward Fortune 500 US firms and UK Energy / Fintech / Retail / Telecom comparators.
  • Source: trustpilot-review-analyzer Apify actor build 2.1, with analysisProfile: "competitor_benchmark" and maxReviewsPerBusiness: 100.
  • What "TrustScore" measures: Trustpilot's published 1-5 score per business, factoring rating distribution, recency, and review volume. Reported here as Trustpilot publishes it.
  • Main caveat: TrustScore is not a representative consumer survey. It captures whatever reviewer slice each brand cultivates. Brands that solicit reviews score higher; brands that don't score lower. The data measures engagement strategy, not customer satisfaction.
| Brand | TrustScore | Reviews | Response rate | Region |
| --- | --- | --- | --- | --- |
| Octopus Energy | 4.8 | 795,984 | 0% | UK |
| Revolut | 4.7 | 399,940 | 5% | UK |
| Liberty Mutual | 4.4 | 2,281 | 14% | US |
| Apple | 1.8 | 12,058 | 0% | US |
| Wells Fargo | 1.1 | 1,481 | 0% | US |

What is the Trustpilot two-tier pattern?

Definition (short version): The Trustpilot two-tier pattern is the structural divide observed in a 2026 ApifyForge audit of 62 major consumer brands, where 10 of the 11 highest-scoring brands are British and 85% of audited brands have never replied to a customer review — driven by review-solicitation strategy, not service quality.

The pattern is not a coincidence and not a moral verdict on UK vs US brands. UK companies in regulated sectors (energy, fintech, retail) actively solicit Trustpilot reviews after every transaction, often as part of post-transaction email automation. The reviews skew positive because they capture transaction-completion satisfaction. US companies generally don't solicit reviews on Trustpilot; their pages get populated only by self-selected complainers.

The same brand operating in both markets shows the contrast cleanly. Vodafone UK sits at 4.7. The major US wireless carriers — Verizon at 1.3, AT&T at 1.3, T-Mobile at 1.4 — sit 3.3 to 3.4 stars lower.

Also known as: the Trustpilot UK premium, the British review economy, the Trustpilot solicitation effect, the consumer-review two-tier system, the response-rate gap, the public-review-platform engagement divide.

The 62-brand Trustpilot leaderboard

Sorted by TrustScore, descending. Captured 8 May 2026 via the trustpilot-review-analyzer actor.

| TrustScore | Response rate | Reviews | Brand | Industry | Region |
| --- | --- | --- | --- | --- | --- |
| 4.8 | 0% | 795,984 | Octopus Energy | Energy | UK |
| 4.8 | 0% | 220,635 | EDF | Energy | UK |
| 4.7 | 5% | 399,940 | Revolut | Fintech | UK |
| 4.7 | 0% | 127,642 | Vodafone UK | Telecom | UK |
| 4.6 | 0% | 277,107 | OVO | Energy | UK |
| 4.6 | 2% | 67,740 | Monzo | Fintech | UK |
| 4.4 | 99% | 338,536 | British Gas | Energy | UK |
| 4.4 | 14% | 2,281 | Liberty Mutual | Insurance | US |
| 4.4 | 11% | 445,729 | Currys | Retail | UK |
| 4.3 | 19% | 289,214 | Wise | Fintech | UK |
| 4.0 | 0% | 65,587 | E.ON UK | Energy | UK |
| 2.7 | 39% | 26,919 | Sky UK | Telecom | UK |
| 2.5 | 0% | 72 | Toyota | Automotive | US |
| 2.3 | 0% | 113,630 | UberEATS | Food delivery | US |
| 2.3 | 0% | 9,823 | Google | Tech | US |
| 2.2 | 0% | 16,485 | Grubhub | Food delivery | US |
| 1.9 | 0% | 431 | Honda | Automotive | US |
| 1.8 | 17% | 3,822 | TripAdvisor | Travel | US |
| 1.8 | 0% | 1,920 | Tesla | Automotive | US |
| 1.8 | 0% | 12,058 | Apple | Tech | US |
| 1.7 | 0% | 12,217 | Walmart | E-commerce | US |
| 1.7 | 0% | 2,021 | Delta Air Lines | Airline | US |
| 1.7 | 0% | 810 | Southwest | Airline | US |
| 1.7 | 0% | 109,730 | Booking.com | Travel | Global |
| 1.7 | 0% | 45,666 | Amazon | E-commerce | US |
| 1.7 | 0% | 2,404 | McDonald's | Food | US |
| 1.6 | 0% | 13,660 | Netflix | Streaming | US |
| 1.6 | 0% | 5,491 | Spotify | Streaming | US |
| 1.6 | 0% | 1,098 | State Farm | Insurance | US |
| 1.6 | 0% | 4,081 | Target | E-commerce | US |
| 1.6 | 0% | 12,559 | Nike | Apparel | US |
| 1.5 | 0% | 132 | United Airlines | Airline | US |
| 1.5 | 0% | 1,348 | Ford | Automotive | US |
| 1.4 | 0% | 3,188 | Bank of America | Banking | US |
| 1.4 | 0% | 104,189 | Virgin Media | Telecom | UK |
| 1.4 | 0% | 19,962 | Etsy | E-commerce | US |
| 1.4 | 0% | 7,230 | T-Mobile | Telecom | US |
| 1.4 | 0% | 93 | Comcast | Telecom | US |
| 1.4 | 0% | 1,276 | GEICO | Insurance | US |
| 1.4 | 0% | 6,918 | Best Buy | E-commerce | US |
| 1.3 | 0% | 2,567 | Chase | Banking | US |
| 1.3 | 0% | 30,149 | Ryanair | Airline | EU |
| 1.3 | 0% | 3,068 | American Airlines | Airline | US |
| 1.3 | 0% | 431 | Verizon | Telecom | US |
| 1.3 | 0% | 10,232 | AT&T | Telecom | US |
| 1.3 | 0% | 17,934 | eBay | E-commerce | US |
| 1.3 | 0% | 3,197 | Hulu | Streaming | US |
| 1.3 | 100% | 899 | Allstate | Insurance | US |
| 1.3 | 0% | 739 | Disney+ | Streaming | US |
| 1.3 | 0% | 1,043 | HBO Max | Streaming | US |
| 1.3 | 0% | 16,917 | Airbnb | Travel | US |
| 1.2 | 0% | 3,751 | Capital One | Banking | US |
| 1.2 | 0% | 1,669 | Progressive | Insurance | US |
| 1.2 | 0% | 13,178 | DoorDash | Food delivery | US |
| 1.2 | 0% | 11,809 | Instacart | Food delivery | US |
| 1.2 | 0% | 11,649 | Expedia | Travel | US |
| 1.2 | 0% | 11,607 | Hotels.com | Travel | US |
| 1.2 | 0% | 129 | Hyundai | Automotive | US |
| 1.2 | 0% | 3,621 | Microsoft | Tech | US |
| 1.1 | 0% | 1,481 | Wells Fargo | Banking | US |
| 1.1 | 0% | 272 | Citi | Banking | US |
| 1.1 | 0% | 25,071 | Deliveroo | Food delivery | UK |

Aggregate stats:

  • 3,741,041 total Trustpilot reviews captured across the 62 brands.
  • Average TrustScore: 2.05 / 5.0.
  • 53 of 62 (85%) have 0% response rate.
  • 46 of 62 (74%) score 2.0 stars or lower.
  • 11 of 62 (18%) score 4.0 or higher — 10 UK brands plus Liberty Mutual.
  • One brand has a 100% response rate (Allstate, US insurance, 899 reviews).

Industry medians

Sorted high-to-low. The break between UK-routed industries and US-routed industries is the structural finding.

| Industry | Sample | Median TrustScore | Brands | Region |
| --- | --- | --- | --- | --- |
| UK Energy | 5 | 4.6 | EDF, Octopus, OVO, British Gas, E.ON | UK |
| UK Fintech | 3 | 4.6 | Wise, Revolut, Monzo | UK |
| UK Telecom | 3 | 2.7 | Vodafone UK, Sky UK, Virgin Media | UK |
| US Auto | 5 | 1.8 | Toyota, Honda, Tesla, Ford, Hyundai | US |
| US Tech | 6 | 1.7 | Amazon, Apple, Google, Microsoft, McDonald's, Nike | US |
| US Airlines | 5 | 1.5 | Delta, Southwest, AA, United, Ryanair | US/EU |
| US E-commerce | 6 | 1.45 | Amazon, Walmart, eBay, Target, Etsy, Best Buy | US |
| US Insurance | 5 | 1.4 | Liberty Mutual, GEICO, State Farm, Allstate, Progressive | US |
| US Telecoms | 4 | 1.4 | Verizon, AT&T, T-Mobile, Comcast | US |
| US Streaming | 5 | 1.3 | Netflix, Spotify, Hulu, Disney+, HBO Max | US |
| Travel/booking | 5 | 1.3 | Booking.com, Expedia, Airbnb, Hotels.com, TripAdvisor | Global |
| Food delivery | 5 | 1.2 | DoorDash, UberEats, Instacart, Grubhub, Deliveroo | US/UK |
| US Banks | 5 | 1.2 | Chase, BofA, Wells Fargo, Citi, Capital One | US |

The two industries at the top — UK Energy and UK Fintech — cluster at a median of 4.6. The two at the bottom — US Banks and Food delivery — cluster at 1.2. The gap is 3.4 stars on a 5-star scale, the difference between "almost perfect" and "almost the worst possible." The divide follows industry lines; it is not random variation.
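That gap is trivially checkable from the leaderboard values above; a minimal sketch in Python:

```python
from statistics import median

# TrustScores copied from the leaderboard, grouped by the post's industry labels
uk_energy = [4.8, 4.8, 4.6, 4.4, 4.0]  # Octopus, EDF, OVO, British Gas, E.ON
us_banks = [1.3, 1.4, 1.1, 1.1, 1.2]   # Chase, BofA, Wells Fargo, Citi, Capital One

gap = median(uk_energy) - median(us_banks)
print(median(uk_energy), median(us_banks), round(gap, 1))  # 4.6 1.2 3.4
```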

The 4.0+ club: the active-engagement tier

The 11 brands scoring 4.0 or higher are not a random selection of well-run companies. They are a structural cohort of brands that actively manage their Trustpilot presence.

| Brand | TrustScore | Reviews | Region | Sector |
| --- | --- | --- | --- | --- |
| Octopus Energy | 4.8 | 795,984 | UK | Energy |
| EDF | 4.8 | 220,635 | UK | Energy |
| Revolut | 4.7 | 399,940 | UK | Fintech |
| Vodafone UK | 4.7 | 127,642 | UK | Telecom |
| OVO | 4.6 | 277,107 | UK | Energy |
| Monzo | 4.6 | 67,740 | UK | Fintech |
| British Gas | 4.4 | 338,536 | UK | Energy |
| Liberty Mutual | 4.4 | 2,281 | US | Insurance |
| Currys | 4.4 | 445,729 | UK | Retail |
| Wise | 4.3 | 289,214 | UK | Fintech |
| E.ON UK | 4.0 | 65,587 | UK | Energy |

10 of 11 are British. 1 is American. Liberty Mutual at 2,281 reviews is by far the smallest sample — roughly 349× smaller than Octopus Energy's 795,984.

The mechanism is operational. UK brands in Energy, Fintech, and Retail use Trustpilot as a customer-service channel. Their post-transaction emails ask customers to leave a Trustpilot review. UK regulators (Ofgem for energy, FCA for fintech) explicitly reward consumer-facing transparency, and Trustpilot is the de-facto British public review site.

US brands generally don't solicit Trustpilot reviews. Their Trustpilot pages exist because Trustpilot allows reviews of any company, but they're populated only by people angry enough to find an obscure-to-Americans review aggregator and post unprompted.

The silence index: the 9 brands that respond at all

Of 62 brands audited, 53 (85%) have a 0% response rate — never replied to a single review captured in the sample. The 9 brands that did respond:

| Brand | Response rate | TrustScore | Reviews | Industry |
| --- | --- | --- | --- | --- |
| Allstate | 100% | 1.3 | 899 | US insurance |
| British Gas | 99% | 4.4 | 338,536 | UK energy |
| Sky UK | 39% | 2.7 | 26,919 | UK telecom |
| Wise | 19% | 4.3 | 289,214 | UK fintech |
| TripAdvisor | 17% | 1.8 | 3,822 | US travel |
| Liberty Mutual | 14% | 4.4 | 2,281 | US insurance |
| Currys | 11% | 4.4 | 445,729 | UK retail |
| Revolut | 5% | 4.7 | 399,940 | UK fintech |
| Monzo | 2% | 4.6 | 67,740 | UK fintech |

Two brands stand out — Allstate (100% response rate, every captured review answered) and British Gas (99% across 338,536 reviews — nearly every one of a third of a million reviews has a corporate reply attached). Both treat Trustpilot as a live customer-service surface, not a passive billboard.

The remaining 53 brands include every major US bank, every major US airline, every major US streaming service, every major US e-commerce platform, every major US tech giant. Wells Fargo (1,481 unanswered reviews), Verizon (431), AT&T (10,232), Amazon (45,666), Apple (12,058), Microsoft (3,621), Google (9,823), Netflix (13,660), Walmart (12,217), Ryanair (30,149), Booking.com (109,730), Virgin Media (104,189), Deliveroo (25,071) — none has ever responded to a single review captured in the audit.

This is the same structural pattern documented in the CFPB Big-3 credit-bureau dominance and the medical-debt-collection complaint share — public consumer-facing data showing the named entities cluster sharply rather than distributing evenly.

Story A — Octopus Energy: the most-reviewed brand in the sample

| Stat | Value |
| --- | --- |
| Total Trustpilot reviews | 795,984 |
| TrustScore | 4.8 / 5.0 |
| Response rate | 0% |

Octopus Energy has 795,984 Trustpilot reviews — more than every other brand in the sample, and 11.2× the combined total of Apple (12,058), Microsoft (3,621), Google (9,823), and Amazon (45,666), which together reach only 71,168. Adding Walmart (12,217), McDonald's (2,404), and Nike (12,559) pushes the comparator total to 98,348, still less than 13% of Octopus's footprint.
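The comparison is plain arithmetic that anyone can redo from the leaderboard:

```python
octopus = 795_984
big_four = {"Apple": 12_058, "Microsoft": 3_621, "Google": 9_823, "Amazon": 45_666}

combined = sum(big_four.values())
print(combined, round(octopus / combined, 1))  # 71168 11.2

# Extend the comparator with Walmart, McDonald's, and Nike
extended = combined + 12_217 + 2_404 + 12_559
print(extended, round(extended / octopus, 3))  # 98348 0.124
```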

The mechanism is the operational decision Octopus made early in its growth: solicit a Trustpilot review after every customer interaction. Years of compound solicitation produced the largest Trustpilot footprint of any brand surveyed. The 4.8-star rating reflects post-transaction satisfaction (most customers got their electricity), not a representative consumer survey.

This is the cleanest illustration of why Trustpilot's TrustScore is not comparable across regions or sectors. The number on the Octopus page (4.8) and the number on the Wells Fargo page (1.1) are not measuring the same thing. One measures "of customers asked, how many were satisfied with their last transaction." The other measures "of self-selected complainers, how angry were they."

Story B — Liberty Mutual: the lone US 4.0+ brand

Liberty Mutual is the only US brand in the audit to crack 4.0 stars: 4.4 / 2,281 reviews / 14% response rate. Every other US insurance brand sits at 1.2-1.6 — GEICO 1.4, State Farm 1.6, Progressive 1.2, Allstate 1.3 (despite Allstate's 100% response rate in the sample window).

The mechanism matches the UK pattern: Liberty Mutual actively solicits Trustpilot reviews, particularly after policy purchase, and replies to a meaningful fraction. The 2,281-review sample is small relative to UK comparators (Octopus has roughly 349× more reviews), but the structural behaviour matches.

This validates the framework. The 4.0+ club isn't a "good companies" cohort. It's an "actively-managed-Trustpilot-page" cohort. Any brand that solicits reviews and engages with them ends up there, regardless of geography. Any brand that ignores Trustpilot ends up at 1.1-2.3 — also regardless of how their actual customer service performs.

Story C — Wells Fargo, Citi, Deliveroo: the 1.1-floor cohort

Three brands in the audit hit 1.1 stars, the floor of the meaningful TrustScore range:

| Brand | TrustScore | Reviews | Recent 30-day avg | 1-star share |
| --- | --- | --- | --- | --- |
| Wells Fargo | 1.1 | 1,481 | 1.04 | 90%+ |
| Citi | 1.1 | 272 | 1.22 | 97% |
| Deliveroo | 1.1 | 25,071 | n/a | majority 1-star |

Recent 30-day average rating for Wells Fargo is 1.04. Of the last 30 days of reviews, virtually every single one is a 1-star complaint. Citi: 97% of its captured reviews are 1-star. The platform shows what amounts to a complaints-only feed for these brands.
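How the sample-derived figures (response rate, recent 30-day average, 1-star share) fall out of captured review records, as a sketch; the record field names here are illustrative assumptions, not the actor's documented output schema:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical review records, roughly what a run with
# outputGranularity: "review" might return. Field names are assumptions.
reviews = [
    {"rating": 1, "date": "2026-05-01T10:00:00+00:00", "company_reply": None},
    {"rating": 1, "date": "2026-04-28T09:00:00+00:00", "company_reply": None},
    {"rating": 2, "date": "2026-03-15T12:00:00+00:00", "company_reply": None},
]

cutoff = datetime(2026, 5, 8, tzinfo=timezone.utc) - timedelta(days=30)
recent = [r for r in reviews if datetime.fromisoformat(r["date"]) >= cutoff]

avg_30d = sum(r["rating"] for r in recent) / len(recent)
one_star_share = sum(r["rating"] == 1 for r in reviews) / len(reviews)
response_rate = sum(r["company_reply"] is not None for r in reviews) / len(reviews)
print(round(avg_30d, 2), round(one_star_share, 2), response_rate)  # 1.0 0.67 0.0
```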

Both Wells Fargo and Citi are systemically-important US banks (Wells Fargo: Fortune 50; Citi: Fortune 30) with no Trustpilot engagement. The complaints exist; the engagement does not. The data does not say whether their actual customer service is worse than Liberty Mutual's; it says their Trustpilot operating model is different. They've ceded the page entirely to spontaneous complainers.

Methodology

  • Tool: trustpilot-review-analyzer Apify actor build 2.1, captured 8 May 2026.
  • Input shape: 62 brand domains across three runs of 15, 35, and 12 brands respectively. analysisProfile: "competitor_benchmark", maxReviewsPerBusiness: 100, outputGranularity: "business". The actor charges per business analyzed. A minimal call sketch follows this list.
  • Coverage: 62 brands selected as a representative cross-section of major US and UK consumer-facing companies. 13 industries: banking, airlines, telecoms, insurance, e-commerce, streaming, food delivery, energy, travel, automotive, tech, retail, fintech.
  • What "TrustScore" measures: Trustpilot's proprietary 1-5 score for a business, factoring rating distribution, recency-weighted ratings, and review volume. The score on each Trustpilot page is the platform's published metric, not a derived statistic in this audit. We capture and report it as Trustpilot publishes it. Trustpilot's own TrustScore methodology is the canonical source.
  • What "response rate" measures: the percentage of reviews captured by the actor that have a corporate reply attached. 0% means the actor sampled up to 100 reviews and found zero corporate replies.
  • Sample size per brand: up to 100 reviews captured per business. For brands with hundreds of thousands of reviews on Trustpilot (Octopus 795k, Currys 445k), the audit examines a recent slice — sufficient for response-rate estimation and recent-sentiment computation, not for full-history aggregation.
  • Reproduction: anyone with an Apify account can re-run the same actor with the same input list and produce comparable numbers. The full 62-brand domain list is in the dataset doc at docs/BACKLINK-BAIT-022-TRUSTPILOT-INDEX-DATASET.md in the ApifyForge repo.
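A minimal reproduction sketch using the official Apify Python client. The actor ID and the business-list input field name are assumptions for illustration (check the actor's input schema before running). analysisProfile, maxReviewsPerBusiness, and outputGranularity are the inputs documented above; the output field names are likewise assumptions:

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")

run = client.actor("ryanclinton/trustpilot-review-analyzer").call(  # actor ID assumed
    run_input={
        "businesses": ["octopus.energy", "wellsfargo.com"],  # field name assumed
        "analysisProfile": "competitor_benchmark",
        "maxReviewsPerBusiness": 100,
        "outputGranularity": "business",
    }
)

# One dataset item per business; the output keys below are assumptions
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item.get("businessDomain"), item.get("trustScore"), item.get("responseRate"))
```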

The actor itself is one of ApifyForge's lead-generation and competitive-intelligence tools, priced for journalists, analysts, and brand-monitoring teams who need bulk Trustpilot extraction without scraping the platform manually.

What the data does NOT support

Several inferences a journalist might want to draw from this dataset are not supported by it. Honest framing requires acknowledging them upfront.

1. "UK brands provide better customer service than US brands." The data does not measure customer service. It measures Trustpilot engagement strategy. UK brands actively solicit reviews after every transaction; US brands generally don't. The score gap measures the solicitation gap, not the service gap. A British Gas customer who got a working boiler installed is more likely to be asked for a review than a Wells Fargo customer who got a working mortgage. The two distributions are not comparable.

2. "Trustpilot is biased toward UK brands." Trustpilot is a Danish company whose UK presence is large. The platform mechanically treats every brand the same — anyone can review any company, and brands choose whether to claim the page and engage. The "bias" is in adoption, not platform design.

3. "TrustScore is a representative consumer survey." It is not. Trustpilot reviewers are self-selected. Brands that actively solicit reviews capture both satisfied and angry customers, and their distributions skew higher. Brands that don't solicit capture only spontaneous complainers and skew lower. The structural difference between UK utility brands and US Fortune 500 brands in this audit is largely an artefact of solicitation strategy, not customer-experience truth.

4. "Wells Fargo's 1.1 means it's the worst US bank." It means Wells Fargo's Trustpilot page is a self-selected complainer feed with no corporate engagement. Other US banks (Chase 1.3, Bank of America 1.4, Capital One 1.2, Citi 1.1) all sit at the same approximate floor. The data does not rank service quality among them; it shows none of them engages with Trustpilot.

5. "If US brands started replying, their scores would rise to UK levels." Possible but not provable from this dataset. Allstate has a 100% response rate and a 1.3 TrustScore — engagement alone, without solicitation, doesn't move the score. The combination that produces 4.0+ is solicit-plus-engage, not engage-only. Causation here is multi-variable.

6. "The 62-brand sample proves it for every consumer brand." Editorial selection. A different sample would shift medians slightly but not the structural finding. The largest US and UK consumer brands are mainstream choices; expanding to 200 brands would tighten the medians but not overturn the two-tier shape.

7. "Allstate has answered every review ever posted." The 100% figure is a recent-window observation across the 100-review sample. The actor could not verify whether Allstate has unanswered reviews older than the most recent 100. "Currently responding to all new reviews" is the defensible reading.

8. "Region attribution is exact." Some brands operate globally (Booking.com, Amazon, Vodafone). The "UK" or "US" tag reflects the brand's primary review-attribute presentation on Trustpilot. Vodafone UK is the British arm of the global Vodafone brand and has its own page, separate from Vodafone Germany or Vodafone Italy.

Common misconceptions

  • "Trustpilot scores reflect actual customer satisfaction." They reflect the satisfaction of the slice of customers who reviewed. For brands that actively solicit reviews, that slice includes routine satisfied customers. For brands that don't, the slice is mostly angry customers who sought out the platform. Comparing across the two groups is not comparing satisfaction.
  • "UK brands are better-managed than US brands because their Trustpilot scores are higher." The score gap reflects review-platform engagement strategy, not management quality. A 4.8 on Octopus means "post-transaction emails are being sent and customers are responding to them." A 1.1 on Wells Fargo means "the bank does not engage with Trustpilot." Neither is a management verdict.
  • "Brands with 100% response rates have great service." Allstate has a 100% response rate and a 1.3 TrustScore. The reviews they're responding to are still mostly 1-star complaints. Engagement alone doesn't change the rating distribution; it changes the visible texture of the page (every complaint has a reply) without changing the score.
  • "The UK has better consumer protection laws than the US." Possibly true on the merits — UK has specific Ofgem and FCA review-transparency expectations — but this audit does not measure consumer-protection outcomes. It measures one specific public review platform.

Glossary

  • TrustScore — Trustpilot's proprietary 1-5 score per business, computed from rating distribution, recency, and volume.
  • Response rate — the percentage of reviews on a business's Trustpilot page that have a corporate reply attached. Computed by the actor from each captured review record.
  • analysisProfile — an input flag on the trustpilot-review-analyzer actor that selects which output dimensions to compute. competitor_benchmark produces TrustScore, response rate, review-volume, and aggregate sentiment per business.
  • Solicitation strategy — the operational decision a brand makes about whether to actively ask customers for Trustpilot reviews after each transaction. UK regulated-sector brands solicit; most US Fortune 500 brands do not.
  • The 4.0+ club — the editorial label in this post for the 11 brands scoring 4.0 stars or higher. Functionally a synonym for "actively-managed Trustpilot pages."
  • The silence index — the editorial label for the 53 brands with 0% response rates.

Press lift-out paragraph for journalists

For trade-press, newsletter, and analyst use:

A 2026 ApifyForge audit of Trustpilot for 62 major US and UK consumer
brands found 53 (85%) have a 0% response rate — never replied to a
single review. The 11 brands scoring 4.0 stars or higher are 10 UK
companies (5 energy: Octopus 4.8 / EDF 4.8 / OVO 4.6 / British Gas
4.4 / E.ON 4.0; 3 fintech: Revolut 4.7 / Monzo 4.6 / Wise 4.3; plus
Vodafone UK 4.7 / Currys 4.4) and 1 US insurer (Liberty Mutual 4.4).
Every other US Fortune 500 brand audited — Amazon (1.7), Apple (1.8),
Microsoft (1.2), Google (2.3), Walmart (1.7), Wells Fargo (1.1),
Verizon (1.3), Netflix (1.6), McDonald's (1.7) — sits at 1.1 to 2.3
stars. Octopus Energy alone has 795,984 Trustpilot reviews; the four
largest US tech brands combined have 71,168.

Mini case study — British Gas, the engagement extremity

Before: British Gas is a UK energy supplier with roughly 6.9 million domestic customers. Their Trustpilot page existed for years before becoming an active customer-service surface.

Engagement state: British Gas has 338,536 Trustpilot reviews and a 99% response rate. Roughly a third of a million reviews — and nearly every single one has a corporate reply attached. The TrustScore is 4.4, reflecting both the volume and the recency-weighted satisfaction across the active solicitation cycle.

Comparison: The closest UK comparator on engagement is Sky UK (39% response, 2.7 TrustScore). The closest US comparator is Allstate (100% response, but only 899 reviews and a 1.3 TrustScore). British Gas occupies the intersection of high volume + high engagement + high score that no other brand in the audit reaches.

State as of capture (May 2026): the British Gas Trustpilot page is functionally a live customer-service ledger. Every recent complaint has a corporate reply attached. The structural mechanism — solicit reviews after every interaction, then reply to every review received — produces the highest combination of volume + engagement + score in the sample.

These numbers reflect one capture of the public Trustpilot platform on 8 May 2026 and one specific actor build (2.1). Anyone with an Apify account and the actor can re-run the same query and verify the figures. Reproducibility is the point.

What are the alternatives to this kind of audit?

Several methods exist for measuring brand reputation across consumer review platforms. Each has tradeoffs.

| Approach | What it measures | Where it breaks at scale |
| --- | --- | --- |
| trustpilot-review-analyzer actor (used here) | TrustScore, response rate, review volume, sentiment per business | Bound by Trustpilot rate limits; sample-size cap of 100 reviews per business |
| Trustpilot Business platform (per-brand) | Full per-brand metrics with internal analytics | Per-brand subscription cost; not cross-brand comparable in one workflow |
| Manual journalist sampling | Curated subjective read of a few flagship brands | Anecdotal; not reproducible; misses long-tail scale |
| Google Reviews / Yelp comparison | Cross-platform reputation triangulation | Different rating distributions per platform; conflates platform effect with brand effect |
| Reddit / Twitter sentiment scrape | Real-time consumer sentiment | Selection bias toward complaint volume; no structured rating; noisy |
| BBB / FTC complaint data (US) | Formally filed consumer complaints (the FTC is a regulator; the BBB a private nonprofit) | US-only; only formally filed complaints; sub-sampled |
| Better Business Bureau scorecard | Per-brand US-focused rating | US-only; methodology differs from Trustpilot's |

Pricing and features based on publicly available information as of May 2026 and may change.

Each approach has tradeoffs in granularity, reproducibility, region coverage, and cost. For a brand-by-brand, response-rate-aware Trustpilot audit with reproducible numbers, the trustpilot-review-analyzer actor is the most direct mechanism. For richer follow-up — cross-platform comparison, multi-region expansion, or longitudinal change-over-time — combining the actor's output with Wayback Machine snapshots or a cross-platform review aggregator extends the audit further. The methodology pattern matches the structural-data audits in the CFPB credit-bureau dominance leaderboard and the medical-debt-collection breakdown.

Best practices for journalists citing this dataset

  1. Always pair the 85% silence figure with the 62-brand scope. Frame as "53 of 62 audited brands" not "85% of all consumer brands."
  2. Use a specific named brand with its TrustScore as the quote anchor. "Octopus Energy 4.8 vs Wells Fargo 1.1" is more citable than the aggregate.
  3. Distinguish between engagement and service quality. The post deliberately avoids "UK brands are better than US brands." The defensible framing is "UK brands engage with Trustpilot; most US brands don't."
  4. Date the snapshot. All figures are 8 May 2026 captures of the public Trustpilot platform via the trustpilot-review-analyzer actor build 2.1.
  5. Link to the methodology section for reproducibility — every figure can be re-fetched by re-running the actor.
  6. Disclose the 100-review sampling cap. Brands with hundreds of thousands of reviews had only the most recent 100 sampled. The TrustScore is Trustpilot's full-history figure; the response-rate is sample-derived.
  7. Acknowledge the 4.0+ club is a structural cohort. Frame as "brands that actively manage their Trustpilot page" rather than "the best consumer companies."
  8. Quote the press lift-out paragraph verbatim if the framing needs to be standardised across outlets.

Common mistakes when citing this dataset

  • Treating Trustpilot scores as comparable across brands. They are not. A solicited brand and a non-solicited brand are measuring different things even if they share the same numeric scale.
  • Inferring service quality from TrustScore. The score is a function of who reviews and how many of them. It is not a service-quality measurement.
  • Generalising to "all of Trustpilot." This audit covers 62 brands. Trustpilot has millions of business pages. A different selection would shift specific medians.
  • Treating 0% response rates as proof of bad customer service. Many brands run customer service through other channels (in-app support, phone, branch) and ignore Trustpilot entirely. The 0% measures Trustpilot engagement, not service breadth.
  • Conflating "few reviews" with "low quality." Brands with thin sample sizes (Hyundai 129, United Airlines 132) have low statistical reliability, not necessarily low quality.
  • Ignoring the operational mechanism. The post is an operational story, not a moral one. Reporting that omits "review-solicitation strategy" misframes the structural finding.

Implementation checklist for re-running the audit

  1. Sign up for an Apify account at apify.com.
  2. Pick the brand domains you want to audit. The 62-brand seed in this post is a starting point.
  3. Choose analysisProfile: "competitor_benchmark" for the same output shape as this audit.
  4. Set maxReviewsPerBusiness: 100 for the same sample size.
  5. Run the trustpilot-review-analyzer actor on each domain.
  6. Parse the TrustScore, response rate, and review-volume fields from the dataset.
  7. Repeat across all brands. Aggregate response rates and compute medians per industry (a sketch follows this list).
  8. Publish the spreadsheet alongside the analysis. Reproducibility is the entire point.
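A sketch of steps 5-7, under the same assumptions as the methodology snippet (actor ID, input field, and output keys are illustrative):

```python
from collections import defaultdict
from statistics import median

from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")

# (domain, industry) pairs; a slice of the 62-brand seed list
BRANDS = [
    ("octopus.energy", "UK Energy"),
    ("britishgas.co.uk", "UK Energy"),
    ("wellsfargo.com", "US Banks"),
    ("chase.com", "US Banks"),
]

scores = defaultdict(list)
for domain, industry in BRANDS:
    run = client.actor("ryanclinton/trustpilot-review-analyzer").call(
        run_input={
            "businesses": [domain],  # input field name assumed
            "analysisProfile": "competitor_benchmark",
            "maxReviewsPerBusiness": 100,
            "outputGranularity": "business",
        }
    )
    for item in client.dataset(run["defaultDatasetId"]).iterate_items():
        scores[industry].append(item.get("trustScore"))  # output key assumed

for industry, ts in sorted(scores.items()):
    print(industry, round(median(ts), 2))
```

If the actor accepts the full domain list in one run, batching is cheaper than the per-domain loop; the loop here just mirrors the checklist steps.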

Limitations

  • Sample size per brand caps at 100 reviews. Brands with hundreds of thousands of reviews on Trustpilot had only their most recent 100 sampled. The TrustScore figure is Trustpilot's full-history metric and is unaffected; the response-rate and recent-sentiment figures are based on the 100-review sample.
  • The 62-brand selection is editorial. A different sample would shift the medians slightly but not the structural finding. The largest US and UK consumer brands are mainstream choices that no editor would dispute. Expanding to 200 brands would tighten the medians.
  • Region attribution is approximate. Some brands operate globally (Booking.com, Amazon, Vodafone). The "UK" or "US" tag reflects the brand's primary review-attribute presentation on Trustpilot.
  • Industry labels are normalised by the analyst. Trustpilot's own categorisation taxonomy is messier — Toyota appeared as "Data Marketplace" in the actor's category response, which is Trustpilot's own miscategorisation.
  • Allstate's 100% response rate is verified across the 100-review sample only. The actor could not verify whether Allstate has unanswered reviews older than the most recent 100.
  • TrustScore is not a representative consumer survey. Trustpilot reviewers are self-selected. The data measures engagement strategy, not satisfaction.
  • No year-over-year comparison. This audit is a single 8 May 2026 snapshot. A 2025 baseline would allow YoY drift analysis but does not currently exist.

Key facts about the Trustpilot two-tier pattern

  • 62 major US and UK consumer brands were audited 8 May 2026 across 13 industries.
  • 53 of 62 brands (85%) have a 0% response rate — never replied to a single Trustpilot review captured.
  • 11 of 62 brands score 4.0 or higher — 10 are British, 1 is American (Liberty Mutual at 4.4).
  • Octopus Energy has 795,984 Trustpilot reviews — more than the four largest US tech brands combined (71,168).
  • British Gas has a 99% response rate across 338,536 reviews; Allstate has a 100% response rate across 899.
  • Wells Fargo, Citi, and Deliveroo all sit at 1.1 stars, with 0% response rates and complaint-dominated review feeds.
  • The UK Energy and UK Fintech industry medians are 4.6; US Banks and Food delivery medians are 1.2 — a 3.4-star gap.
  • The mechanism is operational: UK brands actively solicit Trustpilot reviews; most US brands don't.
  • The audit is reproducible via the trustpilot-review-analyzer Apify actor at $0.15 per business analyzed.

Broader applicability

These patterns apply beyond Trustpilot to any user-contributed public review platform whose value depends on brands cultivating their reviewer pool:

  • Public review platforms reward solicitation. Any platform where brands can ask customers to leave reviews will produce higher scores for brands that ask. Google Reviews, Yelp, App Store ratings, and G2 all show similar dynamics.
  • Engagement and rating distribution are independent variables. Allstate's 100% response rate doesn't change its 1.3 TrustScore. Replying to complaints surfaces engagement; only solicitation changes who reviews in the first place.
  • Regulated sectors generate review-rich pages. UK Energy and UK Fintech regulators reward consumer-facing transparency, which produces brands that treat public review platforms as compliance surfaces. The same regulator structure in other jurisdictions would produce similar score distributions.
  • The "channel collapse" pattern from prior series posts holds in reverse. The same way Stack Overflow's question intake collapsed channel-by-channel — a channel-level shift independent of underlying activity — Trustpilot's score gap is a channel-engagement signal independent of underlying customer-service quality.
  • Reproducibility is the press currency. Audit datasets that anyone can re-run from a public platform via a public actor are easier for journalists to cite than proprietary internal numbers.

When you need this analysis

You probably want to reference this dataset if:

  • You're writing about Trustpilot's role in UK or US consumer-protection coverage.
  • You're reporting on a specific named brand's review presence (Wells Fargo, Octopus Energy, Apple, Liberty Mutual, etc.).
  • You need a brand-level, response-rate-aware quantitative anchor for an "online reviews are misleading" feature.
  • You're briefing comms, customer-service, or PR leadership on review-platform engagement strategy.
  • You want a reproducible, named-brand, dated dataset rather than vibes.
  • You're sourcing methodology for a follow-on analysis (cross-platform, EU expansion, or YoY tracking).

You probably don't need this if:

  • You want platform-level traffic, page views, or unique visitors — not in this dataset.
  • You want full review content extraction — that requires re-running the actor with outputGranularity: "review".
  • You want NPS or CSAT scores — those are proprietary internal metrics, not Trustpilot's TrustScore.
  • You want a single-cause attribution — multiple operational factors overlap and the post deliberately surfaces that.
  • You want a moral verdict on UK vs US service quality — this audit does not provide that.

How to verify any single brand in this dataset

Each brand's TrustScore and response rate is re-fetchable from the public Trustpilot platform. To verify any individual figure:

  1. Visit the brand's Trustpilot page directly (e.g., Octopus Energy, British Gas, Wells Fargo).
  2. Read the published TrustScore and review count from the page header.
  3. Sample a recent slice of reviews and count how many have a corporate reply — that's the response-rate sample.
  4. Compare to the table in this post.
  5. To reproduce at scale, use the trustpilot-review-analyzer Apify actor with the same input shape documented in the methodology section.

Frequently asked questions

How does Trustpilot calculate TrustScore?

Trustpilot's TrustScore is a 1-5 score that factors in three inputs: the raw distribution of star ratings the business has received, the recency-weighting of those ratings (newer reviews count more), and the total review volume (more reviews stabilise the score). Trustpilot publishes the methodology and updates the score continuously as new reviews arrive. This audit captures the published TrustScore as Trustpilot displays it on the brand's public page; we do not re-compute or normalise.
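For intuition only, a toy recency-weighted score (emphatically not Trustpilot's proprietary formula) showing how the three published inputs could interact:

```python
def toy_trustscore(ratings_newest_first, half_life=50, prior=3.5, prior_weight=10):
    """Illustrative only: NOT Trustpilot's actual formula.

    Newer reviews get exponentially more weight (recency); a neutral prior
    keeps low-volume pages near the middle (volume); the ratings themselves
    carry the distribution.
    """
    weights = [0.5 ** (i / half_life) for i in range(len(ratings_newest_first))]
    weighted_sum = sum(w * r for w, r in zip(weights, ratings_newest_first))
    return (weighted_sum + prior * prior_weight) / (sum(weights) + prior_weight)

print(round(toy_trustscore([5] * 200), 2))  # high volume, all 5-star -> ~4.81
print(round(toy_trustscore([1] * 200), 2))  # complaint-only feed -> ~1.32
print(round(toy_trustscore([5, 5]), 2))     # thin page stays near the prior -> ~3.75
```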

Why are so many UK brands at the top of the list?

The mechanism is operational — UK brands in regulated sectors (energy, fintech, retail) actively solicit Trustpilot reviews after every transaction, often as part of post-transaction email automation. This produces a reviewer slice that includes routine satisfied customers, not just spontaneous complainers. UK regulators (Ofgem for energy, FCA for fintech) explicitly reward consumer-facing transparency, and Trustpilot is the de-facto British public review aggregator. Most US Fortune 500 brands don't engage with Trustpilot at all; their pages get populated only by the angry customers who sought out the platform unprompted.

Does a 4.8 on Octopus mean better service than a 1.8 on Apple?

No. The two scores are not comparable. Octopus actively solicits reviews from every customer; Apple doesn't engage with Trustpilot at all. Octopus's score reflects "of the customers asked, how many were satisfied with their last interaction." Apple's score reflects "of the self-selected complainers who found Trustpilot, how angry were they." The data does not say which company has better service; it says they have different review-platform operating models.

Why does Allstate have a 100% response rate but a 1.3 TrustScore?

Allstate replies to every Trustpilot review captured in the sample, but the reviews themselves are mostly 1-star complaints. Engagement alone — replying to existing reviews — does not change the rating distribution. The combination that produces a 4.0+ score is solicit + engage: ask satisfied customers to review, then reply to those who do. Allstate engages but doesn't solicit at scale, so the page stays a complaint feed with corporate replies attached.

What about brands operating in both the US and the UK?

Vodafone is the cleanest example. Vodafone UK has a Trustpilot score of 4.7 across 127,642 reviews. The major US wireless carriers — Verizon 1.3, AT&T 1.3, T-Mobile 1.4 — sit 3.3 to 3.4 stars lower across far smaller review counts. The same global category produces a 3.3-3.4-star gap purely from operating-model difference. This is the strongest single signal that the gap measures engagement strategy, not regional service quality.

Can I republish these tables?

Yes — the dataset is published for press citation. Attribution to ApifyForge with a link back to this post is appreciated. The press lift-out paragraph above is written for direct quoting. Every figure is reproducible by running the trustpilot-review-analyzer actor on the same brand list, so verifying before publication is straightforward.

What's the difference between Trustpilot and Google Reviews?

Different platforms with different operating models. Google Reviews captures location-pinned reviews tied to a Google Maps presence; volume is higher but distribution shapes are platform-specific. Yelp is concentrated in US local-business categories. BBB and FTC complaint data are US-specific complaint surfaces (the FTC is a regulator; the BBB a private nonprofit). Trustpilot is the de-facto British public review platform, with a lighter footprint in the US. A "Wells Fargo on Trustpilot 1.1, on Google Reviews 4.0" cross-cut would test the platform-effect hypothesis but is out of scope for this audit. Cross-platform comparison is on the follow-up roadmap.

A note on the underlying tools

This post is the tenth in an ApifyForge backlink-bait series documenting structural patterns through public consumer-facing data. It is the third in the series specifically built around named-entity ranking through public review or complaint data, alongside the CFPB Big-3 credit-bureau dominance leaderboard (the three bureaus capturing 90%+ of consumer credit complaints) and the medical-debt-collection 2024 leaderboard (where 70% of medical-debt complaints route to collection-industry firms, not hospitals). All three follow the same pattern: pick a public dataset, capture it as of a specific date, publish the per-row figures alongside the methodology, and let the named-entity rankings do the journalism.

The dataset for this post was generated using the trustpilot-review-analyzer Apify actor, the same tool brands and analysts can use for ongoing competitive-intelligence sweeps and review-monitoring across Trustpilot.

Ryan Clinton publishes Apify actors and MCP servers as ryanclinton and runs ApifyForge.


Last updated: May 2026

This guide focuses on Trustpilot's UK and US consumer-brand pages, but the same engagement-strategy patterns apply broadly across any user-contributed public review platform whose ratings depend on brand-driven solicitation.