Brand Reputation Monitor

Brand Reputation Monitor runs 8 intelligence sub-actors in parallel against any brand name, then applies four proprietary scoring models to produce a DEFCON-rated threat report in minutes. It covers domain impersonation campaigns, fake review attacks, negative social sentiment, and narrative drift — all in a single structured output designed for brand managers, legal teams, and digital risk analysts.


Pricing

Pay Per Event model. You only pay for what you use.

Event | Description | Price
analysis-run | Full intelligence analysis run | $0.30

Example: 100 events = $30.00 · 1,000 events = $300.00

Documentation

No other single tool cross-references SSL certificate transparency logs, WHOIS registrations, multi-platform reviews, Bluesky social mentions, and Wayback Machine historical snapshots simultaneously. The result is a composite threat score (0-100) with specific signals, prioritized immediate actions, and optional side-by-side competitor benchmarking.

What data can you extract?

Data Point | Source | Example
📊 Composite threat score | All 8 sub-actors | 42 (DEFCON_3 — moderate threat)
🚨 DEFCON threat level | Weighted scoring model | DEFCON_2 (high threat — immediate action required)
🌐 Impersonation risk level | WHOIS + SSL cert transparency | HIGH — 7 lookalike domains in 90 days
🔐 Recent SSL certificates | crt.sh transparency log | 11 certificates issued in last 30 days
🕵️ New domain registrations | WHOIS domain lookup | 4 domains registered in last 90 days
Review authenticity level | Trustpilot + multi-review | SUSPICIOUS — burst of 9 reviews in 48 hours
📝 Narrative drift level | Wayback Machine + change monitor | MINOR_DRIFT — 5 content changes detected
📣 Social sentiment breakdown | Bluesky social search | { positive: 45, negative: 12, neutral: 38 }
🔍 All threat signals | All models combined | ["7 domains registered in 90 days — coordinated squatting"]
Immediate actions | Scoring engine | ["File domain takedown requests immediately"]
🗓️ Monitoring recommendations | Scoring engine | ["Increase monitoring to daily — threat score elevated"]
🏆 Competitor threat benchmarks | Per-competitor sub-pipeline | [{ brand: "Adidas", threatScore: 28, defconLevel: 4 }]

Why use Brand Reputation Monitor?

Manually tracking brand threats across review platforms, domain registrars, SSL transparency logs, social media, and website archives takes a full-time analyst 4-8 hours per brand per week. Enterprise tools like BrandShield, Meltwater, or Brandwatch charge $500-$2,000/month for similar coverage — and none provide an open, programmable output format.

This actor automates the entire brand intelligence pipeline. One API call triggers 8 parallel data collections, four algorithmic scoring passes, and a DEFCON-rated threat report with an explicit action list. The full run takes under 3 minutes.

  • Scheduling — run daily, weekly, or on custom intervals to catch threats the moment they emerge
  • API access — trigger runs from Python, JavaScript, or any HTTP client and pipe results into your own dashboards
  • Proxy rotation — collect data at scale without IP blocks using Apify's built-in proxy infrastructure
  • Monitoring — get Slack or email alerts when runs fail or when DEFCON levels change
  • Integrations — connect to Zapier, Make, Google Sheets, HubSpot, or webhooks to automate crisis response workflows

Features

  • 8 parallel sub-actor pipeline — brand-protection-monitor, bluesky-social-search, multi-review-analyzer, trustpilot-review-analyzer, whois-domain-lookup, crt-sh-search, wayback-machine-search, and website-change-monitor all run simultaneously using Promise.allSettled, so no single slow source blocks the report
  • DEFCON 1-5 composite threat rating — maps to specific response protocols: DEFCON 5 (all clear), DEFCON 4 (low threat), DEFCON 3 (moderate — investigate), DEFCON 2 (high — activate response team), DEFCON 1 (critical — brand under active attack)
  • Weighted 4-model composite scoring — Brand Threat carries 30% weight, Impersonation Detection 25%, Review Authenticity 25%, and Narrative Drift 20%, producing a 0-100 composite score calibrated to real-world brand crisis patterns
  • SSL certificate velocity detection — queries Certificate Transparency logs via crt.sh; 3+ certificates issued within 7 days triggers an active impersonation campaign alert; scores up to 40 points in the impersonation model
  • WHOIS registration analysis — flags domains registered within the last 90 days as new registrations (7 points each) and privacy-protected registrations with budget registrars as additionally suspicious (3 points each)
  • Review J-curve authenticity detection — genuine reviews follow a J-curve distribution; a bimodal distribution (heavy clustering at 1-star and 5-star with fewer than 20% mid-range ratings) triggers manipulation scoring
  • 48-hour review burst detection — sliding window algorithm scans all review timestamps; 5+ reviews within any 48-hour window flags a coordinated fake review campaign
  • Duplicate review detection — normalizes review text and measures duplicate ratio; 20%+ identical or near-identical reviews triggers a campaign signal
  • Single-review account flagging — reviewers with only 1 total review on the platform are counted as potential sock puppet accounts
  • Negative social keyword classification — 10-keyword threat vocabulary (scam, fraud, fake, lawsuit, boycott, ripoff, deceptive, worst, avoid, terrible) applied to every Bluesky post; 3+ matches triggers a social threat signal
  • Narrative drift tracking — compares consecutive Wayback Machine snapshot digests/hashes to detect content revisions; significant hash changes score as major revisions (5 points each) separate from minor content changes (3 points each)
  • Competitor benchmarking — runs a 4-actor sub-pipeline (brand protection + social + two review sources) against each listed competitor and returns their threat scores, DEFCON levels, and key signals for side-by-side comparison
  • Structured signal list — every scoring model contributes human-readable signals with exact counts, enabling direct copy-paste into client reports or incident tickets
  • Prioritized immediate actions — four rule-based triggers automatically generate specific action items: domain takedown requests, fake review platform reports, crisis communications activation, and SSL certificate revocation requests
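
As an illustration, the 48-hour burst check described above can be sketched as a sliding window over sorted review timestamps. This is a reimplementation from the feature description, not the actor's actual source; only the 48-hour window and 5-review threshold come from this document.

```python
from datetime import datetime, timedelta

def detect_review_burst(timestamps, window_hours=48, threshold=5):
    """Return True if `threshold` or more reviews fall inside any
    sliding window of `window_hours`. Illustrative sketch of the
    burst check described above, not the actor's actual source."""
    times = sorted(timestamps)
    window = timedelta(hours=window_hours)
    start = 0
    for end in range(len(times)):
        # Advance the left edge until the window spans <= 48 hours
        while times[end] - times[start] > window:
            start += 1
        if end - start + 1 >= threshold:
            return True
    return False

# Five reviews inside 44 hours trips the flag
base = datetime(2026, 3, 1)
burst = [base + timedelta(hours=h) for h in (0, 6, 12, 30, 44)]
print(detect_review_burst(burst))  # True
```

Sorting once and moving both window edges forward keeps the check linear in the number of reviews after the sort.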

Use cases for brand reputation monitoring

Brand manager weekly health checks

Marketing and brand managers at mid-to-large companies run the actor weekly against their own brand and 2-3 direct competitors. The DEFCON rating provides a one-glance status, while the signal list identifies which specific threat vector is elevated. Scheduled runs combined with a Zapier webhook deliver results directly to Slack every Monday morning.

Legal and IP team domain squatting investigations

IP and trademark attorneys use the impersonation detection model to build evidence packages for UDRP (Uniform Domain-Name Dispute-Resolution Policy) filings. The WHOIS data (new registrations, privacy protection status, registrar) and SSL certificate velocity provide documented proof of coordinated squatting activity. The structured JSON output can be exported and attached directly to takedown requests.

Crisis communications and PR agency response

PR and communications agencies monitor client brands daily during product launches, controversies, or executive crises. Review burst detection and social sentiment tracking surface coordinated attacks within hours. The immediate actions list gives the crisis response team a pre-prioritized checklist rather than requiring manual triage across multiple platforms.

Digital risk protection services

Managed security and digital risk protection (DRP) providers use the actor to automate client brand monitoring at scale. The structured DEFCON output, signal list, and competitor benchmarks translate directly into client-facing risk reports. Running 10-20 clients per day is feasible at $1-2 total cost, compared to $5,000+/month for enterprise DRP platforms.

Competitor intelligence and market positioning

Competitive intelligence teams monitor competitor DEFCON levels alongside their own brand. A competitor experiencing a DEFCON 2 event (fake review campaign or impersonation spike) is a market window. The benchmarking output provides a normalized score across the competitive set, enabling trend analysis over time.

M&A and due diligence research

Investment analysts and corporate development teams run the actor against acquisition targets during due diligence. A high impersonation score, active fake review campaign, or significant narrative drift may indicate undisclosed legal issues, brand erosion, or management problems that warrant deeper investigation.

How to monitor brand reputation

  1. Enter your brand name — type the company or brand name exactly as it appears publicly (e.g., "Acme Corp"). This is the only required field.
  2. Add your domain — enter the primary website domain without https:// (e.g., acmecorp.com). This enables WHOIS lookup, SSL certificate analysis, and content monitoring. Without it, the actor infers the domain from the brand name.
  3. List competitors (optional) — add 1-5 competitor brand names to receive a side-by-side threat benchmark. Each competitor adds approximately $0.04-$0.06 to the run cost.
  4. Click Start and download results — the full 8-actor pipeline runs in parallel and typically completes in 2-4 minutes. Download JSON, CSV, or Excel from the Dataset tab.

Input parameters

Parameter | Type | Required | Default | Description
brand | string | Yes | N/A | Brand or company name to monitor (e.g., "Nike", "Acme Corp")
domain | string | No | Inferred from brand | Primary brand website domain for WHOIS, SSL cert analysis, and content monitoring (e.g., "nike.com")
competitors | array | No | None | Competitor brand names for comparative threat benchmarking (e.g., ["Adidas", "New Balance"])

Input examples

Standard brand health check:

{
  "brand": "Pinnacle Industries",
  "domain": "pinnacleindustries.com"
}

Brand monitoring with competitor benchmarking:

{
  "brand": "Pinnacle Industries",
  "domain": "pinnacleindustries.com",
  "competitors": ["Apex Solutions", "Summit Corp", "Crestline Group"]
}

Minimal run — brand name only:

{
  "brand": "Pinnacle Industries"
}

Input tips

  • Always provide the domain — without it, the actor guesses brandname.com, which misses WHOIS data for brands with unusual TLDs or company suffixes
  • Keep competitor lists to 5 or fewer — each competitor triggers a 4-actor sub-pipeline; large lists increase run time and cost proportionally
  • Use the exact public-facing name — the brand name is passed directly to social search and review queries; abbreviations or internal names may miss mentions

Output example

{
  "brand": "Pinnacle Industries",
  "domain": "pinnacleindustries.com",
  "analyzedAt": "2026-03-20T09:14:32.441Z",

  "compositeScore": 61,
  "overallThreat": "DEFCON_2",

  "brandThreat": {
    "score": 58,
    "defconLevel": 2,
    "impersonationDomains": 8,
    "negativeSentiment": 5,
    "reviewAttacks": 14,
    "socialMentions": 23,
    "signals": [
      "7 brand protection alerts — active threat landscape",
      "8 recently registered lookalike domains/certificates — impersonation risk",
      "58% negative reviews — potential coordinated attack or serious quality issues"
    ]
  },

  "impersonation": {
    "score": 72,
    "suspiciousDomains": 4,
    "recentCertificates": 11,
    "newRegistrations": 5,
    "riskLevel": "HIGH",
    "signals": [
      "4 SSL certificates issued in last 7 days — active impersonation campaign likely",
      "11 certificates in last 30 days — elevated domain squatting activity",
      "5 domains registered in last 90 days — coordinated squatting"
    ]
  },

  "reviewAuthenticity": {
    "score": 68,
    "totalReviews": 47,
    "suspiciousReviews": 12,
    "reviewBurstDetected": true,
    "authenticityLevel": "LIKELY_FAKE",
    "signals": [
      "Review burst: 9 reviews within 48-hour window — coordinated campaign likely",
      "84% of reviews are 1 or 5 stars — bimodal distribution suggests manipulation",
      "7 reviews from single-review accounts — potential sock puppets"
    ]
  },

  "narrativeDrift": {
    "score": 34,
    "snapshotCount": 18,
    "contentChanges": 7,
    "majorRevisions": 2,
    "driftLevel": "MINOR_DRIFT",
    "signals": [
      "18 Wayback Machine snapshots — rich historical record",
      "7 content changes tracked — frequently evolving messaging"
    ]
  },

  "socialSentiment": {
    "positive": 8,
    "negative": 5,
    "neutral": 10,
    "total": 23
  },

  "contentManipulation": {
    "totalSnapshots": 18,
    "recentChanges": 3,
    "removedContent": 1,
    "addedContent": 2
  },

  "allSignals": [
    "7 brand protection alerts — active threat landscape",
    "8 recently registered lookalike domains/certificates — impersonation risk",
    "58% negative reviews — potential coordinated attack or serious quality issues",
    "4 SSL certificates issued in last 7 days — active impersonation campaign likely",
    "Review burst: 9 reviews within 48-hour window — coordinated campaign likely",
    "84% of reviews are 1 or 5 stars — bimodal distribution suggests manipulation"
  ],

  "immediateActions": [
    "File domain takedown requests for impersonating domains immediately",
    "Report fake review campaign to platform trust and safety teams",
    "Activate crisis communications protocol — brand under active attack",
    "Contact certificate authorities to revoke fraudulent SSL certificates"
  ],

  "monitoringRecommendations": [
    "Increase brand monitoring frequency to daily",
    "Set up domain monitoring alerts for new lookalike registrations",
    "Enable review platform alerts for burst detection"
  ],

  "competitors": [
    {
      "brand": "Apex Solutions",
      "threatScore": 22,
      "defconLevel": 4,
      "negativeSentiment": 1,
      "reviewAttacks": 3,
      "signals": []
    }
  ],

  "dataSources": {
    "brandProtectionAlerts": 7,
    "socialPosts": 23,
    "multiReviewItems": 31,
    "trustpilotItems": 16,
    "whoisRecords": 5,
    "sslCertificates": 11,
    "waybackSnapshots": 18,
    "websiteChanges": 3
  }
}

Output fields

Field | Type | Description
brand | string | Brand name analyzed
domain | string | Domain used for WHOIS and SSL analysis, or null
analyzedAt | string | ISO 8601 timestamp of the analysis run
compositeScore | number | Weighted composite threat score, 0-100
overallThreat | string | DEFCON_1 through DEFCON_5 threat classification
brandThreat.score | number | Brand threat sub-score, 0-100 (30% weight)
brandThreat.defconLevel | number | DEFCON level 1-5 for this sub-model
brandThreat.impersonationDomains | number | Count of detected lookalike domains
brandThreat.negativeSentiment | number | Count of negative social posts with threat keywords
brandThreat.reviewAttacks | number | Count of negative reviews (1-2 stars or negative sentiment)
brandThreat.socialMentions | number | Total social mentions analyzed
brandThreat.signals | array | Human-readable threat signals from this model
impersonation.score | number | Impersonation sub-score, 0-100 (25% weight)
impersonation.suspiciousDomains | number | Domains with privacy protection or budget registrars
impersonation.recentCertificates | number | SSL certificates issued in last 30 days
impersonation.newRegistrations | number | Domains registered in last 90 days
impersonation.riskLevel | string | CLEAR, LOW, MODERATE, HIGH, or CRITICAL
impersonation.signals | array | Human-readable signals from this model
reviewAuthenticity.score | number | Review authenticity sub-score, 0-100 (25% weight)
reviewAuthenticity.totalReviews | number | Total reviews analyzed across all platforms
reviewAuthenticity.suspiciousReviews | number | Reviews with suspicious content patterns
reviewAuthenticity.reviewBurstDetected | boolean | True if 5+ reviews appeared in any 48-hour window
reviewAuthenticity.authenticityLevel | string | AUTHENTIC, MOSTLY_AUTHENTIC, SUSPICIOUS, LIKELY_FAKE, or CAMPAIGN_DETECTED
reviewAuthenticity.signals | array | Human-readable signals from this model
narrativeDrift.score | number | Narrative drift sub-score, 0-100 (20% weight)
narrativeDrift.snapshotCount | number | Total Wayback Machine snapshots found
narrativeDrift.contentChanges | number | Content changes detected between snapshots
narrativeDrift.majorRevisions | number | Major content revisions with hash/digest changes
narrativeDrift.driftLevel | string | STABLE, MINOR_DRIFT, MODERATE_DRIFT, MAJOR_DRIFT, or NARRATIVE_SHIFT
narrativeDrift.signals | array | Human-readable signals from this model
socialSentiment.positive | number | Social posts containing positive keywords
socialSentiment.negative | number | Social posts containing threat keywords
socialSentiment.neutral | number | Social posts with neither positive nor negative keywords
socialSentiment.total | number | Total social posts analyzed
contentManipulation.totalSnapshots | number | Total Wayback Machine snapshots
contentManipulation.recentChanges | number | Changes detected by website change monitor
contentManipulation.removedContent | number | Website change events classified as removals
contentManipulation.addedContent | number | Website change events classified as additions
allSignals | array | Deduplicated list of all signals across all four models
immediateActions | array | Prioritized actions triggered when high-severity conditions are met
monitoringRecommendations | array | Long-term monitoring configuration recommendations
competitors[] | array | Threat benchmark per competitor brand (if provided)
competitors[].brand | string | Competitor brand name
competitors[].threatScore | number | Brand threat score for this competitor
competitors[].defconLevel | number | DEFCON level for this competitor
dataSources | object | Record count from each of the 8 sub-actors

How much does it cost to monitor brand reputation?

Brand Reputation Monitor uses pay-per-event pricing — you pay approximately $0.08-$0.12 per brand analysis. Platform compute costs are included. Each competitor added costs an additional $0.04-$0.06.

Scenario | Brands | Approx. cost per brand | Total cost
Quick test | 1 | $0.10 | $0.10
Weekly monitoring | 1 + 3 competitors | $0.10 + $0.15 competitors | ~$0.25
Small agency (5 clients) | 5 | $0.10 | $0.50/run
Daily monitoring (30 days) | 1 | $0.10 | ~$3.00/month
Enterprise portfolio (20 brands) | 20 | $0.10 | ~$2.00/run

You can set a maximum spending limit per run to control costs. The actor stops when your budget is reached.

Compare this to BrandShield or Meltwater at $500-$2,000/month. Most teams running daily brand monitoring with this actor spend $3-$10/month with no subscription commitment.

Brand reputation monitoring using the API

Python

from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("ryanclinton/brand-reputation-monitor").call(run_input={
    "brand": "Pinnacle Industries",
    "domain": "pinnacleindustries.com",
    "competitors": ["Apex Solutions", "Summit Corp"]
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"Brand: {item['brand']} | DEFCON: {item['overallThreat']} | Score: {item['compositeScore']}/100")
    for signal in item.get("allSignals", []):
        print(f"  - {signal}")
    for action in item.get("immediateActions", []):
        print(f"  ACTION: {action}")

JavaScript

import { ApifyClient } from "apify-client";

const client = new ApifyClient({ token: "YOUR_API_TOKEN" });

const run = await client.actor("ryanclinton/brand-reputation-monitor").call({
    brand: "Pinnacle Industries",
    domain: "pinnacleindustries.com",
    competitors: ["Apex Solutions", "Summit Corp"]
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
for (const item of items) {
    console.log(`${item.brand}: ${item.overallThreat} (score: ${item.compositeScore}/100)`);
    console.log("Signals:", item.allSignals);
    console.log("Immediate actions:", item.immediateActions);
}

cURL

# Start the actor run
curl -X POST "https://api.apify.com/v2/acts/ryanclinton~brand-reputation-monitor/runs?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "brand": "Pinnacle Industries",
    "domain": "pinnacleindustries.com",
    "competitors": ["Apex Solutions"]
  }'

# Fetch results (replace DATASET_ID from the run response)
curl "https://api.apify.com/v2/datasets/DATASET_ID/items?token=YOUR_API_TOKEN&format=json"

How Brand Reputation Monitor works

Phase 1: Parallel data collection across 8 sources

When a run starts, the actor builds 8 sub-actor calls and executes them simultaneously using Promise.allSettled. Each call invokes a separate specialized actor: brand-protection-monitor (threat alerts), bluesky-social-search (social mentions), multi-review-analyzer and trustpilot-review-analyzer (cross-platform reviews), whois-domain-lookup (domain registration records), crt-sh-search (SSL certificate transparency logs), wayback-machine-search (historical snapshots), and website-change-monitor (recent content changes). Each sub-actor runs with a 120-second timeout and collects up to 1,000 items. Using Promise.allSettled means a failure in any single source does not abort the entire pipeline — the scoring models handle empty arrays gracefully.
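
The fan-out pattern can be sketched in Python with asyncio.gather(return_exceptions=True), which behaves like JavaScript's Promise.allSettled: every source runs to completion or failure, and a failed source is downgraded to an empty dataset rather than aborting the run. The call_sub_actor function here is a stand-in, not a real Apify client call.

```python
import asyncio

SUB_ACTORS = [
    "brand-protection-monitor", "bluesky-social-search",
    "multi-review-analyzer", "trustpilot-review-analyzer",
    "whois-domain-lookup", "crt-sh-search",
    "wayback-machine-search", "website-change-monitor",
]

async def call_sub_actor(name, brand):
    # Stand-in for a real Apify actor call; a real client would enforce
    # the 120-second timeout and 1,000-item cap here. One source is
    # made to fail to show that the batch still completes.
    if name == "crt-sh-search":
        raise TimeoutError("simulated rate limit")
    return {"actor": name, "items": [f"{brand} result"]}

async def collect_all(brand):
    # return_exceptions=True mirrors Promise.allSettled: a failure
    # becomes a returned exception instead of cancelling the batch
    results = await asyncio.gather(
        *(call_sub_actor(n, brand) for n in SUB_ACTORS),
        return_exceptions=True,
    )
    # Scoring models treat a failed source as an empty dataset
    return {
        name: r if not isinstance(r, Exception) else {"actor": name, "items": []}
        for name, r in zip(SUB_ACTORS, results)
    }

data = asyncio.run(collect_all("Pinnacle Industries"))
print(len(data))  # 8 sources, including the failed one
```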

Phase 2: Four independent scoring models

Each of the four models receives the full collected data map and applies its own algorithm independently before the composite calculation.

The Brand Threat model scores alert severity (up to 30 points), lookalike domain counts (up to 25 points), negative review ratios (up to 25 points), and social threat keywords (up to 20 points).

The Impersonation Detection model measures SSL certificate velocity: each certificate issued in the last 30 days scores 4 points, with a 6-point bonus per certificate issued in the last 7 days, capped at 40 points. WHOIS analysis adds 7 points per domain registered in the last 90 days (capped at 35) and 3 points per privacy-protected domain.

The Review Authenticity model measures rating distribution polarization (J-curve deviation), runs a sliding 48-hour burst detection window across all review timestamps, scores the duplicate content ratio, and counts single-review accounts, with each component capped independently.

The Narrative Drift model compares consecutive Wayback Machine snapshot digests to detect major revisions, combines this with website-change-monitor output, and scores historical depth as a confidence multiplier.
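
Using the documented point values (4 points per certificate in 30 days plus a 6-point bonus per certificate in 7 days, capped at 40; 7 points per new registration, capped at 35; 3 points per privacy-protected domain), the Impersonation Detection pass can be sketched as follows. This is an illustrative reconstruction from the prose, not the actor's source, and the way the caps combine is an assumption.

```python
def impersonation_score(certs_30d, certs_7d, new_domains_90d, privacy_protected):
    """Illustrative reconstruction of the Impersonation Detection model
    from the point values above; the real actor may differ in detail."""
    # Certificate velocity: 4 pts per cert in 30 days, plus a 6-pt
    # bonus per cert in the last 7 days, capped at 40
    cert_points = min(certs_30d * 4 + certs_7d * 6, 40)
    # WHOIS: 7 pts per domain registered in the last 90 days, capped at 35
    registration_points = min(new_domains_90d * 7, 35)
    # 3 pts per privacy-protected / budget-registrar domain
    privacy_points = privacy_protected * 3
    return min(cert_points + registration_points + privacy_points, 100)

# 11 certs in 30 days (4 of them in the last 7), 5 new registrations,
# 4 privacy-protected domains
print(impersonation_score(11, 4, 5, 4))  # 87 under these illustrative rules
```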

Phase 3: Weighted composite and DEFCON assignment

The composite score is calculated as: (brandThreat × 0.30) + (impersonation × 0.25) + (reviewAuthenticity × 0.25) + (narrativeDrift × 0.20), rounded to the nearest integer. DEFCON levels map to composite score ranges: DEFCON_5 (0-19), DEFCON_4 (20-39), DEFCON_3 (40-59), DEFCON_2 (60-79), DEFCON_1 (80-100). DEFCON_1 and DEFCON_2 automatically trigger rule-based immediate action generation covering domain takedown requests, fake review platform reports, crisis communications activation, and SSL certificate revocation.
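
A minimal sketch of the composite calculation and DEFCON mapping, written directly from the formula and ranges above (the exact rounding behavior is an assumption):

```python
def composite_score(brand_threat, impersonation, review_auth, narrative_drift):
    # Documented weights: 30% / 25% / 25% / 20%
    return round(
        brand_threat * 0.30
        + impersonation * 0.25
        + review_auth * 0.25
        + narrative_drift * 0.20
    )

def defcon(score):
    # Documented ranges: 0-19, 20-39, 40-59, 60-79, 80-100
    if score >= 80:
        return "DEFCON_1"
    if score >= 60:
        return "DEFCON_2"
    if score >= 40:
        return "DEFCON_3"
    if score >= 20:
        return "DEFCON_4"
    return "DEFCON_5"

score = composite_score(70, 60, 65, 40)
print(score, defcon(score))  # 60 DEFCON_2
```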

Phase 4: Competitor benchmarking sub-pipeline

If competitor names are provided, the actor processes competitors one at a time, running a 4-actor parallel sub-pipeline (brand-protection-monitor, bluesky-social-search, multi-review-analyzer, trustpilot-review-analyzer) for each. The Brand Threat scoring model is applied to each competitor's data, producing a normalized threat score and DEFCON level for side-by-side comparison.

Tips for best results

  1. Always provide the domain parameter. Without it, the actor constructs a guess like brandname.com. This fails for brands with hyphens, legal entity suffixes, or country-code TLDs, and WHOIS and SSL certificate data will be empty or inaccurate.

  2. Run on a weekly schedule at minimum. Impersonation campaigns typically escalate from 0 to 10+ lookalike domains within 2 weeks of launch. A weekly cadence catches threats before they gain traffic or victims.

  3. Use the DEFCON level as a trigger, not the sub-scores. Set your Zapier or Make automation to alert only when overallThreat is DEFCON_1 or DEFCON_2. The sub-scores are for investigation — the composite DEFCON is for triage.

  4. Cross-reference immediate actions with platform-specific processes. The immediateActions array gives you the what; you still need to file the UDRP complaint, submit the Trustpilot abuse report, or draft the press statement. Use HubSpot Lead Pusher to route action items into your CRM task queue automatically.

  5. For competitor monitoring, keep the list stable week-over-week. Changing competitors between runs makes longitudinal trend comparison unreliable. Add all competitors at the start and leave the list fixed for quarterly reviews.

  6. Supplement with deeper domain analysis when impersonation scores are HIGH or CRITICAL. The actor identifies suspicious domains but does not fetch their content. Use Website Contact Scraper to inspect what those lookalike sites are actually hosting.

  7. Store historical results in a dataset or spreadsheet. The actor produces a single record per run. Tracking compositeScore over time in Google Sheets gives you trend lines that signal brand erosion before it reaches crisis levels.
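
Tip 7 can be automated with a short script that appends each run's headline metrics to a local CSV. The file path and field selection here are assumptions based on the output example in this document:

```python
import csv
import os

def append_history(report, path="brand_history.csv"):
    """Append one run's headline metrics to a CSV for trend tracking.
    Field names assume the output example shown earlier."""
    fields = ["analyzedAt", "brand", "compositeScore", "overallThreat"]
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        if write_header:
            writer.writeheader()
        writer.writerow({k: report.get(k) for k in fields})

append_history({
    "analyzedAt": "2026-03-20T09:14:32.441Z",
    "brand": "Pinnacle Industries",
    "compositeScore": 61,
    "overallThreat": "DEFCON_2",
})
```

The same rows can be pushed to Google Sheets via the integrations described below instead of a local file.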

Combine with other Apify actors

Actor | How to combine
Website Change Monitor | Run independently against specific pages (pricing, terms, leadership) for granular change tracking beyond what brand-level monitoring covers
Trustpilot Review Analyzer | Pull the full Trustpilot dataset separately to read individual review text and reviewer profiles for fake review investigation
Multi-Review Analyzer | Expand review coverage to Google, Yelp, and BBB for platforms not included in this actor's review pipeline
WHOIS Domain Lookup | Look up individual suspicious domains identified in the impersonation signals to gather full registrant details for takedown filings
Website Contact Scraper | Scrape contact and ownership data from lookalike domains to identify the parties behind impersonation campaigns
Company Deep Research | Run deep company intelligence on brands that show elevated threat scores to identify root causes (litigation, leadership issues, regulatory problems)
HubSpot Lead Pusher | Push immediate action items and high-DEFCON alerts directly into CRM tasks for brand response team assignment

Limitations

  • Social monitoring is Bluesky only. Twitter/X, LinkedIn, Reddit, and Facebook are not included in the social sentiment analysis. For broader social coverage, supplement this actor with platform-specific social scrapers and aggregate the sentiment data manually.
  • Review platforms are limited to Trustpilot and the sources in multi-review-analyzer. Google reviews, Amazon reviews, and app store reviews are not directly included. Use Multi-Review Analyzer separately for broader platform coverage.
  • WHOIS data depends on registrar transparency. Registrars that aggressively enforce GDPR WHOIS redaction return minimal registration data. Privacy-protected domains will show as suspicious but not reveal registrant details.
  • Wayback Machine coverage varies by domain. Smaller or newer brand websites may have few historical snapshots, limiting narrative drift analysis. The scoring model applies a low-confidence penalty when fewer than 3 snapshots are available.
  • Narrative drift scores historical messaging changes, not intent. A high drift score may reflect legitimate website redesigns, rebranding, or content updates, not manipulation. Always review the specific signals array entries before taking action.
  • The actor does not take action. It identifies and prioritizes threats but does not file takedowns, report reviews, or send notifications on its own. The immediateActions array requires human follow-through.
  • Competitor analysis uses a subset pipeline. Competitor threat scores use only 4 of the 8 sub-actors (no WHOIS, SSL certs, Wayback Machine, or website change monitor). Competitor DEFCON levels are directional, not equivalent to a full brand analysis.
  • Rate limits from sub-actors may cause partial data. If any of the 8 sub-actors returns zero results (due to rate limiting or data availability), the corresponding scoring model component scores zero, which may understate the true threat level. Check dataSources record counts to assess data completeness.

Integrations

  • Zapier — trigger Slack notifications or create Jira tickets when overallThreat reaches DEFCON_2 or DEFCON_1
  • Make — build automated crisis response workflows that route immediateActions to the appropriate team based on action type
  • Google Sheets — append compositeScore, overallThreat, and sub-scores weekly to build a longitudinal brand health dashboard
  • Apify API — call programmatically from your existing brand monitoring platform or SIEM to add DEFCON-rated threat context
  • Webhooks — post the full report JSON to your internal threat intelligence system after every run
  • LangChain / LlamaIndex — feed the structured signals and immediate actions into an LLM pipeline to generate natural-language brand threat summaries for executive reports

Troubleshooting

  • All sub-scores are 0 and dataSources shows all zeros. This usually means the API token does not have permission to call sub-actors, or the sub-actors are not in your Apify account. Verify that all 8 actor IDs in the pipeline are accessible with your token. If running this actor under a different account, the sub-actor IDs may need updating.

  • Impersonation score is 0 despite entering a well-known brand. The crt.sh SSL certificate search and WHOIS lookup both require a domain, not just a brand name. Ensure the domain field is populated with the brand's primary domain (e.g., acmecorp.com). Without it, those two sub-actors receive the raw brand string, which may produce no results.

  • Review authenticity score seems high for a brand with no known fake review problems. The review burst detection fires on any 5+ reviews within 48 hours, which can trigger during a legitimate product launch or PR mention. Check reviewBurstDetected and the associated signals alongside the actual review dates before concluding manipulation.

  • Run takes longer than 5 minutes. The 8 parallel sub-actors each have a 120-second timeout. If several sub-actors hit their limit simultaneously (which can happen with rate-limited sources), the total wall time may reach 3-4 minutes. Adding competitors extends this further. For time-sensitive use cases, omit the competitors field and run competitor analysis as separate runs.

  • Narrative drift score is unexpectedly high for a brand that hasn't changed its messaging. A high drift score with no signals about major revisions typically indicates the Wayback Machine has very few snapshots with identical digests, which scores as "content changed" even for minor differences. Review the snapshotCount field — if it is below 5, the drift score is low-confidence.

Responsible use

  • This actor only accesses publicly available brand data — social posts, public reviews, public WHOIS records, public SSL certificate transparency logs, and publicly archived web pages.
  • Respect website terms of service and robots.txt directives.
  • Comply with GDPR and applicable data protection laws when storing or sharing data about individuals who appear in review records or WHOIS registrant fields.
  • Do not use brand reputation data for harassment, extortion, or unauthorized competitive intelligence operations.
  • For guidance on web scraping legality, see Apify's guide.

FAQ

How does the brand reputation DEFCON rating work?

The DEFCON system maps composite threat scores to five response levels: DEFCON_5 (score 0-19, all clear — no action needed), DEFCON_4 (20-39, low threat — continue routine monitoring), DEFCON_3 (40-59, moderate — investigate specific signals), DEFCON_2 (60-79, high — activate response team and begin remediation), DEFCON_1 (80-100, critical — brand under active multi-vector attack). Scores 60 and above automatically generate specific immediateActions.
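The threshold mapping above can be sketched as a small function. This is an illustrative reconstruction from the documented score bands, not the actor's actual code; the function name is hypothetical.

```javascript
// Maps a composite threat score (0-100) to the documented DEFCON level.
// Thresholds taken from the score bands listed in this FAQ.
function scoreToDefcon(score) {
  if (score >= 80) return 'DEFCON_1'; // critical: active multi-vector attack
  if (score >= 60) return 'DEFCON_2'; // high: activate response team
  if (score >= 40) return 'DEFCON_3'; // moderate: investigate signals
  if (score >= 20) return 'DEFCON_4'; // low: routine monitoring
  return 'DEFCON_5';                  // all clear
}
```

For example, the composite score of 42 shown in the data table above maps to DEFCON_3.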

How many data sources does brand reputation monitoring cover?

Each run queries 8 sources in parallel: brand protection alerts, Bluesky social posts, two review sources (a multi-platform review aggregator and Trustpilot), WHOIS domain records, SSL certificate transparency logs (crt.sh), Wayback Machine historical snapshots, and website change monitoring. Each source contributes to one or more of the four scoring models.

How does brand reputation fake review detection work?

The Review Authenticity model uses four techniques: rating distribution analysis (genuine reviews follow a J-curve; heavy polarization at 1-star and 5-star suggests manipulation), a 48-hour burst detection window (5+ reviews in any 48-hour period flags a coordinated campaign), duplicate content detection (20%+ near-identical review text), and single-review account flagging (reviewers with only 1 total review across the platform are potential sock puppets). All four signals combine into a score from 0-100 with five levels from AUTHENTIC to CAMPAIGN_DETECTED.
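The 48-hour burst rule described above can be expressed as a sliding-window check over review timestamps. This is a minimal sketch of that one technique, assuming timestamps in milliseconds; the function name and parameters are illustrative, not the actor's internals.

```javascript
// Returns true if any `minReviews` reviews fall within a `windowMs` window.
// Defaults mirror the documented rule: 5+ reviews within 48 hours.
function reviewBurstDetected(timestampsMs, minReviews = 5, windowMs = 48 * 3600 * 1000) {
  const sorted = [...timestampsMs].sort((a, b) => a - b);
  for (let i = 0; i + minReviews - 1 < sorted.length; i++) {
    // If the i-th and (i + minReviews - 1)-th reviews are close enough,
    // minReviews reviews share one window.
    if (sorted[i + minReviews - 1] - sorted[i] <= windowMs) return true;
  }
  return false;
}
```

As the troubleshooting section notes, a legitimate product launch can trip this rule, so the flag is a signal to inspect review dates, not proof of manipulation.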

How accurate is the brand reputation domain impersonation detection?

The impersonation model uses two independent signals: SSL certificate transparency log velocity (crt.sh indexes every publicly trusted certificate, so new certificates for lookalike domains appear within minutes of issuance) and WHOIS new registration dates. Both signals are factual data, not heuristics. False positives can occur when the brand has legitimate partner domains or CDN certificates being renewed. Always review the signals array to verify the specific domains and certificate counts before filing takedowns.
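The certificate-velocity signal can be sketched as a count of recently indexed certificates. The sketch below assumes entries already parsed from crt.sh's public JSON output (which includes an `entry_timestamp` field per certificate); the function and its 90-day default are illustrative, not the actor's scoring code.

```javascript
// Counts crt.sh entries indexed within the last `days` days.
// `entries` is the parsed JSON array from e.g. https://crt.sh/?q=<query>&output=json
function recentCertCount(entries, days = 90, now = Date.now()) {
  const cutoff = now - days * 24 * 3600 * 1000;
  return entries.filter(e => Date.parse(e.entry_timestamp) >= cutoff).length;
}
```

A spike in this count for lookalike-domain queries is the raw input behind ratings like "HIGH — 7 lookalike domains in 90 days".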

How is Brand Reputation Monitor different from BrandShield or Meltwater?

BrandShield and Meltwater are SaaS platforms charging $500-$2,000/month with proprietary dashboards and manual analyst workflows. This actor provides programmatic access to the same underlying threat categories (domain impersonation, fake reviews, social sentiment, content monitoring) at $0.30 per run, with structured JSON output that plugs directly into your existing tools. It does not include email alerts or a graphical dashboard, but it integrates with Zapier, Make, or any webhook for notifications.

Can I schedule brand reputation monitoring to run automatically?

Yes. Use Apify's built-in scheduling feature to run the actor daily, weekly, or at any custom interval. Combine scheduling with a webhook or Zapier trigger to push DEFCON-level changes to Slack, email, or your team's incident management system automatically.
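A common pattern behind such a webhook is to alert only when the DEFCON level worsens between scheduled runs. This is an illustrative filter you might run in your own notification glue code, not part of the actor; the severity table and function names are assumptions.

```javascript
// Higher number = more severe. Level names match the DEFCON FAQ above.
const DEFCON_SEVERITY = { DEFCON_5: 0, DEFCON_4: 1, DEFCON_3: 2, DEFCON_2: 3, DEFCON_1: 4 };

// Alert only when the threat level has escalated since the previous run.
function shouldAlert(previousLevel, currentLevel) {
  return DEFCON_SEVERITY[currentLevel] > DEFCON_SEVERITY[previousLevel];
}
```

This keeps a daily schedule quiet during steady-state monitoring and pings your channel only on escalation.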

What happens if one of the 8 sub-actors returns no data?

The pipeline uses Promise.allSettled, so a failure or empty result from any single sub-actor does not abort the run. The affected scoring model component simply scores zero for that dimension. Check the dataSources field in the output — any zero-count entry indicates that source returned nothing, and the corresponding model component may be understating risk.
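The fault-tolerance pattern described above looks roughly like this. The sub-actor calls are stand-in async functions here, this is a sketch of the Promise.allSettled pattern, not the actor's pipeline.

```javascript
// Runs all source functions in parallel; a rejection or empty result from
// one source yields a zero count for that dimension instead of aborting.
async function runSources(sources) {
  const settled = await Promise.allSettled(sources.map(fn => fn()));
  return settled.map(s =>
    s.status === 'fulfilled' && Array.isArray(s.value) && s.value.length > 0
      ? { count: s.value.length }
      : { count: 0 } // failed or empty: this scoring dimension contributes zero
  );
}
```

This is why a zero-count entry in dataSources should be read as "no evidence gathered" rather than "no threat".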

How long does a brand reputation analysis run take?

A standard run with no competitors typically completes in 2-4 minutes. The 8 sub-actors execute in parallel with individual 120-second timeouts. Adding competitors runs an additional 4-actor sub-pipeline per competitor sequentially, adding roughly 60-90 seconds per competitor.

Is it legal to scrape review and domain data for brand monitoring purposes?

Yes. This actor accesses only publicly available data: public reviews visible without login, public WHOIS records, public SSL certificate transparency logs, and publicly archived pages. For detailed legal context, see Apify's guide on web scraping legality. Always comply with applicable data protection laws (GDPR, CCPA) when storing records that contain personal data.

Can I use brand reputation monitoring for competitive intelligence?

Yes. The competitors input field runs a 4-actor sub-pipeline against each competitor and returns their threat score, DEFCON level, and key signals. Note that competitor analysis uses a subset of 4 actors (not all 8), so competitor scores are directional and comparable to each other, but not directly equivalent to a full brand analysis.

How does narrative drift detection work?

The Narrative Drift model compares consecutive Wayback Machine snapshot digests. When two adjacent snapshots have different content digests, that registers as a content change (3 points). When the comparison indicates a major page rewrite rather than a minor edit, it scores as a major revision (5 points). This is supplemented by website-change-monitor output, which flags specific page-level changes. High drift scores with MAJOR_DRIFT or NARRATIVE_SHIFT level may indicate crisis management rewriting, legal settlement compliance, or reputation repair efforts.

What should I do when the brand gets a DEFCON_1 rating?

DEFCON_1 (score 80+) means multiple threat vectors are elevated simultaneously. Start with the immediateActions list, which the actor generates automatically based on which specific thresholds were breached. Typical DEFCON_1 actions include filing domain takedown requests (UDRP), reporting fake review campaigns to platform trust and safety teams, activating crisis communications protocols, and requesting SSL certificate revocation from the issuing certificate authorities. The allSignals list identifies the specific evidence backing each action.

Help us improve

If you encounter issues, you can help us debug faster by enabling run sharing in your Apify account:

  1. Go to Account Settings > Privacy
  2. Enable Share runs with public Actor creators

This lets us see your run details when something goes wrong. Your data is only visible to the actor developer, not publicly.

Support

Found a bug or have a feature request? Open an issue in the Issues tab on this actor's page. For custom solutions or enterprise integrations, reach out through the Apify platform.

How it works

01

Configure

Set your parameters in the Apify Console or pass them via API.

02

Run

Click Start, trigger via API or webhook, or set up a schedule.

03

Get results

Download as JSON, CSV, or Excel. Integrate with 1,000+ apps.

Use cases

Brand Managers

Track threat scores, sentiment, and narrative drift across channels.

Legal Teams

Gather evidence for domain takedowns, UDRP filings, and platform reports.

Risk Analysts

Automate monitoring pipelines with scheduled runs and webhook alerts.

Developers

Integrate via REST API or use as an MCP tool in AI workflows.

Ready to try Brand Reputation Monitor?

Start for free on Apify. No credit card required.

Open on Apify Store