
Brand Narrative Intelligence MCP Server

Brand narrative intelligence and brand threat detection for AI agents via the Model Context Protocol. This MCP server gives Claude, Cursor, Windsurf, and any MCP-compatible client direct access to brand protection scoring, impersonation domain detection, fake review analysis, social sentiment monitoring, and narrative drift tracking — all in a single, composable interface.

Try on Apify Store

$0.25 per event · Users (30d): 1 · Runs (30d): 14

Maintenance Pulse: 90/100 (actively maintained) · Last build: today · Last version: 1d ago · Builds (30d): 8 · Issue response: N/A

Pricing

Pay Per Event model. You only pay for what you use.

| Event | Description | Price |
| --- | --- | --- |
| assess_brand_threat_level | DEFCON 1-5 brand threat across social, reviews, impersonation. | $0.25 |
| detect_impersonation_domains | WHOIS + SSL certificate velocity spike analysis. | $0.15 |
| analyze_review_authenticity | Fake review campaign detection across platforms. | $0.15 |
| monitor_social_sentiment | Bluesky + brand protection social monitoring. | $0.10 |
| track_narrative_drift | Wayback Machine + change monitor historical evolution. | $0.15 |
| investigate_domain_registration | Deep WHOIS domain registration intelligence. | $0.05 |
| detect_content_manipulation | Historical vs. current content comparison. | $0.10 |
| generate_brand_threat_report | All 8 data sources, 4 scoring models, DEFCON rating, actions. | $0.40 |

Example: 100 events = $25.00 · 1,000 events = $250.00

Connect to your AI agent

Add this MCP server to Claude Desktop, Cursor, Windsurf, or any MCP-compatible client.

MCP Endpoint
https://ryanclinton--brand-narrative-intelligence-mcp.apify.actor/mcp
Claude Desktop Config
{
  "mcpServers": {
    "brand-narrative-intelligence-mcp": {
      "url": "https://ryanclinton--brand-narrative-intelligence-mcp.apify.actor/mcp"
    }
  }
}

Documentation

The server orchestrates 8 specialized Apify actors in parallel: brand protection monitoring, Bluesky social search, cross-platform and Trustpilot review analysis, WHOIS domain lookup, SSL certificate transparency (crt.sh), Wayback Machine historical analysis, and live website change monitoring. Four independent scoring models combine into a composite DEFCON 1-5 brand threat rating that your AI agent can act on immediately.

What data can you extract?

| Data point | Source | Example |
| --- | --- | --- |
| 📊 Brand threat score (0-100) + DEFCON level | Brand Protection Monitor + all sources | Score: 67, DEFCON 2 |
| 🔎 Impersonation domain count + risk level | WHOIS Lookup + crt.sh SSL certs | 7 lookalike domains, HIGH |
| 🔒 SSL certificate issuance velocity (7-day / 30-day) | crt.sh Certificate Transparency | 4 certs last 7 days |
| 📋 WHOIS registration velocity (90-day window) | WHOIS Lookup | 3 new registrations |
| ⭐ Review authenticity score (0-100) | Multi-Review Analyzer + Trustpilot | Score: 74, SUSPICIOUS |
| 📅 Review burst detection (48-hour window) | Multi-Review Analyzer + Trustpilot | 9 reviews in 48 hrs |
| 💬 Social sentiment breakdown (positive/negative/neutral) | Bluesky Social Search | 12 negative of 41 posts |
| 🔍 Negative keyword mentions (scam, fraud, boycott, etc.) | Bluesky Social Search | 5 threat-keyword posts |
| 📜 Historical snapshot count (Wayback Machine) | Wayback Machine | 23 snapshots |
| 🔄 Content changes + major revision count | Wayback Machine + Website Change Monitor | 5 major revisions |
| ⚠️ Narrative drift level | Website Change Monitor + Wayback | MODERATE_DRIFT |
| 📢 Immediate action recommendations | Composite scoring engine | 2 critical actions |

Why use Brand Narrative Intelligence MCP Server?

Brand protection teams spend hours each week manually checking domain registrar databases, scrolling review platforms, and piecing together social sentiment from separate tools. An analyst running a brand health check for a mid-size company might spend a full day collecting data that is already 48 hours stale by the time it gets to a decision-maker.

This MCP server automates the entire brand intelligence workflow. Connect it to Claude or any MCP-compatible agent once, then ask natural-language questions like "Is our brand being impersonated right now?" or "Are we under a fake review attack?" and receive a scored, sourced, actionable JSON report in under two minutes.

  • Scheduling — run daily DEFCON checks via Apify Schedules to keep threat assessments current
  • API access — trigger brand assessments from Python, JavaScript, or any HTTP client programmatically
  • Proxy rotation — all underlying actors use Apify's built-in proxy infrastructure to avoid blocks
  • Monitoring — get Slack or email alerts when threat levels escalate or impersonation spikes appear
  • Integrations — connect results to Zapier, Make, webhooks, or push directly to Jira/Slack channels

Features

  • DEFCON 1-5 composite threat rating — weighted composite of 4 independent scoring models: Brand Threat 30%, Impersonation 25%, Review Authenticity 25%, Narrative Drift 20%
  • SSL certificate velocity analysis — monitors crt.sh certificate transparency logs for spikes in lookalike domain certificates; 3+ certs in 7 days triggers an active campaign signal
  • WHOIS registration velocity — flags domains registered within 90 days with privacy protection or low-cost registrars (Namecheap, Tucows) as suspicious
  • 48-hour review burst detection — sliding window algorithm detects coordinated fake review campaigns by identifying clusters of 5+ reviews within any 48-hour period
  • Bimodal rating distribution analysis — detects manipulated review profiles where 80%+ of reviews are 1-star or 5-star with fewer than 20% in the middle range
  • Duplicate review content detection — calculates duplication ratio across review corpora; flags reviews sharing near-identical text as potential sock puppet activity
  • Single-review account detection — scores reviewer profiles with 0-1 total reviews as potential fake accounts contributing to sock puppet campaigns
  • 10 negative keyword threat detectors — social posts containing "scam", "fraud", "fake", "terrible", "avoid", "lawsuit", "boycott", "worst", "ripoff", or "deceptive" are classified as threat-signal mentions
  • Narrative drift scoring — compares Wayback Machine digest hashes across historical snapshots to detect major content revisions (not just minor edits)
  • Immediate action generation — the composite report engine generates prioritized action items: domain takedown requests, fake review platform reports, crisis protocol triggers, CA certificate revocations
  • Parallel data collection — all underlying actors run concurrently via Promise.allSettled, so a full 8-source report completes in 30-90 seconds rather than 8-10 minutes sequentially
  • Graceful partial results — if any single data source fails, the server returns partial results from remaining sources rather than failing the entire request
  • Spending limit enforcement — each tool checks Actor.charge() event limits before executing, preventing runaway costs in automated agent loops

Use cases for brand threat detection

Brand protection team operations

A brand protection analyst at a consumer goods company monitors dozens of domains daily for trademark infringement. They connect this MCP to Claude and ask "Check if anyone is impersonating our brand this week" every morning. The detect_impersonation_domains tool queries crt.sh certificate transparency logs and WHOIS registrations in parallel, returning a risk-scored list of suspicious domains within 60 seconds. Certificate velocity spikes — 3 or more new SSL certs issued in 7 days for variations of the brand domain — are flagged immediately as active impersonation campaigns warranting takedown requests.

PR and crisis communications

A communications director needs early warning when brand sentiment shifts negative before it becomes a news story. They configure a scheduled daily call to monitor_social_sentiment and assess_brand_threat_level. The server scans Bluesky posts for the 10 threat-signal keywords and combines that data with brand protection alerts to assign a DEFCON level. When the level escalates to DEFCON 2 or 1, a webhook triggers an alert to the crisis comms Slack channel.

Competitive intelligence and astroturfing detection

Marketing teams defending against competitor astroturfing campaigns use analyze_review_authenticity to monitor their own review profiles across Trustpilot and other platforms. The review burst detector identifies when 9 negative reviews appear within a 48-hour window — a pattern consistent with coordinated attack rather than organic customer feedback. The bimodal distribution check further distinguishes natural J-curve review profiles from manufactured polarization.

Legal evidence gathering for trademark cases

IP attorneys building trademark infringement cases use investigate_domain_registration and detect_content_manipulation to gather timestamped digital evidence. WHOIS registration records document when lookalike domains were registered. Wayback Machine snapshots provide court-admissible historical records of infringing content. The narrative drift tracker identifies when infringing sites changed their content to evade detection.

M&A brand due diligence

Deal teams evaluating acquisition targets run generate_brand_threat_report as part of due diligence. A brand with active impersonation campaigns, a fake review authenticity score above 60, and significant narrative drift represents undisclosed reputation risk that can affect deal valuation. The comprehensive report combines all 8 data sources and 4 scoring models into a single artifact that goes into the deal room.

AI agent brand monitoring workflows

Developers building autonomous brand monitoring agents integrate this MCP into multi-step agent workflows. The agent runs a daily threat assessment, and if the result comes back at DEFCON 3 or more severe, it automatically calls additional targeted tools — detect_impersonation_domains for domain details and analyze_review_authenticity for review intelligence — before generating a natural-language summary report and emailing it to the brand manager.
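
This escalation step can be sketched as a small decision helper. The tool names are this server's; the threshold logic (treating DEFCON 3, 2, and 1 as "severe enough to dig deeper") and the function itself are illustrative, not part of the actual server:

```typescript
// Hypothetical agent decision step: given the daily assessment's DEFCON
// level (1 = most severe), decide which targeted follow-up tools to call.
function followUpTools(defconLevel: number): string[] {
  // DEFCON 3 or more severe (numerically lower) warrants targeted deep dives.
  if (defconLevel > 3) return [];
  return ["detect_impersonation_domains", "analyze_review_authenticity"];
}
```

An agent loop would call this after parsing the assessment, then invoke each returned tool before writing its summary.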

How to connect this MCP server

Step 1 — Get your Apify API token

Sign in at apify.com and copy your API token from Settings > Integrations.

Step 2 — Add to your MCP client

Choose the connection method for your client below.

Step 3 — Start querying

Ask your AI agent: "Assess the brand threat level for Acme Corp at acmecorp.com." The agent calls assess_brand_threat_level with the brand name and domain and returns a DEFCON-rated threat assessment in seconds.

Step 4 — Review results and act

Results include the DEFCON level, individual dimension scores, all evidence signals, and generated action recommendations. Export to JSON, CSV, or Excel from the Apify Dataset tab for documentation.

MCP tools

| Tool | Price | Description |
| --- | --- | --- |
| assess_brand_threat_level | $0.045 | Composite DEFCON 1-5 brand threat assessment across up to 6 sources: brand protection alerts, social sentiment, review attacks, and impersonation domains. |
| detect_impersonation_domains | $0.045 | WHOIS + crt.sh parallel analysis for SSL certificate velocity spikes and new domain registrations indicating active squatting campaigns. |
| analyze_review_authenticity | $0.045 | Rating distribution analysis, 48-hour burst detection, duplicate content ratio, and single-review account scoring across Trustpilot and multi-platform reviews. |
| monitor_social_sentiment | $0.045 | Bluesky social scan + brand protection alerts with positive/negative/neutral classification and 10-keyword threat signal detection. |
| track_narrative_drift | $0.045 | Wayback Machine historical snapshot comparison + live website change monitoring; assigns STABLE through NARRATIVE_SHIFT drift levels. |
| investigate_domain_registration | $0.045 | Deep WHOIS investigation: registrar, registration dates, privacy protection status, and DNS history for a specific domain. |
| detect_content_manipulation | $0.045 | Compares Wayback Machine historical content with current site; quantifies removed vs. added content and revision frequency. |
| generate_brand_threat_report | $0.045 | Full composite report across all 8 data sources and 4 scoring models: DEFCON rating, immediate actions, monitoring recommendations. |

How to use this MCP server

Claude Desktop

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "brand-narrative": {
      "url": "https://brand-narrative-intelligence-mcp.apify.actor/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_APIFY_TOKEN"
      }
    }
  }
}

Cursor / Windsurf / Cline

Add the same URL and Authorization header in your IDE's MCP server settings panel. The server is compatible with any client that supports the MCP Streamable HTTP transport.

Programmatic HTTP

curl -X POST "https://brand-narrative-intelligence-mcp.apify.actor/mcp" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {
      "name": "assess_brand_threat_level",
      "arguments": {
        "brand": "Acme Corp",
        "domain": "acmecorp.com"
      }
    },
    "id": 1
  }'
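
The same tools/call request can be issued from Node.js 18+ using the built-in fetch. This is a sketch mirroring the curl example above; the endpoint URL comes from this page and the token is a placeholder you supply:

```typescript
// Sketch: calling assess_brand_threat_level over the MCP Streamable HTTP
// endpoint from Node.js. Endpoint URL as documented above; token is yours.
const MCP_URL = "https://brand-narrative-intelligence-mcp.apify.actor/mcp";

// Build the JSON-RPC 2.0 envelope for an MCP tools/call request.
function buildToolCall(name: string, args: Record<string, unknown>, id = 1) {
  return {
    jsonrpc: "2.0",
    method: "tools/call",
    params: { name, arguments: args },
    id,
  };
}

async function assessBrand(brand: string, domain: string, token: string) {
  const res = await fetch(MCP_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify(
      buildToolCall("assess_brand_threat_level", { brand, domain })
    ),
  });
  return res.json();
}
```

Call it as, for example, `assessBrand("Acme Corp", "acmecorp.com", process.env.APIFY_TOKEN!)`.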

Input parameters

Each MCP tool accepts the following parameters:

| Tool | Parameter | Type | Required | Description |
| --- | --- | --- | --- | --- |
| assess_brand_threat_level | brand | string | Yes | Brand or company name |
| assess_brand_threat_level | domain | string | No | Primary brand domain — enables WHOIS + crt.sh checks |
| detect_impersonation_domains | domain | string | Yes | Primary brand domain to check for impersonation |
| detect_impersonation_domains | brand | string | No | Brand name for broader certificate search |
| analyze_review_authenticity | brand | string | Yes | Brand or business name |
| analyze_review_authenticity | url | string | No | Business URL for direct review lookup |
| monitor_social_sentiment | brand | string | Yes | Brand name to monitor |
| monitor_social_sentiment | keywords | string[] | No | Additional keywords to track alongside brand name |
| track_narrative_drift | url | string | Yes | Website URL to track |
| track_narrative_drift | brand | string | No | Brand name for context labeling |
| investigate_domain_registration | domain | string | Yes | Domain name to investigate |
| detect_content_manipulation | url | string | Yes | URL to check for manipulation |
| detect_content_manipulation | lookbackDays | number | No | Days to look back (default: 90) |
| generate_brand_threat_report | brand | string | Yes | Brand name |
| generate_brand_threat_report | domain | string | No | Primary brand domain |
| generate_brand_threat_report | url | string | No | Brand website URL |

Output example

The generate_brand_threat_report tool returns a full composite report:

{
  "brand": "Pinnacle Industries",
  "compositeScore": 58,
  "overallThreat": "DEFCON_2",
  "brandThreat": {
    "score": 64,
    "defconLevel": 2,
    "impersonationDomains": 6,
    "negativeSentiment": 4,
    "reviewAttacks": 11,
    "socialMentions": 37,
    "signals": [
      "8 brand protection alerts — active threat landscape",
      "6 recently registered lookalike domains/certificates — impersonation risk",
      "41% negative reviews — potential coordinated attack or serious quality issues",
      "4 negative social mentions with threat keywords (scam, fraud, boycott, etc.)"
    ]
  },
  "impersonation": {
    "score": 61,
    "suspiciousDomains": 4,
    "recentCertificates": 5,
    "newRegistrations": 3,
    "riskLevel": "HIGH",
    "signals": [
      "4 SSL certificates issued in last 7 days — active impersonation campaign likely",
      "3 domains registered in last 90 days — coordinated squatting",
      "4 domains with privacy protection — identity concealment"
    ]
  },
  "reviewAuthenticity": {
    "score": 72,
    "totalReviews": 48,
    "suspiciousReviews": 14,
    "reviewBurstDetected": true,
    "authenticityLevel": "LIKELY_FAKE",
    "signals": [
      "Review burst: 9 reviews within 48-hour window — coordinated campaign likely",
      "83% of reviews are 1 or 5 stars — bimodal distribution suggests manipulation",
      "7 reviews from single-review accounts — potential sock puppets"
    ]
  },
  "narrativeDrift": {
    "score": 34,
    "snapshotCount": 18,
    "contentChanges": 6,
    "majorRevisions": 2,
    "driftLevel": "MODERATE_DRIFT",
    "signals": [
      "2 major content revisions in Wayback Machine — narrative shift detected",
      "6 content changes tracked — frequently evolving messaging",
      "18 Wayback Machine snapshots — rich historical record"
    ]
  },
  "allSignals": [
    "8 brand protection alerts — active threat landscape",
    "6 recently registered lookalike domains/certificates — impersonation risk",
    "41% negative reviews — potential coordinated attack or serious quality issues",
    "4 negative social mentions with threat keywords (scam, fraud, boycott, etc.)",
    "4 SSL certificates issued in last 7 days — active impersonation campaign likely",
    "3 domains registered in last 90 days — coordinated squatting",
    "Review burst: 9 reviews within 48-hour window — coordinated campaign likely",
    "2 major content revisions in Wayback Machine — narrative shift detected"
  ],
  "immediateActions": [
    "File domain takedown requests for impersonating domains immediately",
    "Report fake review campaign to platform trust & safety teams",
    "Activate crisis communications protocol — brand under active attack"
  ],
  "monitoringRecommendations": [
    "Increase brand monitoring frequency to daily",
    "Set up domain monitoring alerts for new lookalike registrations",
    "Enable review platform alerts for burst detection",
    "Set up website change monitoring for competitor messaging shifts"
  ]
}

Output fields

| Field | Type | Description |
| --- | --- | --- |
| brand | string | Brand name as provided in the request |
| compositeScore | number | Weighted composite score 0-100: Threat 30%, Impersonation 25%, Review 25%, Drift 20% |
| overallThreat | string | DEFCON_1 through DEFCON_5 — overall brand threat classification |
| brandThreat.score | number | Brand threat sub-score 0-100 |
| brandThreat.defconLevel | number | DEFCON level 1-5 for brand threat dimension only |
| brandThreat.impersonationDomains | number | Count of recently registered lookalike domains/certificates |
| brandThreat.negativeSentiment | number | Count of social posts containing threat-signal keywords |
| brandThreat.reviewAttacks | number | Count of 1-2 star or negative-sentiment reviews |
| brandThreat.socialMentions | number | Total social posts found |
| brandThreat.signals | string[] | Human-readable evidence signals for brand threat dimension |
| impersonation.score | number | Impersonation risk score 0-100 |
| impersonation.suspiciousDomains | number | Domains with privacy protection or low-cost registrars |
| impersonation.recentCertificates | number | SSL certificates issued in last 30 days |
| impersonation.newRegistrations | number | Domains registered in last 90 days |
| impersonation.riskLevel | string | CLEAR, LOW, MODERATE, HIGH, or CRITICAL |
| impersonation.signals | string[] | Evidence signals for impersonation dimension |
| reviewAuthenticity.score | number | Fake review risk score 0-100 (higher = more fake) |
| reviewAuthenticity.totalReviews | number | Total reviews analyzed across all platforms |
| reviewAuthenticity.suspiciousReviews | number | Reviews matching generic/suspicious content patterns |
| reviewAuthenticity.reviewBurstDetected | boolean | True if 5+ reviews found within any 48-hour window |
| reviewAuthenticity.authenticityLevel | string | AUTHENTIC, MOSTLY_AUTHENTIC, SUSPICIOUS, LIKELY_FAKE, or CAMPAIGN_DETECTED |
| reviewAuthenticity.signals | string[] | Evidence signals for review authenticity dimension |
| narrativeDrift.score | number | Narrative drift score 0-100 |
| narrativeDrift.snapshotCount | number | Number of Wayback Machine snapshots found |
| narrativeDrift.contentChanges | number | Content changes detected between sequential snapshots |
| narrativeDrift.majorRevisions | number | Revisions with differing digest hashes (major content changes) |
| narrativeDrift.driftLevel | string | STABLE, MINOR_DRIFT, MODERATE_DRIFT, MAJOR_DRIFT, or NARRATIVE_SHIFT |
| narrativeDrift.signals | string[] | Evidence signals for narrative drift dimension |
| allSignals | string[] | Deduplicated union of all signals across all 4 dimensions |
| immediateActions | string[] | Prioritized actions generated by the composite scoring engine |
| monitoringRecommendations | string[] | Recommended monitoring frequency and alert configurations |

How much does it cost to run brand threat detection?

This MCP server uses pay-per-event pricing — you pay $0.045 per tool call. Platform compute costs are included. The generate_brand_threat_report tool also charges $0.045 but runs all 8 data sources in a single call.

| Scenario | Tool calls | Cost per call | Total cost |
| --- | --- | --- | --- |
| Quick impersonation check | 1 | $0.045 | $0.045 |
| Daily DEFCON assessment | 1 | $0.045 | $0.045 |
| Full brand threat report | 1 | $0.045 | $0.045 |
| Weekly monitoring (7 days) | 7 | $0.045 | $0.315 |
| Daily monitoring (30 days) | 30 | $0.045 | $1.35 |

You can set a maximum spending limit per run to control costs. The actor stops when your budget is reached — useful for automated agent loops that might otherwise call tools repeatedly.

Apify's free tier includes $5 of monthly platform credits — enough for over 100 individual tool calls or daily full reports for an entire month at no cost. Compare this to enterprise brand monitoring platforms like Brandwatch or Mention at $108-$500/month — most users of this MCP spend under $5/month with no subscription commitment.

How Brand Narrative Intelligence MCP Server works

Phase 1 — Parallel data collection

When a tool is called, the server dispatches concurrent requests to all relevant Apify actors using Promise.allSettled. For generate_brand_threat_report, all 8 actors run simultaneously: brand-protection-monitor, bluesky-social-search, multi-review-analyzer, trustpilot-review-analyzer, whois-domain-lookup, crt-sh-search, wayback-machine-search, and website-change-monitor. Each actor call allocates 256 MB and has a 120-second timeout. Failed actor calls return empty arrays rather than propagating errors, ensuring partial results are always returned.
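
The collect-in-parallel, tolerate-failures pattern described above can be sketched as follows. The `runActor`-style source functions and source names are illustrative stand-ins for the real Apify actor calls, not the server's actual code:

```typescript
// Sketch of parallel data collection with graceful partial results:
// any source that rejects contributes an empty array instead of
// failing the whole report.
type SourceResult = { source: string; items: unknown[] };

// Pure mapping from settled promises to partial results.
function settledToPartial(
  names: string[],
  results: PromiseSettledResult<unknown[]>[]
): SourceResult[] {
  return results.map((r, i) => ({
    source: names[i],
    // Failed sources degrade to an empty array (partial results).
    items: r.status === "fulfilled" ? r.value : [],
  }));
}

// Dispatch all sources concurrently, as the server does via Promise.allSettled.
async function collectAll(
  sources: Record<string, () => Promise<unknown[]>>
): Promise<SourceResult[]> {
  const names = Object.keys(sources);
  const settled = await Promise.allSettled(names.map((n) => sources[n]()));
  return settledToPartial(names, settled);
}
```

Because every source starts at once, total latency is roughly the slowest single actor rather than the sum of all eight.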

Phase 2 — Four independent scoring models

The scoring engine in scoring.ts runs 4 independent models against the collected data:

Brand Threat Scorer (0-100, contributes 30% to composite): Aggregates brand protection alert severity (critical = 8 pts, medium = 4 pts, low = 2 pts, max 30), impersonation domain count (max 25), negative review ratio (max 25), and threat-keyword social mentions across 10 keywords (4 pts each, max 20).

Impersonation Detector (0-100, contributes 25%): Scores SSL certificate velocity (4 pts per recent cert + 6 pts per 7-day cert, max 40), WHOIS registration analysis with privacy flag weighting (7 pts per new registration + 3 pts per suspicious domain, max 35), and raw domain volume as a baseline (max 25).

Review Authenticity Scorer (0-100, contributes 25%): Calculates bimodal distribution score (extreme ratio × 20 + polarization bonus, max 30), sliding 48-hour burst window (3 pts per burst member, max 25), content pattern analysis with duplicate ratio (max 25), and single-review account profiling (2 pts each, max 20).
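
A minimal version of the sliding 48-hour burst window described above, using the thresholds stated on this page (5+ reviews within any 48-hour period). The function is a sketch, not the server's scoring.ts:

```typescript
// Sliding 48-hour window over review timestamps (ms since epoch).
// Returns true when any window holds `threshold` or more reviews —
// the coordinated-campaign signal described above.
const WINDOW_MS = 48 * 60 * 60 * 1000;

function hasReviewBurst(timestamps: number[], threshold = 5): boolean {
  const sorted = [...timestamps].sort((a, b) => a - b);
  let start = 0;
  for (let end = 0; end < sorted.length; end++) {
    // Shrink the window until it spans at most 48 hours.
    while (sorted[end] - sorted[start] > WINDOW_MS) start++;
    if (end - start + 1 >= threshold) return true;
  }
  return false;
}
```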

Narrative Drift Tracker (0-100, contributes 20%): Combines Wayback Machine change frequency (3 pts per change + 5 pts per major revision, max 35), website change monitor signals (3 pts per change + 6 pts per significant change, max 35), and historical depth scoring (max 30).

Phase 3 — Composite scoring and action generation

The composite score uses fixed weights: Threat × 0.30 + Impersonation × 0.25 + Review × 0.25 + Drift × 0.20. The DEFCON level maps score ranges: 80+ = DEFCON_1, 60-79 = DEFCON_2, 40-59 = DEFCON_3, 20-39 = DEFCON_4, 0-19 = DEFCON_5. The action generator then examines specific threshold conditions — impersonation CRITICAL/HIGH triggers takedown filing; CAMPAIGN_DETECTED triggers review platform reporting; DEFCON 1-2 triggers crisis protocol; 5+ recent certificates triggers CA revocation request.
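
The fixed weights and score-to-DEFCON mapping above reduce to a few lines:

```typescript
// Composite scoring as described above: Threat 30%, Impersonation 25%,
// Review 25%, Drift 20%. Higher composite scores map to more severe
// (lower-numbered) DEFCON levels.
function compositeScore(
  threat: number,
  impersonation: number,
  review: number,
  drift: number
): number {
  return threat * 0.3 + impersonation * 0.25 + review * 0.25 + drift * 0.2;
}

function defconLevel(score: number): string {
  if (score >= 80) return "DEFCON_1";
  if (score >= 60) return "DEFCON_2";
  if (score >= 40) return "DEFCON_3";
  if (score >= 20) return "DEFCON_4";
  return "DEFCON_5";
}
```

For example, sub-scores of 70/50/50/30 weight to 21 + 12.5 + 12.5 + 6 = 52, which falls in the 40-59 band: DEFCON_3.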

Phase 4 — MCP transport and standby mode

The server runs on Apify's Standby mode, listening persistently on the actor standby port. Each POST to /mcp creates a fresh McpServer instance connected via StreamableHTTPServerTransport. The transport closes when the HTTP response completes. When not in standby mode (direct actor run), the server starts briefly, logs a health check message, and exits after 1 second.

Tips for best results

  1. Always provide domain alongside brand for the fullest coverage. Without a domain, assess_brand_threat_level skips the WHOIS and crt.sh checks, which are responsible for 25% of the composite score.

  2. Use generate_brand_threat_report for initial assessments, targeted tools for ongoing monitoring. The comprehensive report is ideal for onboarding a new brand or running quarterly audits. Daily or weekly monitoring is more cost-effective with targeted single-dimension tools.

  3. Set a spending limit when running in automated agent loops. An agent that calls tools in a loop based on DEFCON level changes can accumulate costs quickly. Set maxTotalChargeUsd in your Apify run configuration to cap spending.

  4. Combine with Website Tech Stack Detector for impersonation site profiling. When detect_impersonation_domains returns suspicious domains, run the tech stack detector against them to identify if they share the same hosting provider or analytics IDs as the real brand site — a strong indicator of coordinated fraud.

  5. Integrate allSignals into ticketing workflows. The allSignals array in the threat report is designed to be parsed directly into Jira, Linear, or PagerDuty incident titles. Each signal is a self-contained evidence statement.

  6. For legal evidence, prioritize investigate_domain_registration and detect_content_manipulation. WHOIS registration timestamps and Wayback Machine digests are admissible in trademark proceedings. Export these tool results as JSON and preserve them with their run timestamps in the Apify Dataset.

  7. Track DEFCON level trends over time, not just point-in-time values. A brand consistently at DEFCON 4 with a gradual trend toward DEFCON 3 needs attention before it reaches DEFCON 2. Store daily composite scores in a spreadsheet or time-series database via the Apify API for trend analysis.
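
Tip 7's trend tracking can be as simple as a least-squares slope over recent daily composite scores. This helper is a sketch (where and how you store the daily scores is up to you):

```typescript
// Least-squares slope of daily composite scores, in points per day.
// A sustained positive slope means the brand is trending toward more
// severe DEFCON levels and deserves attention before a threshold is hit.
function scoreTrend(dailyScores: number[]): number {
  const n = dailyScores.length;
  if (n < 2) return 0;
  const xMean = (n - 1) / 2;
  const yMean = dailyScores.reduce((a, b) => a + b, 0) / n;
  let num = 0;
  let den = 0;
  for (let i = 0; i < n; i++) {
    num += (i - xMean) * (dailyScores[i] - yMean);
    den += (i - xMean) ** 2;
  }
  return num / den;
}
```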

Combine with other Apify actors

| Actor | How to combine |
| --- | --- |
| Website Contact Scraper | Extract contact details from impersonation domains identified by detect_impersonation_domains to identify the operators behind squatting campaigns |
| Trustpilot Review Analyzer | Run directly for deeper per-review analysis when analyze_review_authenticity flags LIKELY_FAKE or CAMPAIGN_DETECTED |
| Multi-Review Analyzer | Expand review coverage beyond Trustpilot when running standalone review authenticity investigations |
| Website Change Monitor | Schedule persistent monitoring of specific brand pages and competitor pages detected during narrative drift analysis |
| Website Tech Stack Detector | Profile impersonation domains for shared infrastructure — same hosting, analytics IDs, or CDN providers indicate coordinated campaigns |
| WHOIS Domain Lookup | Deep-dive individual suspicious domains identified by the impersonation detector for registrant history and DNS records |
| Website Content to Markdown | Convert brand and competitor web pages to clean markdown for LLM-based narrative comparison and messaging analysis |

Limitations

  • Bluesky social coverage only. Social sentiment monitoring covers Bluesky exclusively. Twitter/X, Reddit, LinkedIn, and Facebook are not included. For comprehensive social monitoring, you would need to integrate additional platform-specific scrapers.
  • Certificate transparency covers past issuances, not blocked certs. The crt.sh search shows issued certificates; revoked certificates may still appear. The system cannot identify certificates that were blocked before issuance.
  • WHOIS privacy obscures registrant identity. When domains use WHOIS privacy protection (Namecheap, Tucows, etc.), registrant names and emails are replaced with proxy information. The system flags this as suspicious but cannot identify the actual registrant.
  • Review platform coverage depends on upstream actors. Review collection relies on multi-review-analyzer and trustpilot-review-analyzer. Platforms not covered by those actors — Google Reviews, Yelp, G2, Capterra — are not included in authenticity scoring.
  • Wayback Machine coverage varies by domain age and crawl frequency. Newly registered domains and low-traffic sites may have few or no Wayback Machine snapshots, limiting historical narrative analysis.
  • The scoring models use heuristics, not machine learning. Threat scores are based on rule-based algorithms with fixed thresholds. Edge cases and unusual attack patterns may not score accurately. Treat scores as decision-support inputs, not definitive verdicts.
  • No real-time alerting built in. The server responds to queries; it does not proactively push alerts. Use Apify Schedules and Webhooks to build a polling-based alert system.
  • Response times depend on upstream actor performance. Typical calls complete in 30-90 seconds. If any upstream actor experiences elevated latency, the overall response time increases accordingly.

Integrations

  • Zapier — trigger brand threat assessments from Zapier workflows and route DEFCON 1-2 alerts to email or Slack
  • Make — build brand monitoring pipelines that run daily, parse DEFCON levels, and create tasks in Asana or Notion
  • Google Sheets — export daily composite scores and signals to a tracking spreadsheet for trend analysis
  • Apify API — call tools directly from Python or JavaScript applications for custom brand protection platforms
  • Webhooks — receive instant notifications when a scheduled brand monitoring run returns DEFCON 1 or DEFCON 2
  • LangChain / LlamaIndex — use brand threat report outputs as structured context for LLM-based narrative analysis and incident summary generation

Troubleshooting

  • Tool returns empty signals despite active impersonation. The crt.sh and WHOIS checks require a domain parameter (e.g., "acmecorp.com"), not a brand name. Passing only a brand name skips the certificate and registration checks. Always include domain for full impersonation coverage.

  • Review authenticity score is 0 despite many reviews. The analyze_review_authenticity tool needs the brand name to match the business listing on Trustpilot and multi-platform review sites. If the brand uses a trading name different from its legal name, try both variants. Alternatively, pass the url parameter pointing directly to the review page.

  • Composite score seems low for a known threat. The composite weighting (Threat 30%, Impersonation 25%, Review 25%, Drift 20%) means a severe problem in one dimension may not push the composite score into DEFCON territory on its own. Check individual dimension scores — a CRITICAL impersonation risk level is actionable regardless of the composite score.

  • Run takes longer than 120 seconds. Each underlying actor call has a 120-second timeout. If the overall response exceeds this, one or more actors are likely timing out. The Promise.allSettled pattern returns partial results — check the response to see which data sources returned empty arrays, indicating a timeout.

  • Spending limit reached error in automated agent. Set a maxTotalChargeUsd cap in your run configuration before deploying in an autonomous agent context. The server enforces limits per event but cannot prevent an agent from issuing many sequential tool calls.

Responsible use

  • This server only accesses publicly available data including SSL certificate transparency logs, public WHOIS records, public review platforms, and publicly archived web content.
  • Certificate transparency log data (crt.sh) is published openly by certificate authorities as a requirement of the CA/Browser Forum Baseline Requirements.
  • WHOIS data accessed is limited to publicly queryable registration records. Do not use registrant contact data for unsolicited outreach.
  • Comply with GDPR, CCPA, and applicable data protection regulations when processing brand data involving individual reviewers or social media users.
  • Review authenticity scores are probabilistic indicators, not legal findings. Do not make defamatory claims against competitors or reviewers based solely on these scores.
  • For guidance on web scraping legality, see Apify's guide.

FAQ

How does brand threat detection work for a brand without many online reviews? The system scores each dimension independently. Brands with few reviews receive a low review authenticity score by default (not enough data to flag manipulation). The DEFCON level is still meaningful if impersonation or social signals are present. For young brands, the impersonation and certificate transparency dimensions are most actionable.

How accurate is the fake review detection in this MCP server? The review authenticity score uses four heuristics: bimodal rating distribution, 48-hour burst detection, duplicate content ratio, and single-review account profiling. These patterns are well-documented in academic research on review manipulation. The system correctly flags coordinated campaigns in most cases but cannot distinguish individual disgruntled customers from coordinated attacks without additional context.
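
The four heuristics can be illustrated with a toy scoring function. The thresholds, field names, and flag names below are illustrative assumptions, not the server's actual model; the point is only to show what each pattern looks for.

```javascript
// Toy sketch of the four documented heuristics: bimodal rating distribution,
// 48-hour burst detection, duplicate content ratio, and single-review accounts.
// Thresholds are illustrative, not the server's actual values.
function reviewFlags(reviews) {
  const n = reviews.length;

  // Bimodal distribution: manipulated listings skew heavily toward 1s and 5s.
  const extreme = reviews.filter(r => r.rating === 1 || r.rating === 5).length / n;

  // 48-hour burst: largest number of reviews falling inside any 48h window.
  const t = reviews.map(r => r.timestamp).sort((a, b) => a - b);
  let burst = 0;
  for (let i = 0; i < n; i++) {
    let j = i;
    while (j < n && t[j] - t[i] <= 48 * 3600 * 1000) j++;
    burst = Math.max(burst, j - i);
  }

  // Duplicate content: share of reviews whose normalized text repeats another's.
  const dupRatio = 1 - new Set(reviews.map(r => r.text.trim().toLowerCase())).size / n;

  // Throwaway profiles: share of reviewers with exactly one lifetime review.
  const singles = reviews.filter(r => r.authorReviewCount === 1).length / n;

  return {
    bimodalDistribution: extreme > 0.8,
    burst48h: burst / n > 0.5,
    duplicateContent: dupRatio > 0.3,
    singleReviewAccounts: singles > 0.6,
  };
}
```

A coordinated campaign typically trips several flags at once, which is why the production score combines all four rather than treating any single flag as decisive.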

How many data sources does the brand threat report cover? The generate_brand_threat_report tool queries all 8 data sources in a single call: brand protection monitoring, Bluesky social, multi-platform reviews, Trustpilot, WHOIS domain lookup, crt.sh certificate transparency, Wayback Machine, and website change monitoring. All 8 run in parallel.

Can I use this MCP server to monitor competitor brands, not just my own? Yes. All tools accept any brand name or domain. The data sources used are all publicly available and there are no restrictions on which brand you monitor. Review platform coverage depends on whether the brand appears in supported review databases.
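
As a sketch, an MCP client invokes a tool with a standard JSON-RPC tools/call request like the one below. The brand argument name is illustrative, but per the troubleshooting notes above, including domain is what enables the certificate transparency and WHOIS checks.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "detect_impersonation_domains",
    "arguments": {
      "brand": "AcmeCorp",
      "domain": "acmecorp.com"
    }
  }
}
```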

How is this different from Brandwatch or Mention for brand monitoring? Traditional brand monitoring tools charge $108-$500/month for subscription access to proprietary monitoring databases. This MCP server is pay-per-query with no subscription and integrates directly into AI agent workflows via MCP. It combines sources (certificate transparency, WHOIS, review authenticity) that are not typically included in standard social listening tools.

Does brand narrative drift detection work for newly launched websites? The narrative drift analysis requires historical Wayback Machine snapshots and recent website change data. For sites under 6 months old with few archived snapshots, drift analysis produces minimal signals. The certificate transparency and WHOIS impersonation checks remain fully functional for any domain age.

Can I schedule daily brand monitoring with this MCP server? Yes. Use Apify Schedules to call the actor on a schedule (daily, weekly, or custom cron). The scheduled run can call assess_brand_threat_level for a fast daily check and trigger a full generate_brand_threat_report only when the DEFCON level changes. Use Webhooks to push alerts when specific thresholds are crossed.
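
The escalation rule above can be sketched as a single decision function. The always-report-while-severe clause is an extra suggestion beyond what the answer describes, added so a sustained DEFCON 2 situation keeps producing full reports rather than going quiet once the level stabilizes.

```javascript
// Decide whether a scheduled run should escalate from the cheap daily
// assess_brand_threat_level check ($0.25) to the full report ($0.40):
// escalate on any DEFCON change, and always while at DEFCON 2 or worse.
function shouldRunFullReport(previousDefcon, currentDefcon) {
  return currentDefcon !== previousDefcon || currentDefcon <= 2;
}

console.log(shouldRunFullReport(4, 4)); // false -- stable and quiet
console.log(shouldRunFullReport(4, 3)); // true  -- level changed
console.log(shouldRunFullReport(2, 2)); // true  -- still severe
```

Persist the previous DEFCON level between scheduled runs (for example in a key-value store) so each daily check has something to compare against.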

Is it legal to use certificate transparency logs and WHOIS data for brand protection? Yes. Certificate transparency logs are publicly published by all certificate authorities as a requirement of the CA/Browser Forum standards. WHOIS data is publicly queryable by design. Both are commonly used for security research, brand protection, and trademark enforcement. See Apify's web scraping legality guide for broader context.

What does DEFCON 1 mean and what should I do? DEFCON 1 indicates a composite brand threat score of 80 or higher — the most severe threat level, signaling an active, multi-vector brand attack. The immediate actions generated will include crisis protocol activation, domain takedown filings, fake review platform reports, and possibly certificate authority revocation requests. Treat DEFCON 1 as requiring same-day response.

How is the composite score weighted across the four dimensions? The composite score is: Brand Threat × 30% + Impersonation Detection × 25% + Review Authenticity × 25% + Narrative Drift × 20%. A DEFCON 1 composite score requires sustained high scores across multiple dimensions, not just one extreme outlier. This weighting reflects the typical relative severity and organizational impact of each threat type.
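
The weighting can be verified with a few lines of arithmetic; the function name and input shape below are a sketch, but the weights and the 80-point DEFCON 1 threshold come from this documentation.

```javascript
// Composite score from the four dimension scores (each 0-100), using the
// documented weights: Threat 30%, Impersonation 25%, Review 25%, Drift 20%.
function compositeScore({ threat, impersonation, review, drift }) {
  return threat * 0.30 + impersonation * 0.25 + review * 0.25 + drift * 0.20;
}

// One extreme dimension alone stays far below the DEFCON 1 threshold of 80...
console.log(compositeScore({ threat: 10, impersonation: 100, review: 10, drift: 10 }).toFixed(2)); // 32.50

// ...while sustained high scores across all dimensions cross it.
console.log(compositeScore({ threat: 90, impersonation: 85, review: 80, drift: 75 }).toFixed(2)); // 83.25
```

This is also why the troubleshooting section advises checking individual dimension scores: a maxed-out impersonation score contributes at most 25 composite points on its own.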

Can I integrate this MCP server with my existing SIEM or security platform? The server returns structured JSON via standard HTTP. You can call it from any platform that can make HTTP requests, parse the JSON output, and route immediateActions and allSignals fields into your SIEM, ticketing system, or incident management platform. The Apify API also supports programmatic access from Python and JavaScript.
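
A minimal routing layer might look like the sketch below. The field names (defconLevel, immediateActions, allSignals) follow the report schema described on this page; the severity mapping and output event shape are assumptions to adapt to your SIEM's event format.

```javascript
// Map a brand threat report into generic SIEM/ticketing events:
// one event per immediate action, tagged with a severity derived
// from the DEFCON level (1 = most severe).
function toSiemEvents(report) {
  const severity =
    { 1: "critical", 2: "high", 3: "medium", 4: "low", 5: "info" }[report.defconLevel] ?? "info";
  return (report.immediateActions || []).map(action => ({
    source: "brand-narrative-intelligence-mcp",
    severity,
    action,
    signalCount: (report.allSignals || []).length,
  }));
}

const events = toSiemEvents({
  defconLevel: 2,
  immediateActions: ["File domain takedown for acmecorp-login.com"],
  allSignals: [{ type: "impersonation" }, { type: "review_burst" }],
});
console.log(events[0].severity); // high
```

From there, POST each event to your SIEM's HTTP ingest endpoint or ticketing API.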

What happens if one of the 8 data sources fails during a report? The server uses Promise.allSettled for parallel data collection. If any individual actor call fails or times out, that source returns an empty array and the remaining sources continue. The scoring models treat empty arrays as zero contribution from that dimension. The report will be delivered with a note that some sources returned no data.
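
The fallback pattern can be sketched in a few lines; the source names below are illustrative, but the behavior matches the description above: each rejected call is replaced by an empty array, so one failed source never aborts the report.

```javascript
// Run all source calls in parallel with Promise.allSettled and convert
// any rejection (failure or timeout) into an empty signal array.
async function collectSources(calls) {
  const names = Object.keys(calls);
  const settled = await Promise.allSettled(names.map(name => calls[name]()));
  return Object.fromEntries(
    names.map((name, i) => [
      name,
      settled[i].status === "fulfilled" ? settled[i].value : [], // failure -> empty array
    ])
  );
}

// A source that throws simply contributes no signals:
collectSources({
  whois: async () => [{ domain: "acmecorp-login.com" }],
  crtSh: async () => { throw new Error("timeout after 120s"); },
}).then(result => console.log(result.crtSh)); // []
```

This is also why the troubleshooting section suggests treating empty arrays in the response as the signature of a timed-out source.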

Help us improve

If you encounter issues, you can help us debug faster by enabling run sharing in your Apify account:

  1. Go to Account Settings > Privacy
  2. Enable Share runs with public Actor creators

This lets us see your run details when something goes wrong, so we can fix issues faster. Your data is only visible to the actor developer, not publicly.

Support

Found a bug or have a feature request? Open an issue in the Issues tab on this actor's page. For custom solutions or enterprise integrations, reach out through the Apify platform.

How it works

01

Configure

Set your parameters in the Apify Console or pass them via API.

02

Run

Click Start, trigger via API or webhook, or set up a schedule.

03

Get results

Download as JSON, CSV, or Excel. Integrate with 1,000+ apps.

Use cases

Brand Protection Teams

Track impersonation domains, fake review campaigns, and social threats.

Marketing

Monitor competitor brands and catch narrative drift early.

Data Teams

Automate brand monitoring pipelines with scheduled runs.

Developers

Integrate via REST API or use as an MCP tool in AI workflows.

Ready to try Brand Narrative Intelligence MCP Server?

Start for free on Apify. No credit card required.

Open on Apify Store